Here comes the fifth episode. I have relied on Claude for formatting the text and for some of the research. At the very end, I also provide a list of all major university rankings, prepared with the help of Perplexity.
Episode 1 – The Performance of Transparency
Episode 2 – The Theater of Scholarly Collegiality
Episode 3 – The Q1 Journal Fetish
Episode 4 – The Administrator Problem
Episode 5: The Rankings Game – Measuring Everything, Understanding Nothing
I recently heard that a university administration dismissed a book chapter published by New York University Press as essentially worthless for faculty evaluation. The reason? URAP rankings (University Ranking by Academic Performance) do not specifically include book-based publications in their methodology. Let that sink in: a contribution to scholarly knowledge published by one of the world’s most prestigious academic presses—subjected to rigorous peer review, edited by leading scholars, likely to be read and cited for years—counts for nothing because it doesn’t appear in the right database.

Meanwhile, a hastily written article in a questionable Scopus-indexed journal—perhaps one of those publications that will accept almost anything for a processing fee—counts. It generates a data point. It moves the needle. It helps the ranking.
This is madness. But it’s not anomalous madness. It’s systematic madness, the logical endpoint of our collective obsession with university rankings.
I’m not arguing that we need no comparative information about universities whatsoever. Prospective students, faculty considering positions, funding agencies making decisions—all need some way to evaluate institutions. The problem isn’t that rankings exist. The problem is that they’ve become ends in themselves rather than imperfect tools, that we’ve built entire systems of academic evaluation around metrics that are fundamentally flawed, easily manipulated, and hostile to the actual work of scholarship.
Let's look at some of the major issues with university rankings:
The Fundamental Problem: What Are We Actually Measuring?
A recent methodological review of college ranking systems identified the core issue: lack of construct validity [1]. Put simply, ranking agencies can’t agree on what they’re measuring. Is it “quality”? “Excellence”? “Performance”? “Value”? “Impact”? Each ranking uses different definitions, different indicators, different weights—and produces radically different results.
The same university can be ranked 50th globally by one system and 150th by another [2]. When US News changed its methodology in 2023—what they called “the most significant methodological change in the rankings’ history”—25% of ranked universities moved 30 places or more in a single year [3]. These massive swings don’t reflect actual changes in educational quality. They reflect arbitrary decisions about indicator weights and calculation methods.
Why should research count for 60% of a university’s score and teaching for 30%? Why not 50-50? Or 40-60? There’s no scientific justification for these weights [4]. They’re essentially arbitrary—which means the entire ranking order is essentially arbitrary.
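To make the arbitrariness concrete, here is a minimal sketch in Python, using entirely invented universities and scores, of how the same underlying data yields opposite orderings under two equally defensible weighting schemes.

```python
# A toy illustration with invented data: two indicators, three hypothetical
# universities, and two weighting schemes. Neither scheme is more "correct"
# than the other, yet the resulting order reverses completely.

universities = {
    # name: (research score, teaching score), both on a 0-100 scale
    "University A": (95, 50),
    "University B": (70, 80),
    "University C": (55, 95),
}

def composite(scores, w_research, w_teaching):
    """Weighted composite score from the two indicators."""
    research, teaching = scores
    return w_research * research + w_teaching * teaching

for w_research, w_teaching in [(0.6, 0.4), (0.4, 0.6)]:
    order = sorted(universities,
                   key=lambda name: composite(universities[name], w_research, w_teaching),
                   reverse=True)
    print(f"research {w_research:.0%} / teaching {w_teaching:.0%}: {order}")

# research 60% / teaching 40%: ['University A', 'University B', 'University C']
# research 40% / teaching 60%: ['University C', 'University B', 'University A']
```

Real rankings combine a dozen or more indicators and sub-weights, which only multiplies the number of arbitrary choices behind the final ordering.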

Disciplinary Bias: If It’s Not Science, It Doesn’t Count
Rankings systematically favor STEM fields while marginalizing arts, humanities, and social sciences. The Academic Ranking of World Universities (Shanghai Ranking) awards 30% of its total score based on Nobel Prizes and Fields Medals—both exclusively science and mathematics awards [5].
The Leiden Ranking excludes all publications in books, conference proceedings, and journals not indexed in Web of Science [6]. For humanities scholars, this is devastating. In many humanities fields, the most important scholarship appears in books, not articles. Citation patterns are completely different: slower, more cumulative, often in local languages rather than English.
The URAP example I opened with exemplifies this perfectly. Book chapters—the primary mode of humanities scholarship—simply don’t exist in the methodology. A scholar could write the most important contribution to their field in a decade, published by Oxford or Cambridge or Princeton University Press, peer-reviewed by the leading experts globally, and it would count for exactly nothing.
But a minimal contribution to a Scopus-indexed journal—perhaps one of the thousands of questionable publications that have proliferated precisely because of ranking pressures—counts. Because it generates the right kind of data point.
Gaming the System: When Metrics Become Targets
This brings us to perhaps the most serious problem: systematic manipulation. Goodhart's Law states that "when a measure becomes a target, it ceases to be a good measure" [7]. Campbell's Law adds that the more a quantitative indicator is used for social decision-making, the more subject it becomes to corruption pressures and the more it tends to distort the processes it is meant to monitor [8].
Both are operating at full force in university rankings.
A recent study examined universities with extraordinary research growth, selecting 18 institutions in India, Lebanon, Saudi Arabia, and the United Arab Emirates from among the world’s 1,000 most-publishing institutions [9]. These universities exhibited publication surges of up to 965% in just five years. They showed sharp declines in first and corresponding authorship—suggesting faculty weren’t actually leading the research. They demonstrated proliferation of “hyper-prolific” authors publishing hundreds of papers annually, dense reciprocal co-authorship and citation networks, elevated shares of publications in delisted journals, and rising retraction rates.
These are patterns consistent with strategic metric optimization, not genuine scholarly advancement [10].
The manipulations take many forms:
Citation manipulation: Groups of authors or journals forming “citation rings” to inflate each other’s metrics [11]. Some universities showed excessive self-citation rates far beyond disciplinary norms [12].
Publication inflation: “Salami slicing”—splitting research into minimum publishable units to maximize publication counts [13]. Publishing in predatory or low-quality journals that are indexed but have minimal review standards [14].
Peer review rigging: Authors submitting suggested reviewer email addresses that are actually their own, then reviewing their own papers positively [15]. The website Retraction Watch has documented over 600 cases of rigged peer review [16].
Authorship manipulation: Adding authors who contributed minimally to boost institutional publication counts. Dense internal co-authorship networks that exist primarily to game metrics [17].
Administrative manipulation: US universities capping class enrollment at 19 students to fit US News’s definition of “small class,” which is rewarded in rankings [18]. Manipulating definitions of “successful student” or “peer-reviewed publication” to improve metrics [19].
One particularly egregious case: a voting syndicate in the Times Higher Education Arab University Rankings was proven to be manipulating reputation scores. The consequence? Neutralized votes. Not institutional exclusion, not penalties—just neutralization [20]. The tolerance for manipulation is built into the system.
Perverse Incentives and Mission Distortion
These aren’t isolated incidents by rogue institutions. They’re rational responses to perverse incentives created by the ranking systems themselves. Rankings encourage behavior aimed at improving metric performance rather than strengthening teaching quality, local relevance, or public service [21].
Universities reallocate resources from education to research that "counts" in rankings. They hire faculty based on publication metrics rather than teaching ability or disciplinary need. They pressure scholars to publish in "high-impact" journals regardless of whether those are the right venues for their work. They discourage long-term projects, such as books and comprehensive studies, in favor of quick articles that generate immediate data points.
A recent statement by an Independent Expert Group convened by the United Nations University described how “rankings perversely incentivize universities to prioritize short-term and sometimes unethical interventions to improve their rankings, rather than the needs of their students, staff, local communities, or of society more generally” [22]. The constant, short-sighted obsession with annual rankings comes at the cost of long-term and broader goals.
Subjectivity Masquerading as Objectivity
Rankings present themselves as objective, data-driven assessments. But massive portions depend on subjective opinion surveys:
- QS World University Rankings: 50% of total score based on surveys of subjective opinions from anonymous individuals [23]
- Times Higher Education: 33% based on subjective opinions [23]
- US News Best Global Universities: 25% based on subjective opinions [23]
These reputation metrics don’t measure current quality—they measure historical prestige. They’re slow to reflect actual changes, further disadvantaging emerging universities and non-Western institutions [24]. They entrench existing hierarchies rather than challenging them.
Who exactly are these anonymous survey respondents? What are their potential conflicts of interest? How representative are they? The methodologies are often opaque, proprietary, impossible to independently verify [25].
Global Inequality and Structural Bias
Perhaps the most fundamental flaw: rankings make no adjustment for resources available to universities. They compare Harvard—with its $50+ billion endowment—directly with universities in developing countries operating on a fraction of that budget [26].
This inevitably advantages historically privileged institutions and helps perpetuate global inequalities in higher education instead of raising academic standards equitably and universally [27]. As Professor Akosua Adomako Ampofo of the University of Ghana stated: “It is not appropriate for universities from historically exploited and disadvantaged regions to feel compelled to compete on an un-level playing field with a set of rules that are biased in favour of the Global North” [28].
Rankings also show strong Anglophone bias, heavily favoring English-language publications [29]. For Turkish scholars, for instance, this creates impossible choices: publish in Turkish for local impact but sacrifice rankings, or publish in English for rankings but sacrifice local relevance.

The Commercial Dimension
We should remember that major ranking agencies are for-profit businesses, not neutral evaluators [30]. Their fundamental mission is producing profits, not improving higher education. They sell consulting services to universities seeking to improve their rankings. They sell data and analytics. They sell advertising and sponsorships at ranking-related events.
This creates obvious conflicts of interest. Rankings need to change regularly—otherwise there’s no news, no clicks, no revenue. Methodological changes ensure dramatic ranking swings that generate media coverage and drive traffic. The extractive nature diverts university resources from core academic functions to data submission, ranking consultants, and “strategic initiatives” aimed at gaming indicators [31].
The Exodus Begins
Recognizing these problems, leading universities are withdrawing from rankings:
- University of Zurich publicly confirmed it won’t provide data to THE, citing that rankings “create false incentives” and cannot reflect the wide range of university missions [32]
- Utrecht University opted out in 2023 to avoid overreliance on quantitative metrics [32]
- Six leading Indian Institutes of Technology (including IIT Bombay, IIT Delhi, IIT Madras) withdrew from THE in 2020, explicitly criticizing methodological opacity and Anglophone bias [32]
- University of Otago withdrew from THE Impact Rankings in 2024, saying the methodology did not adequately reflect its sustainability work [32]
Evidence suggests these withdrawals caused little or no reputational harm [33]. Universities continue participating in other rankings or publish transparent metrics locally, reallocating staff time and resources to internal priorities rather than surrendering to opaque external measures.
The Turkish Context: More Royalist Than the King
And where does Turkey fit in this picture? We've adopted ranking obsession even more rigidly than the countries where these systems originated. Turkish university administrators, many lacking international research profiles themselves, zealously declare that they will enforce international metrics on faculty (whether they can actually enforce them is yet another issue).
A university administration’s dismissal of NYU Press book chapters is symptomatic. We’ve created evaluation systems that reduce scholarship to database entries. Quality, impact, innovation—none of this matters if it doesn’t generate the right data point in the right index.
Turkish universities implement strategic plans copied from rankings-obsessed institutions without considering local context. They speak endlessly of becoming “world-class” while undermining the conditions that make genuine scholarship possible. They pressure faculty to publish in Q1 journals (Episode 3’s obsession) that meet ranking criteria rather than in journals appropriate to their research.
What’s the Alternative?
The San Francisco Declaration on Research Assessment (DORA) and similar initiatives call for eliminating use of journal-based metrics in funding, appointment, and promotion decisions [34]. Open assessment frameworks prioritize transparency, context-sensitivity, and multiple forms of evidence over simplistic numerical rankings.
Some suggest we need rankings, but better ones. I’m skeptical. The fundamental problems—attempting to reduce complex, multidimensional institutions to single numerical scores; the inevitable gaming that follows; the perverse incentives; the commercial conflicts of interest—seem inherent to ranking exercises themselves.
What we need instead: transparent, context-specific information that acknowledges universities’ multiple missions and diverse contexts. Recognition that a technical university in Ankara serves different purposes than Harvard and shouldn’t be evaluated by the same criteria. Understanding that book scholarship in the humanities requires a different assessment than STEM journal articles.
Most fundamentally, we need to stop treating rankings as measures of truth and start seeing them for what they are: commercial products produced by for-profit companies using questionable methodologies to generate controversial results that drive web traffic and consulting revenue.
References
[1] NORC at the University of Chicago (2024). “College Ranking Systems: A Methodological Review.” Examines construct validity problems across major ranking systems.
[2] Fauzi, M.A., et al. (2020). “University rankings: A review of methodological flaws.” Issues in Educational Research, 30(1). Documents inconsistencies in university placings across different ranking systems.
[3] Diermeier, D. (2023). “Why the new U.S. News rankings are flawed.” Inside Higher Ed, October 9. Analysis of dramatic ranking changes from methodological shifts.
[4] NORC (2024). Report on lack of scientific justification for indicator weighting in ranking methodologies.
[5] Fauzi et al. (2020). Documents ARWU’s reliance on Nobel Prizes and Fields Medals, both science/mathematics-focused awards.
[6] Fauzi et al. (2020). Details Leiden Ranking’s exclusion of books, conference proceedings, and non-Web of Science journals.
[7] Goodhart, C. (1984). Multiple sources on Goodhart’s Law and its application to metrics gaming.
[8] Campbell, D.T. (1979). “Assessing the Impact of Planned Social Change.” On how social consequences corrupt indicators.
[9] Meho, L.I. (2025). “Gaming the Metrics? Bibliometric Anomalies and the Integrity Crisis in Global University Rankings.” bioRxiv preprint examining metric manipulation.
[10] Meho (2025). Documents patterns consistent with strategic metric optimization at multiple institutions.
[11] Biagioli, M. & Lippman, A. (Eds.) (2020). Gaming the Metrics: Misconduct and Manipulation in Academic Research. MIT Press.
[12] Meho (2025). Analysis of abnormal self-citation patterns in institutions with rapid ranking growth.
[13] Biagioli & Lippman (2020). Discussion of “salami slicing” as metric optimization strategy.
[14] Multiple sources on publication in questionable Scopus-indexed journals as ranking strategy.
[15] Biagioli & Lippman (2020). Documents rigged peer review practices.
[16] Ferguson, C., et al. (2014). “Publishing: The Peer-Review Scam.” Nature 515, documenting over 600 cases of rigged reviews.
[17] Meho (2025). Analysis of dense internal co-authorship networks for metric optimization.
[18] Biagioli & Lippman (2020). Examples of US universities manipulating class sizes for ranking metrics.
[19] Biagioli & Lippman (2020). Discussion of definitional manipulation in metrics gaming.
[20] Multiple sources on THE Arab University Rankings voting syndicate scandal and inadequate response.
[21] Sinar Daily (2024). “Universities push back: Global exodus from Times Higher Education rankings.” Analysis of perverse incentives.
[22] United Nations University (2023). “Rethinking Quality: UNU-convened Experts Challenge the Harmful Influence of Global University Rankings.” Independent Expert Group statement.
[23] UNU Expert Group (2023). Data on subjective survey components in QS, THE, and US News rankings.
[24] Sinar Daily (2024). Analysis of how reputation metrics entrench historical prestige.
[25] Multiple sources on opacity and proprietary nature of ranking methodologies.
[26] UNU Expert Group (2023). Critique of lack of resource adjustment in comparative rankings.
[27] UNU Expert Group (2023). Statement on how rankings perpetuate global inequalities.
[28] UNU Expert Group (2023). Quote from Professor Akosua Adomako Ampofo on unequal playing field.
[29] Multiple sources documenting Anglophone bias in major ranking systems.
[30] UNU Expert Group (2023). Analysis of extractive commercial practices of ranking industry.
[31] McCoy, D. (UNU-IIGH) quoted in UNU statement on resource diversion from academic functions.
[32] Sinar Daily (2024). Documentation of university withdrawals from THE rankings.
[33] Sinar Daily (2024). Evidence on limited reputational harm from ranking withdrawal.
[34] San Francisco Declaration on Research Assessment (DORA, 2013). Calls for eliminating journal-based metrics in evaluation.
A List of University Rankings
University Rankings: A Comprehensive Overview
University rankings have become influential instruments in higher education, shaping institutional strategies, student choices, and national policies. Below is a detailed examination of the major ranking systems, including their ownership, scope, and methodological approaches.
QS World University Rankings
Ownership: Quacquarelli Symonds (QS), a for-profit higher education analytics company founded in 1990 by Nunzio Quacquarelli and headquartered in London. The company reported a turnover of £50.5 million in 2023.
Scope: Global ranking covering over 1,500 universities across 105 higher education systems. QS is the most widely read ranking globally, with a 65% market share of global media mentions. The rankings include overall university rankings, subject-specific rankings (55 subjects across five broad areas), regional rankings, and specialized rankings.
Methodology: QS employs six key indicators reflecting four missions of world-class universities: excellence in research, producing employable graduates, commitment to high-quality teaching, and internationalization. The methodology includes Academic Reputation (40%), derived from a global survey of academics; Employer Reputation (10%), based on employer surveys; Faculty/Student Ratio (20%), serving as a proxy for teaching quality; Citations per Faculty (20%); International Faculty Ratio (5%); and International Student Ratio (5%). The rankings place significant emphasis on reputation surveys, distinguishing them from more bibliometric-focused systems.
Times Higher Education (THE) World University Rankings
Ownership: Times Higher Education is owned by Inflexion Private Equity Partners, a UK-based private equity firm that acquired THE from TPG Capital in March 2019. This marked the fourth change of ownership in 15 years for the publication. THE has expanded through acquisitions, including Inside Higher Ed (2022) and Poets&Quants (2023).
Scope: Global ranking that has evolved significantly since its launch. The 2025 edition employs 18 performance indicators (increased from 13 in previous editions) to evaluate research-intensive universities worldwide.
Methodology: The framework groups indicators into five areas with the following weights: Teaching (learning environment, 29.5%); Research Environment (volume, income, and reputation, 29%); Research Quality (citation impact, research strength, excellence, and influence, 30%); International Outlook (staff, students, and research, 7.5%); and Industry (income and patents, 4%). THE partners with Elsevier for data analytics and uses a combination of institutional data submissions, bibliometric data, and reputation surveys. The methodology employs Z-scoring for standardization, with exponential components for certain metrics. THE's data repository, DataPoints, contains 9 million data points from 3,500 institutions across over 100 countries.
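For readers wondering what "Z-scoring for standardization" means in practice, here is a minimal sketch with invented values. It shows plain Z-score standardization only, not THE's actual pipeline, which adds further transformations.

```python
# A minimal sketch of Z-score standardization on one hypothetical indicator.
# Raw values are rescaled to "standard deviations from the mean" so that
# indicators with very different numeric ranges can be weighted and combined.
import statistics

citations_per_paper = [1.2, 3.4, 2.1, 8.9, 0.7]  # invented raw values for five institutions

mean = statistics.mean(citations_per_paper)
stdev = statistics.stdev(citations_per_paper)

z_scores = [(value - mean) / stdev for value in citations_per_paper]
print([round(z, 2) for z in z_scores])
# [-0.62, 0.04, -0.35, 1.7, -0.77]
```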
Academic Ranking of World Universities (ARWU) – Shanghai Ranking
Ownership: Initially published by Shanghai Jiao Tong University's Center for World-Class Universities in 2003, ARWU has been published and copyrighted by ShanghaiRanking Consultancy since 2009. ShanghaiRanking Consultancy is described as a fully independent organization focused on higher education intelligence and consultation, not legally subordinated to any universities or government agencies.
Scope: The first global university ranking of its kind, ARWU ranks the top 1,000 universities worldwide. It is considered one of the "big three" global rankings alongside QS and THE.
Methodology: ARWU is distinguished by its purely objective methodology that does not rely on surveys or self-reported institutional data. The ranking uses six indicators: Alumni winning Nobel Prizes and Fields Medals (10%); Staff winning Nobel Prizes and Fields Medals (20%); Highly Cited Researchers in 21 broad subject categories (20%); Papers published in Nature and Science (20%); Papers indexed in Science Citation Index-Expanded and Social Science Citation Index (20%); and Per Capita Academic Performance (10%). This methodology heavily favors research output and prestigious awards, making it particularly strong for evaluating research-intensive institutions in the sciences while potentially undervaluing universities specializing in humanities or social sciences.
U.S. News & World Report Best Global Universities
Ownership: U.S. News & World Report is a privately held company owned by media proprietor Mortimer Zuckerman. Zuckerman acquired the magazine in 1984 for $176.3 million. The company has editorial headquarters in Washington, D.C., with advertising and corporate offices in New York City and New Jersey.
Scope: The 2024-2025 ranking evaluates 2,271 institutions globally, ranking the top 2,250 universities. The ranking pool includes the top 250 universities from Clarivate's global reputation survey plus institutions meeting a minimum threshold of 1,250 papers published from 2018 to 2022.
Methodology: The ranking uses 13 indicators focused on global research performance, with methodology developed in partnership with Clarivate Analytics. Key indicators include Global Research Reputation (12.5%), Regional Research Reputation (12.5%), Publications (10%), Books (2.5%), Conferences (2.5%), Normalized Citation Impact (10%), Total Citations (7.5%), Number of Highly Cited Papers (12.5%), Percentage of Highly Cited Papers (10%), International Collaboration (10%), International Collaboration Relative to Country (5%), and Number of Highly Cited Papers in Top 1% (5%). The methodology employs Z-scores for standardization and logarithmic transformations for highly skewed indicators. Results from Clarivate's Academic Reputation Survey, aggregated over five years (2019-2023), inform the reputation indicators.
Center for World University Rankings (CWUR)
Ownership: CWUR is a consulting organization that started in Jeddah, Saudi Arabia, in 2012. Since 2016, it has been headquartered in the United Arab Emirates. It operates as an independent consulting firm providing policy advice and strategic insights to governments and universities.
Scope: CWUR publishes the largest academic ranking, initially covering the top 100 universities and expanding by 2019 to rank the top 2,000 out of nearly 20,000 universities worldwide.
Methodology: CWUR claims to be the only ranking assessing universities without relying on surveys or university data submissions. The methodology uses seven objective indicators grouped into four areas: Education (based on alumni academic success relative to university size, 25%); Employability (based on alumni professional success relative to university size, 25%); Faculty (measured by faculty members who have received top academic distinctions, 10%); and Research (40%), which includes Research Output (total number of research articles, 10%), High-Quality Publications (research articles in top-tier journals, 10%), Influence (research articles in highly influential journals, 10%), and Citations (highly cited research papers, 10%). The methodology emphasizes quality over quantity, particularly in measuring publications in prestigious journals.
Leiden Ranking
Ownership: The Leiden Ranking is produced by the Centre for Science and Technology Studies (CWTS) at Leiden University in the Netherlands. It is a fully public, non-commercial academic initiative staffed by researchers and academics, distinguishing it from commercially operated rankings.
Scope: The 2025 edition includes over 1,500 universities worldwide. The ranking focuses exclusively on scholarly impact and collaboration, with a much more limited scope than comprehensive rankings.
Methodology: Unlike other rankings, Leiden avoids over-aggregation by leaving metrics user-defined rather than producing a single composite score. The ranking uses bibliometric data exclusively from Clarivate's Web of Science database. Key indicators are grouped into categories: Citation Impact (measuring publications in the top 1%, 10%, and 50% most cited; total and mean citation scores; normalized citation scores) and Collaboration (measuring collaborative publications, international collaboration, industry collaboration, and geographical collaboration distance). The methodology applies field normalization to correct for differences in citation behavior across disciplines and uses fractional counting to avoid double-counting collaborative publications. Only "core publications" are included: English-language papers in international scientific journals suitable for citation analysis.
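As an aside, "fractional counting" is simpler than it sounds. The sketch below uses invented papers and affiliations to show how a collaborative paper's credit is split; Leiden's real implementation is more granular, but the principle is the same.

```python
# A minimal sketch of full vs. fractional counting, using invented papers.
# With fractional counting, each paper contributes exactly 1 in total,
# split equally among the universities listed on it.
from collections import defaultdict

papers = [
    {"University X", "University Y"},                   # two-institution collaboration
    {"University X"},                                    # single-institution paper
    {"University X", "University Y", "University Z"},    # three-way collaboration
]

full = defaultdict(float)
fractional = defaultdict(float)

for affiliations in papers:
    for uni in affiliations:
        full[uni] += 1                            # full counting: every affiliation gets a whole paper
        fractional[uni] += 1 / len(affiliations)  # fractional counting: the paper is shared

print(dict(full))        # {'University X': 3.0, 'University Y': 2.0, 'University Z': 1.0} (order may vary)
print(dict(fractional))  # X gets ~1.83, Y ~0.83, Z ~0.33; the totals still sum to 3 papers
```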
SCImago Institutions Rankings (SIR)
Ownership: SCImago is produced by the SCImago Research Group, associated with Spain's CSIC (Consejo Superior de Investigaciones Científicas). It operates as a research initiative rather than a commercial enterprise.
Scope: SIR is a comprehensive classification covering academic and research institutions worldwide, including universities, government research centers, health institutions, and companies. The 2023 edition evaluated thousands of institutions globally.
Methodology: SIR uses a composite indicator combining three weighted sets of indicators based on a five-year rolling period (ending two years before the ranking publication). The methodology includes Research (50%), subdivided into Normalized Impact (13%), Excellence with Leadership (8%), Output (8%), Scientific Leadership (5%), journal quality metrics, International Collaboration (2%), Open Access (2%), and Scientific Talent Pool (2%); Innovation (30%), including Innovative Knowledge (10%), Patents (10%), and Technological Impact (10%); and Societal Impact (20%), incorporating Altmetrics (3%), Web Size (3%), Authority Score (3%), Sustainable Development Goals (5%), Female Scientific Talent Pool (3%), and Impact on Public Policy via Overton (3%). Data comes primarily from the Scopus bibliometric database. The methodology normalizes final scores on a scale of 0 to 100 and includes both size-dependent and size-independent indicators.
National Institutional Ranking Framework (NIRF)
Ownership: NIRF is an official initiative of the Ministry of Education, Government of India, approved and launched in September 2015.
Scope: NIRF provides rankings specifically for Indian higher education institutions. The framework has expanded from four categories in 2016 to 16 categories in 2024, including Overall, University, Engineering, College, Management, Pharmacy, Law, Architecture, Medical, Dental, Research Institutions, Open Universities, State Public Universities, and Skill Universities.
Methodology: The ranking methodology uses 19 parameters organized into five broad groups with different weightages depending on category: Teaching, Learning and Resources (TLR, 30%); Research and Professional Practices (30%); Graduation Outcomes (20%); Outreach and Inclusivity (10%); and Perception (10%). Recent additions include parameters evaluating sustainable development goals, multiple entry-exit options, Indian Knowledge Systems courses, and programs in multiple Indian regional languages. Institutions are grouped into Category A (Institutions of National Importance, State Universities, Deemed-To-be-Universities, Private Universities, and Autonomous institutions) and Category B (affiliated institutions), with tailored methodologies for each.
Round University Ranking (RUR)
Ownership: The RUR Ranking Agency is a Russian company founded in Moscow in 2013 and now based in Tbilisi, Georgia. The agency serves as the official representative of Times Higher Education in Russia and CIS countries.
Scope: RUR evaluates approximately 1,200 universities globally, with just over 1,100 included in published rankings. It produces four annual rankings: an overall world university ranking, a six-part subject ranking, a reputation ranking, and an academic research performance ranking.
Methodology: RUR bases its rankings entirely on InCites data from Clarivate Analytics, collected through the Global Institutional Profiles Project (GIPP). Universities must request inclusion and submit raw data to participate. The methodology uses 20 indicators divided into four groups of five: Teaching (40%), including academic staff per students (8%), academic staff per bachelor degrees (8%), doctoral degrees per academic staff (8%), doctoral degrees per bachelor degrees (8%), and world/national teaching reputation (8% each); Research (40%), covering citation metrics, papers, research income, and reputation; International Diversity (10%), measuring international staff, students, and co-authored papers; and Financial Sustainability (10%), evaluating research income metrics. The methodology employs mostly size-independent metrics and includes all indicators used in THE rankings except the "industry innovation: income" indicator.
University Ranking by Academic Performance (URAP)
Ownership: URAP is developed and published by the Informatics Institute of Middle East Technical University (METU) in Ankara, Turkey. It operates as a research laboratory within a public university.
Scope: Since 2010, URAP has published annual national and global rankings for the top 2,000 institutions. It also provides field-based rankings across 23 fields based on the Australian ERA classification.
Methodology: URAP gathers scientometric data from the Web of Science and InCites databases provided by the Institute for Scientific Information. The ranking uses six main indicators, with a total score of 600 points distributed among them: Number of Articles (21%), Citation (21%), Total Documents (10%), Article Impact Total (18%), Citation Impact Total (15%), and International Collaboration (15%). The methodology uses median values to address the highly skewed distribution of raw bibliometric data, and weightings were assigned through Delphi consultation with experts. For Turkish university rankings, URAP incorporates additional indicators, including the number of students and faculty members, drawn from ÖSYM (Center of Measuring, Selection and Placement).
Webometrics Ranking of World Universities
Ownership: Webometrics is produced by the Cybermetrics Lab, a research group within Spain's Consejo Superior de Investigaciones Científicas (CSIC), the country's largest public research institute. Established in 2004, it operates as an academic research initiative.
Scope: Webometrics covers over 31,000 higher education institutions in more than 200 countries worldwide. Rankings are published twice annually (January and July) and are categorized by region: World, Americas, Asia/Pacific, Europe, Africa, and Arab World, with World further classified into BRIC and CIVETS.
Methodology: As of July 2022, Webometrics uses three main indicators (previously four): Visibility (50%), measuring the number of external networks (subnets) connecting to the institution's web pages using data from Ahrefs and Majestic; Transparency/Openness (10%), based on the number of citations by top authors (top 210 authors) using Google Scholar Profiles data; and Excellence (35%), representing the top 10% most cited papers in respective fields over the last five years, with data collected from Scimago. The Presence indicator (5%), evaluating the number of web pages in the institution's domain according to Google search, was part of earlier versions. The methodology aims to promote web presence, open access, and transparency of academic activities.
NTU Ranking (Performance Ranking of Scientific Papers)
Ownership: The NTU Ranking is published by National Taiwan University. It represents an institutional initiative by a leading Asian research university.
Scope: NTU Ranking provides overall rankings, rankings by six fields, and rankings by 14 selected subjects. It evaluates universities globally with a focus on scientific publication performance.
Methodology: The methodology emphasizes bibliometric indicators grouped into categories covering Research Productivity (25%), which includes the number of articles over the last 11 years (10%) and the number of articles in the current year (15%); Research Impact (35%), evaluating citations and impact metrics; and research excellence, which accounts for the remaining weight. The ranking uses data from major citation databases to evaluate publication quantity and quality across different time periods.
Financial Times Business School Rankings
Ownership: The Financial Times rankings are published by the Financial Times newspaper, a major international business publication.
Scope: The FT publishes several rankings annually, focusing exclusively on business education, including Global MBA, Executive MBA (EMBA), Masters in Management (MiM), Masters in Finance, Online MBA, and European Business Schools rankings.
Methodology: The FT MBA ranking methodology is comprehensive, using 21 different criteria. Alumni feedback influences eight criteria, contributing 56% to the overall score, while institutional data accounts for 34% and research rank constitutes 10%. Key criteria include Weighted Salary (16%), Salary Increase (16%), Value for Money (5%), Career Progress (3%), Aims Achieved (4%), Alumni Network Rank (4%), Career Services (3%), and Employment at Three Months (2%). Alumni are surveyed three years after MBA completion. For the European Business Schools ranking, the methodology combines scores from individual program rankings (MBA, EMBA, MiM, Masters in Finance) using weighted aggregation, with Z-scores employed for individual criteria to show the range between the highest- and lowest-ranked schools. The methodology recently reduced the salary criteria weighting from 40% to 32% to emphasize other factors.
