In 1983, U.S. News & World Report published its first ranking of American universities. What began as little more than a gimmick to boost magazine sales has, four decades later, fundamentally transformed the global higher education landscape. Today, QS, THE (Times Higher Education), and ARWU (the Academic Ranking of World Universities, produced by Shanghai Jiao Tong University) shape how universities allocate resources and design admissions strategies, and even steer national education policy. Yet an increasingly undeniable reality is emerging: the pursuit of rankings is destroying the very purpose for which universities exist.

I. Goodhart's Law: When Metrics Become Targets

In 1975, British economist Charles Goodhart, while studying UK monetary policy, made an observation about statistical regularities collapsing under control pressure that later became known as "Goodhart's Law."[1] It is now usually quoted in anthropologist Marilyn Strathern's paraphrase: "When a measure becomes a target, it ceases to be a good measure."[2]

The logic behind this law is straightforward: a metric is useful because it correlates with something we genuinely care about. But once that metric becomes a target, people begin optimizing for the metric directly rather than for the substance it is meant to represent. The result is a decoupling of the metric from the substance — the metric improves, but the underlying reality may remain unchanged or even deteriorate.[2]
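
The decoupling is easy to reproduce in a toy model. The sketch below (illustrative Python; every coefficient is invented for the example) lets institutions divert effort into activities that raise the metric without producing substance. As the room for gaming grows, the correlation between metric and quality weakens and can even turn negative:

```python
import random
import statistics

random.seed(0)

def corr(xs, ys):
    """Pearson correlation, computed from scratch for transparency."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def simulate(max_gaming):
    """max_gaming = 0: the metric passively observes quality.
    max_gaming > 0: institutions divert varying effort into inflating
    the metric directly; the diverted effort raises the metric but no
    longer produces substance. All parameters are invented."""
    quality, metric = [], []
    for _ in range(2000):
        base = random.gauss(10, 1)               # underlying quality
        gaming = random.uniform(0, max_gaming)   # effort aimed at the metric
        q = base - 0.5 * gaming                  # gaming crowds out substance
        m = q + 2.0 * gaming + random.gauss(0, 0.5)  # but pays off in the metric
        quality.append(q)
        metric.append(m)
    return corr(metric, quality)

for g in (0, 2, 5):
    print(f"max gaming effort {g}: corr(metric, quality) = {simulate(g):+.2f}")
```

Nothing about underlying quality improves between the three runs; only the incentive to game changes, yet the metric drifts from a reliable signal (correlation near +0.9) toward noise and worse. That is Goodhart's point in miniature.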

University rankings are perhaps the clearest case study of Goodhart's Law in action. Rankings were originally designed to provide information about institutional "quality," but once universities began allocating resources with the explicit goal of improving their rankings, the rankings could no longer reflect quality, only "how well a university performs on ranking indicators."

Social psychologist Donald T. Campbell articulated a stronger version of this insight, known as "Campbell's Law": "The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor."[3] The history of university rankings is a textbook illustration of this law.

II. Methodological Flaws in Rankings

Before analyzing the impact of rankings in depth, we must first understand the fundamental problems with their methodology. Consider the three most influential rankings as examples:[4]

QS World University Rankings

  • Academic reputation (40%): based on scholar surveys
  • Employer reputation (10%): based on employer surveys
  • Faculty-student ratio (20%)
  • Citations per faculty member (20%)
  • International faculty ratio (5%)
  • International student ratio (5%)

THE World University Rankings

  • Teaching (30%): includes reputation surveys, faculty-student ratio, etc.
  • Research (30%): includes reputation surveys, research income, etc.
  • Citations (30%)
  • International outlook (7.5%)
  • Industry income (2.5%)

ARWU (Shanghai Ranking)

  • Nobel Prize/Fields Medal recipients (alumni 10%, faculty 20%)
  • Highly cited researchers (20%)
  • Papers in Nature/Science (20%)
  • Papers in SCIE/SSCI (20%)
  • Per capita academic performance (10%)

These methodologies suffer from several fundamental problems:[5]

First, measurability bias. Rankings can only incorporate quantifiable indicators, yet the most important functions of a university — inspiring ideas, cultivating character, serving society — are precisely those that are most difficult to quantify. Rankings measure "what can be measured" rather than "what matters."

Second, disciplinary bias. Citation metrics are heavily skewed toward the natural sciences and medicine, because these fields produce far more papers and citations than the humanities and social sciences. A top-tier liberal arts college has virtually no standing in these rankings.[6]

Third, the self-fulfilling nature of reputation surveys. Both QS and THE rely heavily on "reputation surveys," yet reputation is often simply a reflection of past rankings. This creates a feedback loop: high-ranked institutions enjoy strong reputations, and institutions with strong reputations rank highly. Emerging universities find it nearly impossible to break this cycle.[7]
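
The lock-in can be seen in a deliberately crude two-university model (all numbers, including the 0.2 "anchoring" rate, are invented; no real survey works this simply). Reputation gets half the composite weight, and each year's survey drifts toward whoever was ranked first the year before:

```python
quality    = {"A": 60.0, "B": 90.0}   # B is now substantively stronger...
reputation = {"A": 95.0, "B": 50.0}   # ...but A has the historical name

W_REP = 0.5   # assumed weight of the reputation survey in the composite

for year in range(1, 11):
    score = {u: W_REP * reputation[u] + (1 - W_REP) * quality[u]
             for u in quality}
    leader = max(score, key=score.get)
    # next year's survey responses anchor on this year's published rank
    for u in reputation:
        target = 100.0 if u == leader else 40.0
        reputation[u] += 0.2 * (target - reputation[u])
    rounded = {u: round(s, 1) for u, s in score.items()}
    print(f"year {year}: leader = {leader}, scores = {rounded}")
```

University A leads in every iteration: its reputation advantage regenerates the rank, and the rank regenerates the reputation. B's quality edge never surfaces in the published table.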

Fourth, the arbitrariness of weightings. Why does "faculty-student ratio" account for 20% rather than 15% or 25%? Why does "international student ratio" account for 5%? These weightings have no theoretical or empirical foundation; they are entirely subjective decisions made by ranking agencies. Minor adjustments to weightings can dramatically alter ranking outcomes.[8]
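
The sensitivity is easy to demonstrate. In the toy comparison below (hypothetical scores; QS-style indicator names are used only for flavor), moving five percentage points of weight from one indicator to the other reverses the ranking:

```python
scores = {
    "University X": {"reputation": 92, "faculty_student": 55},
    "University Y": {"reputation": 78, "faculty_student": 95},
}

def rank(w_rep):
    """Composite score = weighted sum of the two indicators."""
    composite = {u: w_rep * s["reputation"] + (1 - w_rep) * s["faculty_student"]
                 for u, s in scores.items()}
    return sorted(composite.items(), key=lambda kv: -kv[1])

for w in (0.70, 0.75):
    print(f"reputation weight {w:.0%}: {rank(w)}")
# at a 70% reputation weight University Y comes first; at 75%, University X does
```

Neither institution changed in any way between the two runs; only the ranking agency's subjective weighting did.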

III. How Rankings Distort Admissions

The impact of rankings on university admissions is the most direct. For students and parents, rankings provide a simple "quality signal" — an understandable simplification in the face of information asymmetry. But problems arise when universities begin designing their admissions strategies around rankings.[9]

Case 1: The selectivity game. For years, the U.S. News formula included "acceptance rate" as a metric: the lower the acceptance rate, the more "selective" the institution, and the higher the ranking. Consequently, many American universities aggressively encouraged applications (including from clearly unqualified applicants), only to reject them in order to drive down acceptance rates. U.S. News has since dropped the indicator, but the behavior it rewarded became entrenched. It provides no benefit to students whatsoever, yet consumes vast administrative resources.[10]
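
The arithmetic is trivial, which is what makes the game so attractive (the numbers below are hypothetical):

```python
admits = 2_000
organic_apps = 20_000
solicited_apps = 10_000   # marketing-driven applications with no realistic chance

before = admits / organic_apps
after = admits / (organic_apps + solicited_apps)
print(f"acceptance rate: {before:.1%} -> {after:.1%}")
# acceptance rate: 10.0% -> 6.7%  (identical class, "better" selectivity)
```

The admitted class is unchanged; only the denominator grows.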

Case 2: The commodification of international students. Both QS and THE include "international student ratio" as an indicator. This has driven many universities — particularly in the United Kingdom and Australia — to recruit international students in large numbers based not on educational quality considerations but on ranking incentives. In 2023, a master's program at a leading UK business school had over 85% of its students from China, creating what amounted to a "monoculture" classroom that entirely contradicted the very spirit of "internationalization."[11]

Case 3: The crowding out of local students. Because international students typically pay higher tuition fees and contribute to ranking indicators, public universities face a tension between "internationalization" and "serving the local community." The University of California system was criticized by the state legislature, which argued that admitting large numbers of out-of-state and international students had taken the system away from the mission of a public university.[12]

IV. How Rankings Distort Research

The impact of rankings on academic research may be the most far-reaching and destructive.[13]

Quantity over quality. When universities evaluate professors based on "number of publications" and "citation counts," the rational response for scholars is to publish more papers rather than better ones. This has led to an "inflation" in academic publishing — an explosive growth in the number of papers, while the proportion of truly groundbreaking research declines.[14]

Safe bets over innovation. Citation metrics favor "mainstream" research — work that operates within established paradigms and is readily cited by peers. Truly innovative research — the kind that challenges existing paradigms and opens new fields — often receives fewer citations in the short term because peers do not yet know how to evaluate it. The result is that the pressure to pursue rankings encourages scholars to choose "safe" research topics.[15]

The prevalence of short-termism. Rankings are updated annually, which means universities need to "deliver results" every year. Yet major academic breakthroughs often require long-term, uncertain exploration. When professors must demonstrate "output" every year, they have no room for the kind of research that might fail but, if successful, could change the world.[16]

Citation manipulation. More troublingly, some institutions have systematically manipulated citations. As early as 2011, Science reported that universities in Saudi Arabia were recruiting highly cited "affiliated" scholars with generous stipends, allowing them to list the university as a secondary institutional address in order to inflate publication counts and citations. This kind of "academic arbitrage" severely undermines scholarly integrity.[17]

V. How Rankings Distort Teaching

Ironically, teaching, the most fundamental function of a university, is barely measured directly in major rankings. In the THE ranking's "teaching" pillar, the component most directly tied to the classroom is the staff-to-student ratio; the rest (reputation surveys, doctorates awarded, institutional income) are at best proxies for research capacity and resources.[18]

This measurement gap leads to severe distortions in resource allocation:

Research eclipses teaching. At ranking-driven universities, faculty promotion and rewards are based almost entirely on research output rather than teaching quality. This is a rational response — if rankings do not measure teaching, why invest in it? The result is that the most talented scholars channel their energy into research, treating teaching as a "necessary evil."[19]

The rise of large-class instruction. To free up professors' time for research, many universities have expanded class sizes, reduced small-group discussions, and increased online courses. These practices may lower teaching costs, but they sacrifice educational quality. Large-scale studies of college learning, such as Arum and Roksa's Academically Adrift, find that rigorous coursework and meaningful faculty-student contact are among the strongest predictors of learning gains, and these are precisely what ranking pressures sacrifice first.[20]

The "vocationalization" of curricula. To improve "employer reputation" scores and graduate employment rates, many universities have designed increasingly "career-oriented" curricula at the expense of the liberal arts education tradition. Departments such as philosophy, history, and literature face funding cuts because they contribute little to ranking indicators. Yet it is precisely these disciplines that cultivate critical thinking, moral judgment, and cross-cultural understanding — capabilities that may prove more valuable in the AI era than any "vocational skill."[21]

VI. The Global Race: When Nations Become Prisoners of Rankings

The influence of rankings extends beyond individual universities; it has reshaped higher education policy at the national level.[22]

China's "Double First-Class" initiative. In 2015, the Chinese government launched the "Double First-Class" construction plan, with the explicit goal of securing more top positions in world university rankings. The initiative has invested tens of billions of yuan, prioritizing universities and disciplines with "the potential to enter the world's top tier." Critics argue that this concentration of resources may exacerbate inequality within domestic higher education and reinforce a "rankings-above-all" culture.[23]

Japan's "Top Global University" project. In 2014, the Japanese government launched a similar initiative, selecting 37 universities for targeted support with the goal of "enhancing international competitiveness." However, evaluations reveal that these investments were primarily channeled into increasing the proportion of English-medium courses and the number of international students — ranking indicators — rather than substantive improvements in educational quality.[24]

Europe's merger wave. Beginning in 2007, France promoted university mergers, consolidating multiple small institutions into "super universities" to improve their visibility in rankings. Université Paris-Saclay is a product of this strategy. Such mergers may yield economies of scale, but they also risk undermining diversity and academic autonomy.[25]

The logic of this "global race" is self-reinforcing: when other nations invest resources in pursuing rankings, countries that do not participate in the competition fall behind. This is a prisoner's dilemma — the collectively rational choice may be to reduce the emphasis on rankings, but no country is willing to be the first to withdraw.[26]
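
The structure can be made explicit with a stylized two-country game (the payoffs are illustrative, not empirical estimates). Each country either invests heavily in ranking optimization or abstains. Mutual abstention leaves both better off than mutual investment, yet investing is each country's best reply to anything the other does:

```python
# (row action, column action) -> (row payoff, column payoff)
PAYOFFS = {
    ("abstain", "abstain"): (3, 3),   # resources go into education itself
    ("abstain", "invest"):  (0, 4),   # the abstainer slides down the tables
    ("invest",  "abstain"): (4, 0),
    ("invest",  "invest"):  (1, 1),   # both pay the cost; relative ranks unchanged
}

def best_reply(opponent_action):
    """The action that maximizes a country's own payoff, given the other's."""
    return max(("abstain", "invest"),
               key=lambda a: PAYOFFS[(a, opponent_action)][0])

for opp in ("abstain", "invest"):
    print(f"if the other country plays {opp!r}, best reply: {best_reply(opp)!r}")
# both lines print 'invest': mutual investment is the only equilibrium,
# even though ('abstain', 'abstain') would leave both countries better off
```

Both countries end up investing, which is exactly the trap described above: individually rational, collectively wasteful.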

VII. Alternatives: Redefining the "Good University"

Criticizing rankings is easy; proposing alternatives is hard. We need some way to evaluate the quality of higher education — the question is how to do it better.[27]

1. Learning outcomes assessment. Rather than measuring "inputs" (faculty qualifications, research funding), why not measure "outputs" (student learning outcomes)? The CLA+ (Collegiate Learning Assessment) in the United States and the OECD's AHELO feasibility study attempted to directly measure students' critical thinking, writing, and problem-solving abilities. This approach better reflects the substantive value of education, though it faces challenges of cross-cultural comparability.[28]

2. Multidimensional evaluation. The EU-backed U-Multirank project refuses to provide a single ranking. Instead, it allows users to select indicators based on their own needs — research, teaching, internationalization, regional engagement, knowledge transfer — and generate personalized comparisons. This approach acknowledges the diverse missions of universities, but struggles to gain media attention because it lacks a "single number."[29]
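
A sketch of the underlying idea (the institution names, indicator set, and letter grades below are invented for illustration; U-Multirank's real indicator set is far larger): instead of collapsing everything into one composite, the user picks the dimensions that matter to them and reads each one separately:

```python
# per-dimension letter grades rather than a single composite score
profiles = {
    "Institution A": {"research": "A", "teaching": "C", "regional": "D"},
    "Institution B": {"research": "C", "teaching": "A", "regional": "A"},
}

user_dimensions = ["teaching", "regional"]   # chosen by the user, not the ranker

for name, grades in profiles.items():
    picked = {d: grades[d] for d in user_dimensions}
    print(f"{name}: {picked}")
# which institution "wins" depends entirely on the dimensions chosen;
# there is deliberately no single number to put in a headline
```

The design choice is the point: refusing to aggregate preserves information, at the cost of the one-line headline that drives media coverage.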

3. Process transparency. Rather than chasing the "best" university, we should provide more transparent information to help students make choices that suit them. This includes information on curriculum content, teaching methods, graduate career development, and student satisfaction. The focus should not be "who is best" but rather "who is the best fit for me."[30]

4. Peer review. Academia already has a well-established tradition of peer review — why not apply it to institutional evaluation? Regular external review panels (composed of scholars from other universities) could provide assessments that are more nuanced than quantitative indicators. This approach is more costly, but it better captures the complexity of quality.[31]

VIII. The Mission of the University: A Return to Fundamentals

Amid the clamor of rankings, perhaps we need to return to a fundamental question: What are universities actually for?[32]

When Wilhelm von Humboldt, the nineteenth-century German educator, founded the University of Berlin, he articulated the profoundly influential "Humboldtian model": the university should be a place of intellectual pursuit conducted in "solitude and freedom" (Einsamkeit und Freiheit), where research and teaching are inseparable and academic freedom is the highest value.[33]

Cardinal John Henry Newman, in The Idea of a University, emphasized that the purpose of a university is to cultivate "a philosophical habit of mind" — to produce whole persons capable of thinking, judging, and understanding, rather than to serve as training facilities for specific skills.[34]

Twentieth-century America developed what Clark Kerr called the "multiversity," a model emphasizing the university's multiple functions in service to society: research, teaching, community engagement, and economic development.[35]

These traditions differ in their understanding of the university's mission, but they agree on one point: a university is not merely a production line for "human capital," nor simply a factory for "knowledge products." A university is a place where society reflects upon itself, explores the unknown, and transmits civilization. These functions are precisely the ones that ranking indicators find most difficult to measure.

Conclusion: The Courage to Move Beyond Rankings

In November 2022, Yale Law School announced its withdrawal from the U.S. News & World Report rankings, and Harvard, Berkeley, Stanford, and other top law schools followed suit. Their reasoning was clear: the ranking methodology was incompatible with their educational mission, and continued participation would only reinforce a harmful system.[36]

These elite institutions have the capital to "opt out" — their reputations do not depend on rankings. But for most universities, exiting the rankings game requires enormous courage and risk tolerance. This is precisely the problem: rankings have created a "prisoner's dilemma" that makes it extremely difficult for universities to disengage unilaterally.

Change must occur simultaneously on multiple fronts:[37]

  • Ranking agencies need to improve their methodologies by incorporating more indicators for learning outcomes and social impact
  • Universities need to resist the temptation to optimize purely for rankings and uphold their educational missions
  • Governments need to stop allocating resources based on ranking positions
  • Students and parents need to look beyond a single number and appreciate the complexity of educational quality
  • Media need to reduce sensationalized coverage of "ranking changes"

This is not a wholesale rejection of assessment. Proper evaluation can promote accountability, identify problems, and drive improvement. But assessment should be a means, not an end; it should serve the educational mission, not supplant it.[38]

When we ask "which university is best," perhaps the better question is: "Best for whom? Best for what purpose? Best according to what values?" These questions have no single answer — and that is precisely where the richness of education lies.

References

  1. Goodhart, C. A. E. (1984). Problems of Monetary Management: The U.K. Experience. In C. A. E. Goodhart, Monetary Theory and Practice (pp. 91-121). Macmillan. doi:10.1007/978-1-349-17295-5_4
  2. Strathern, M. (1997). 'Improving Ratings': Audit in the British University System. European Review, 5(3), 305-321. doi:10.1002/(SICI)1234-981X(199707)
  3. Campbell, D. T. (1979). Assessing the Impact of Planned Social Change. Evaluation and Program Planning, 2(1), 67-90. doi:10.1016/0149-7189(79)90048-X
  4. Hazelkorn, E. (2015). Rankings and the Reshaping of Higher Education: The Battle for World-Class Excellence (2nd ed.). Palgrave Macmillan.
  5. Marginson, S. (2014). University Rankings and Social Science. European Journal of Education, 49(1), 45-59. doi:10.1111/ejed.12061
  6. Pusser, B., & Marginson, S. (2013). University Rankings in Critical Perspective. The Journal of Higher Education, 84(4), 544-568. doi:10.1080/00221546.2013.11777301
  7. Bowman, N. A., & Bastedo, M. N. (2011). Anchoring Effects in World University Rankings: Exploring Biases in Reputation Scores. Higher Education, 61(4), 431-444. doi:10.1007/s10734-010-9339-1
  8. Soh, K. (2015). What the Overall Doesn't Tell about World University Rankings: Examples from ARWU, QSWUR, and THEWUR in 2013. Journal of Higher Education Policy and Management, 37(4), 404-414. doi:10.1080/1360080X.2015.1056601
  9. Espeland, W. N., & Sauder, M. (2007). Rankings and Reactivity: How Public Measures Recreate Social Worlds. American Journal of Sociology, 113(1), 1-40. doi:10.1086/517897
  10. Luca, M., & Smith, J. (2013). Salience in Quality Disclosure: Evidence from the U.S. News College Rankings. Journal of Economics & Management Strategy, 22(1), 58-77. doi:10.1111/jems.12003
  11. Bound, J., Braga, B., Khanna, G., & Turner, S. (2020). A Passage to America: University Funding and International Students. American Economic Journal: Economic Policy, 12(1), 97-126. doi:10.1257/pol.20170620
  12. Jaquette, O., & Curs, B. R. (2015). Creating the Out-of-State University: Do Public Universities Increase Nonresident Freshman Enrollment in Response to Declining State Appropriations? Research in Higher Education, 56(6), 535-565. doi:10.1007/s11162-015-9362-2
  13. Bornmann, L., & Mutz, R. (2015). Growth Rates of Modern Science: A Bibliometric Analysis Based on the Number of Publications and Cited References. Journal of the Association for Information Science and Technology, 66(11), 2215-2222. doi:10.1002/asi.23329
  14. Fanelli, D., & Larivière, V. (2016). Researchers' Individual Publication Rate Has Not Increased in a Century. PLOS ONE, 11(3), e0149504. doi:10.1371/journal.pone.0149504
  15. Wang, J., Veugelers, R., & Stephan, P. (2017). Bias against Novelty in Science: A Cautionary Tale for Users of Bibliometric Indicators. Research Policy, 46(8), 1416-1436. doi:10.1016/j.respol.2017.06.006
  16. Stephan, P. (2012). How Economics Shapes Science. Harvard University Press.
  17. Bhattacharjee, Y. (2011). Saudi Universities Offer Cash in Exchange for Academic Prestige. Science, 334(6061), 1344-1345. doi:10.1126/science.334.6061.1344
  18. Johnes, J. (2018). University Rankings: What Do They Really Show? Scientometrics, 115(1), 585-606. doi:10.1007/s11192-018-2666-1
  19. Fairweather, J. S. (2005). Beyond the Rhetoric: Trends in the Relative Value of Teaching and Research in Faculty Salaries. The Journal of Higher Education, 76(4), 401-422. doi:10.1080/00221546.2005.11772290
  20. Arum, R., & Roksa, J. (2011). Academically Adrift: Limited Learning on College Campuses. University of Chicago Press.
  21. Nussbaum, M. C. (2010). Not for Profit: Why Democracy Needs the Humanities. Princeton University Press.
  22. Mok, K. H., & Chan, Y. (2008). International Benchmarking with the Best Universities: Policy and Practice in Mainland China and Taiwan. Higher Education Policy, 21(4), 469-486. doi:10.1057/hep.2008.21
  23. Yang, R., & Welch, A. (2012). A World-Class University in China? The Case of Tsinghua. Higher Education, 63(5), 645-666. doi:10.1007/s10734-011-9465-4
  24. Yonezawa, A., & Shimmi, Y. (2015). Transformation of University Governance through Internationalization: Challenges for Top Universities and Government Policies in Japan. Higher Education, 70(2), 173-186. doi:10.1007/s10734-015-9863-0
  25. Musselin, C. (2017). La Grande Course des Universités. Presses de Sciences Po.
  26. Marginson, S., & van der Wende, M. (2007). To Rank or To Be Ranked: The Impact of Global Rankings in Higher Education. Journal of Studies in International Education, 11(3-4), 306-329. doi:10.1177/1028315307303544
  27. Altbach, P. G. (2012). The Globalization of College and University Rankings. Change: The Magazine of Higher Learning, 44(1), 26-31. doi:10.1080/00091383.2012.636001
  28. Shavelson, R. J. (2010). Measuring College Learning Responsibly: Accountability in a New Era. Stanford University Press.
  29. van Vught, F. A., & Ziegele, F. (Eds.). (2012). Multidimensional Ranking: The Design and Development of U-Multirank. Springer. doi:10.1007/978-94-007-3005-2
  30. Dill, D. D., & Soo, M. (2005). Academic Quality, League Tables, and Public Policy: A Cross-National Analysis of University Ranking Systems. Higher Education, 49(4), 495-533. doi:10.1007/s10734-004-1746-8
  31. Harvey, L. (2008). Rankings of Higher Education Institutions: A Critical Review. Quality in Higher Education, 14(3), 187-207. doi:10.1080/13538320802507711
  32. Scott, P. (2006). The Academic Profession in a Knowledge Society. In U. Teichler (Ed.), The Formative Years of Scholars (pp. 19-30). Portland Press.
  33. Humboldt, W. von. (1810/1970). On the Spirit and the Organisational Framework of Intellectual Institutions in Berlin. Minerva, 8(2), 242-250.
  34. Newman, J. H. (1852/1996). The Idea of a University. Yale University Press.
  35. Kerr, C. (2001). The Uses of the University (5th ed.). Harvard University Press.
  36. Garland, M. (2022). Yale Law School Will No Longer Participate in U.S. News Rankings. Yale Law School News. law.yale.edu
  37. Hazelkorn, E. (2017). Rankings and Higher Education: Reframing Relationships within and between States. In R. King, S. Marginson, & R. Naidoo (Eds.), The Globalization of Higher Education (pp. 183-206). Edward Elgar.
  38. Christensen, C. M., & Eyring, H. J. (2011). The Innovative University: Changing the DNA of Higher Education from the Inside Out. Jossey-Bass.