International Rankings

International rankings: debates, aspects, examples

by Georgina Kasza

Experts mainly attribute the growing number of rankings over recent decades to an increased interest in performance expressed in numbers, to changing decision-making mechanisms, and to the role of the media. Alongside the appearance of the various lists, debates about their interpretation have also become more frequent. The characterisation of international and foreign (national) rankings, and the exchange of opinions about particular rankings, likewise contribute to the correct interpretation of rankings.[1]

Debates on rankings: methodological credibility and reliability in focus

In 2004, when nationwide university rankings were first published in independent publications in Hungary, CNN reported on a study that discussed the methodological issues of “rankings”. Four American economists published a joint study in one of the most influential economists’ forums, supported by the National Bureau of Economic Research.[2] The authors proposed a new approach to measuring higher education institutions: by the results they achieve in competing for students who were admitted to more than one institution at the same time and thus had the chance to choose between them.

This example from 25 years of ranking debates[3] clearly illustrates how much attention is paid to assessments comparing universities and colleges in the United States. Rankings have been prepared in the USA since the 1970s, and the wave of rankings reached the wider public in 1983, when U.S. News and World Report began publishing its list of undergraduate institutions.

Naturally, the growing number of increasingly influential rankings has generated great debate, as well as professional analyses, in higher education.[4] Credibility and methodological reliability remain the key issues in these to date. Ranking compilers try to meet these expectations through different solutions, which also reveal differences in how higher education performance is conceived.

Rankings differ greatly in what they intend to assess: institutions, professional fields, educational levels, or educational levels by professional field. There may also be great differences in the objectives, and in the corresponding ranking criteria, chosen by their compilers: the focus may be on the standards of education and research, on quantitative indexes, or rather on the efficiency of education.

The technical literature considers ranking by professional field and educational level to be much more reliable and sensible (or ranking by research standards, in the case of research universities[5]), while efficiency is considered very important in the United Kingdom; in practice this means how many students obtain a degree, and in how much time, from the start of their studies. Experts also emphasise the use of indexes indicating the quality of education.

In the Anglo-Saxon world (and in systems with an extended network of private institutions, such as Poland), the basic methodological problems arise from the higher education system itself, namely from the different nature of state-sponsored universities and private universities (colleges, etc.). The supply of funding and the amounts spent on infrastructure, lecturers, research and student sponsorship are among the important factors in rankings; however, in every country with both state-sponsored and private institutions, the latter are consistently distinguished from the former by their abundance (or at least “greater abundance”) of resources. In most cases this results in a higher standard of lecturers and students, as well as of research activity. Private and state-sponsored institutions also sometimes pursue different objectives: although both types try to attract public funds, private donations and market sources to various extents, state-sponsored universities usually lay much greater emphasis on widening higher education, providing access for as broad a part of the population as possible, especially for disadvantaged groups.

Thanks to the professional debates and analytic work on rankings, today all major rankings take care to present their methodology alongside the published list of institutions. With credibility and reliability at the centre of the debates, there is constant encouragement for compilers to review their rankings and make them more accurate. It is also increasingly obvious that the debate they generate seems to be one of the most important effects of the rankings.

Who do rankings target?

There are major differences between rankings in terms of their target groups: financiers, students, higher education decision-makers or the administration. The so-called Berlin Principles, which formulate the cornerstones of ranking preparation, lay special emphasis on rankings having clear, well-defined objectives and target groups, and on the chosen indicators being consistent with these.[6]

Multi-level assessment with mixed factors, based on centrally collected and checked data and requiring complex data processing (as in the English system), is primarily important for financiers, and such research is typically also conducted on their behalf. The methodology of guides for students is often simple, with somewhat arbitrary criteria; the market decides which of them becomes popular.[7] Ranking guides, by contrast, provide information for the widest possible public without intending to be fully comprehensive. They are often published in journals, or produced at the instigation of journals, and consider only a few aspects.

Rankings targeting the whole university community, including leaders, lecturers, financing organisations and students, are relatively rare even abroad, but they are significantly more prestigious due to their complex methodology. They always publish their methods, data collection techniques, criteria and weighting, and thus, among other things, they prompt the target group to rethink, again and again, the appropriate criteria for assessing higher education institutions.[8] The mechanism of mutual prestige-building (the ranking and the world of universities authenticate each other) is an important characteristic of the ranking game; some perceive it as a danger, while others think it ensures the quality of the rankings. Either way, the system of ranking criteria and the self-assessment of institutions cannot be sharply separated, even in the case of completely private rankings.

Best known international rankings

Looking over the best-known national and international rankings, we can point out that the answers given to methodological questions are directly related to the characteristics of the given higher education systems, to the objectives established by the compilers of the ranking, and to assumptions about the expectations of the users.

The THE World University Ranking, a multidisciplinary ranking published between 2004 and 2009 by the Times Higher Education Supplement (itself published since 1971) together with Quacquarelli Symonds, and published in cooperation with Thomson Reuters from 2010, ranks the higher education institutions of the world in 200 positions. The ranking is not prepared solely from self-collected data; its compilers also cooperate with the information services of national, central higher education organisations.

The compilers of the ranking admit that it is difficult to provide a credible picture of the order of institutions, due to differing interpretations of concepts, potential inaccuracies in data collection, and methodological uncertainties. They deem constant “methodological learning” important in order to provide an increasingly comprehensive and reliable picture of higher education institutions.

The rankings of the 200 best higher education institutions between 2004 and 2009 are currently available on the Internet. The indicators rest on four cornerstones of the ranking’s preparation: the quality of research, the quality of training, the employability of graduates, and international comparison. The indicators used in 2009 were: the opinions of scholarly personnel (lecturers, researchers), a complex measure based on a peer-review survey conducted among them (weight: 40%); employer opinions, a score depending on the replies obtained in a survey conducted among employers in the given year (10%); the lecturer-to-student ratio (20%); the citation rate, estimated using the Scopus database and normalised to the size of the given institution (20%); the proportion of foreign lecturers (5%); and the proportion of foreign students (5%). The picture of individual institutions may be deemed objective on the basis of these indicators; the paper, however, expresses its own viewpoint through the weighting, which increasingly focuses on efficiency. Although the top-200 list of this ranking is the best known, thematic and regional rankings have also been prepared in recent years.
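The 2009 weighting scheme amounts to a simple weighted sum of indicator scores. The sketch below illustrates the arithmetic only; it is not the official THE-QS formula, and it assumes each indicator has already been normalised to a 0–100 scale, as ranking compilers typically do before weighting.

```python
# Illustrative only: combining the six 2009 indicators into one composite
# score. Indicator names and the 0-100 normalisation are assumptions;
# the weights are the ones stated in the text.

WEIGHTS = {
    "academic_peer_review": 0.40,
    "employer_survey": 0.10,
    "lecturer_student_ratio": 0.20,
    "citations_per_faculty": 0.20,   # estimated from the Scopus database
    "international_lecturers": 0.05,
    "international_students": 0.05,
}

def composite_score(indicators: dict) -> float:
    """Weighted sum of normalised (0-100) indicator scores."""
    return sum(WEIGHTS[name] * indicators[name] for name in WEIGHTS)

# A hypothetical institution scoring 80 on every indicator scores 80 overall,
# since the weights sum to 1.
example = {name: 80.0 for name in WEIGHTS}
print(composite_score(example))  # 80.0
```

The key editorial decision, as the text notes, is hidden in the weight vector itself: two institutions with identical data can swap places under a different weighting.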

After the Millennium, new and more comprehensive initiatives were launched, among which the ranking of Shanghai Jiao Tong University[9] has attracted the most interest. The Jiao Tong ranking tries to cover the whole world on the basis of statistical indexes. The quality of lecturers and education, research performance, and the size of the institution are all used as determining factors. In this ranking the quality of education is measured by Nobel Prize- and Fields Medal-winning scientists, and the quality of lecturers by publications in internationally prestigious journals and by citations. However, these indexes have their “disadvantages” too, says Ádám Török,[10] since a research institute dubbed a “university” may also perform well, there being no educational criterion. Publication indexes distort, since how co-authored publications are counted and how “multiplicative” publications are assessed matters. The Jiao Tong ranking thus builds strongly on scientific performance, with past scientific performance and works weighted heavily, using indexes that are user-friendly for lay people. Its method favours large universities with complex structures and broad profiles; financial background and ownership are ignored, the superiority of American institutions is prominent, and the concept and international dimension of “competitiveness” remain in the background.

In addition to all the above, one of the weakest points of the Shanghai ranking is its one-sided character in terms of professional fields. Indeed, institutions occupying a leading position in the natural and technical sciences and in some fields of quantitative social science have a better chance of achieving high results than those performing better in the humanities. As the authors stated: “We tried very hard, but did not succeed in finding special criteria and internationally comparable data from the field of social sciences and humanities.”[11]

In response to opinions about and criticism of[12] their methodology, the compilers constantly make slight modifications to the indexes constituting the basis of the ranking, to their weighting, and to the databases and other sources underlying them. Despite constant criticism, the “Shanghai List” was still considered the most widely used international ranking in 2005.[13]

The most remarkable project within the European Union was started by the Center for Higher Education Development (CHE) in Germany. Compared with the rankings described above, this list targets one professional field and ranks higher education institutions on that basis. The most significant ranking of the institute, which has prepared rankings since 1998, is the so-called CHE Excellence Ranking. The CHE ranked the graduate programmes of European higher education institutions in the natural sciences and mathematics on the basis of four quality indexes. The ranking focuses on international “presence” and research performance. A so-called two-step survey was applied in preparing the list: in a preliminary selection the institutions received gold, silver and bronze qualifications on the basis of four general quality indexes (number of publications, number of citations, number of projects started in the Marie Curie programme). A questionnaire survey was then conducted among the most excellent institutions identified by these factors, covering further aspects. In the questionnaire, students were asked about the quality of education and educational services, while the institutional data collection covered the international composition of lecturers and the proportions of female lecturers, international students and female students in the PhD and master’s level programmes. These institutions formed the so-called excellence group. The ranking has been transformed and expanded in recent years. Political science, psychology and economics were included among the professional fields in 2009. At the same time a strongly reshaped methodology was used: the indexes used in the preliminary selection and the aspects of the questionnaire survey changed, as did the ranking technique used both in the preliminary selection and for the final indexes.[14]
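The two-step procedure described above can be sketched as a filter followed by a detailed survey: only institutions that clear a preliminary bibliometric screen enter the “excellence group” that is then surveyed. The thresholds, field names and data below are invented for illustration; the CHE’s actual qualification rules are not public in this text.

```python
# Hypothetical sketch of a CHE-style two-step selection. Step 1 screens
# institutions on a few bibliometric indicators; step 2 (the questionnaire)
# would then be run only on the survivors. All numbers are made up.

THRESHOLDS = {
    "publications": 500,        # publication count over the survey period
    "citations": 2000,          # citation count
    "marie_curie_projects": 3,  # projects started in the Marie Curie programme
}

def excellence_group(institutions: list) -> list:
    """Keep only institutions that meet every preliminary threshold."""
    return [
        inst for inst in institutions
        if all(inst[key] >= limit for key, limit in THRESHOLDS.items())
    ]

candidates = [
    {"name": "A", "publications": 800, "citations": 3500, "marie_curie_projects": 5},
    {"name": "B", "publications": 300, "citations": 4000, "marie_curie_projects": 6},
]
# "B" fails the publication threshold, so only "A" proceeds to the survey stage.
print([inst["name"] for inst in excellence_group(candidates)])  # ['A']
```

The design point is that the expensive instrument (the student and institutional questionnaire) is applied only to a pre-qualified subset, not to every institution.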

Among the international rankings, it is naturally those including Hungarian institutions that receive more attention in Hungary. In addition to the international rankings mentioned above, the Financial Times ranking also includes Hungarian universities. This international list of higher education study programmes in economics presents rankings of five programme types annually: MBA, EMBA, executive education programmes, master’s-level management programmes, and the European business schools. Its indexes primarily reflect international presence, educational quality and the value of the degree on the labour market. In recent years Hungarian institutions have also appeared in the so-called Leiden ranking, which ranks institutions on the basis of bibliometric indexes, as well as in lists qualifying institutions by their “Internet presence”.

The initiative launched by the European Commission in December 2008 is greatly awaited. The European Union issued a call for proposals providing an opportunity to develop and test a multi-dimensional international ranking. The express objective of the new ranking is primarily to take cultural and historical characteristics into better consideration, as an alternative to the rankings that result in the predominance of Anglo-Saxon and Asian universities. The winning consortium[15] has started the work, entitled the CHERPA Project, the results of which will be published in 2011.[16]

International rankings receive considerable media attention; their professional value, however, is a moot point. The problems arising in connection with national rankings are even more obviously present when universities formed in greatly different cultures, fulfilling varied economic and social demands, are compared on the basis of data that are often interpreted in different ways.

National rankings: worldwide examples

In the following section we describe some of the better-known rankings of individual countries, as well as parts of the debates that have formed around them.

The most widely known American ranking is that of U.S. News and World Report, published since 1983. It uses a very complex system of criteria, then weights and aggregates the rankings according to the individual factors; the annual ranking within each type of institution is produced in this way. In the first years after the turn of the Millennium, the data underlying the rankings were collected exclusively through self-completed questionnaires sent to the institutions and through data and information available on the institutions’ websites. The criteria cover academic prestige, admission chances and the professional quality of applicants, and efficiency (the time required to obtain a degree). For graduate and PhD programmes research performance receives greater attention, while the assessment of prestige is more important in the humanities. The sharpest criticism of U.S. News is that the ranking is unpredictable: the method of calculation is modified too frequently,[17] so a change in an institution’s position in the ranking does not mean that any change has taken place in the quality of education at that institution. The group criticising the numerical ranking of institutions[18] includes the presidents of several leading American universities, such as the leaders of Stanford and Cornell, who even refused to provide data based on subjective opinions of other higher education institutions. Student groups organised at elite American universities have similarly expressed doubts about the validity of this ranking. Experts have also questioned the factors used by U.S. News: according to critics, the value of the degree (“reputation, economy and exclusivity”) plays too large a role in the ranking, instead of the focus being on education itself.[19] U.S. News ranked close to 1,200 study programmes in 2009; close to 11 thousand questionnaires were received for that purpose.
There is a long series of further rankings of American universities: the ranking of the United States National Research Council, the Forbes and Washington Monthly lists of colleges, or the “Top American Research Universities” ranking; a detailed description of these, however, is beyond the scope of the present chapter.

Two-level education could not be interpreted in the continental higher education system before the Bologna transition; consequently, the most read German ranking forum around the Millennium, the Der Spiegel ranking, preferred to put the emphasis on student and lecturer assessments of educational standards.

A simple technique is used to measure this: the prestige of universities is determined from the rankings given by selected university lecturers and reputed professors, and is published separately by professional field. The Der Spiegel ranking, published in cooperation with McKinsey and AOL, received a lot of criticism in 2004, especially for the deficiencies of its methodology and the inaccuracy of its data processing.

State-sponsored organisations measuring research performance, like the Deutsche Forschungsgemeinschaft,[20] use more objective indicators, and their assessment carries high stakes, since its results are taken into account in financing. The ranking of the Humboldt Foundation[21] has no such consequences, but it clearly shows how attractive German universities are to international students. The Humboldt Ranking also assesses how attractive German higher education institutions are to researchers.[22]

The better-known rankings of the German-speaking areas were prepared by the Deutsche Rektorenkonferenz[23] and by CHE (Center for Higher Education Development), founded by a private company in the 1990s. Their largest programme, known as the CHE University Ranking, covers the entire German-speaking area and has been published annually since 2005, in cooperation with the newspaper Die Zeit.[24] The institute’s objective is to prepare a comprehensive and detailed ranking of German higher education institutions and to provide a reliable, professional source of information for students. Another of its well-known lists is the CHE Research Ranking, which focuses on the research performance of institutions, assessing 16 professional fields. The CHE rankings are often criticised for the level of detail in the description of their methodology, for the inaccuracy of data collection, and for ignoring the disproportionalities arising from the German higher education system. The objectives and methodologies of the CHE rankings are changed annually; although this is done in a traceable manner, it nevertheless calls the credibility of the ranking into question.

The SwissUp Project,[25] started in Switzerland in 2003, is an interesting enterprise in terms of its methodology. Drawing on central databases (the Swiss Central Statistics Office, the Swiss House for National Research, the Committee for Technology and Innovation) and student questionnaires, it produces data along general, major directions: the attractiveness and educational infrastructure of institutions, the applicability of programmes on the labour market, the quality of courses and lecturers, the ability to raise resources, and general student satisfaction and attachment to the institution. It is characterised by its modelling of user interests, through which it creates three student profiles: the research-oriented, who look for institutions encouraging and supporting scientific research; the labour-market-focused, who approach their studies from a practical point of view, i.e. whose main aim is to obtain a profitable and satisfying job; and those who are looking for education itself and consider the human aspects of teaching and learning important. The project assigns ranking-composition options to these profiles and, more broadly, to the individual criteria, thereby grasping several determining dimensions of higher education; however, these are not based on fully validated surveys, and “qualitative constancy” cannot be expected of them.
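The profile mechanism above amounts to letting each user type apply its own weight vector to the same underlying indicators, so that different profiles can produce different orderings of the same institutions. The profile names below follow the text; the weights, indicator names and data are invented for illustration.

```python
# Sketch of SwissUp-style user profiles: identical indicator data, weighted
# differently per profile, yields different rankings. All numbers are
# hypothetical; only the profile idea comes from the text.

PROFILES = {
    "research_oriented":     {"research": 0.6, "labour_market": 0.1, "teaching": 0.3},
    "labour_market_focused": {"research": 0.1, "labour_market": 0.6, "teaching": 0.3},
}

def rank(institutions: dict, profile: str) -> list:
    """Order institution names by the profile's weighted indicator score."""
    weights = PROFILES[profile]
    def score(name):
        return sum(weights[k] * institutions[name][k] for k in weights)
    return sorted(institutions, key=score, reverse=True)

data = {
    "Uni X": {"research": 90, "labour_market": 50, "teaching": 70},
    "Uni Y": {"research": 55, "labour_market": 85, "teaching": 75},
}
print(rank(data, "research_oriented"))       # ['Uni X', 'Uni Y']
print(rank(data, "labour_market_focused"))   # ['Uni Y', 'Uni X']
```

This is exactly why the text cautions against reading any single ordering as “the” ranking: the order is a function of the weight vector as much as of the data.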

The supplement Rzeczpospolita Uniwersytet[26] of the Polish newspaper Perspektywy made an effort to answer a European ranking dilemma when it compared the participants of the private higher education market flourishing after 1989 with traditional state-sponsored higher education. The two sectors, operating under significantly different regulatory and financing conditions, were presented on two different lists; common rankings were then created within the professional fields on the basis of the opinions of employers, of lecturers, and of those who obtained their degrees in 2000 and 2001.[27]

Of course, looking at the Australian, New Zealand, Slovakian, French and other examples, we find a variety of solutions that cannot be described in detail here.[28] They clearly show that assessments of quality and performance in higher education are still finding their place.

Meanwhile, higher education ranking has itself become an international professional field, with a growing demand for convergence in methodologies. To that end, a conference was held in Berlin in 2006, where ranking compilers from many countries formulated the principles they wished to see followed in the development of rankings. They found it important to recognise the diverse nature of higher education services and institutions and to enforce that diversity in ranking assessment. The transparency of the methodology and of the underlying information, and the smooth, traceable changing of the measurement solutions applied, are also very significant requirements. Preparing rankings should be self-reflexive, too: the assessment, critical review and constant renewal of ranking preparation are in both the public and the professional interest.[29]


Berlin Principles on Ranking of Higher Education Institutions.

Caroline Hoxby – Christopher Avery – Andrew Metrick – Mark Glickman: A Revealed Preference Ranking of U.S. Colleges and Universities. NBER Working Paper no. 10803, September 2004.

Chuanfu Ding – Terrance Jalbert – Steven P. Landry: The Relationship Between University Rankings And Outcomes Measurement. College Teaching Methods & Styles Journal – Second Quarter 2007 Volume 3, Number 2, p. 1-10.

Denise Gater: U.S. News & World Report Changes in Methodology by Year. In The Center. 2000.

Elizabeth F. Farrell – Martin Van Der Werf: Playing the Rankings Game. Chronicle of Higher Education, vol. 53, no. 38, May 25 2007.

György Fábri (ed.) Egyetemek mérlegen Budapest, Educatio LLC., 2004.

Jack Gourman: Gourman Report: Undergraduate Programs. NY, Princeton Review Publishing.

Jerzy Woźnicki – Roman Z. Morawski: Public and Private Higher Education Institutions – Joint or Separate Evaluation and Ranking: The Polish Perspective. 2002.

Lynn C. Hattendorf: College and University Rankings: An Annotated Bibliography of Analysis, Criticism, and Evaluation. RQ, volumes 25-29 (1986-1990): Parts 1-5.

M. Sauder – R. Lancaster: Do Rankings Matter? The Effects of U.S. News & World Report Rankings on the Admissions Process of Law Schools. In Law and Society Review 40(1), 2006, p. 105-134.

Marguerite Clarke: The Impact of Higher Education Rankings on Student Access, Choice, and Opportunity. In College and University Ranking Systems: Global Perspectives and American Challenges. IHEG 2007.

Nancy Diamond – Hugh Davis Graham: How should we rate research universities?

Robert Stevens: University to Uni: The Politics of Higher education in England since 1944. London, Politico’s, 2004, XIII.

Shanghai Jiao Tong University, Institute of Higher Education: Academic Ranking of World Universities, 2004.

Ádám Török: Az európai felsőoktatás versenyképessége és a lisszaboni célkitűzések. Mennyire hihetünk a nemzetközi egyetemi rangsoroknak? In Közgazdasági Szemle 53, 2006, p. 310-329.

William G. Bowen – Martin A. Kurzweil – Eugene M. Tobin: Equity and Excellence in American Higher Education. Charlottesville – London, University of Virginia Press, 2005, p. 63–67.

Răzvan V. Florian: Irreproducibility of the results of the Shanghai academic ranking of world universities. Scientometrics, July 2007.


[1]  In the following chapter we greatly rely on the chapters of the habilitation paper on international rankings by György Fábri, entitled: A felsőoktatás és tudomány társadalmi percepciója az ezredfordulós Magyarországon – tudásátadás és tudománykommunikáció az egyetemi rangsoroktól a Mindentudás Egyeteméig [The Social Perception of  Higher Education and Science in Hungary at the turn of the Millennium—transfer of knowledge and the communication of science from university rankings to Mindentudás Egyeteme].

[2] Caroline Hoxby – Christopher Avery (Harvard) – Andrew Metrick (Wharton School of the University of Pennsylvania) – Mark Glickman (Boston University): A Revealed Preference Ranking of U.S. Colleges and Universities. NBER Working Paper no. 10803, September 2004.

[3] Lynn C. Hattendorf: College and University Rankings: An Annotated Bibliography of Analysis, Criticism, and Evaluation. RQ, volumes 25-29 (1986-1990): Parts 1-5.

[4] For more details see: György Fábri (ed.) Egyetemek mérlegen [Universities on the Scales] Budapest, Educatio LLC., 2004.

[5] Nancy Diamond – Hugh Davis Graham: How should we rate research universities?


[7] The majority of the most frequently used guides do not make rankings, only inform and include a few assessment notes at most (the most famous and most often used guide of this type in the United States is the five-volume Peterson’s).

[8] It is typical that one of the oldest American rankings, the so-called Gourman Report, receives the most criticism (so much so that its significance has virtually been lost over the last decade, since the appearance of the US News and World Report ranking), because the staff of the Gourman Report are unwilling to publish or discuss its criteria and methodology in detail. Although the highly reputable Princeton Review, the most important centre of SAT and GRE tests, took over the Gourman Report, almost no one in the profession views it as credible, and consequently its prestige is declining.

[9] Academic Ranking of World Universities, published since 2003 by Shanghai Jiao Tong University.

[10] Ádám Török: Az európai felsőoktatás versenyképessége és a lisszaboni célkitűzések. Mennyire hihetünk a nemzetközi egyetemi rangsoroknak? [The Competitiveness of European Higher Education and the Lisbon Objectives. How much can we believe the international university rankings?] Közgazdasági Szemle 53, 2006, p. 310-329.

[11] Shanghai Jiao Tong University, Institute of Higher Education, Academic Ranking of World Universities–2004;

[12] Răzvan V. Florian: Irreproducibility of the results of the Shanghai academic ranking of world universities. Scientometrics, July 2007.

[13] The Brains Business. In The Economist, 2005.


[15] Members of the consortium: CHE – Centre for Higher Education Development (Germany), Center for Higher Education Policy Studies (CHEPS, University of Twente, Netherlands), Centre for Science and Technology Studies (CWTS Leiden University, Netherlands), Research division INCENTIM (Catholic University of Leuven, Belgium), Observatoire des Sciences et des Techniques (OST Paris, France), European Federation of National Engineering Associations (FEANI), European Foundation for Management Development (EFMD).


[17] Denise Gater: U.S. News & World Report Changes in Methodology by Year. In The Center. 2000.

[18] M. Sauder – R. Lancaster: Do Rankings Matter? The Effects of U.S. News & World Report Rankings on the Admissions Process of Law Schools. Law and Society Review 40(1), 2006, p. 105-134.

[19] Kevin Carey: College Rankings Reformed: The Case for a New Order in Higher Education. September 2006.








[27] Jerzy Woźnicki – Roman Z. Morawski: Public and Private Higher Education Institutions — Joint or Separate Evaluation and Ranking: The Polish Perspective. 2002.

[28] A number of case studies prepared by the UnivPress atelier are also available.

[29] Berlin Principles on Ranking of Higher Education Institutions.