A general lesson to be learnt from the effects and forms of utilisation described above is that, no matter how well-founded the methodological scruples, rankings are part of the world of higher education and of the discourse about it. Still, reviewing the critiques and the way this communication works was not pointless, as it has a very practical consequence: it enables us to read rankings better. The appropriate interpretation and use of rankings is receiving growing attention in the international arena as well: in 2015 IREG published guidelines for the stakeholders of higher education and research rankings. The document formulates the following general recommendations:
- Users need to be clear about what a particular ranking measures: the aim, the main target group, and the various indicators of the given ranking all need to be interpreted.
- Rankings should be used as one source of information: to interpret their contents appropriately, other information and data need to be consulted as well.
- Long-term processes should be examined, with less attention paid to individual positions and annual fluctuations.
- The methodology of rankings needs to be read and understood carefully.
In the following section, rankings will be summarised according to what they express and how strongly they explain or inform. This is necessary not because the domestic perception of rankings is marked by antagonisms; far from it: compared to examples in other countries, the history of Hungarian rankings appears almost uneventful. Apart from a few institutional complaints regarding methodology and the correction of erroneous data, there have been no heated debates and no protests, either at the institutional or at the higher education policy level. Characteristically, methodological coordination was initiated by the ranking makers themselves: in connection with the UnivPress Ranking, the results, foundations and dilemmas of the rankings were presented annually to institution leaders, higher education policy-makers and the public. Outside the circle of those who deal with higher education ex officio, the use of rankings has hardly become common. Empirical studies show (as has been pointed out earlier) that those wishing to continue their studies in higher education refer to rankings only to a limited extent and in small numbers. One reason for this is that the institutions leading the domestic rankings are the ones that are already the best known and most highly regarded, so their reputation is stabilised rather than improved by their ranking position. The situation is similar with employers, who also make decisions based on the existing general opinion about institutions or on information from personal and professional relations. Nor does the now decade-long retrospective show a correlation between applications to higher education and ranking positions. The latter have changed only to a limited extent, while, owing to some hectic higher education policy measures (such as rapid changes in admission quotas) and to basic social and labour market trends (for example, the increased attractiveness of engineering careers), application numbers to the various institutions and fields have changed considerably. If we compare the educational and infrastructural capacities, regional specificities, and the economic and social embeddedness of higher education institutions, these prove to be far more decisive for their position than their rankings.
The two most important rules for reading rankings are systematicity and perspectivity (patience). The overall ranking of a given year says nothing by itself about the performance and quality of an institution. Reading a ranking should never begin with the numbers: the methodology and the indicators used should always be studied first. If we fail to do so, we will never know in what respect the given institution came first, last or anywhere in between. A particularly important part of the methodology is the identification of the indicators. This shows exactly what the ranking is about and what can be expected from it. Unfortunately, the titles of rankings and their self-promotion almost always promise more than what their indicators can actually deliver. In this respect Webometrics was the most trustworthy: its name refers to Internet presence and, until quite recently, it kept to this (although the inclusion of a publication indicator has since led to a deviation from it). By using the adjective ‘academic’, ARWU orientates its readers, even though its publication focus is narrower, too. The relatively new CWUR calls itself unique, referring to the measurement of educational values, while all it actually uses are teaching staff and student numbers and alumni data.
Perspectivity and patience are therefore indispensable when reading rankings, both in their interpretation and in their evaluation. A current ranking position in itself does not tell us much; changes in position over time, however, can be informative, provided the changes in the indicators used by the ranking are also taken into account. In the case of rankings which alter the weighting and the definition of their indicators relatively frequently, however, such shifts in position say nothing about the real performance or quality of institutions. Their ranking position will nevertheless be different, and interpreting the changes in the indicators can help “survive” (Holmes) this.
When reading global rankings, owing to their limited validity and in addition to identifying the indicators, it is advisable to bear certain factors in mind. First of all, comparisons used to evaluate the position of one’s own institution (or group of institutions, e.g. Hungarian universities) should be made with realistic objectives in mind. By no means should Hungarian universities be expected to appear in the top 300 of the global rankings; even a position in the top 400 is more or less illusory. From the Hungarian point of view, a correct reading of rankings therefore cannot begin by taking the leading pack as the point of reference. Instead, real competitor universities should be identified, be they individual institutions or national higher education systems. In defining these, we suggest considering the following points:
- commensurability of financing and governance conditions;
- historical and economic/social background;
- research and educational offering, field profile;
- student numbers.
From the general rankings, however, one needs to turn, on the one hand, to the breakdown by indicators and, on the other, to the field-specific lists. If the ranking presents or reveals the orders by individual indicators, these carry substantive information about the position of an institution in the competitive arena. The group of institutions appearing on lists compiled by research or educational field is more manageable, and scientific performance or student attendance is measured on a comparable basis, so the ranking position provides interpretable feedback.
This, too, shows that the use of rankings requires a methodical approach; but since their target groups have very different expectations, it is worth distinguishing who belongs to these groups and with what horizon of perception they read rankings.
For the majority of those continuing their studies in higher education (and for their parents), the national and global rankings that present the whole institutional sphere and every level are practically meaningless. As we have seen earlier, field and institution preferences are in most cases already settled, so wider comparisons do not add any substantial information. It is advisable for them to create “private rankings” that include the institutions they can realistically apply to (some rankings offer convenient online tools for this purpose on their websites); a minimal sketch of such a private ranking is given after this paragraph. For students entering a partial upgrading training, choosing an institution has lower stakes, while their choices are determined almost as strongly by the field, professional relations, equivalences and language skills. Therefore, although they are better informed, they tend to regard the institutional prestige reflected in international rankings as a decisive factor. Generally speaking, students view a good ranking position of their own institution as a prestige-increasing factor, so they can be involved as partners in processing rankings or in collecting additional information and experience from their own circles (for example, measuring institutional prestige among students). For those wishing to enter the various levels of training, it is important to review the information available in the rankings that are particularly significant for the given training. When choosing a Bachelor’s programme, indicators related to the quality and resources of training are worth checking. Those entering a Master’s programme should primarily weigh indicators referring to professions and careers, while in the case of doctoral programmes, indicators on the quality of research and of the doctoral programme, as well as data on scientific advancement, should be reviewed by students considering further study.
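Where an applicant wishes to go further than the published lists, such a “private ranking” can be assembled with very simple means. The sketch below is only a minimal illustration: the institutions, indicator names, values and weights are invented for the example and do not come from any actual ranking.

```python
# Minimal sketch of a "private ranking": an applicant shortlists a few
# realistic target institutions, picks the indicators that matter to them,
# and re-weights the (invented) indicator values into a personal order.

# Hypothetical indicator values on a 0-100 scale (not real ranking data).
institutions = {
    "University A": {"teaching_quality": 78, "student_services": 65, "graduate_employment": 81},
    "University B": {"teaching_quality": 84, "student_services": 58, "graduate_employment": 74},
    "University C": {"teaching_quality": 70, "student_services": 80, "graduate_employment": 77},
}

# Personal weights reflecting the applicant's own priorities (they sum to 1).
weights = {"teaching_quality": 0.5, "student_services": 0.2, "graduate_employment": 0.3}

def personal_score(indicators: dict) -> float:
    """Weighted average of the chosen indicators."""
    return sum(weights[name] * value for name, value in indicators.items())

# Sort the shortlisted institutions by the personal score, highest first.
private_ranking = sorted(institutions.items(),
                         key=lambda item: personal_score(item[1]), reverse=True)

for rank, (name, indicators) in enumerate(private_ranking, start=1):
    print(f"{rank}. {name}: {personal_score(indicators):.1f}")
```

The point of the exercise is not the arithmetic but the change of perspective: the applicant decides which indicators and which institutions enter the comparison, instead of accepting the weighting chosen by the ranking makers.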
The heads of higher education institutions are the most involved readers of rankings. In their case, the most important rule when using rankings is to be able (and sober enough) to separate the real professional information in rankings from their communicative effect and from the higher education policy reactions to be expected. For benchmarking it is particularly important to identify, in line with the above, the competitor and reference institutions, and to compare the positions they have obtained and their indicator values with one’s own data. Substantive professional comparisons can best be made at the level of research and training; this is the area where enough information is available. When processing rankings, it is advisable to carry out internal checks and to make use of institutional know-how, either through data collection or through interviews for qualitative-comparative case studies. The institutional PR department needs to react in different ways to a negative change in the ranking position of the institution: this may call for a communication move, or such a tendency may require changes in the positioning of the institution. It can, however, be outright damaging if this entails organisational or internal resource-allocation steps that have no real content-related relevance (or a relevance that can realistically be influenced by content). Higher education institutions can make use of rankings in tasks and activities such as strategic planning and quality development, and it can also be useful to draw on global rankings when developing international cooperation. It would also be important for cooperation between the institutions and the organisations preparing rankings to support the further development of rankings.
The leaders of higher education policy also show a keen interest in rankings, mainly because of their effect on general political decision-making. They can make good use of the many results of the many kinds of rankings for their existing strategies and those under development, for their institutional preferences and for their intentions to administer change. However, they cannot use ranking positions to measure real performance; at best, the indicators can be used for this purpose. In addition, ranking positions definitely need to be compared with the following data: institutional profiles and training structures (which also depend on accreditation and on quotas/capacities); budget conditions; geographic and socio-geographic position; research infrastructure.
The information provided by rankings is even harder to interpret outside the world of higher education. The so-called users (recruiters of graduates, employers, enterprises drawing on the universities’ research and development or expert services) build the general lists of media rankings into their human resources strategies or partner searches only as a supplementary source of information. What characterises the way they read rankings is field selection: beyond a general impression, aggregate lists do not offer much. Interpretable indicators of field-related training and research potential are the scientific quality and quantity of training resources, student/teacher ratios as a measure of the intensity of training, and income from research.
The largest user of rankings is the media. When it comes to the topic of higher education, journalists are in a difficult situation for two reasons: with few exceptions, they do not have a suitably subtle and deep understanding of the characteristics of university performance, and their readers are even less informed about this world. Moreover, rankings, and especially global rankings, “were invented for the media”, so the temptation for journalists to refer to them directly and to treat them as a primary source is rather strong. They in fact do this quite often. If they reported on higher education rather than on the rankings, then, like higher education policy-makers, they would have to reflect on the methodology, indicators and so on of the rankings.
The following table shows the type of information that can be expected from reading rankings, in relation to their target groups and types:
[Table: the type of information that can be expected from rankings, by target group and ranking type. Ranking types: all institutions, global; global regional; global field; all institutions, national; national training fields. Target groups: applicants to higher education; those entering partial upgrading training; institution managements; institute managements, research teams, trainers; institutional PR departments; higher education policy-makers; media; employers. Types of information considered for each ranking type: quality of training; quality of research; individual or institutional communication; raising interest and attention.]
From all the above it follows that the appropriate reading of rankings can largely counterbalance the methodological problems introduced earlier. Such reading, however, is limited by what can actually be seen from the data published or sent to the institutions.
In reality, even rankings which expressly boast a presentational ideology (“we show institutional strengths and weaknesses”) make very little of their information base public. In most cases, readers of both the general and the field lists see only positions, or points calculated in various ways from the underlying indicators. This is suitable for comparing an institution’s own position with that of a reference institution, for following changes over time, and for learning the sub-orders created from the various indicators. All global rankings work this way. The solution offered by the counter-example, the UnivPress ranking, shows that this is merely a matter of decision: in most cases the Hungarian ranking also publishes the indicator values, in other words the reader receives substantive information about the institution and its characteristics with respect to specific factors.
On the other hand, the real information to be obtained from global rankings is rather scanty. We do not even get a true picture of the relative positions of institutions: because the rankings either use z-scores (standard scores) or calculate from 100% (these methods have been explained earlier), the real distance between institutions can only partially be inferred from the deceptively even spacing of the published order. Targeted data searches can help recover part of the primary data (publication databases, institutional websites, etc.); however, this is practically nothing other than re-creating the ranking, in other words a mark of the lack of information provision. Revealing institutional characteristics (type, size, profile) is less demanding work, but these only enable us to group the published orders. If the ranking reveals the calculation method applied, its effect on the ranking can be modelled to a certain extent, yet such models may rest on hypothetical data.
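A minimal, invented numerical sketch may illustrate why the published scores obscure real distances; the figures below are made up for the example and are not taken from any actual ranking. Three institutions with raw indicator values of 12.0, 11.8 and 6.0 occupy the evenly spaced positions 1, 2 and 3 under both normalisations, although the gap behind the first position is negligible and the gap behind the second is enormous.

```python
from statistics import mean, pstdev

# Invented raw values of a single indicator for three hypothetical institutions.
raw = {"Institution X": 12.0, "Institution Y": 11.8, "Institution Z": 6.0}

# Method 1: scale to the best performer, so the top institution scores 100.
best = max(raw.values())
scaled = {name: 100 * value / best for name, value in raw.items()}

# Method 2: z-scores (standard scores) relative to the mean of this small set.
mu, sigma = mean(raw.values()), pstdev(raw.values())
z_scores = {name: (value - mu) / sigma for name, value in raw.items()}

for name in raw:
    print(f"{name}: raw={raw[name]:5.1f}  scaled={scaled[name]:6.1f}  z={z_scores[name]:+.2f}")

# Both normalisations leave the rank order unchanged (X, Y, Z), yet the evenly
# spaced positions 1-2-3 conceal very different underlying distances:
# X and Y are almost identical, while Z lies far behind both.
```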
A special form of reading rankings within institutions is when a better placement is the aim, that is, when they look for ways of achieving better indicator values. In longer-term institutional strategies, one such tool can be to single out the formal elements of quality improvement (teacher recruitment, student policy, increasing publication activity at the global level, extending research areas, and concentrating resources). In the short term, administrative steps (for example, the re-classification of hospital staff, or excluding practical trainers from the university staff) may bring spectacular results; however, the scope of such measures is limited.
Reading rankings therefore seems more difficult than making them – yet universities are forced to learn this kind of reading, because university rankings are currently unavoidable, high-impact phenomena shaping the perception of higher education.
(Parts of the volume by György Fábri: Measured or Communicated? University Rankings as the Media Representations of Higher Education, 2018.)
Dr. habil. György Fábri (1964) is a habilitated associate professor (Institute of Research on Adult Education and Knowledge Management, Faculty of Education and Psychology, Eötvös Loránd University) and head of the Social Communication Research Group. Areas of research: university philosophy, sociology of higher education and science, science communication, social communication, church sociology. He has published monographs on the transformation of Hungarian higher education during the change of regime (Vienna, 1992) and on university rankings (Budapest, 2017). He has edited several scientific journals, and his university courses and publications cover communication theory, university philosophy, science communication, social representation, media and social philosophy, ethics, and church sociology.
Dr. Mircea Dumitru is a Professor of Philosophy at the University of Bucharest (since 2004). Rector of the University of Bucharest (since 2011). President of the European Society of Analytic Philosophy (2011 – 2014). Corresponding Fellow of the Romanian Academy (since 2014). Minister of Education and Scientific Research (July 2016 – January 2017). Visiting Professor at Beijing Normal University (2017 – 2022). President of the International Institute of Philosophy (2017 – 2020). President of the Balkan Universities Association (2019 – 2020). He holds a PhD in Philosophy from Tulane University, New Orleans, USA (1998), with a topic in modal logic and philosophy of mathematics, and another PhD in Philosophy from the University of Bucharest (1998), with a topic in philosophy of language. Invited Professor at Tulsa University (USA), CUNY (USA), NYU (USA), Lyon 3, ENS Lyon, University of Helsinki, CUPL (Beijing, China), and Peking University (Beijing, China). Main areas of research: philosophical logic, metaphysics, and philosophy of language. Main publications: Modality and Incompleteness (UMI, Ann Arbor, 1998); Modalitate si incompletitudine (Paideia Publishing House, 2001, in Romanian; the book received the Mircea Florian Prize of the Romanian Academy); Logic and Philosophical Explorations (Humanitas, Bucharest, 2004, in Romanian); Words, Theories, and Things. Quine in Focus (ed.) (Pelican, 2009); Truth (ed.) (Bucharest University Publishing House, 2013); article on the philosophy of Kit Fine in The Cambridge Dictionary of Philosophy, Third Edition, Robert Audi (ed.) (Cambridge University Press, 2015); Metaphysics, Meaning, and Modality. Themes from Kit Fine (ed.) (Oxford University Press, forthcoming).
Mr. Degli Esposti is Full Professor at the Department of Computer Science and Engineering, Deputy Rector of Alma Mater Studiorum Università di Bologna, Dean of the Biblioteca Universitaria di Bologna, Head of the Service for the health and safety of people in the workplace, President of the Alma Mater Foundation, and Delegate for Rankings.

Ben joined QS in 2002 and has led the institutional performance insights function of QS since its emergence following the early success of the QS World University Rankings®. His team is today responsible for the operational management of all major QS research projects, including the QS World University Rankings® and its variants by region and subject. Comprising over 60 people in five international locations, the team also operates a widely adopted university rating system, QS Stars, and a range of commissioned business intelligence and strategic advisory services. Ben has travelled to over 50 countries and spoken on his research in almost 40. He has personally visited over 50 of the world’s top 100 universities, among countless others, and is a regular and sought-after speaker on the conference circuit. Ben is married and has two sons; if he had any free time, it would be spent reading, watching movies and skiing.
Anna Urbanovics is a PhD student at the Doctoral School of Public Administration Sciences of the University of Public Service and is studying for a Master of Arts in Sociology at the Corvinus University of Budapest. She holds a Master of Arts in International Security Studies from the University of Public Service. She does research in scientometrics and international relations.


Since 1 February 2019 Minister Palkovics as Government Commissioner has been responsible for the coordination of the tasks prescribed in Act XXIV of 2016 on the promulgation of the Agreement between the Government of Hungary and the Government of the People’s Republic of China on the development, implementation and financing of the Hungarian section of the Budapest-Belgrade Railway Reconstruction Project.


He is the past President of the Health and Health Care Economics Section of the Hungarian Economics Association.

Based in Berlin, Zuzanna Gorenstein has been Head of Project of the German Rectors’ Conference (HRK) service project “International University Rankings” since 2019. Her work at HRK encompasses the conceptual development and implementation of targeted advisory, networking, and communication measures for German universities’ ranking officers. Before joining the HRK, Zuzanna Gorenstein herself served as ranking officer of Freie Universität Berlin.
His books on mathematical modeling of chemical, biological, and other complex systems have been published by Princeton University Press, MIT Press, and Springer. His new book RANKING: The Unwritten Rules of the Social Game We All Play was recently published by Oxford University Press and is already being translated into several languages.
