A brief overview of the International Ranking Workshop at the Faculty of Pedagogy and Psychology of Eötvös Loránd University (ELTE PPK) on 31 March 2017
The event was promoted by the Tempus Public Foundation.
At the event, titled "Universities in Crossroads of National and Global Rankings", university leaders from Romania, Serbia and Hungary, representatives of the International Ranking Expert Group, and Hungarian and foreign ranking experts discussed the relationship between national and global rankings. Around fifty participants debated lectures on the inability of global rankings to measure the performance of universities. They also discussed the fact that, while the positions of the very same universities in national and international rankings are hardly comparable due to differences in indicators, the best universities are nevertheless at the top of any ranking.
According to György Fábri, Associate Professor at ELTE PPK and initiator of the event, one of the most important insights is that the methodological problems of rankings, together with the growing demand for them, decrease the value of global rankings; as a consequence, rankings by discipline and regional comparisons of institutions are becoming widespread.
Participants welcomed the idea of organizing an international ranking conference at ELTE aimed at examining rankings based on the performance of individual disciplines and at discussing the launch of regional European rankings.
Here are some of the main issues of the 'ranking phenomenon'.
Universities are assessed in global and national rankings at the same time, and the use of different frames of reference adds a high degree of uncertainty to the relevance of rankings. The media, decision makers and the universities themselves all use national and global rankings, with quite confusing results.
Our project offers a frame of interpretation for recognizing and analyzing this crossover position of universities.
Rankings do not actually measure the performance of institutions; rather, they are currently the most efficient media communication tools of higher education. Their power and the dynamics of their spread are primarily a result of the media and social communication environment that currently surrounds higher education. Therefore, they tend to stimulate rather than inform their target groups: students interested in entering higher education, decision makers, and the institutions themselves.
Considering all the above, the types of indicators used in the various rankings can be placed in a single frame of comparison only with great care, and conclusions cannot be drawn across the various rankings or regarding the importance of specific indicators.
The greatest weakness is that rankings ignore diversity: they assess institutions without regard to their missions, objectives, and structures, and they feature mainly whole institutions, even though relevant data are much more accessible for specific training programmes, departments and institutes. Publication routines, opportunities and genres differ greatly across disciplines, so rankings that use such indicators present a lopsided picture of higher education institutions.
In addition, the effectiveness of indicators is not examined, the methodologies used are inconsistent, they are ill-suited to implementation in different countries, and they fail to address the issue of compatibility.
Data obtained from surveys are very sensitive to sociological-statistical validity requirements, yet empirical surveys often fail to meet them. In the case of certain indicators (such as the reputation indicator), the outcome is questionable anyway due to the halo effect. In the choice of indicators, data availability often overrides validity.
The logic of composing the various indicators is also widely attacked. Summing indices derived from differing factors seems hardly legitimate. Summated indicators yield doubtful results from the users' point of view, because prospective students have manifold preferences. Rankings formed by weighting and summing indices are sensitive to small deviations, so if elite institutions were not at the top, nobody would take ranking makers seriously. In addition, commercial ranking publications are accused of having an interest in publishing novelties year after year, since otherwise no one would be interested in the new editions.
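To make this sensitivity concrete, here is a minimal sketch in Python; the universities, indicator scores, and weights are invented for illustration and are not taken from any actual ranking:

```python
# Hypothetical illustration of why weighted, summated rankings are
# sensitive to small changes: all institution names, indicator scores,
# and weights below are invented for this sketch.

universities = {
    "University A": {"research": 0.90, "teaching": 0.60, "reputation": 0.80},
    "University B": {"research": 0.70, "teaching": 0.85, "reputation": 0.75},
}

def composite_score(scores, weights):
    """Weighted sum of normalized indicator scores (the usual league-table recipe)."""
    return sum(weights[key] * scores[key] for key in weights)

# Two weighting schemes that differ by only 0.10 between two indicators.
weights_v1 = {"research": 0.50, "teaching": 0.30, "reputation": 0.20}
weights_v2 = {"research": 0.40, "teaching": 0.40, "reputation": 0.20}

for label, weights in (("weights_v1", weights_v1), ("weights_v2", weights_v2)):
    ranking = sorted(universities,
                     key=lambda name: composite_score(universities[name], weights),
                     reverse=True)
    scores = {name: round(composite_score(universities[name], weights), 3)
              for name in ranking}
    print(label, ranking, scores)

# weights_v1 puts University A first (0.79 vs 0.755); weights_v2 flips
# the order (0.76 vs 0.77), although no university's data changed.
```

The point of the sketch is that the reversal is produced entirely by the ranking maker's choice of weights, not by any change in institutional performance.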
Rankings, therefore, are inadequate to provide relevant information on higher education. The aggregated data characterize institutions as a whole, failing to satisfy the interest of students and university management in training programmes or individual organizational units; this is particularly true of global rankings. Through the media, rankings urge universities to improve their positions on the lists, often resulting in an autotelic drive for a better placement which, depending on arbitrary indicators instead of complex developments serving real needs, forces institutions to make distorted strategic decisions.
The inspiring workshop titled "Universities of the East-European Region: Competition or/and Cooperation in the Ranking Process", chaired by Paszkál Kiss, Associate Professor at Károli Gáspár University of the Reformed Church in Hungary, took place with the participation of twenty professionals, including university directors, ranking analysts, educators and policy makers from all over Europe.
The discussion focused on rankings as a tool for universities in Eastern Europe to position themselves and influence stakeholders' decisions, as well as on best practices of co-operation in the region. The main conclusions of the meeting were the following. A more proactive approach in reaching out to prospective students is necessary, as is awareness of students' altered attitudes in making choices about their studies (viewing them more as an investment than as an accomplishment). In small countries reputation is still more important than rankings, whereas in larger ones rankings may play a greater role. The quality of education and research should be raised. The concept of a 'good school' may be interpreted differently, e.g. excellent teachers and/or high-quality research work and/or great opportunities for students in the applied field. Rankings by discipline give younger institutions the opportunity to stand out. The Leiden Ranking gives users the freedom to select among various criteria. Stipendium Hungaricum, EU tenders, co-authorships and both informal and formal research co-operations are best practices in the region; educational fairs, rankings and non-educational factors are effective ways to attract foreign students. Librarians are a great resource for increasing the visibility of research and publications. Politics and the media have a strong impact on rankings. Finally, the strategic question for universities of the East-European region is whether to compete or/and co-operate in the ranking process.
Here are some remarks by the participants:
Ivanka Popović (Vice-Rector of the University of Belgrade) suggested discussing the 'painful' Bologna transitions in view of the improvement of education and its controversial results, including the mass production of degrees, adding that there is a real chance for co-operation and networking.
Péter Szalay (Vice-Rector of ELTE) stressed the need for a proactive approach in the future, reaching out to prospective students well before they finish high school. He said the focus – e.g. on excellent teachers and scientific results, or on practical application – alters the interpretation of the concept of 'goodness', adding that while every university wishes to be considered a good one, there are different ways 'to be a good university' due to the lack of an accurate definition of 'good university', or of 'good' as such.
Kazimierz Bilanow noted that the topics proposed for the session – co-operation such as the Visegrád co-operation, cross-border co-operations, the Age 2020 tender, co-authorships and joint research projects – had not been touched upon, adding that these are happening informally and not necessarily institutionally.
The conclusions of the event were that the following issues need to be reconsidered: how students should learn to use global and national rankings in their university choice; the ways national higher education institutions use the national rankings of 'target countries' for student recruitment; the features of HE internationalization and relationship-building relevant to university and government management; the role of decision-makers and the press (which place emphasis on and lend significance to ranking positions); and, last but not least, the ways rankings can serve universities as benchmarking tools.
Further discussion is needed on the methods by which national and global rankings of universities are compared, on indicators of scientific productivity and of teaching at global and national levels, on the measurability of the third mission from a global perspective, on the differences in the reactions of politicians and the media to national and global rankings, and on the diverse reactions and strategies of universities to their national and global positions.
Several initiatives were launched, including the publication of a workbook of this workshop; the publication of background papers of ranking research on the www.ranking.elte.hu website; the publication of the English version of György Fábri's ranking monograph in June; the organization of a ranking training, also in June, to improve data provision, analysis and best practices in the use of rankings; and the preparation of an international ranking conference at ELTE with emphasis on the specialties of East-European universities and on reflections on the main points, methods and effects of rankings from different disciplines, e.g. scientometrics, sociology, social psychology, statistics, communication, and the philosophy of the university.