What’s behind the rankings? A series of webinars on higher education rankings has been launched

Over the past nearly twenty years, university rankings have become a popular way of communicating the institutional performance of higher education worldwide, and they continue to shape the dialogue on higher education. The webinar series launched in early February by the Eötvös Loránd University PPK Social Communication Research Group aims to contribute to this dialogue.

“What’s behind the rankings?” was the question posed at the professional webinar that opened the series on university rankings. The series aims to review international and domestic rankings and to further develop the Hungarian UnivPress Ranking.

The online webinar, held on 3 February, was attended by nearly 50 people representing a wide range of higher education. Following an introduction by Fruzsina Szabó (HVG / Eduline), three short lectures were given on the UnivPress Ranking and international higher education rankings.

Dora Czirfusz: How do rankings measure? – Changes in indicators and their effects on rankings

The UnivPress Ranking uses four indicators to measure lecturer excellence, each with a weight of 25%:

  • proportion of academically qualified lecturers
  • proportion of lecturers with MTA titles (MTA = Hungarian Academy of Sciences)
  • total number of qualified lecturers
  • number of students per academically qualified lecturer.

The weighting of the indicator “number of students per academically qualified lecturer” changed in 2018: its weight dropped from 50 percent to 25 percent, balancing the importance of the four indicators (see the sketch below).
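As a rough illustration of this weighting scheme, the following sketch computes a composite score under both regimes. The indicator values are invented, and the pre-2018 split of the remaining 50% across the other three indicators is our assumption for illustration, not something stated in the talk.

```python
# Hypothetical sketch of a weighted composite score in the spirit of the
# UnivPress lecturer-excellence measure. Indicator values are invented;
# only the 50% -> 25% weight change described above is taken from the talk.

indicators = {
    "qualified_ratio": 0.72,        # proportion of academically qualified lecturers
    "mta_title_ratio": 0.15,        # proportion of lecturers with MTA titles
    "qualified_count": 0.60,        # total number of qualified lecturers (normalised)
    "students_per_qualified": 0.45, # students per qualified lecturer (normalised, higher = better)
}

weights_current = dict.fromkeys(indicators, 0.25)  # 25% each since 2018
weights_pre_2018 = {                               # assumed split of the other 50%
    "qualified_ratio": 0.5 / 3,
    "mta_title_ratio": 0.5 / 3,
    "qualified_count": 0.5 / 3,
    "students_per_qualified": 0.50,                # dominant weight before 2018
}

def composite(values, weights):
    """Weighted sum of normalised indicator scores."""
    return sum(values[k] * weights[k] for k in values)

print(f"score with current weights:  {composite(indicators, weights_current):.3f}")
print(f"score with pre-2018 weights: {composite(indicators, weights_pre_2018):.3f}")
```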

Examples from several faculties showed that some institutions have kept their positions stable over the past six years despite the change in weighting, while others have fluctuated significantly over the years in ways that are not clearly related to methodological changes but can instead be traced back to changes in the underlying data. An analysis of the relationship between the basic data and the ranking found that, under the current methodology, the number of qualified lecturers is the indicator that can advance an institution's ranking position the most.
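One way to probe this kind of relationship is a simple sensitivity check: improve each basic-data input of one institution in turn and see how its rank responds. The sketch below does this for invented figures and faculty names; it is not the presenters' actual analysis, only an illustration of the idea.

```python
# Sensitivity sketch: bump each indicator of a hypothetical faculty "A" by
# 10% and recompute its rank among peers. All data are invented.
import copy

faculties = {
    "A": {"qualified_ratio": 0.70, "mta_ratio": 0.10, "qualified_count": 120, "students_per_qualified": 18},
    "B": {"qualified_ratio": 0.65, "mta_ratio": 0.12, "qualified_count": 140, "students_per_qualified": 16},
    "C": {"qualified_ratio": 0.75, "mta_ratio": 0.08, "qualified_count": 90,  "students_per_qualified": 20},
}

def score(faculty, pool):
    """Equal-weight (25% each) composite of min-max normalised indicators;
    students per qualified lecturer is inverted, since fewer is better."""
    total = 0.0
    for key, value in faculty.items():
        vals = [p[key] for p in pool.values()]
        lo, hi = min(vals), max(vals)
        norm = (value - lo) / (hi - lo) if hi > lo else 0.5
        if key == "students_per_qualified":
            norm = 1.0 - norm
        total += 0.25 * norm
    return total

def rank_of(name, pool):
    ordered = sorted(pool, key=lambda n: score(pool[n], pool), reverse=True)
    return ordered.index(name) + 1

base_rank = rank_of("A", faculties)
for key in faculties["A"]:
    perturbed = copy.deepcopy(faculties)
    factor = 0.9 if key == "students_per_qualified" else 1.1  # fewer is better there
    perturbed["A"][key] *= factor
    print(f"{key:24s} rank {base_rank} -> {rank_of('A', perturbed)}")
```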

György Fábri: University performance in light of rankings

The presentation began with an analysis of the interpretive constraints of higher education ranking indicators. In Fábri's view, rankings cannot capture several aspects of higher education, such as certain elements of professional quality, student socialization, or differences arising from scientific diversity. The presentation also raised the question of how to broaden the focus through which rankings interpret higher education: labour market feedback, prestige, and student satisfaction often appear as potential new dimensions here, but their application can face serious methodological obstacles. The second half of the presentation analysed changes taking place in the higher education environment at the system level (e.g. changes in the lecturer and student populations) and their impact on rankings. Among the changes and challenges facing Hungarian higher education, the presentation mentioned the model change, the resulting narrowing of the competitive field, the continuing centrality of Budapest, and the growing need for international visibility; these changes also call for a rethinking of rankings.

Sándor Soós: Trends in university performance evaluation

His presentation started from the premise that scientific performance carries less weight in national rankings than in international ones. It identified four key issues in measuring scientific performance: indicators, comparability, data sources (e.g. WoS, Scopus, and the Hungarian MTMT), and the validity of indicators. It also covered various aspects of measuring publication performance (such as productivity, impact, quality, and achievement) and the limits of comparability, which stem primarily from the differing characteristics of institutions. He argued that comparability can be improved by choosing indicators that account for differences between disciplines, by comparing at the level of individual disciplines, and by comparing institutions in groups based on their profiles. Regarding data sources, he pointed out differences between the publication databases and their information content, which may yield different answers to the same question. On the validity of indicators, he noted that although there is no significant difference in actual publication impact between individual institutions, small differences are magnified by the ranking process. He concluded with the thought that even with a careful choice of indicators, the interpretation of rankings should be treated with caution.
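The magnification point can be shown with a toy example: even when institutions' underlying impact scores are nearly indistinguishable, ranking converts them into a clean-looking 1st-to-nth order. The values below are invented.

```python
# Toy illustration of rank magnification: a ~3% spread in an (invented)
# publication-impact indicator still produces four distinct rank positions.

impact = {"U1": 1.02, "U2": 1.01, "U3": 1.00, "U4": 0.99}

for rank, name in enumerate(sorted(impact, key=impact.get, reverse=True), start=1):
    print(f"rank {rank}: {name} (impact {impact[name]:.2f})")

# Readers see "1st vs 4th" and may infer a quality gap that the underlying
# indicator differences do not support.
```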

Invited commenters from higher education institutions also shared their thoughts, stressing that in developing institutional strategy, higher education rankings should be seen primarily as a marketing tool rather than as a performance indicator. At the same time, the comments also showed differences in how, and by which criteria, each institution interprets the rankings, and in the extent to which they consider appearing in international and domestic rankings a strategic issue.

The next webinar, on 25 February 2021, will deal with the topic “The Real Competition Area: Hungarian Universities in the International Ranking Field”.
