Prof. Fábri: “Rankings are a media phenomenon of postmodern mass democracy.”
Mr. President, Dear Colleagues!
I welcome you in the name of Hungary’s oldest and, based on ranking positions as well, most competitive research university.
It is a great honor for us to be a member of the International Ranking Expert Group – Observatory, one of the most important global organizations in higher education. This membership offers us a chance to learn about new viewpoints, methods, and impacts of university rankings. I hope that you, in turn, can make use of the results of our own research and experience with ranking questions.
My institution not only observes and evaluates its ranking positions but also analyzes these results through the work of a special research group established for this purpose. The best-known Hungarian university rankings are also the achievement of this team’s work.
Our research in the fields of social psychology, sociology, philosophy of science, and social communication provides the theoretical background against which we interpret rankings and their effects on universities.
This approach goes back to the first institutional evaluation rankings published at the turn of the 20th century by the psychologist James Cattell, editor of Science magazine. But an international “race” of universities in the form of a ranking isn’t a new phenomenon: it has been a way of comparing higher education and scientific institutions for as long as anyone can remember. The first such ranking was developed in 1863 by Carl Koristka, professor of the Prague Polytechnic Institution.
We obviously have neither the time nor the wish to debate every methodological problem related to creating global higher education rankings, since these have been dissected quite thoroughly at conferences held over the past five years (mainly those organized by IREG in Shanghai, London, and Vienna, to name a few) as well as in publications on the topic. Bias stemming from scientific perspective or from the language used, and the discretionary selection and weighting of indicators, seem to be the obvious factors in ranking-related critiques in the literature.
In our interpretation and context, rankings reveal society’s media-centered nature and its effect on the university world. They are a media phenomenon of postmodern mass democracy. Because the traditional authority of the university and of scientific knowledge has declined, and everything seems countable with numbers, the media and public opinion appear competent to “vote” on higher education.
● They are postmodern because they treat scientific and instructional performance, as well as social and economic influence, as tangible things, providing a self-explanatory setting in which we may interpret them.
● They embody mass democracy, since everything is interpreted through plurality and cardinality.
● They are a media phenomenon because their power stems from journalists translating the envied and hard-to-understand language and intellectual productivity of the higher education environment into an easily digestible, “instantly” available form.
But there is a special problem for the Eastern European region.
Nature magazine recently published an instructive compilation on the fall of the Berlin Wall on the occasion of its quarter-century anniversary. It is clear that the division of Europe in the field of science hasn’t decreased at all. We observe a stunningly one-sided distribution of ERC Grants, an initiative of great significance in the European Union: while scientists from the former socialist bloc have won 74, Western scientists have received 3,014 altogether (the outstanding share of Hungarian recipients shouldn’t let us forget the inequality between the regions). The situation is no better with regard to research projects receiving funding. This recalls the validity of the Matthew effect, the world-renowned thesis of Robert K. Merton: those who already have a name and standing in the scientific community will encounter more opportunities than those who have nothing, and the latter will therefore fall gradually further behind.
Of course, the main question lies in the globalization of scientific research and our place in this global competition, rather than in the rankings themselves. To deal with the frustration stemming from this, we shouldn’t turn to rankings but to national science policy. We need increased financial support to strengthen competitiveness and performance, autonomous scientific organizations that fully uphold academic values, and appropriate ethical standards within the scientific community.
Universities and ranking developers should work out together how to provide a more realistic picture of the situation of regional universities and their competitiveness. More generally, three new aspects should be introduced into the ranking game:
● Recognizing the performance of universities in fields that aren’t visible in rankings, such as preparing educators or measuring institutions’ cultural and economic impact. These cannot be measured with the usual “ranking methodologies”: we need innovation on this front. U-Multirank has been making an effort to initiate change, but it is not yet clear whether it will also be used as a ranking tool. Let me bring up ELTE’s ranking position as an example. You can see on the graph the indicators these university rankings use. Since ELTE is mainly responsible for educating Hungarian teachers and instructors, social workers, and lawyers trained to Hungarian national standards and regulations, these achievements barely appear in international publication lists and can hardly be used to attract instructors and students from an international pool. Only 30 percent of ELTE’s scientific and scholarly work counts as relevant under these indicators; in other words, only 30 percent of ELTE’s work is actually visible in global rankings. It would be a huge mistake, and an unforgivably treacherous act, to underrate fields of expertise that lack an official ranking chart, including the professors and students belonging to those fields.
● We are still teaching at universities. The opinions of the users who fund us, especially students and their families, should be a primary factor. Universities’ standards and quality are established in lecture halls and seminars, not by statistical numbers or indicators.
● The only thing that matters in the Olympic Games is who crosses the finish line first: this ranks speed alone, without regard to financial background or training conditions. But if a professor is able to achieve great results even with modest support, he or she probably has an adaptability and resourcefulness that can prove quite valuable in education and in realizing research projects. Therefore, from the point of view of the rankings’ target groups, we should bring funding and finances into consideration. The illustration shown above presents a calculation on this matter.
Rankings will continuously evolve, and their relevance is not sufficiently guaranteed by the Berlin Principles. Because of this, universities, especially institutions in weaker positions, will only be able to move up in rankings if they can establish media communication channels both within and beyond the rankings.
But, as it was put at an earlier IREG conference: despite sharp criticism, rankings are here to stay. They will continuously evolve, and our common mission is to improve them, both for a realistic social perception of universities and for the meritocratic autonomy of academic values. So universities and ranking developers should work out together how to provide a more realistic picture of the situation and excellence of universities.
Thank you for your attention, and I hope for fruitful cooperation within the framework of IREG as well!