University Rankings as a Special and Highly Effective Form of Communicating Scientific Excellence*

Traditionally, scientific excellence was represented for lay audiences and the media by excellent scientists or scientific products. The Zeitgeist of accountability and affordability has amplified the role of various rankings in this. Among the lists of citations, impact factors, publication indexes, etc., university rankings raise the highest interest both in the media and among science policy makers. The actors of science need to be able to recognize the nature of these rankings beyond their methodological dynamics and problems, because rankings are not mainly about higher education performance. These rankings reveal society’s media-centred nature and its effect on the academic world. In this context there is a direct lesson for science and university communication: rankings are the media phenomenon of a postmodern mass democracy. They are postmodern, because they treat scientific and instructional performance, as well as social and economic influence, as tangible things, providing a self-explanatory setting in which to interpret themselves. They belong to mass democracy, since everything is interpreted through plurality and cardinality. And they are a media phenomenon, because their power stems from newsmakers translating the envied and hard-to-understand language and intellectual productivity of the higher education environment into an easily digestible, “instantly” available form.

*Draft, based on the monograph Gyorgy Fabri: Measured or Communicated? University Rankings as the Media Representations of Higher Education (publication in progress).

1. University Rankings and Higher Education

The great ranking boom following the millennium not only strengthened the professional debates, but extended them from a rather localised circle (limited to North America and England) to the international level, and made the topic interesting to social science research. The discourse is thus happening on three intertwined levels: the experts of the ranking industry, social researchers, and the stakeholder (institutional or macro-level) decision makers of higher education are all processing the ranking phenomenon, with both theoretical and practical conclusions. We have seen the manifestation of this analytical and, at the same time, ranking-developer reception in the large project of U-Multirank[1], which provides answers to the basic methodological and higher education policy problems of national, and especially global, university rankings, following quite exhaustive professional preparatory work[2]. Its solution is unique: it simply removes the source of these problems, as U-Multirank, despite its name, is in reality no longer a ranking. This is not only a radical step, but a clear message that – after seriously accounting for the ranking dilemmas – no professionally valid media ranking can be made.

Naturally, I do not believe the question of whether the existence of rankings is useful or harmful to higher education to be irrelevant; rather, I believe it has already been answered. The critical analyses of rankings show that sometimes even national rankings (which often demonstrate professionalism and compatibility with higher education) discredit the relevance of developing rankings, and this is even truer of global rankings. They promise quality evaluation, but instead they narrow down, and thereby manipulate, the notion of quality. They suggest objectivity by using numbers, yet their weightings and choices of indicators are arbitrary. They present the orders they establish as commensuration, while in fact they erect differences and distances between institutions. They advertise impartiality, yet through their indicators and data collection they are laden with distortions related to fields, language, culture and institution type. They invoke the need for transparency and accountability, but instead they blur the operational characteristics of institutions with obscure data collection techniques and shallowness. According to their self-definition they hold up a mirror to higher education, while they say practically nothing to society about education or about services to students and society.

The critiques summarised above strongly support these conclusions, in the light of which the interesting question is not so much what the rankings say about higher education, but what the phenomenon of the widespread and obvious influence of rankings says about higher education.

 

2. Democratisation and medialisation of universities

The past fifty years have witnessed an opening of higher education, a radical increase in student numbers – a phenomenon that has been registered and studied in numerous ways.[3] One of the aspects of this phenomenon is directly related to the issue of rankings, namely the new information market and communication situation established in the wake of higher educational democratisation (or, as it is sometimes negatively referred to, its “massification”).

This entails that studying further is increasingly becoming a natural part of the life strategy of upcoming generations, and the traditional recruitment base of white collar jobs and their training facilities has widened considerably. The well-known phenomenon of social mobility has also resulted in more subtle motivations behind applications to higher education, as many applicants hold values and perspectives considerably different from those with more obvious white collar career objectives. More and more people face further education decisions who, due to their family, social and academic background, are informationally “far” from higher education. At the same time, for them the choice of higher education weighs just as much, and is just as important a decision with personal and financial consequences, as for those who are better informed. Their information resources are probably narrower, and the routines they follow in obtaining information are also different – which made them a logical target group for the enterprising actors of the information market, who came up with new information products.

In other words, the democratisation of access entailed the democratisation of information, which was accompanied by a considerable change on the supply side as well. Accelerating and radicalised technological changes, their ever faster adaptation to the economy and everyday life, and the explosion-like changes in social innovations and services required new skills and knowledge, which appeared in the extended training portfolio of higher education and in the diversification of the training structure. The training directions, contents and forms thus launched were hitherto unknown even to the upper-middle class, the traditional public of the universities, and this increased their information needs too. All this happened gradually alongside the internationalisation of higher education, a tendency whose general data are also known (although, as I will expand on in more detail below, these are to be handled with much more care and need a subtler interpretation than usual), and which takes place in an even larger information vacuum.

The need for information, the information market, and the lack of information – this is the new context in which the world of universities met the users and financers of higher education. By this time, the higher education environment was jointly made up of the distributors of state resources, the students investing in further education (including, increasingly, those investing in retraining and further training), and the business users with specific training and development needs. It is their attention, and with it their resources, that universities and colleges need to fight for in an everyday world of information overload. These actors formulate ever newer performance expectations, together with the need for newer forms of demonstrating performance.

From this perspective, the worldwide practice of the comparative performance evaluation of universities, i.e. rankings, is nothing else but the response of the universities to a new publicity challenge.

As to its immediate effect, this response is surely a positive one, since rankings open up universities to the public and to media-fit self-definition.[4] The publication of materials with media value attracts public attention to the operation of universities and colleges both within and outside higher education, and may contribute to informing a wide circle about the basic notions of higher educational operation and its characteristics. A new opportunity arises with regard to interest in and information about higher education, one which promotes and ensures the continuity and reliability of both information provision and the presentation of higher educational characteristics. The institutions’ own information channels are continuously being extended, and reference to ranking results plays an important role in them. Increasingly informative websites, PR actions and image development are becoming part of a true information medium for young people.

However, due to the newest developments in social perception, publicity-orientedness is becoming far more decisive in the life of universities. The insufficiencies of the traditional information channels between higher education and the public mentioned above emerged parallel to the social phenomenon the literature calls medialization,[5] and as such, it permeates the entirety of institutions and their functions[6], as well as individual lives. This means more than just obtaining information from the media for our private and public decisions.[7] “Being in the media” has become a form of existence; in other words, the boundaries between “the thing itself” and the communication about it are dissolving.

As we have seen earlier, universities do not merely appear on ranking lists as a brief piece of news while carrying on their business as usual. This appearance begins to permeate their operation, their financing and regulatory environment, and, what is more, their very self-image. And this happens, in line with medialization, in a way that the media experience overrides direct experience. Of course, the decision-making politician meets universities in proposals or in person during visits, etc., yet as a politician he is influenced primarily by the knowledge he has obtained about higher education from the media – since this knowledge is relevant to his electorate too, who get their information from the same source. For those truly choosing among institutions to study at in higher education (since, as shown above, real choices are made by a relatively small group), ranking positions supplement personal acquaintances and information, while allowing them to get a feel for the institution’s identifying strength. At the same time, in their institutional strategies the universities themselves take on the evaluation form used by the media (a form easily grasped by decision-makers and by the assumed applicants).

Individual, corporate or public investments are defined by the image the resource-holders have formed of the higher education system and of the various institutions within it. The same applies to considerably more tangible products with a more obvious use than higher educational services: we also choose consumer goods, real estate and cultural products with the help of “information”, that is, preferences, tastes and subjective expectations – it is natural, therefore, that we form our opinion about the quality and usefulness of the knowledge capital to be obtained via communication mediators.

In the case of the academic world, being in the media raises serious difficulties because of the limited media-suitability of scientific and knowledge transfer activities: the system of meritocracy differs considerably from public understanding and from the usual evaluation mechanisms and time frames. Yet the rankings of the end of the millennium are themselves entirely media products, not only because some of them have per definitionem been prepared by media products (newspapers, magazines), but also because they fulfil the most important semantic function[8] of medialization when they transform the otherwise rather complex academic achievement and university quality into a sign easily interpretable for the public. They try to grasp the characteristics of various institutions in the form of simple indices. Their objective, accordingly, is to show their evaluations in the most understandable and accessible way possible to the lay public, in other words in a way manageable for the media. They have left the academic decision-making circle and have become the most influential mass-media projects of the discourse on higher education. Thus the professional, methodological and public life dimensions have become mixed up, and the force of academic values and responsibilities has considerably weakened.

The real question of rankings is actually this: to what extent do they, as forms of communication, transform university operation itself in the interpenetrating, medializing process of mediation, and to what extent is this effect stimulating, promoting research and training, or, as the case may be, distorting their true mission? In this respect the intentions of ranking makers quickly become detached from the actual effects of rankings. When such a publication summarising various opinions appears, the public (the press, the readers, the decision-makers and the institutions themselves), instead of seeing how an actual group views the institutions, sees what the institutions are like (how good they are, which is better compared to the others, etc.) in reality. Paradoxically, a ranking thus fails to fulfil its informative function properly while democratising the social perception of higher education; in other words, it overwrites or simply neglects meritocratic values.

 

3. World of Indicators

Rankings make statements about higher education in two ways: on the one hand, by what they measure; on the other, by where they position a given institution through that measurement. Their content is therefore outlined by the indicators they use, and the choice and definition of these indicators is a choice of values that at the same time anchors an image of higher education. The way they use these indicators is predominantly a matter of methodological choice, which says less about higher education and merely distorts its reality to varying degrees.

 

a; Indicators of Global Rankings

This distortion is particularly striking in the case of the indicators of global rankings. If we were to accept the credibility of the picture that the rankings paint, we would consider higher education to be a system of institutions dominated by research in the (natural) sciences, in which education or the humanities are just complementary activities.

[Figure: the indicators of global rankings and their weights. Source: own calculations]

The polarised character of the indicators is even more apparent if we sum them up (in an undifferentiated way, but one that indicates the tendencies well).

The aggregate indicators of the global rankings show that publication activity and international recognition weigh the most.

[Figure: the aggregate indicators of global rankings. Source: own calculations]
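The aggregation step behind such figures can be made concrete with a minimal sketch. All institution names and scores below are hypothetical, and the weights are merely illustrative of the publication- and reputation-heavy profile described here; they are not the weights of any particular ranking.

```python
# Minimal sketch: how a composite ranking score collapses multi-dimensional
# performance into a single number. All names, scores and weights are
# hypothetical; the weight profile merely illustrates the dominance of
# publication-related indicators discussed above.

WEIGHTS = {
    "research_publications": 0.40,
    "international_reputation": 0.30,
    "teaching": 0.20,
    "third_mission": 0.10,  # social, economic and community engagement
}

# Hypothetical normalized indicator scores (0-100) for two institutions.
institutions = {
    "University A": {"research_publications": 85, "international_reputation": 70,
                     "teaching": 55, "third_mission": 40},
    "University B": {"research_publications": 55, "international_reputation": 50,
                     "teaching": 90, "third_mission": 95},
}

def composite(scores):
    """Weighted sum of normalized indicator scores."""
    return sum(WEIGHTS[key] * value for key, value in scores.items())

# B outperforms A on teaching and the third mission, yet A ranks higher
# (70.0 vs 64.5), because the weighting privileges publications and reputation.
for name in sorted(institutions, key=lambda n: composite(institutions[n]),
                   reverse=True):
    print(f"{name}: {composite(institutions[name]):.1f}")
```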

For the leading national universities this means that global rankings remain blind to about two-thirds of their activities. Whether we look at student and staff composition, scientific and training profiles, or the economic, social and community mission, we find that Hungarian institutions cannot match these criteria unless they give up the social expectations raised towards them: meeting, for example, the need for teacher training, the maintenance and mediation of cultural values, regional requirements, and contact with local economic players. Cultivating national themes is also a priority task, which likewise draws resources away from participation in the international natural sciences competition.

So the majority of non-Anglo-Saxon humanities and social science output (except that from traditional cultural leader countries such as Germany, France, Spain and Italy) has had, and will have, no substantial effect on the publication indicators of global rankings. The choice of subject matter and the linguistic and professional-geographical distance make an international publication presence difficult a priori in these areas.

Moreover, the definition of “educational activity” is also quite rough, as it basically means the number (ratio) of instructors/students and the number/ratio of students participating in PhD programmes.

This rough and in many ways irrelevant image of higher education that global rankings draw cannot handle specific higher education environments. Yet the usefulness of certain criteria is fundamentally defined by the general financial structure of higher education, the extent of social mobility, the levels and forms of higher education and their permeability, the type and special needs of the population participating in higher education, and the relation of research and development centres to higher education. Certain indices can be extremely misleading (for example, the amount of donations to the alma mater per graduate, the amount of central research money obtained, or, perhaps, the number of postgraduate scholarships), as they are not relevant to every higher education institutional system. On the other hand, even the most generally useful criteria – such as expenditure per student, the level of infrastructure, student/teacher ratios, support given for finding a job, etc. – have completely different meanings in the various disciplines and broader professional fields.

 

b; Indicators of National Rankings

Surveying some seventy national rankings, we have observed significant differences from the logic of global rankings, while some major tendencies are prevalent in most places due to the general characteristics of the workings of higher education.

Based on the analysis of national rankings, their most important distinctive indicators are:

  • percentage of (undergraduate, graduate, PhD) students
  • infrastructure, facilities
  • social inclusion
  • community links
  • retention
  • student satisfaction
  • national student surveys
  • application, admission
  • student entry scores

 

c; Typology of Indicators

Considering all the above, the types of indicators used in the many rankings can be placed in a single coordinate system only with great care – and no conclusions can be drawn about the individual rankings or about the importance of specific indicators. Instead, such a typology can demonstrate the image the various ranking makers have of higher education, if we use it to classify the various indicators. The joint value of the three dimensions (type of indicator, classification and weight) can only be shown three-dimensionally, so here we only show the schema (a minimal data-structure sketch follows the list below), while the results of the data analysis can be found at ranking.elte.hu:

Type of indicator: one of the thirteen types listed below.

Classification of indicators:

  • source (independent / institution / survey)
  • data-based or opinion-based
  • level of measurement (institution / faculty / department)
  • exactitude of data (quantifiable, measurable)
  • weight (%)
  • validity
  • situation of recruitment (number of applicants, their performance, rate of entry, social and ethnic aspects)
  • student performance (scientific work, academic competitions, number of students, student ratios between undergraduate levels, distribution of students by professional field)
  • teacher supply (student/teacher ratio, number of teachers, full-time/part-time teachers, qualification of teachers)
  • teaching conditions (floor space, library, IT, budget)
  • learning environment (availability of dormitories, fees, scholarships, sports and cultural facilities, education administration, student organisations)
  • educational output (ratio of graduates, ratios of graduate levels, time required for graduation)
  • usefulness of degrees (job availability, salaries, staying on the career path)
  • research (publications, citations, awards, research programmes)
  • capacity to raise funds (competition results, economic partnerships, external commissions, ratio of students paying tuition fees)
  • reputation (student and teacher acknowledgement, recognition, opinion of labour market and social players)
  • international character (ratio of international students and teachers, inter-institutional relations, number of joint research projects, publications, grants, financing, organisational memberships, conferences and events)
  • social and economic presence (career image of graduates, ties with alumni, financing by alumni, economic and social relations)
  • web presence (popularity, number of visitors, links, number of web contents)
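As noted above, the three dimensions of the typology can only be shown together in three dimensions; a minimal data-structure sketch of the schema may still help. The field names and the example values below are hypothetical illustrations of the classification dimensions, not data from the analysis at ranking.elte.hu.

```python
from dataclasses import dataclass

# Minimal sketch: one indicator recorded along the classification
# dimensions of the schema above. All example values are hypothetical.

@dataclass
class Indicator:
    name: str           # type of indicator (one of the thirteen types above)
    source: str         # "independent" | "institution" | "survey"
    basis: str          # "data" | "opinion"
    level: str          # "institution" | "faculty" | "department"
    quantifiable: bool  # exactitude: hard, measurable data or not
    weight: float       # weight (%) within the composite score
    validity: str       # brief validity assessment

example = Indicator(
    name="research",
    source="independent",  # e.g. a bibliometric database
    basis="data",
    level="institution",
    quantifiable=True,
    weight=30.0,
    validity="field- and language-dependent",
)
print(example)
```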

Two further observations:

– Output indicators are strikingly underrepresented.[9] The literature especially points out labour market feedback and learning efficiency as missing. Yet it is not justified to view the labour market as an entity operating with criteria any more objective and reliable than higher education or student opinions.

– The measurement of training efficiency is a basic requirement of education policy in every era: policy makers consider it an important element and wish to use it as a comparable indicator.

 

4. Relevance of Indicators and Ranking Positions – A Critical Overview

We present the criticism of rankings grouped by the main ranking elements.

a; Rankings present higher education one-dimensionally, in a simplified way, falsifying the essence of university performance.

  • The ranking indicators are meant to reflect how students view the universities[10]; however, the image obtained through rankings is lopsided. Namely, this image presents higher education as an investment in career development, while for the students, being at university is as much a form of life as an investment in their career.
  • The greatest weakness of rankings is that they ignore diversity, that is, they consider institutions without regard to their missions, objectives, and structures.[11]
  • Rankings feature mainly institutions, while relevant data are much more accessible regarding specific training programmes, departments and institutes.
  • The publication routines, possibilities and genres differ greatly among the various professional fields, so the rankings that use such indicators (mainly the global comparisons) by necessity present a lopsided picture of higher education institutions.[12]

b; The world of indicators is messy, and it distorts the reality of higher education.

  • The lists do not examine the efficiency of their indicators, the various methodologies used are incongruent, and they answer neither the need to implement them in different countries nor the issues of compatibility.[13]
  • While national rankings try to reflect on this aspect somehow (even if in many cases they, too, mainly describe scientific reputation[14]), this practically falls outside the scope of global rankings.[15]
  • The general use of scientometric indices has in itself a dubious validity.[16] With regard to global university rankings, as will be shown further on, the linguistic and cultural imbalances (the competitive advantage of universities in Anglo-Saxon countries), the prevalence of the (natural) sciences and, within that, the changing publication and funding positions of the various professional fields cast doubt on comparisons that use this as a key indicator.[17]
  • The opinion of scientists, who are in most cases both geographically and professionally distant, is more likely to be based on past performance, and as such has little value for present evaluation.
  • Relying on so-called “third party” databases, that is, collecting data from sources independent of institutions and participants, is sometimes an impossible task.[18] The data obtained from surveys are very sensitive to their sociological-statistical groundedness; however, empirical surveys often fail to meet such expectations. The use of the reputation indicator raises doubts anyway, because of the halo effect.[19]
  • A ranking factor is often selected for inclusion among the components based on whether it is available at all for the issue in question and, if so, how easily objective data can be obtained (from open resources), preferably from as many institutions as possible. In other words: availability overwrites validity in the use of indicators.

c; The conjuring trick of rankings: weightings and calculations are arbitrary and lead to false results.

  • In their simplest form, rankings are developed along the lines of mathematical algorithms. However, none of the rankings gives any valid explanation of how it weights a certain area in its calculations.
  • The logic of combining the various indicators is also attacked by many. The summation of indices derived from differing factors seems hard to legitimate.[20] Summated indicators give doubtful results from the users’ point of view too, as the preferences of future students are manifold.
  • The rankings themselves are formed by creating weightings and summated indices. Even small deviations can result in great differences on the list[21] (see the sketch after this list).
  • If a list were to upset the “natural order” of prestige in higher education, that is, if the elite institutions were not at the top, nobody would take the ranking makers seriously. In addition, commercial ranking publications are accused of having an interest in producing novelties from year to year, as otherwise no one would be interested in the new editions.
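The sensitivity noted in the list above can be illustrated with a minimal sketch: shifting five percentage points of weight between two indicators reverses the order of two hypothetical institutions whose composite scores are nearly equal.

```python
# Minimal sketch: rank order flips under a small change of weights.
# Institutions and scores are hypothetical.

scores = {
    "University A": {"research": 80, "teaching": 60},
    "University B": {"research": 60, "teaching": 78},
}

def ranking(weights):
    """Return institution names ordered by weighted composite score."""
    composite = lambda s: sum(weights[k] * v for k, v in s.items())
    return sorted(scores, key=lambda name: composite(scores[name]),
                  reverse=True)

print(ranking({"research": 0.50, "teaching": 0.50}))
# -> ['University A', 'University B']  (A: 70.0, B: 69.0)
print(ranking({"research": 0.45, "teaching": 0.55}))
# -> ['University B', 'University A']  (A: 69.0, B: 69.9)
```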

d; Rankings are unfulfilled promises.

  • Rankings are thus inadequate for providing relevant and exact information on higher education to students. The aggregated data characterize institutions as a whole, although students wish to know what each programme or department is like.
  • The managements of higher education institutions do not receive adequate information[22] either, as rankings do not offer a picture of the performance of the various training programmes or individual organisational units – and this is particularly true for the global rankings.[23]
  • Due to the strength they represent in the media, rankings urge universities to improve their positions on the lists, often resulting in an autotelic drive for a better placement. As placements depend on arbitrary indicators, institutions often make distorted strategic decisions instead of pursuing complex developments that serve real needs.[24]

 

5. The Real Competition for the Universities of Eastern Europe

In the “university zone” thus created, from Helsinki to Sofia, the dominance of “western” universities is obvious, but of course not surprising. The University of Helsinki is highly positioned in all rankings, continuously placed in the first hundred, and the improvement of its publication performance ensures an improving position on several rankings. Austrian universities are placed a category below, but are also solidly in the group of the best two hundred, with the University of Vienna especially strong in publications and in its power to attract international students. Of the universities of the former socialist countries, Charles University of Prague is the most internationalised (although this is partly due to the large number of students traditionally arriving from Slovak areas, who are now counted as foreigners), and its scientific performance is also competitive, so it is placed right after the Austrians.

As practically three factors (scientific publication achievement, international prestige, foreign students’ interest) decide the ranking chances of universities among all the indicator packages internationally, the group of Hungarian universities standing a chance of appearing on general institutional lists is not surprising.

As for research excellence, however, the performance of Hungarian higher education is outstanding in true scientific evaluation. For example, looking at the distribution of the EU’s ERC grants within the dramatically undervalued East European scientific world, Hungarian scientists won more grants up to 2015 than those of all the other former socialist countries together.

 

| University | Country | ARWU | THE | THE BRICS & Emerging | QS-World | QS-EECA | U.S. News |
|---|---|---|---|---|---|---|---|
| University of Vienna | Austria | 151-200 | 165 | x | 154 | x | 187 |
| Technical University of Graz | Austria | x | 401-500 | x | 501-550 | x | 618 |
| Vienna University of Technology | Austria | 401-500 | 301-350 | x | 182 | x | 301 |
| Sofia University | Bulgaria | x | 1000+ | 201-250 | 701-750 |  | 634 |
| University of Zagreb | Croatia | 501-600 | 801-1000 | 196 | 601-650 | 37 | 526 |
| Charles University | Czech Republic | 201-300 | 401-500 | 39 | 314 | 5 | 196 |
| Palacky University | Czech Republic | 601-700 | 601-800 | 109 | 701-750 | 56 | 479 |
| University of Helsinki | Finland | 56 | 90 | x | 102 | x | 81 |
| Eotvos Lorand University | Hungary | 501-600 | 601-800 | 123 | 651-700 | 30 | 466 |
| University of Szeged | Hungary | 501-600 | 601-800 | 129 | 501-550 | 27 | 756 |
| Budapest University of Technology and Economics | Hungary | 701-800 | 801-1000 | 154 | 751-800 | 28 | 849 |
| Semmelweis University | Hungary | x | 401-500 | 65 | x | x | 656 |
| University of Debrecen | Hungary | x | 801-1000 | 201-250 | 651-700 | 35 | 559 |
| Corvinus University of Budapest | Hungary | x | 801-1000 | x | 801-1000 | 45 | x |
| CEU/KEE | Hungary | x | x | 16 | x | x |  |
| University of Pécs | Hungary | x | 601-800 | 122 | 751-800 | 63 | 956 |
| University of Trieste | Italy | 401-500 | 351-400 | x | 701-750 | x | 222 |
| Jagiellonian University | Poland | 401-500 | 601-800 | 112 | 461-470 | 14 | 388 |
| University of Warsaw | Poland | 301-400 | 501-600 | 78 | 411-420 | 6 | 359 |
| Babes-Bolyai University | Romania | 601-700 | 601-800 | 112 | 801-1000 | 42 | 583 |
| University of Bucharest | Romania | x | 801-1000 | 251-300 | 701-750 | 39 | 753 |
| University of Beograd | Serbia | 201-300 | x | 196 | 801-1000 | 54 | 397 |
| Comenius University | Slovakia | 701-800 | 601-800 | 151 | 701-750 | 43 | 511 |
| University of Ljubljana | Slovenia | 401-500 | 601-800 | 129 | 651-700 | 32 | 394 |
| University of Miskolc | Hungary | x | x | x | x | 110 | x |
| University of West Hungary | Hungary | x | x | x | x | 201-250 | x |
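Comparing positions across the table above is not trivial, since some rankings publish exact ranks while others publish only bands such as “501-600”. Below is a minimal sketch of one possible normalization – taking band midpoints, with “x” marking absence; the three rows are copied from the ARWU, THE and QS-World columns of the table, and the approach itself is an assumption for illustration, not part of any ranking’s methodology.

```python
# Minimal sketch: normalizing band-style ranking positions (e.g. "501-600")
# to midpoints so that positions across rankings become roughly comparable.
# Rows copied from the ARWU, THE and QS-World columns of the table above.

rows = {
    "University of Helsinki":   ["56", "90", "102"],
    "Charles University":       ["201-300", "401-500", "314"],
    "Eotvos Lorand University": ["501-600", "601-800", "651-700"],
}

def midpoint(position):
    """'501-600' -> 550.5, '314' -> 314.0, 'x' (not listed) -> None."""
    if position == "x":
        return None
    if "-" in position:
        low, high = position.split("-")
        return (int(low) + int(high)) / 2
    return float(position)

for name, positions in rows.items():
    print(name, [midpoint(p) for p in positions])
```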

 

 

6. Interpreting Rankings – from a Theoretical and Social Perspective

Despite academic reservations and serious methodological scruples, rankings have a regulatory effect on the functioning of higher education, which is explained by reactivity[25] on the organisational-sociological level. In this process universities bristle somewhat at the state of “being measured” (a few of them, and only occasionally, do so vehemently); more generally, however, internal and external expectations of a better ranking placement lead them to adjust their operation, even to the point of shaping their institutional identity.[26] This effect goes beyond direct ranking positions and the organisational units involved in the ranking. The state of “being measured” has a permanent ranking-observation effect, similar to Foucault’s description of the state of being observed in prisons, which prevails even if the prisoner is not actually being observed by anyone.[27] An explanation for all this can be found in the social perception of higher education: knowledge institutions of an academic nature enter the world of the media and are portrayed through prestige as the obvious code of interpretation, actualised in the form of a ranking – in other words, with more distortion than tolerable in content, evaluation and interpretation.

Because the process is media-driven, it continues to strain the already increasing “internal” tension related to the measurable cultivation of science and knowledge transmission.[28] Similarly to the way the culture of scientific research was first “turned into numbers” by the languages of mathematical-statistical methods and self-expression[29] in the 18th-19th centuries, numerical enchantment became dominant by the turn of the millennium through the notion of accountability in public services, including education. This again takes shape in the ranking-based evaluation of higher education, in line with the preferences of the media-consuming public and the corresponding expectations of the politicians of mass democracy.

By now, institutional and analytical critiques have profoundly reviewed the methodology of higher education rankings, especially that of global rankings, the image of higher education taking shape through their indicators, and the effect they exert on institutional and higher education policy.[30] The final conclusions of these are usually fatalistic: rankings are here to stay. Yet how they stay with us, how they are and will be part of the world of higher education, is not necessarily pre-determined. The interesting phenomenon Hazelkorn has noted – namely that the ones who pay the least attention to global rankings are the American universities that happen to lead them – can be encouraging for the traditional merits of university research and education.

In other words, the professional value of a higher education confident in its own performance, reconfirmed by the participation of excellent teachers and students – or, expressed in institutional-organisational language, its autonomy – can still be re-formulated and re-built in a world pervaded by media and management culture. This rebuilding may not only be inspired by the social ideal of freedom and autonomy; it also relates closely to the essence of science and of knowledge transfer and acquisition. Homogenizing and simplifying measurement establishes a culture of alignment which punishes non-conformity and therefore holds back scientific and educational creativity – which in turn affects the organisation of society, in which the rationality of science and the dynamics of tradition and innovation are indispensable factors, acting both as a model for and as a virtue of the western world.

This notion of the university dates back to a time before the great changes of the past decades. Since then we have witnessed rankings being legitimized by mass media and mass democracy together; the question for the world of academia is therefore whether it can obtain similar legitimacy for its own authority, which is indispensable for research and educational activities. The general course of the social perception of higher education and science defines this to a much larger extent than the rankings themselves.

[1] Commission Working Document on recent developments in European higher education systems. Brussels, 2011. www.u-multira

[2] U-Multirank Interim Report. 2012.

[3] Guri-Rosenblit, Sarah – Sebková, Helena – Teichler, Ulrich 2007. Massification and Diversity of Higher Education Systems: Interplay of Complex Dimensions. Higher Education Policy, 20. pp. 373–389.

[4]Salmi-Saroyan 2007

[5] Livingstone, Sonia 2009. On the Mediation of Everything: ICA Presidential Address 2008. Journal of Communication, 59(1). pp. 1–18.

[6] Hjarvard, Stig 2008. The Mediatization of Society: A Theory of the Media as Agents of Social and Cultural Change. Nordicom Review, 29(2). pp. 105–134.

[7]Meyrowitz, J. (1985). No Sense of Place: The Impact of Electronic Media on Social Behavior. New York: Oxford University Press.

[8] Schulz, Winfried 2004. Reconstructing Mediatization as an Analytical Concept. European Journal of Communication, 19(1). pp. 87–101.

[9] Boyadzhieva, P. – Denkov, D. – Chavdar, N. 2010. Comparative Analysis of Leading University Ranking Methodologies. Sofia: Ministry of Education, Youth and Science.

[10] Vossensteyn, J. J. 2005. Perceptions of Student Price-responsiveness – A Behavioural Economics Exploration of the Relationships between Socio-economic Status, Perceptions of Financial Incentives and Student Choice. Enschede: CHEPS/UT.

[11]Turner, David 2005. Benchmarking in Universities: League Tables Revisited. Oxford Review of Education, 31(3). pp. 353–371.

[12]Clarke, Marguerite 2002. Some Guidelines for Academic Quality Rankings. Higher Education in Europe, 27(4). pp. 443–459.

[13]Dill, D. – Soo, M. 2005. Academic Quality, League Tables, and Public Policy: A Cross-National Analysis of University Rankings. Higher Education, (49)4: pp. 495–533.

[14]Dill, 2006: 14

[15] Marginson, Simon – Wende, Marijk van der 2007. To Rank or To Be Ranked: The Impact of Global Rankings in Higher Education. Journal of Studies in International Education, 11(3/4). pp. 306–329.

[16] Weingart, P. 2005. Impact of bibliometrics upon the science system: Inadvertent consequences? Scientometrics, 62(1). pp. 117–131.

[17] Cunningham, Stuart 2008. University and Discipline Cluster Ranking Systems and the Humanities, Arts, and Social Sciences. Higher Education in Europe, 33(2). pp. 245–258.

[18] Yorke, M. 1997. A Good League Table Guide? Quality Assurance in Education, 5(2). pp. 61–72.

[19]Stuart, Debra L. 1995. Reputational Rankings: Background and Development. New Directions for Institutional Research In Walleri, Dan R. – Marsha, K (eds.) 1995. Evaluating and Responding to College Guidebooks and Rankings. San Francisco: Jossey-Bass. pp.13–20.

[20] Eccles, C. 2002. The Use of University Rankings in the United Kingdom. Higher Education in Europe, 27(4). pp. 423–432; p. 425.

[21]Müller-Böling, M. – Federkeil, G. 2007. The CHE-Ranking of German, Swiss, and Austrian Universities. In Sadlak, J.- Cai, L. N. (eds.). The World-Class University and Ranking: Aiming Beyond Status. Bucharest, Romania: Cluj University Press. pp. 189–203.

[22]Hazelkorn, Ellen 2015. Rankings and the Reshaping of Higher Education: The Battle for World-Class Excellence. London: Palgrave Macmillan.

[23]Westerheijden, Don F.– Stensaker, Bjørn – Rosa, Maria João 2007. Quality Assurance in Higher Education: Trends in Regulation, Translation and Transformation. Dordrecht: Springer

[24] Naidoo, R. – Jamieson, I. M. 2005. Empowering participants or corroding learning? Towards a research agenda on the impact of student consumerism in higher education. Journal of Education Policy, 20(3). pp. 267–281.

[25] Espeland, Wendy N. – Sauder, Michael 2007. Rankings and Reactivity: How Public Measures Recreate Social Worlds. American Journal of Sociology, 113(1). pp. 1–40.

[26]Wedlin, Linda: Playing the Ranking Game. Field formation and boundary-work in European management education. 2004.

[27] Foucault, Michel: Discipline and Punish: The Birth of the Prison.

[28]Hacking, Ian (1999). The social construction of what? Cambridge: Harvard University Press.

[29]Porter,Theodore M.: Trust in Numbers. Princeton University Press, Princeton, New Jersey, 1995.

[30] Teichler, U. 2011. Social Contexts and Systemic Consequence of University Rankings: A Meta-Analysis of the Ranking Literature. In Shin, J. C. – Toutkoushian, R. K. – Teichler, U. (eds.). University Rankings: Theoretical Basis, Methodology and Impacts on Global Higher Education. Dordrecht: Springer. pp. 55–69.
