
THE World University Rankings

Historical and Institutional Background

In his book The Great Brain Race: How Global Universities are Reshaping the World, Ben Wildavsky credits the creation of the original Times Higher Education-QS World University Rankings to John O’Leary, then editor of Times Higher Education. Times Higher Education chose to partner with the educational and careers advice company QS to supply the data.

After the 2009 rankings, Times Higher Education took the decision to break from QS and signed an agreement with Thomson Reuters to provide the data for its annual World University Rankings from 2010 onwards. The publication developed a new rankings methodology in consultation with its readers, its editorial board and Thomson Reuters. Thomson Reuters collects and analyses the data used to produce the rankings on behalf of Times Higher Education. The first ranking under this methodology was published in September 2010.

Indicators and Methodology

 

Each category of indicators is described below, with its weighting in the overall 100 Under 50 score shown in brackets.

Research: volume, income and reputation (30%)

This category is made up of three indicators. The first is a simple measure of a university’s research volume, scaled for institutional size, to give a sense of its productivity. We count the number of papers published in the academic journals indexed by Thomson Reuters per academic staff member to give a clear picture of each institution’s ability to get papers published in quality peer-reviewed journals.

This indicator is worth 9 per cent overall, up from 6 per cent in the World University Rankings, reflecting the reduced weight given to the reputation measures.
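
As a rough illustration, the arithmetic behind this indicator reduces to a simple per-capita division. The sketch below uses invented figures, not data from any real institution or from the rankings themselves.

```python
# Minimal sketch of the papers-per-staff productivity measure.
# All figures are invented for illustration.
papers_indexed = 4_250   # papers in Thomson Reuters-indexed journals
academic_staff = 1_700   # full-time-equivalent academic staff

papers_per_staff = papers_indexed / academic_staff
print(f"Papers per staff member: {papers_per_staff:.2f}")  # 2.50
```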

This category also looks at university research income, scaled against staff numbers and normalised for purchasing-power parity and for each university’s distinct subject profile. This reflects the fact that research grants in science subjects are often bigger than those awarded for the highest-quality social science, arts and humanities research. This indicator is also weighted at 9 per cent, up from 6 per cent in the World University Rankings.
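
A minimal sketch of the income scaling, assuming an invented purchasing-power-parity conversion factor (real factors would come from a source such as the World Bank), might look like this:

```python
# Hedged sketch of scaling research income by staff numbers and
# adjusting for purchasing-power parity (PPP). The conversion factor
# and figures are invented for illustration.
research_income_local = 120_000_000  # annual research income, local currency
ppp_factor = 1.8                     # local units per international dollar (assumed)
academic_staff = 1_700

income_per_staff = (research_income_local / ppp_factor) / academic_staff
print(f"Research income per staff (PPP$): {income_per_staff:,.0f}")
# Research income per staff (PPP$): 39,216
```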

The final indicator in this category is based on the most recent results of our annual reputation survey. Thomson Reuters carried out its Academic Reputation Survey – a worldwide poll of experienced scholars – in spring 2012 (the 2013 poll has just closed and its data will be used to inform the World University Rankings 2013-14, to be published this autumn).

Citations: research influence (30%)

In this indicator, we examine a university’s research influence by capturing the number of times its published work is cited by scholars around the world.

Worth 30 per cent of the overall score, this single indicator is the largest of the 13 employed to create the table – and its weighting remains identical to that employed in the World University Rankings.

The data are drawn from the 12,000 academic journals indexed by Thomson Reuters’ Web of Science database and include all papers published in those indexed journals in the five years between 2006 and 2010.

Citations made in the six years between 2006 and 2011 are also collected, thus improving the stability of the results and decreasing the impact of exceptionally highly cited papers on institutional scores.

The findings are fully normalised to reflect variations in citation volume between different subject areas. As a result, institutions with high levels of research activity in subjects with traditionally high citation counts do not gain an unfair advantage.

For institutions with relatively few papers, citation impact may be significantly boosted by a small number of highly cited papers, so only those institutions that have published at least 200 papers a year are included.
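
To make the normalisation concrete, here is a hedged sketch in which each paper’s citation count is divided by an assumed world average for its subject and year. All figures, subjects and averages are invented for illustration.

```python
# Hedged sketch of field-normalised citation impact. Each paper's
# citations are divided by an assumed world average for its subject
# and year, then averaged across the institution's papers.
papers = [
    # (subject, year, citations) - invented records
    ("medicine", 2008, 40),
    ("mathematics", 2008, 3),
    ("medicine", 2010, 12),
]
world_average = {          # assumed world-average citations per paper
    ("medicine", 2008): 20.0,
    ("mathematics", 2008): 2.5,
    ("medicine", 2010): 8.0,
}

# The real exercise excludes institutions below 200 papers a year over
# the five-year window; with this toy data the check fires but the
# calculation is shown anyway.
if len(papers) < 200 * 5:
    print("(would be excluded: fewer than 200 papers a year, 2006-10)")

impact = sum(c / world_average[(s, y)] for s, y, c in papers) / len(papers)
print(f"Normalised citation impact: {impact:.2f}")  # 1.57
```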

Teaching: the learning environment (30%)

This category employs five separate performance indicators designed to provide a clear sense of the teaching and learning environment of each institution from both the student and scholarly perspective.

Despite a reduction in weighting from the World University Rankings, the main indicator in this category is still based on the Academic Reputation Survey 2012.

The results of the survey with regard to teaching make up 10 per cent of the 100 Under 50 – down from 15 per cent in the World University Rankings.

Our teaching and learning category also employs a staff-to-student ratio (based on total student numbers) as a simple proxy for teaching quality – suggesting that where the ratio of students to staff is low, students will get the personal attention they require from faculty members.

It is worth 6 per cent of the 100 Under 50 score – up from 4.5 per cent in the World University Rankings to help fill the gap left by reputation’s reduced importance.

The teaching category also examines the ratio of PhD to bachelor’s degrees awarded by each institution. We believe that institutions with a high density of research students are more knowledge-intensive, and that the presence of an active postgraduate community is a marker of a research-led teaching environment valued by undergraduates and postgraduates alike.

The PhD-to-bachelor’s ratio is worth 3 per cent of the 100 Under 50 scores (up from 2.25 per cent).

This category also uses data on the number of PhDs awarded by an institution, scaled against its size as measured by the number of academic staff.

As well as giving a sense of how committed an institution is to nurturing the next generation of academics, a high proportion of postgraduate research students also suggests the provision of teaching at the highest level that is attractive to graduates and good at developing them.

Undergraduates also tend to value working in a rich environment that includes postgraduates.

The indicator makes up 8 per cent of the score (up from 6 per cent in the World University Rankings).
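
Taken together, the three ratio-based teaching indicators described above reduce to simple divisions. The sketch below uses invented figures purely for illustration.

```python
# Illustrative calculation of the three ratio-based teaching
# indicators (all figures invented).
students_total = 25_000
academic_staff = 1_700
bachelors_awarded = 5_200
phds_awarded = 390

staff_per_student = academic_staff / students_total  # proxy for personal attention
phd_to_bachelor = phds_awarded / bachelors_awarded   # density of research students
phds_per_staff = phds_awarded / academic_staff       # doctoral training per academic

print(f"Staff per student:  {staff_per_student:.3f}")  # 0.068
print(f"PhD/bachelor ratio: {phd_to_bachelor:.3f}")    # 0.075
print(f"PhDs per staff:     {phds_per_staff:.3f}")     # 0.229
```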

The final indicator in the teaching category is a simple measure of institutional income scaled against academic staff numbers.

This figure, adjusted for purchasing-power parity so that all nations compete on a level playing field, indicates the general status of an institution and gives a broad sense of the infrastructure and facilities available.

This measure is worth 3 per cent, a marginal increase from the World University Rankings figure (2.25 per cent).

International outlook: people and research (7.5%)

Our international category looks at diversity on campus and how much each university’s academics collaborate with international colleagues on research projects – signs of how global an institution is in its outlook. This category is unchanged from the World University Rankings.

The ability of a university to compete in a global market for undergraduates and postgraduates is key to its success on the world stage; this factor is measured here by the ratio of international to domestic students.

This is worth 2.5 per cent of the 100 Under 50 list’s overall score.

As with competition for students, the top universities also operate in a tough market for the best faculty. So in this category we give a 2.5 per cent weighting to the ratio of international to domestic staff.

We also calculate the proportion of each university’s total research journal publications with at least one international co-author, rewarding higher proportions.

This indicator, which is also worth 2.5 per cent, is normalised to account for a university’s subject mix and uses the same five-year window that is employed in the “Citations: research influence” category.
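
A minimal sketch of the co-authorship calculation, with invented paper records and the subject-mix normalisation omitted for brevity:

```python
# Hedged sketch of the international co-authorship measure: the share
# of papers with at least one co-author based outside the home
# country. All records are invented for illustration.
home_country = "GB"
papers = [
    {"GB", "US"},        # author countries per paper
    {"GB"},
    {"GB", "DE", "FR"},
    {"GB"},
]

international = sum(1 for countries in papers if countries - {home_country})
share = international / len(papers)
print(f"Internationally co-authored: {share:.0%}")  # 50%
```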

Industry income: innovation (2.5%)

A university’s ability to support industry with innovations, inventions and consultancy has become such an important activity that it is often known as its “third mission”, alongside teaching and research.

This category seeks to capture such knowledge transfer by looking at how much research income an institution earns from industry, scaled against the number of its academic staff.

It suggests the extent to which businesses are willing to pay for research and a university’s ability to attract funding in the competitive commercial marketplace – key indicators of quality.

However, because the figures provided by institutions for this indicator are relatively patchy, we have given it a low weighting: just 2.5 per cent.


To calculate the overall ranking score, “Z-scores” were created for all datasets. This standardises the different data types on a common scale, allowing fair comparisons between them, which is essential when combining diverse information into a single ranking.
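
As a hedged sketch of how Z-score standardisation and weighted combination could work, the example below rescales two invented indicators to a common scale and combines them. The weights echo two of the figures quoted in this article, but the indicator names and scores are made up.

```python
# Minimal sketch of Z-score standardisation followed by a weighted
# combination. Indicator names, weights and raw scores are
# illustrative, not THE's actual data or code.
import statistics

raw = {                                    # raw scores, three universities
    "citations":        [85.0, 60.0, 40.0],
    "papers_per_staff": [2.5, 1.8, 3.1],
}
weights = {"citations": 0.30, "papers_per_staff": 0.09}

def z_scores(values):
    """Rescale a dataset to mean 0 and standard deviation 1."""
    mean, sd = statistics.mean(values), statistics.stdev(values)
    return [(v - mean) / sd for v in values]

standardised = {name: z_scores(vals) for name, vals in raw.items()}

# Each university's overall score is the weighted sum of its Z-scores.
overall = [
    sum(weights[name] * standardised[name][i] for name in raw)
    for i in range(3)
]
print([f"{score:+.3f}" for score in overall])
```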

Universities were excluded from the World University Rankings tables if they did not teach undergraduates; if their research output amounted to fewer than 50 articles per year; or if they taught only a single narrow subject. Each institution listed in these rankings opted in to the exercise and verified its institutional data. Where an institution did not provide data in a particular area (which occurred only in some very low-weighted areas), the column has been left blank.

A worldwide Academic Reputation Survey was carried out during spring 2010. Some 13,388 responses were gathered across all regions and subject areas. The results made up a total of 34.5 per cent of the overall World University Rankings score (15 per cent for teaching and 19.5 per cent for research).