Potential refinements in the QS World University Rankings 2015

Anyone who has seen me present will know that one of my most frequently used quotes is from the US statistician George Box: “Essentially, all models are wrong, but some are useful.” Rankings are controversial as much because they are imperfect and incomplete as for any other reason. Were there a perfect answer, and had someone found it, there would be no space for debate, discussion and disagreement.

The QS World University Rankings were one of the first, and remain one of the most popular, international rankings of universities. Part of this popularity has been in their simplicity and part in their consistency – six weighted indicators drawn together to present a simple table representing a global hierarchy of world universities.

Despite the basic framework remaining the same since 2005, QS has not been afraid to listen and make refinements. Switching to Elsevier’s Scopus database in 2007 was one such change. One of the well-known challenges in developing metrics from a bibliometric database like Scopus is taking into account the different patterns of publication and citation across discipline areas. Various efforts have been made to address this problem, with the Leiden Ranking perhaps the leading protagonist.

QS has been seriously considering adopting a methodological refinement in this regard for around 18 months but, true to our broader philosophy, we are keen to keep things simple and open as well as to avoid some of the pitfalls evident in other attempts.

In the institutional fact files we distributed prior to the QS World University Rankings in September 2014, we set out our intention in simple terms: to balance the distribution of citations not across hundreds or thousands of disciplinary buckets, but simply across our five broad faculty areas.

More recently, at the IREG forum on subject rankings in Denmark last month, there was inevitable discussion about the future of rankings and how they should adapt to different realities in different subjects. The question of field normalization (particularly as it relates to the aggregation and analysis of citations) was visited from a number of different perspectives. So, further feedback to listen to and consider.

In analysing the latest set of Scopus data we can observe that a typical (average) institution receives 49.1% of citations in Life Sciences & Medicine; 27.6% in Natural Sciences; 16.5% in Engineering & Technology; 5.8% in Social Sciences and just 1.0% in Arts & Humanities.

Overall, the emphasis that a flat citations-per-faculty ratio places on the sciences in the QS rankings has been offset by a more even-handed approach in the surveys. In order to derive balance in the surveys we have, since the very beginning, made the judgement that the five faculty areas listed above are to be considered equal in the typical make-up of a university. Our proposed model for addressing the imbalance in citations is to adopt the same assumption in this context – and we have done so. We have built a model which derives a normalized citation count across these five faculty areas and, furthermore, makes adjustments to account for the language bias that emerges from placing greater emphasis on subjects in which a higher proportion of material is published in languages other than English.
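
To make this concrete, the sketch below shows one way such a rebalancing could work. It is an illustration under assumptions rather than QS’s published model: the normalized_citations function and the example institutions are hypothetical, and only the five faculty-area shares are the figures quoted above.

```python
# Illustrative sketch only - not QS's actual model. The function and
# example data below are hypothetical; the global shares are the
# figures quoted in the post above.

# Share of citations a typical institution receives in each of the
# five broad faculty areas.
GLOBAL_SHARES = {
    "Life Sciences & Medicine": 0.491,
    "Natural Sciences": 0.276,
    "Engineering & Technology": 0.165,
    "Social Sciences": 0.058,
    "Arts & Humanities": 0.010,
}

def normalized_citations(citations_by_area):
    """Rebalance raw citation counts so each faculty area carries
    equal weight.

    Each area's count is divided by the global share of citations that
    area attracts, and the five rescaled counts are averaged. An
    institution whose citations follow the typical global mix keeps
    its raw total; one whose strength lies in low-citation fields
    is no longer penalised for it.
    """
    rescaled = [
        citations_by_area.get(area, 0) / share
        for area, share in GLOBAL_SHARES.items()
    ]
    return sum(rescaled) / len(rescaled)

# A hypothetical institution with 10,000 citations in the globally
# typical mix keeps its total after normalization ...
typical = {area: 10_000 * share for area, share in GLOBAL_SHARES.items()}
print(round(normalized_citations(typical)))  # 10000

# ... while one with the same 10,000 citations spread evenly across
# the five areas scores far higher, because its Arts & Humanities
# output is no longer swamped by citation-heavy fields.
even = {area: 2_000 for area in GLOBAL_SHARES}
print(round(normalized_citations(even)))  # 51585
```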

The outcomes look very interesting; they have passed internal scrutiny and are currently with our Advisory Board for consideration. Institutions with an emphasis on Social Sciences, Arts & Humanities and Engineering & Technology are likely to rise, while the advantage hitherto afforded to those with a disproportionate reliance on a strong medical school will be neutralised. For some, if implemented, this will make a substantial difference to their indicator performance and, inevitably, to their overall ranking as well.

This will represent the single biggest shift in approach since 2007, but we will run a cycle of communication in the lead-up to any potential changes, and we will also lay out a plan to ensure institutions have what they need to maintain any year-on-year monitoring they have been doing.

Additionally, we are proposing to exclude papers with more than 10 affiliated institutions – these represent just 0.34% of records. This will ensure that highly cited papers with very large numbers of contributors do not have a distorting effect, without discouraging collaboration – as universal fractional counting would. Finally, the window for survey responses is likely to be extended to five years (with a lower weighting for the earliest two years), bringing it into line with recent developments in the subject rankings.
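
The sketch below illustrates both of these proposals under assumed data structures. The affiliation threshold and the 0.34% figure come from the text above, but the record layout, the response format and the half weighting applied to the two oldest survey years are assumptions for illustration only.

```python
# Illustrative sketches only - the record layout, response format and
# the half-weights for the two oldest years are assumptions; the post
# does not specify QS's exact implementation.

MAX_AFFILIATIONS = 10

def eligible_papers(papers):
    """Drop papers affiliated with more than MAX_AFFILIATIONS
    institutions (about 0.34% of records), so massively multi-author
    papers cannot distort citation counts."""
    return [p for p in papers if len(p["affiliations"]) <= MAX_AFFILIATIONS]

def weighted_survey_responses(responses_by_year, current_year):
    """Aggregate survey responses over a five-year window, counting
    the two oldest years at a reduced (here: half) weight."""
    weights = {0: 1.0, 1: 1.0, 2: 1.0, 3: 0.5, 4: 0.5}  # offset -> weight (assumed)
    return sum(
        weights[current_year - year] * count
        for year, count in responses_by_year.items()
        if 0 <= current_year - year <= 4
    )

# Hypothetical response counts for one institution:
responses = {2015: 120, 2014: 95, 2013: 80, 2012: 60, 2011: 70}
print(weighted_survey_responses(responses, 2015))  # 360.0 = 120 + 95 + 80 + 0.5*(60 + 70)
```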

QS takes pride in its work and, whilst we acknowledge that there is no perfect solution, we are always listening and looking for ways to develop and refine our rankings. These developments enable us to do that whilst maintaining our commitment to simplicity and openness.


Ben Sowter will be running webinars to further explain the changes to the methodology. You may also download a PDF explanation here.

To register for the webinars, please click on the one that best suits your availability on Thursday, September 10th:

Nota bene: all times are London time (BST).
