Paul Wouters and Sarah de Rijcke @ CWTS


Monthly Archives: October 2010


The different lists of university rankings have attracted increasing attention because of their potential as a weapon in the increasingly fierce global competition between universities. A university confronted with a lower position in the rankings has to provide a plausible explanation, and universities that are placed higher naturally celebrate this.

Let us take a look at the Netherlands. A few weeks ago, the Leiden Ranking produced by CWTS brought good news for the Erasmus University Rotterdam (EUR): it was placed 6th among universities in Europe. The university immediately published an advertisement in the national newspapers to congratulate its researchers on this leading position in the Netherlands. The advertisement had the facts right, but it emphasized the criterion on which the EUR scores highest (number 6 in the list of the 100 largest European universities): the number of citations per publication. This indicator favors universities with large medical faculties and hospitals, because these are large research fields with, on average, many more references and citations than, for example, the technical sciences or philosophy.

It also matters which universities are taken as the reference group for the ranking. Using the same indicator of citations per paper puts the EUR at number 9 among the 250 largest European universities, because three smaller universities enter the top of the list, even ahead of Oxford and Cambridge. This is still a very good score, and still number 1 in the Netherlands in this ranking. But how does it look when we use other indicators? CWTS now uses two different indicators to take field differences into account. How does the EUR score on these? The traditional CWTS "crown indicator" puts the EUR at number 8 among the 100 largest and number 14 among the 250 largest European universities. The improved CWTS indicator places the EUR at 11 among the 100 largest and 15 among the 250 largest universities in Europe.

In all these cases, the EUR is highest among the Dutch universities. If size is taken into account in combination with quality, however, the University of Utrecht has the highest score in the Netherlands (no. 8) and the EUR ends up in position 20, after Utrecht and the University of Amsterdam.
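Why would the traditional crown indicator and the improved indicator rank the same university differently? A toy calculation can illustrate the point. The sketch below uses invented numbers, not CWTS data, and the two functions only roughly mirror the two approaches to field normalization: one computes a ratio of sums (total citations over total field-expected citations), the other a mean of per-paper ratios. The same two hypothetical universities come out in a different order depending on which scheme is used.

```python
# Toy illustration of two field-normalization schemes.
# Each paper is a pair: (actual citations, expected citations
# for an average paper in its field). All numbers are invented.

def ratio_of_sums(papers):
    """Sum of citations divided by sum of field-expected citations
    (the style of the traditional 'crown indicator')."""
    total_cites = sum(c for c, _ in papers)
    total_expected = sum(e for _, e in papers)
    return total_cites / total_expected

def mean_of_ratios(papers):
    """Average of each paper's own citations-to-expected ratio
    (the style of the improved, per-paper normalization)."""
    return sum(c / e for c, e in papers) / len(papers)

# Hypothetical portfolios:
uni_a = [(30, 10), (1, 2)]  # one heavily cited paper in a high-citation field
uni_b = [(8, 4), (6, 3)]    # consistently above the field average

for name, papers in [("A", uni_a), ("B", uni_b)]:
    print(name,
          round(ratio_of_sums(papers), 2),   # A: 2.58, B: 2.0
          round(mean_of_ratios(papers), 2))  # A: 1.75, B: 2.0
```

University A outranks B under the ratio-of-sums scheme, because its one blockbuster paper dominates the totals; B outranks A under the mean-of-ratios scheme, which weights every paper equally. The same sensitivity to a few highly cited papers in citation-rich fields is what makes indicator choice matter for universities with large medical faculties.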

So what is the lesson here? First, ranking is a complicated affair because there are many ways to rank universities. Rankings simplify these comparisons across many different dimensions, and universities are pushed to reduce this complexity even further. This is facilitated by the fact that the different rankings produce different results. It enables universities to choose the most favorable ranking. It also enables them to debunk a ranking by pointing to other results in other rankings, or even to debunk ranking as such by showing contradictions among the results. However, this does not disempower the rankings. As Richard Griffiths (professor of social and economic history in Leiden) stated two weeks ago in the university weekly Mare: "Such a list can be a pile of junk, but it is best not to be in the bottom of the pile." Universities are therefore also discussing to what extent mergers can help improve their ranking scores. For example, it might be profitable for a technical university to be coupled to a large academic hospital.

Not only individual universities are actively engaged in the debate about rankings; the same holds for associations of universities. The Dutch university association VSNU concluded from the Times Higher Education Supplement (THES) ranking that the Netherlands is the fifth-best academic country in the world. As science journalist Martijn van Calmthout wrote in De Volkskrant, this requires some creativity, because the Netherlands as a whole no longer belongs to the world top (which does not mean that there are no fields in which Dutch researchers belong to the best performers in the world). No Dutch university belongs to the 100 best universities in this ranking (which uses a very different set of indicators from the Leiden Ranking; see the next blog post). In fact, the Dutch universities cluster pretty closely together, and their relative positions depend on the indicator used. Leiden scores highest when external funding is the main criterion in the THES ranking, while the Shanghai ranking puts Utrecht highest (number 50 in the world list), followed by Leiden (at 70). How significant are the differences among the Dutch universities, actually?

The differences between the rankings create a drive to keep producing new indicators that capture aspects and dimensions of quality not measured satisfactorily by the existing ones. This cannot go on endlessly. It may be time to take the perverse effects of this one-dimensional ranking more seriously. One way is to further develop truly multi-dimensional indicators; another is to investigate the underlying properties of indicators more thoroughly; a third is to take the limits of indicators more seriously, especially in science policy. Will it be possible to combine these three strategies?