Computational Complexity and other fun stuff in math and computer science from Lance Fortnow and Bill Gasarch

Thursday, November 16, 2017

A Tale of Three Rankings

In the Spring of 2018, US News and World Report should release its latest rankings of US graduate science programs, including computer science. These are the most cited of the deluge of computer science rankings we see out there. The US News rankings have a long history, and since they are reputation based they roughly correspond to how we see CS departments, though some argue that reputation lags behind changes in the quality of a department.

US News and World Report also has a new global ranking of CS departments. The US doesn't fare that well on the list, and the rankings of US programs on the global list are wildly inconsistent with the US list. What's going on?

75% of the global ranking is based on statistics from Web of Science, which mainly captures journal articles, while in computer science conferences typically have higher reputations and more selectivity. In many European and Asian universities, hiring and promotion often depend heavily on publications and citations in Web of Science, encouraging their professors to publish in journals and thus leading to higher-ranked international departments.

The CRA rightly put out a statement urging the CS community to ignore the global rankings, though I wish they had made a distinction between the two different US News rankings.

I've never been a fan of using metrics to rank CS departments, but there is a relatively new site, Emery Berger's Computer Science Rankings, based on the number of publications in major venues. CS Rankings passes the smell test for both its US and global lists and is relatively consistent with the US News reputation-based CS graduate rankings.

Nevertheless I hope CS Rankings will not become the main ranking system for CS departments. Departments that wished to raise their ranking would hire faculty based mainly on their ability to publish large numbers of papers in major conferences. Professors and students would then focus on quantity of papers, which in the long run would discourage risk-taking, long-range research, as well as innovations in improving diversity and in educating graduate students.

As Goodhart's Law states, "when a measure becomes a target, it ceases to be a good measure". Paradoxically CS Rankings can lead to good rankings of CS departments as long as we don't treat it as such.

6 comments:

There's another ranking of North American CS departments, based on the outcome of faculty hiring: http://advances.sciencemag.org/content/1/1/e1400005 . The up side of this approach, and an advantage over any bibliographic approach like "CS Rankings", is that it relies on the more deliberative evaluations of hiring committees rather than the vagaries of the CS conference system.

Suppose you had to pick one ranking system to be "main." From your post it sounds to me that you would pick CS Rankings. I would pick CS Rankings without reservations, even though I still think it should be improved (see an older post of mine for some suggestions: https://emanueleviola.wordpress.com/2016/07/21/csrankings/).

I think we all agree that bean counting isn't all there is to it. But there is, in my opinion, an even more general problem. A ranking is, by its very nature, a number. It looks a little strange to me that someone wants to produce a number without using numbers.

I think my utopia in this respect is to have all the information out there and easily accessible, including papers, grant money, awards, etc. Then anyone would be free to slice this information in any desired way, or to ignore it.

Note that CSRankings is just a generalization of the Seddighin-Hajiaghayi ranking from theory to general CS (on which there were several discussions on this blog). See https://projects.csail.mit.edu/dnd/ranking/