The rankings of the world’s top universities that my magazine has been publishing for the past six years, and which have attracted enormous global attention, are not good enough. In fact, the surveys of reputation, which made up 40 percent of scores and which Times Higher Education until recently defended, had serious weaknesses. And it’s clear that our research measures favored the sciences over the humanities.

Yes, it’s a ‘manel’ (from the left: me, Johnny Rich, Rob Carthy). In our defence, Sally Turnbull, who was chairing, sat off to one side and two participants (one male and one female) had to withdraw at short notice. Photo by @UKHESPA (with permission).

The big news on the release of the Times Higher Education World University Rankings for 2017 was that Oxford, not Caltech, is now the No. 1 university in the world.

According to the BBC news website, “Oxford University has come top of the Times Higher Education world university rankings – a first for a UK university. Oxford knocks California Institute of Technology, the top performer for the past five years, into second place.”

Ladies and gentlemen, this is what is widely known as ‘fake news’. There is no story here because it depends on a presumption of precision that is simply not in the data. Oxford earned 1st place by scoring 95.0 points, versus Caltech’s 94.3. (Languishing in a rather sorry sixth place is Harvard University, on 92.7).

The central problem here is that no-one knows exactly what these numbers mean, or how much confidence we can have in their precision. The aggregate scores are arbitrarily weighted estimates of proxies for the quality of research, education, industrial contacts and international outlook. And they include numbers based on opinions about institutional reputation.

In all likelihood these aggregate scores are accurate to a precision of about plus or minus 10% (as I have argued elsewhere). But the Times Higher (and most other rankers – I don’t really mean to single them out) don’t publish error estimates or confidence intervals with their data. People wouldn’t understand them, I have been told. But I doubt it. That strikes me rather as an excuse to preserve a false precision that drives the stories of year on year shifts in rank even though they are, for the most part, not significant.
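The point can be made concrete with a few lines of arithmetic. The sketch below uses the published scores from the post (95.0 and 94.3) and treats the uncertainty as a simple relative error bar; the ±10% figure is my own rough estimate, argued for above, not anything the rankers publish. Even at a far more generous ±0.5%, the two intervals overlap, so the one-place swap in rank tells us nothing.

```python
# Illustrative sketch: are two ranking scores distinguishable given an
# assumed relative error? Scores are from the 2017 THE table quoted above;
# the error figures are assumptions for illustration, not published values.
def overlaps(score_a, score_b, rel_error=0.10):
    """True if the two scores' error intervals overlap, i.e. the
    difference cannot be called significant at this precision."""
    half_a = score_a * rel_error
    half_b = score_b * rel_error
    return abs(score_a - score_b) <= half_a + half_b

oxford, caltech = 95.0, 94.3
print(overlaps(oxford, caltech))         # True: intervals overlap at +/-10%
print(overlaps(oxford, caltech, 0.005))  # True: they overlap even at +/-0.5%
```

In other words, the headline "Oxford knocks Caltech into second place" survives only if the scores are precise to better than about half a point in a hundred, a level of precision no one has demonstrated.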

Now Phil Baty, the editor of the Times Higher Rankings (and someone who, to give him his due, is always happy to debate these issues) is stout in his defence of what the Times Higher is about. A couple of months ago he wrote in an editorial criticising the critics of university rankings:

“beneath the often tedious, torturous ad infinitum hand wringing about the methodological limitations and the challenges of any attempt to reduce complex universities to a series of numbers, the single most important aspect of THE’s global rankings is often lost: the fact that we are building the world’s largest, richest database of the world’s very best universities.”

But who can define ‘best’? What is the quantitative measure of the quality of a university? Phil implicitly acknowledges this by conceding that “there is no such thing as a perfect university ranking.” I would ask, is there one that is good? Further, if the point is to assemble a database, why do the numbers in the different categories have to be weighted and aggregated, and then ranked? Just show us the data.

The problem, as is well known, is that these rankings have tremendous power. They are absorbed by university managers as institutional aims. Manchester University's goal, for example, stated at the very top of its strategic plan, is "to be one of the top 25 research universities in the world".* How else is that target to be judged except by someone's league table? In setting such a goal, one presumes they have broken down how the marks are totted up to see how best they might maximise their score. But how much is missed as a result? Why not be guided by your own lights as to the best way to create a productive and healthy community of scholars? Surely that is the mark of true leadership?

Such an approach would enable institutions to adopt a more holistic approach to what they see as their missions as universities. And to include things that are not yet counted in league tables, like commitment to equality and diversity, or to good mental health, or – in these troubled times when we are beset on all sides by fake news – to scholarship that upholds the value of truth.

How fair is your institution – what’s your gender balance?
How representative are your staff and student bodies of the population that you serve?
How much of their allocated leave do your staff actually take?
How well do you support staff with children?
And… How many of your professors are assholes?

Now, Jenny may have had her tongue in her cheek for some of these but there is a serious point here for us to discuss today. How often do rankers think about the impact on the people who work in the universities that they are grading?

I would argue that those who create university league tables cannot stand idly by (as bibliometricians used to do), claiming that they are just purveyors of data. It is not enough for them to wring their hands as universities ‘mis-use’ the information they provide.

“Data are increasingly used to govern science. Research evaluations that were once bespoke and performed by peers are now routine and reliant on metrics. The problem is that evaluation is now led by the data rather than by judgement. Metrics have proliferated: usually well intentioned, not always well informed, often ill applied. We risk damaging the system with the very tools designed to improve it, as evaluation is increasingly implemented by organizations without knowledge of, or advice on, good practice and interpretation.”

What is true of bibliometrics is true of university ranking. Therefore I call on this community here today to take action and come up with its own manifesto. Since we are in London, we could even call it the London manifesto. (After Brexit, we're about to become the centre of nowhere and nothing, so it would be nice to have something for people to remember us by!)

I stand ready to help with its formulation. I urge you to consider this seriously and quickly. Because if providers won’t do it, maybe some of us will do it for you.

Thank you.

A couple of afterthoughts on the meeting:

It was noticeable that the rankings provider who spoke after the panel addressed more of the technical shortcomings and cultural issues of university league tables than those who presented earlier in the day. It is important to keep the debate on rankings and university evaluation alive.

I was surprised that there were relatively few questions after each talk from the audience, which consisted mostly of people involved in strategic planning at various universities. I hope that doesn’t indicate a certain degree of resignation to the agenda-setting power of rankers and, as a result, a reluctance to consider the broader impacts. But I remain concerned. In answer to my question about why one of the providers had bemoaned the fact that some university leaders rely too heavily on rankings, I was told – candidly – that in some cases he felt it was a matter of poor leadership.

I was struck by an example mentioned by my co-panellist, Rob Carthy of Northumbria University, which pointed out one of the perverse effects of rankings. His university works hard to select and recruit Cypriot students even though they often take only one A level (a feature of the school system). In doing so, however, the average A-level tariff of its intake drops, which, on some league table measures, will reduce its score. Rankings therefore disincentivise searches for student talent that look beyond mere grades. I suspect they may also be reducing the motivation of some universities to widen participation.

*To be fair to Manchester, on this web-page the phrase appears to have been edited to read: “Our vision is for The University of Manchester to be one of the leading universities in the world by 2020.”

________________
“Some NTU “rejects” even went on to Ivy League Universities overseas. Many understandably could not afford the costly overseas education. A mere tweaking of the arbitrary cut-off points for NTU Admissions would easily have absorbed 6,500 more Singapore students. The cutoff point appeared deliberate in order to have less local students, in favour of foreign students in order for NTU to excel in the foreign students criteria of the QS Ranking criteria.”
_______________
Were Singaporean Students and Professors Sacrificed for NTU Top Rankings?

____________________
In Canada, too, there are those who worry about the distorting effects on public opinion, which is fed non-existent or irrelevant phenomena (with the far-from-remote risk of losing sight of the things that really matter). I would point to an article that is interesting from its very title onwards. Here, too, the starting point is the publication of the QS ranking. Some excerpts follow.
===============
Universities are not sports. So why do we pay so much attention to rankings?
by Alex Usher
_______________
The 2018 QS World University Rankings, released last night, are another occasion for this kind of analysis. The master narrative for Canada – if you want to call it that – is that “Canada is slipping.” The evidence for this is that the University of British Columbia fell out of the top 50 institutions in the world (down six places to 51) and that we also now have two fewer institutions in the top 200 – Calgary fell from 196 to 217 and Western from 198 to 210 – than we used to.

People pushing various agendas will find solace in this.
[…]
Nationally, people will try to link the results to problems of federal funding and argue how implementing the recommendations of the Naylor report would be a game-changer for rankings.

This is wrong for a couple of reasons. The first is that it is by no means clear that Canadian institutions are in fact slipping. Sure, we have two fewer in the 200, but the number in the top 500 grew by one. Of those who made the top 500, nine rose in the rankings, nine slipped and one stayed constant. Even the one high-profile “failure” – UBC – only saw its overall score fall by one-tenth of a point; the fall in the rankings was more a result of an improvement in a clutch of Asian and Australian universities.

The second is that in the short term, rankings are remarkably impervious to policy changes.
[…]

And that’s exactly right. Universities are among the oldest institutions in society and they don’t suddenly become noticeably better or worse over the course of 12 months. Observations over the span of a decade or so are more useful, but changes in ranking methodology make this difficult (McGill and Toronto are both down quite a few places since 2011, but a lot of that has to do with changes that reduced the impact of medical research relative to other fields of study).
[…]
What’s not as useful is to cover rankings like sports, and invest too much meaning in year-to-year movements. Most of the yearly changes are margin-of-error kind of stuff, changes that result from a couple of dozen papers being published in one year rather than another, or the difference between admitting 120 extra international students instead of 140. There is not much Moneyball-style analysis to be done when so many institutional outputs are – in the final analysis – pretty much the same.
____________________
https://beta.theglobeandmail.com/opinion/universities-are-not-sports-so-why-do-we-pay-so-much-attention-to-rankings/article35248888/?ref=https://www.theglobeandmail.com&service=mobile