How to score in academia

The way you score in academia seems to have changed down the years. Once, I remember, it was the number of articles you managed to publish that mattered.

A couple per year got you through the ranks, and a book might nudge you through to a professorship, even if no one actually read it. Nobody cared too much where these things were published, so long as they added bulk to your CV.

It then dawned on promotions committees and the like that articles also didn’t mean too much if no one cited them. So it was not so much the number of articles that mattered, but rather how many times they got cited. In 1964, the Science Citation Index, later called Web of Knowledge, was established, so you could quickly find out how many times your work had been cited and add that number to your promotion application. Of course it wasn’t perfect, because you might write 10 worthy articles, each cited once, while your colleague in the next office wrote only one which was cited ten times. Who is the more worthy, she or you? Maybe she is, because a big stone makes a bigger splash than a lot of pebbles. On the other hand she might seem a tad lazy.

So a compromise was needed. It came in 2005 as the h-index, named after its inventor Jorge E. Hirsch, which combines productivity with impact. Your h-index is the largest x such that x of your papers have each been cited x or more times. According to the Web of Knowledge, Hirsch’s own h-index is 58, and his article on it is easily his most cited contribution, having now been cited 2931 times. This still seems a bit crude, because you might have one career-defining paper that gets cited 1,000 times and yet have an h-index of only 1. You would be indistinguishable from the person who publishes 1,000 papers, each cited only once. Even so, the h-index seems to be the current metric of choice. For the purposes of advancement, you ARE your h-index.
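The definition above can be sketched in a few lines of code. This is a minimal illustration, not any official implementation; the function name and the example citation counts are my own:

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper and every one ranked above it have >= rank citations
        else:
            break
    return h

# One blockbuster paper cited 1,000 times scores the same
# as 1,000 papers cited once each:
print(h_index([1000]))      # 1
print(h_index([1] * 1000))  # 1
```

As the two calls show, the index deliberately caps the contribution of any single paper, which is exactly the crudeness complained about above.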

Except that it comes in different sizes. The h-index based on the Web of Knowledge may seem harsh because it is based on publications in journals that meet fairly rigid standards, and is a bit unfair to those in the humanities, or perhaps to those who publish great articles in bad journals. Or books. Alternative sources are out there, including Elsevier’s Scopus. The favourite, though, is probably Google Scholar, which started reporting the h-index in 2011, and does include citations in books and in the so-called “grey literature,” such as working papers, government documents—and, in my case, at least one or two things I didn’t actually write, or can’t remember having written. According to one study, Google Scholar identified on average 53 percent more citations than the Web of Knowledge, and consequently gave higher h-indices. Best, then, to go for broke and use the Google version, which now seems to be the norm.

The other thing to consider if you’re an academic with PBRF bearing down on you is to publish jointly with many others, because each author gets counted as though she or he was the sole author. Some people prudently belong to large consortia, perhaps working on the principle of “you scratch my back and I’ll scratch yours.” I should acknowledge, of course, that PBRF panels are urged to judge quality on the basis of the actual publications, rather than on quantitative measures, but this is not easy to do if panelists are not experts in all the areas they are asked to assess. The irony is that the various indices, for all their imperfections, may actually be more reliable measures of quality than qualitative assessments themselves.

3 Responses to “How to score in academia”

Quite true. Especially that these metrics may be less subject to bias than people’s opinions (especially if your work challenges established beliefs and mantras).
What next?
What is the impact of your research on society? – Altmetrics counts papers mentioned in news media, YouTube, Facebook, Twitter, etc.
Do you play your part in scientific service as well as publishing? – Publons counts how many papers you refereed for which journals and lists your editorial roles.

Yes, there are now measures of other forms of impact, such as impact on society or scientific service. There is something of a slippery slope from the highly conservative Web of Knowledge through Google Scholar to Altmetrics, and the litter of reads, mentions, and likes on social media. The danger, I suppose, is that there is something of a tradeoff between societal impact and bullshit. It’s good to have scientific truth disseminated widely, but only so long as it really is the truth. My guess is that the h-index will hang in there, but perhaps yield increasingly to broader sources of citation.
But my real hope is that we might somehow move away from the obsession with quantification and managerialism, and regain some integrity for science and scholarship.

Thanks for this interesting piece Prof. Corballis. As an early career fellow I find it all very confusing…and I am sure I am not the only one (at least judging from conversations I’ve had with some of my peers). I found this piece by Prof. Daniel Nettle quite interesting; maybe others will find it interesting too: http://www.danielnettle.org.uk/2017/09/25/hotte-7-staying-in-the-game/ .

Sciblogs Archive

Sciblogs is the biggest blog network of scientists in New Zealand, an online forum for discussion of everything from clinical health to climate change. Our Scibloggers are either practising scientists or have been writing on science-related issues for some time. They welcome your feedback!

Sciblogs was created by the Science Media Centre and is independently funded