Wednesday, January 11, 2012

How big is yours?

While writing this, I discovered that Google Scholar has an add-on that will tot up your citations to establish an h-index. From that, I gather that mine is around 29. One of the comments on the Guardian thread points out that Richard Feynman has an h of 23. As Nigel Tufnel famously said apropos Jimmy Page, “I think that says quite a lot.”

_________________________________________________________________

Many scientists worry that theirs isn’t big enough. Even those who sniff that size isn’t everything probably can’t resist taking a peek to see how they compare with their rivals. The truly desperate can google for dodgy techniques to make theirs bigger.

I’m talking about the h-index, a number that supposedly measures the quality of a researcher’s output. And if the schoolboy double entendres seem puerile, there does seem to be something decidedly male about the notion of a number that rates your prowess and ranks you in a league table. Given that, say, the 100 chemists with the highest h-index are all male, whereas 1 in 4 postdoctoral chemists is female, the h-index does seem to be the academic equivalent of a stag’s antlers.

Few topics excite more controversy among scientists. When I spoke about the h-index to the German Physical Society a few years back, I was astonished to find the huge auditorium packed. Some deplore it; some find it useful. Some welcome it as a defence against the subjective capriciousness of review and tenure boards.

The h-index is named after its inventor, physicist Jorge Hirsch, who proposed it in 2005 precisely as a means of bringing some rigour to the slippery question of who is most deserving of a grant or a post. The index measures how many highly cited papers a scientist has written: your value of h is the number of your papers that have each been cited by (included in the reference lists of) at least h other papers. So a researcher with an h of 10 has written 10 papers that have received at least 10 citations each.
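That definition is easy to turn into a calculation: sort a researcher's papers by citation count and find the largest rank at which the count still matches or exceeds the rank. A minimal sketch in Python (the function name and example citation counts are illustrative, not taken from any real database):

```python
def h_index(citations):
    """Return the h-index: the largest h such that the author has
    h papers with at least h citations each (Hirsch, 2005)."""
    counts = sorted(citations, reverse=True)
    h = 0
    # Walk down the ranked list; h grows as long as the paper at
    # rank r has at least r citations.
    for rank, count in enumerate(counts, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical researcher with five papers cited 10, 8, 5, 4 and 3 times:
# four papers each have at least 4 citations, but not five with 5 or more.
print(h_index([10, 8, 5, 4, 3]))  # prints 4
```

Note how the measure rewards breadth: a single paper with a thousand citations still yields an h of only 1, which is exactly the Kroto effect discussed below.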

The idea is that citations are a measure of quality: if a paper reports something important, other scientists will refer to it. That’s broadly a reasonable assumption, but not airtight. There’s evidence that some papers get highly cited by chance, because of a runaway copycat effect: people cite them just because others have, in the same way that some mediocre books and songs become unaccountably popular.

But to get a big h-index, it’s not enough to write a few influential papers. You have to write a lot of them. A single paper could transform a field of science and win its author a Nobel prize, while doing little for the author’s h-index if he or she doesn’t write anything else of note. Nobel laureate chemist Harry Kroto is ranked an apparently undistinguished 264th in the h-index list of chemists because his (deserved) fame rests largely on a single breakthrough paper in 1985.

That’s one of the criticisms of the h-index – it imposes a one-size-fits-all view of scientific impact. There are many other potential faults. Young scientists with few publications score lower, however brilliant they are. The value of h can be artificially boosted – slightly but significantly – by scientists repeatedly citing their own papers. It fails to distinguish the relative contributions to the work in many-author papers. The numbers can’t be compared across disciplines, because citation habits differ.

Many variants of the h-index have been proposed to get round these problems, but there’s no perfect answer, and one great virtue of the h-index is its simplicity, which means that its pros and cons are relatively transparent. In any case, it’s here to stay. No one officially endorses the h-index for evaluation, but scientists confess that they use it all the time as an informal way of, say, assessing applicants for a job. The trouble is that it’s precisely for average scientists that the index works rather poorly: small differences in small h-indices don’t tell you very much.

The h-index is part of a wider trend in science to rely on metrics – numbers rather than opinions – for assessment. For some, that’s like assuming that book sales measure literary merit. It can distort priorities, encouraging researchers to publish all they can and follow fads (it would have served Darwin poorly). But numbers aren’t hostage to fickle whim, discrimination or favouritism. So there’s a place for the h-index, as long as we can keep it there.

51 comments:

Whim and discrimination are the bedrock of propaganda, and the madness of crowds was the inspiration of the Hitlerian socialist paradigm that: the people will believe the bigger lie (though it was possibly stated by Goebbels, and practiced by all socialists ever since).

And if mere number lacks its own quality in a metrication of British science, why raise the point that 1 in 4 women are given chemistry PhDs? Unless you were pointing out that 1 in 4 of such valuable resources, were clearly not amounting to much.

Clearly the problem is 'citation' itself, as it is prone to the 'ethical' selection of politically motivated publishing houses, not to mention quota-filling research funding that will stipulate gender proportions rather than merit. If you don't believe me, try citing authors that the Guardianistas don't care for, such as Friedrich Hayek, or Ayn Rand, who had a view or ten regarding numbers and 'virtue'.

If you want a 'value added' index of scientific worth, you must include an economic measure. Maybe have a suite of indexes, ranging from mere citation through to economic gain, so that the public know when their money is being wasted on Marxist-Feminist embezzlements, such as 'research' into how much better same-sex foster parents are at rearing children than real parents.
