We are under starters’ orders, then, and beginning to feel the pressure as expectation builds for us to achieve success and acclaim.
No, not the 2012 Olympics; rather, everything in UK academic research is gearing up for the 2014 Research Excellence Framework, the mechanism by which the quality of our research over the last few years will be judged, and decisions made on how government money should be shared between institutes in the future.

Much of the focus of this exercise, certainly for those of us in scientific disciplines, is on the production of so-called ‘high impact’ papers. Each of us must submit for assessment our four ‘best’ papers of the last 5 years or so; a panel will then grade each on a scale from 1 to 4 (actually, 1* to 4*, but the stars seem superfluous to me when there is no option of an unstarred number). A 1* paper is ‘quality that is recognised nationally in terms of originality, significance and rigour’; 2* replaces ‘nationally’ with ‘internationally’; 3* is ‘internationally excellent in terms of originality, significance and rigour but which falls short of the highest standards of excellence’; and 4* is ‘world-leading in terms of originality, significance and rigour’. (Work can also be unclassified if it’s crap, but it seems unlikely that many submissions will fall into that category.)

Of course, the £610K / $1M question is: how do you judge the quality of a piece of published research? The REF has devoted thousands of person-hours and reams of text to this question, which I won’t rehash here. Suffice it to say that the most simplistic interpretation, and the one many institutions are assuming will be the major criterion applied in the REF, is, depressingly, that the quality of a piece of work is accurately reflected by the impact factor of the journal in which it appears.

I have written before on how misguided that is, but, amazingly, it appears that my blog went unheeded. I don’t really want to go over old ground again, simply to state that it is commonly assumed that a 4* paper means a paper in one of the weekly tabloids; nothing else will do.

Now, I have no intention of denigrating these publications, still less of biting the hand that hosts me – clearly, they are world-leading in their standards of originality, significance, and rigour. Rather, I wanted to point out what I see as a worrying trend, which is conducting research with the objective of ‘writing a Nature paper’.

Of course, every research project probably starts with the thought that, if all goes well, if the results are unusually stunning, unambiguous, and free of messy nuance; if the stars align just so, and the editor is having a good day when I submit; then maybe, just maybe, this might get into a top journal. But the focus is on doing the science as well as possible, and deciding on an outlet for it once the results are in.

Now, however, it seems that every workshop I am invited to has the objective (very likely a condition of funding the thing) of producing a ‘high impact paper’. In other words, the template for reporting the results is decided before any analysis is begun. Ambition is no bad thing, of course, and one might as well aim high. But, if you start by writing the headline, what happens if the story doesn’t quite pan out how you hoped?

The right option is to re-assess, and perhaps take a few more pages to thoroughly explore your more complex results for a good, but CV-wise non-stellar, paper in a less high-profile journal. The temptation, though, is to plough on regardless, to try to squeeze that quart into a pint pot; not, in almost all cases, to fabricate anything, but perhaps to emphasise this point in order to draw attention away from that one.

Of course, playing this game is no guarantee of success: we can usually rely on the review process to sift the wheat from the chaff, even in cases where the drive and strength of personality of the group leader has managed to bring the work to the submission stage. But the wider question is: is this a good way to do science? And the corollary: what implications does this have for assessing research quality?