Eyes on the prize are blind to reality

Scientists’ quest for publication in journals with high impact factors is widely perceived as one of the more refractory barriers to the fuller adoption of open access, which I believe to be in the best interests of science.

But the barrier problem is complicated. Some of its dimensions were teased out in a debate on ‘Open Science and the Future of Publishing’ held at Oxford at the end of February this year that involved publishers, funders and proponents of open access. The video of the debate is well worth watching (especially the first 45 min – though there’s a nice summary over at F1000). I was particularly struck by the comments made by Alison Mitchell of the Nature Publishing Group (NPG) (from 10:50 – 19:12) which started a train of thought about their gold-standard publication, Nature. See if you think I’m going off the rails.

The problem

Publication in Nature is a highly coveted prize. Many labs will literally pop champagne corks if their paper is accepted by one of the most prestigious scientific journals in the world. I shared in this wonderful feeling once, though it was back in 1993 and the memory, alas, is fading.

Nature’s prestige is hard-earned. The journal’s editorial staff and volunteer peer reviewers sift through thousands of submissions and select fewer than 10% for the privilege of occupying a few pages in the weekly publication. This rigorous filtration, argues the publisher, ensures that the journal serves up a quality product to the scientific community.

The success of the journal — and its authors — is one reason cited for their resistance to open access modes of publishing. Although NPG offers several open access options through the group’s various titles and permits deposition of author-formatted manuscripts in PubMed Central 6 months after publication in Nature, it does not offer Gold open access options for the vast majority of its Nature branded journals (the notable exceptions being Nature Communications and Scientific Reports*).

Alison Mitchell defends this stance by arguing that Nature’s reader/author ratio is so high — considerably higher than ‘normal’ scientific journals — that it does not make sense to charge the journal running costs to authors, which they estimate would work out at up to £30,000 per paper if a switch to full open access were to be made.

This is a reasoned position, but I would still like to pick it apart because some of it doesn’t make sense to me.

For a start, part of that £30k fee would presumably be needed to cover the costs of the very good front matter that appears in Nature and occupies about a third of the journal. The front matter includes news, commentary, feature articles (analysing scientific trends and matters arising in science policy and education) and summaries or highlights of the papers appearing in that week’s issue. These items are mostly written by Nature staffers or commissioned from academics. There is no case for charging the costs of writing them to the authors of Nature’s scientific papers, although I would be loath to see the front matter disappear from the journal. I suspect these are the pages that most people read. Let’s face it, although Nature is a general science journal, few these days have the learning to be able to profit from all its articles. The spread is too great and the divisions between specialisms, unfortunately, are too deep (a point I will return to later).

Part of Nature’s predicted high open access charges also reflects the very high rejection ratio — above 90% — which means that the journal processes many more articles than are eventually published. Nature relies on skilled editorial staff — at PhD level or above — and the selectivity imposed by them and their reviewers to ensure quality and maintain the prestige of the Nature brand. The careful sifting is reflected in its impact factor which, at 36.101, is one of the highest in the business.

This latter point bears closer inspection, particularly if one has the bigger picture of science in mind.

First, most, if not all of the papers rejected by Nature will be eventually published elsewhere, though only after the delay caused by cycles of rejection and resubmission as authors chasing impact factors work their way down the journal rankings. The chase retards the dissemination of scientific information — and can be exhausting and demoralising for authors.

Second, despite all the careful sifting, Nature’s system is incapable of picking winners reliably, a problem that was highlighted in the past week in BBC4’s Beautiful Minds documentary on Andre Geim, who shared the 2010 Physics Nobel prize with Konstantin Novoselov for the discovery of graphene (catch it if you can — wonderful). As revealed in the programme, Nature rejected Geim’s and Novoselov’s initial paper** on graphene — twice.

Nature’s failure in this case is not the particular fault of anyone at the journal; it simply represents the intrinsic difficulty of forecasting from a slew of submissions which ones will go on to spark the greatest interest in the scientific community. The problem is not simply anecdotal; it is widespread. Nature’s impact factor is dominated by a minority of the papers that it publishes, as the journal itself has acknowledged. A 2005 editorial revealed that fully 89% of the citations to work published in the journal in 2004 derived from just 25% of papers; at the other end of the citations distribution, over half the Nature papers from that year had fewer than 20 citations.

It is genuinely difficult to pick winners: the skewed distribution is in fact typical of most journals, whatever their ranking, as shown by Per Seglen in a fascinating analysis performed back in 1992. Despite its rigorous selection procedures, Nature appears statistically no better at determining the relative quality of its submissions than other journals; it wins at the impact factor game because the brand ensures that the average quality of the submissions is higher.
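The shape of this skew can be illustrated with a toy simulation. The numbers below are hypothetical — citation counts drawn from a log-normal distribution, a common rough model for citation data — not Nature’s actual figures, but they show how a minority of papers comes to dominate a journal’s citation total (and hence its impact factor):

```python
# Toy illustration (hypothetical numbers, not real citation data):
# heavy-tailed citation counts mean a minority of papers dominates
# the total, so the journal's average says little about any one paper.
import random

random.seed(42)

# Simulate citation counts for 1000 papers, sorted most-cited first.
papers = sorted(
    (int(random.lognormvariate(1.5, 1.2)) for _ in range(1000)),
    reverse=True,
)

total = sum(papers)
top_quartile = papers[: len(papers) // 4]
share = sum(top_quartile) / total

mean = total / len(papers)
median = papers[len(papers) // 2]

print(f"Top 25% of papers account for {share:.0%} of all citations")
print(f"Mean citations per paper: {mean:.1f}, median: {median}")
```

Run it and the top quarter of papers typically accounts for well over half the citations, while the mean (the impact-factor-style average) sits well above the median paper — the same pattern the 2005 Nature editorial reported.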

What this means — and this has long been recognised — is that the journal impact factor is not a reliable indicator of the quality or influence of a particular paper. Nature knows this, and publicly bemoans the misuse of journal impact factors in the assessment of individuals. And yet it cannot help itself from trumpeting its success in a full page advert whenever the latest impact factor calculation is published, or from dangling the statistic in the faces of prospective authors.

But the journal is not particularly to blame for this. Their stance is all of a piece with a scientific culture that has grown to over-value journal rankings. Everyone in the business knows what publication in Nature means. It is an accolade that we seek out because the system — largely devised and run by scientists — recognises and rewards winners of this prize with funding and promotion. We might wish that it were otherwise but wishing only works in fairy tales.

Playing the game makes fools of us all. We chase prizes that our critical faculties and our mathematical analyses have long demonstrated to be awarded prematurely and inaccurately. Worse still, running after these prizes slows us down.

Surely we can do better?

A proposal

We certainly still need to weigh and judge the scientific output of our peers. This is necessary to determine distribution of funds and preferment. But rather than relying on the inaccurate shorthand of impact factors, we need to reserve judgement until after publication. I am not suggesting that we abandon pre-publication peer review, which I think serves a useful function in filtering and improving the published literature. But it would be better to fast-track the publication of papers and then to arrive at a more considered judgement of their quality by assessing how well they have been put to use — downloaded, cited, commented on, criticised or lauded — by the scientific community. Adoption of full open access would facilitate this by allowing the whole community to be involved. There is a burgeoning industry of post-publication assessment that can better harness the wisdom of the community and which, done well, could provide more accurate judgements than a handful of peer reviewers.

We could even buttress this system by a more formal and more extensive procedure of prize-giving — prizes are important to us — which would do a better job of recognising achievement than publication in a high-ranking journal. Such prizes could replace the incentivising function of the glamour publications in stimulating the productivity of research groups. Again, if they were judged by the wider community, by the people who actually make use of the published literature, such awards would have the merit of being a fairer and more thorough assessment of scientific work.

This is a radical proposal but it has many advantages. It is more equitable; it clears the way for open access; it speeds up the process of peer-review and publication; it accelerates science.

The proposal may threaten the business model of Nature and break with the journal’s long and venerable tradition of publishing ground-breaking work. But I think there is still a place for truly multi-disciplinary journals in this new landscape. As I mentioned already, no-one can properly read and profit from the breadth of papers that Nature currently publishes; the constraints of the abbreviated format of Nature papers and the deep specialisation that characterises modern science make this impossible. That is unfortunate for science since new discoveries are often made at the intersections between fields: Geim exemplifies this with his field-hopping success.

I would like to suggest therefore that Nature re-models itself as a platform for scientists to write a separate version of their ground-breaking research (published in long form elsewhere) that is intelligible to scientists outside their field. In doing so they would need to include the broad context of the work and to unshackle the text from excessive jargon. I can see the grimaces already but this would be a good exercise for authors — obliged to think large — and provide a valuable stimulus for interdisciplinary research to the scientific community. One might take this even further and include at least a lay summary that could be appreciated by a non-scientist readership. Nature might then rediscover its original mission.

I don’t for a moment suppose that this proposal will be taken seriously at NPG. But I thought it was worth thinking about.

*Thanks to Graham Steel for pointing out my omission of these titles in the original post.

**Geim’s and Novoselov’s paper was eventually accepted by Science. It currently has over 8000 citations according to Google Scholar.

My thanks to the various people who sent me a copy of Seglen’s paper which I could not access from my institution.

Initial thoughts… isn’t what you propose essentially done through citations, which of course have their own problems as a metric? Plenty of people read papers but won’t comment on them on journal websites, me included. I certainly can’t find the time to comment in any useful way on 1/10th of the papers I read. Doesn’t mean the others are not valuable or interesting, just that I wasn’t able, willing, or ready to comment on/about them at the time.

The publication of a short form with longer form elsewhere is also something that I believe PNAS now does, albeit with publication of both forms in PNAS.
There are also many other journals that operate the kind of model you state, including (but not restricted to) some new ones such as Cellular Logistics and Autophagy.

Personally I hate these pieces. I can’t see the point and have always declined to write them. I would also expect them to dilute one’s own citations – splitting them between the original paper and the “mini review”.

I’d also suggest that more prizes = dilution of their value….better that less emphasis is given to the journals in which the recipient publishes by the prize givers….

Well, it’s certainly true that none of these proposals has been worked out in great detail. This is more of a form of thinking out loud.

However, I see a more formal system of prizes as a way of drawing attention to work that is genuinely considered to be valued by the community. I don’t trust mere citation counting as the measure of value but am looking to additional measures (I think comments will come eventually though they have not worked in the past) that would have to include assessment from people who know the field and have read the paper.

As for whether more prizes would mean dilution I’d say not. At the moment, as I argue in the post, publication in a high IF journal is already considered a prize but the award mechanism is flawed — because it relies on too few people.

I’ve not seen the long/short form combinations that you mention — got any links? There are problems here since I can see reluctance on the part of authors. But if Nature retained its prestige that might be a way for ambitious scientists to really show their mettle — by demonstrating that they could show how their work was important not only in their own field, but potentially in others.

These are interesting developments. I agree many might see it as a tiresome overhead when they might rather be getting on with the next piece of research. But it does add real value to the scientific literature. That’s why I think there would be interest in having such digests/papers in a single journal and that, for the most exciting stuff, Nature would be the place to do it. To give them their due, NPG has launched a whole series of review journals in recent years which might be considered to be part of this trend but I wonder how inter-disciplinary they are considered to be.

“Are other Nature-branded journals going to introduce an open access option?

No — there are no plans to introduce an open access option on any other established Nature-branded title. In these cases, self-archiving provides an alternative solution, and NPG has a progressive self-archiving policy. NPG’s services and policies ensure that authors can fully comply with the public access requirements of major funding bodies worldwide — for more information visit http://www.sherpa.ac.uk/romeo/”

Thanks for the correction Graham – I’d overlooked those, even though Nature Communications was mentioned in Alison Mitchell’s comments. I’ll amend the post. It’s certainly true that NPG is offering some OA options (as I had said).

“I would like to suggest therefore that Nature re-models itself as a platform for scientists to write a separate version of their ground-breaking research (published in long form elsewhere) that is intelligible to scientists outside their field”

Isn’t this what Nature already does? (I’m arguing this point semi-seriously btw)

Take a look at most palaeontological articles in Nature – the print version of the article can only run to such a brief length (say, 3 pages max.) that there is barely anything in there; I believe this is commonly bemoaned within the palaeo community. The actual paper is often tucked away in upwards of 80 pages of electronic supplementary materials, which may or may not have been as thoroughly peer-reviewed.

I suggest your proposal has already happened, just rather unannounced.

I don’t agree. The advent of Supplementary Material (a boon or a curse, depending on your point of view) hasn’t resulted in a major shift in the way that the printed article is written. My proposal was aimed specifically to encourage authors to write with inter-disciplinarity in mind so that any scientist picking up Nature would be able to access (intellectually) most, if not all the articles.

To a degree, Nature does a pretty good job of this already via the News and Views commentaries (often written by paper reviewers I believe) which give a useful summary and contextualisation that is not in the paper. But the N&V pieces only cover a minority of the papers published in any one week. I guess I would be arguing for authors to do this themselves within their own Nature paper.

It’s definitely valuable to think about how best to harness post-publication peer-review, and free ourselves from the yoke of journal impact factors!

I wonder if we could implement some kind of voting system, like Reddit (with up and down votes) or Google +1, to identify papers worth awarding a prize or recognition. Obviously, any system would need careful consideration – for instance, I think it would be necessary to ensure one vote per person, and voting wouldn’t be anonymous (to prevent gaming the system).

Yes – but one would have to think very carefully about strategies to prevent people gaming the system. This is an underlying concern for anyone thinking about post-publication assessment. I’m not sufficiently web-savvy to come up with particular answers to that one. But Google appears to do a decent job of preventing web-sites from gaming their search results (though they’ve recently come under fire for breaking their own system by populating search results with hits from G+!).

“There is a burgeoning industry of post-publication assessment that can better harness the wisdom of the community and which, done well, could provide more accurate judgements than a handful of peer reviewers.”

One example is compchemhighlights.org which is an overlay journal in the area of theoretical chemistry. It uses blogger.com and was set up in a few hours (gathering the editorial board is another matter), so anyone can in principle do this for other areas.

This is a quote I wanted to highlight as well. I’m not yet convinced that shifting the system from ≥2 (and it is generally >2) pre-publication reviewers to n = ? post-publication reviewers adds clarity or ease of access to the “important” science we should all be reading.

The advent of PLoS One (in general, a good thing as it has forced others to rethink financial publication strategies) and similar innovations also requires excellent, user friendly filtering tools with high (universal?) uptake, to deal with accessing the enormous increase in published literature. This hasn’t yet happened, in my opinion.

Pre-publication peer review serves as an important filter (which I know you agree with, Stephen). Post-publication review won’t get round all the existing problems and may introduce others, e.g., cronyism (which may occur with pre-pub review anyway) or generating even more (soft) literature to work through. Two different bloggers may take very different views of the same article.

Don’t get me wrong, I think it’s great that you’re coming up with and highlighting alternatives here, getting the ball rolling. My main concern is legitimate filtering of the massive literature content, which I don’t think we’ve cracked yet. Sorry I’ve not come up with any constructive ideas here!

Thanks Joerg – didn’t know about that example. Seems to have been discontinued. But although it addresses the part about making the science more accessible, that scheme does not tackle the central problem of breaking the grip of the IF (which, to be clear, I do not expect publishers to solve by themselves.)

In my view possibly the most powerful way for scientists to change the system would be through boycott – not reviewing for the fashion journals, and not publishing in them – in short, rediscovering the value of your own field’s high-quality open-access specialist journals and supporting them. If the majority of people did it, then the obvious corollary would be that people would no longer be judged on their number of Nature/Science/et al papers they had, because no one of quality would be racking them up any more. Boycotts were tried before with limited success (I’m remembering the early days of PLoS). To get everyone on side, globally, would take a hell of a lot of activism.

Boycotts certainly add to the mix. The current Elsevier boycott has been tremendously powerful in raising awareness around this issue. But given the sheer scale of the scientific (nay, academic) community, I can’t see them being effective on their own. Which is why I wanted to explore the scope for re-organising our incentives. The Wellcome Trust has promised to get tough on its funded scientists who fail to publish via OA and I suspect that the RCs will fall into line. But repeatedly in conversation with my colleagues I come up against the “Ah, but…” line in reference to impact factors. There is a degree of enslavement to IFs that is going to be very difficult to shrug off.

But rather than relying on the inaccurate shorthand of impact factors, we need to reserve judgement until after publication.

Err, impact factors are post-publication judgements. Of course, the choice of a journal is pre-publication, and journals have reputations, but I’m not sure the reputation is primarily determined by impact factor.

I’m not convinced by post-publication assessment, largely because it’s unregulated, so we don’t know why a paper is popular: good papers can be overlooked and poor papers become popular depending on whether they are seen by the “right” people (i.e. people like PZed Myers). If you award prizes by a popular vote, I don’t see how you’re going to avoid this. If you do it with a panel, how is that different from pre-publication peer review?

I disagree, Bob. IFs may be assessed retrospectively but they are *not* post-publication assessments of a particular piece of submitted work. The authors are, in effect, getting the benefit of the previous work of other scientists.

The argument here is that we need mechanisms of assessment that look at how good or important or influential a piece of work is. I do agree with you that there are technical problems in devising a system that is accurate but I do see potential in being able to gather article-level metrics. Of course one would have to try to prevent gaming of the system, and to avoid simplistic dependencies on metrics or stats (or popularity contests). The involvement of a panel (e.g. to determine prize-giving) would be better than pre-publication peer-review because it could involve more people and it would have more information at its disposal.

Stephen – what I meant about IFs being post-publication is that they are calculated from post-publication statistics. I think it’s a mistake to confuse a statistic with how it is used: logically any statistic that’s used to measure how good a scientist is now will be pre-publication too, because it’s also going to borrow from the past, in the same way.

On prize panels, you’re asking for senior scientists to do even more. AFAIKS you either have a small number of prizes, which becomes horribly elitist and is asking to be gamed (you know what competition is like with the REF), or you give out so many prizes that it becomes impossible to administer. Can you really find a happy medium between these two?

Oh, and an additional thought (which raises a problem with all post-publication metrics). Because they are post-publication they will take time to accumulate. For junior scientists this will pressurise them to publish earlier: they can’t afford to wait to get a publication out because then they’ll be too late to get a prize that will mean they can get a post-doc. So, whilst the pressure to get good publications is the same, there is now an additional time pressure to get publications out early.

Hi Bob – good, challenging comments as ever! Maybe I expressed myself badly but I don’t think I’m confused. What I’m arguing is that the assignment of an IF (a post-pub statistic) for the journal to a particular paper on publication — by dint of acceptance of the manuscript — is a premature ‘prize’ for that particular piece of work. On average, a paper accepted in Nature is likely to be better than one accepted in a lower-ranking journal, say JBC. But the point is that such averages are no guidance to the particular merit of any given paper. The problem arises because such prizes are now taken too seriously. I think we probably both agree on that.

As for a solution, a way to break the hold of IFs over careers and funding decisions, well I don’t pretend to have all or indeed any of the answers. The suggestion of prizes was to draw attention to the fact that publication in Nature is already a kind of prize. I’m not sure how a prize system would work out in practice but it’s definitely worth considering. There are concerns about post-publication metrics but this is a fast moving area and Joerg above (who is a senior editor at Nature Materials) doesn’t regard them with a completely jaundiced eye. Neither do I.

I would envisage any prize system as offering a large number of prizes — after all, Nature awards several hundred per year. Yes, it is an overhead, though breaking the grip of impact factors should reduce the number of rejections and resubmissions in the system and so save some work. Perhaps the editorial boards of each journal could review their published output after 12 months and make an assessment of the top 100 or something like that — perhaps informed also by metrics and reader comments.

And, this article raises another potential problem with IFs: How journal rankings can suppress interdisciplinary research. The analysis has been done in the field of Business and Management; I honestly can’t say how it might transfer to the sciences but I had touched on the issue of fostering interdisciplinarity in my post.

I have just discovered the Frontiers series of journals and they seem to have something a little similar to what you propose.

The Frontiers Evaluation System uses analytic tools to “automatically track down every article’s views and downloads: during 3 months, the Frontiers platform analyzes the reading activity on an article based on the inputs of the entire Frontiers Community”. It also “provides the basis for the distillation of published articles in what is known as the Frontiers Tiering System”.

Under this “the top 10% articles in a tier are democratically selected for review as prestigious higher tier articles. The authors of the selected articles are therefore invited to revise their research article in a review style focused on the original discovery and with the support of the Frontiers peer review. Focused Reviews and Frontiers Commentaries aim at the broader audience of a field community”.

Byzantine is the word. Although their declared aim is to foster scholarly and public communication, it took me a while to figure out how the journal works! I think this is the best place to start.

But you’re right — this is an innovative form of community based, reasonably regulated post-publication review. The strength is the 2-tier nature of the process whereby 10% of the articles are selected (based on aggregated community judgements) to be re-worked into a fresh review-type article with the collaboration of the journal editors.

Great news re the Willetts announcement – though still much further to go.

One idea for how this could work is as follows: Since we exist in a REF-skewed environment for the moment at least, why not encourage HEFCE to recognise Journal Editorial Boards (wherein the ongoing prestige and review capability of the journal inheres) rather than the Journal titles? So for example, if the entire Board of a prestige journal migrates to an Open Access title, then HEFCE will award recognition to this new title, rather than the old, ‘closed’ one? The academics will follow suit, since this is where the current REF incentive system points them.

All it would take would be a clear policy statement from HEFCE, in line with the government’s commitment to Open Access – and the rest would follow.