Time to review peer review

Standard lore has it that scientific results are supposed to be published in academic journals before they are even worth discussing. These publications use a "peer-review" system to determine the validity of a paper. If it's not valid in the eyes of the relevant expert community, it won't be published. It's supposed to be a way we can tell good science from bad: with the community as our judge.

That makes some sense but the ideal isn't quite a reality (at least not in my field, theoretical physics and astronomy). We are not really trusting the community; we are trusting one or two selected members of the community known as "the referees". We are trusting the editor of the journal to select referees who are competent and free from competing interests. And we are supposed to put our trust in the process despite the referees being completely anonymous - neither the author nor the reader knows who's involved.

Even if a referee believes the paper is worthy of publication, he or she can demand the author make changes. The author must respond by revising the paper to the referee's satisfaction. The paper bounces back and forth in a slow-motion game of tennis. If the author believes the referee is playing unfairly, any appeals must be made to the journal's editor. But editors rarely undermine a referee that they selected in the first place.

Assuming the work does eventually get published, the author's original intentions are hopelessly mixed up with the biases of anonymous third parties. Genuine, honest scientific disagreement is obscured by a process which is invisible to the reader.

I am not arguing to remove peer review entirely from the scientific publishing process, but it does need a radical overhaul, and now is the right time to start. Peer review offered a quality-control filter in an age when each printed page cost a significant amount of money. It's not totally clear that the quality-control filter was ever particularly effective, but at least it gave journals a way to cut down the volume of print.

These days most physicists download papers from arxiv.org, a site which hosts papers regardless of their peer-review status. We skim through the new additions to this site pretty much every day, making our own judgements or talking to our colleagues about whether each paper is any good. Peer-review selection isn't a practical priority for a website like arxiv.org, because there is little cost associated with letting dross rot quietly in a forgotten corner of the site. Under a digital publication model, the real value that peer review could bring is expert opinion and debate; but at the moment, the opinion is hidden away or muddled up because we're stuck with the old-fashioned filtration model.

Most of the papers on arxiv.org end up in a peer-reviewed journal as well, but that's about the prestige on the authors' CVs: for practical, result-sharing purposes we really couldn't care less. Asking an expert colleague is far more meaningful, because you get a real nuanced opinion of the work's validity. The service that journals offer has become an expensive career-points scheme. For each paper in Astrophysical Journal, score one point. Physical Review, score two points. Nature, score five points.

Right now, there's a growing momentum behind the open-access movement - a laudable attempt to stop publishers levying absurd charges for downloading the articles we write. With this model, publishers collect a one-off publication fee from the author, or their employer, allowing the final content to be provided free-of-charge to readers.

But the mainstream argument for open access tacitly assumes that peer review in its current form needs to be maintained at all costs. If we're going to re-examine how to distribute results from science, why accept peer review before we start? Some open access journals like PLoS One published by the Public Library of Science already have a discussion thread attached to each paper. This is surely a more valuable way to allow objections or qualifications to be voiced by the expert community. It is undoubtedly more open. Admittedly the discussion threads are not terribly active right now, but it needn't be so.

Imagine a future where the role of a journal editor is to curate a comment stream for each paper; where the content of the paper is the preserve of the authors, but is followed by short responses from named referees, opening a discussion which anyone can contribute to. Everyone could access one or more expert views on the new work; that's a luxury currently only available to those inside privileged institutions.

Yes, it might be hard but striving to achieve a grown-up discussion would be a vastly more profitable enterprise than the endless, invisible, biased game of paper-tennis.

As a non-scientist who enjoys reading science papers, I will very much miss having access to academic papers once I complete my graduate degree. It seems contrary to science itself for things to be this way.

robiD
on June 22, 2012 11:28 PM

I remember how Professor Giuliano Preparata used to refer to Nature magazine as the "Pravda" (the regime's newspaper in the old USSR).

AnotherPostDoc
on June 23, 2012 6:38 AM

Good article -- many good points and ideas. But aren't there drawbacks to consider with the alternative schemes as well? In an arxiv-type repository system, for instance, how does one know that a paper has received no comments/downloads because: (a) it genuinely has no new or interesting ideas to offer, and therefore it has been properly "left to rot", or (b) the experts/crowds have simply failed to notice it, and therefore it has never even been read thoroughly? Is there a way to ensure that everything in such a huge repository gets read at least once by an expert who's "invested" in the process, to keep good ideas from slipping through the cracks?

But I'm just playing devil's advocate here, and am of course saying all this as someone who's never used arxiv for sharing my own work. As an engineering postdoc, I tend to sympathize with the author here on the drawbacks of the journal peer review system. Here's my biggest pet peeve: people who expect dozens of their submissions to be reviewed promptly and thoroughly, but then refuse to spare the time to do even a single review for someone else!

True story: my advisor is an AE (associate editor) for another journal and asked one young assistant professor outside our school to do reviews on several (at least five) separate occasions, all of which he declined. And yet, that same assistant professor has submitted and published in the same journal many times over the same span. It would be nice if he returned the courtesy to other assistant professors who have taken time from their frantic schedules to review his work!

The problem is partly alluded to by the article: getting into journals has become almost like a game with a "prestige points" system. All the incentive lies in saying that you got your paper accepted into Journal X, so why bother doing anything else for Journal X that requires extra time/energy outside of publishing your own work? Even if you do a review, why bother putting in any real effort to help improve someone else's work, especially if they're your "competition" (i.e. for grant money or academic jobs)?

Perhaps research communities need to proceed a la Amazon's Mechanical Turk and put stronger emphasis on "meta-reviews" (i.e. peer reviews of reviewers and their reviews of actual work) to help stabilize the whole process?

How refreshing! We at WebmedCentral have come up with a similar platform for biomedical scientists. You post your article and peers can come and then post their reviews. If authors then want to publish it elsewhere to get their academic points, they can do so. Initial response from the community has been encouraging. Time will tell if biomedical scientists can embrace such a model.

Regards,
Kamal Mahawar
CEO WebmedCentral

David Waltner-Toews
on June 24, 2012 6:50 PM

Excellent article! The issues Pontzen raises are not peculiar to astrophysics. Over a lifetime of publishing in epidemiology, veterinary science, public health, infectious disease and ecology journals, I can tell you that the problems of peer review are even more acute in interdisciplinary fields and those with small scholarly communities, where the notion that reviewers and authors do not know each other is clearly fiction. A 2011 workshop at the Calouste Gulbenkian Foundation (Lisbon, 18-20 May 2011) on "Science in a Digital Society" covered much of this, with the added challenges of a multiplication of on-line scholarly venues (which essentially fragment the notion of a peer community), and of peer reviewing digital data - for instance enhanced digital photographs (especially in molecular fields) that claim to present "reality" in a scientifically verifiable way.

Dave
on June 25, 2012 5:20 AM

Oooooh, sounds like Andy's had a paper rejected.

Yvan Dutil
on June 25, 2012 2:35 PM

I don't know why, but my post did not get through the first time.

My own experience as a reviewer is very recent (less than 2 years). Nevertheless, I have reviewed 8 papers, rejected 3 and asked for significant modifications to 3 others.

Those rejected were awful, below the level you would expect from an undergraduate student! Those requiring major modifications had major weaknesses that needed to be corrected.

Without peer review, I wonder what would have happened.

L. Bamforth
on June 25, 2012 5:16 PM

Interesting article and I agree with the substance of it, however there would be issues with the new system.
Currently, the conditionally accepted paper becomes the first draft, with the final paper requiring extra effort from the submitter to fix holes or generally strengthen the paper. If you just had a comment system, then perhaps much of this work would not be done, resulting in weaker papers overall. I think your proposed system would be much better than the current one, but it would need careful planning to overcome such potential weaknesses. Perhaps a system to openly request rework/justification, with the paper able to be updated in a version control system? More like Wikipedia than a blog, for example. It would also require a robust commenting system (university-sponsored, perhaps, with full legal names for commenters), because it would surely be hijacked by those wanting to discredit reputable science where it hurts them financially.
I especially agree with AGreenhill: everyone should be able to openly access papers; we should reject this elitist knowledge hoarding, which allows the Daily Mails of this world to print whatever drivel they like as the accessible science for the masses. Of course this will be fiercely resisted by those making a fortune from the current system.

stringph
on June 26, 2012 3:19 PM

The replacement of print with electronic media is a red herring - just as in the past the supply of paper was limited, now the supply of human attention is. Now more than ever we need responsible guidance as to what is or isn't worth spending our time reading - i.e. filtration.

I think editors are reluctant to do anything about bad refereeing simply because they are absurdly overloaded, not because of any underlying prejudice. If you have (say) 20 papers a day to either assign for review or reject out of hand you'll need some pretty good reason to challenge the referee's advice.

Editors, as public, eminent figures who choose and reward referees with power, should be held ultimately responsible when bad refereeing occurs. I don't know how best to enforce this responsibility but at least it's not impossible to do so, since the editor's name and institution are public.

With a relatively slight change of behaviour from journals and editors, arbitrating and deciding for themselves rather than automatically deferring to the power of referees (as I understand was the practice some decades ago), the usefulness of refereeing could be dramatically improved.

As for the great benefits to be got from publicizing expert opinions on recent papers, I suspect the benefits would quickly evaporate once the experts realized exactly how many opinions a day they would be asked to write and how many people they could offend by doing so.

You may be lucky enough to be able to ask world authority Professor X down the corridor what he thinks of the occasional preprint, but perhaps Prof X wouldn't give his opinion quite so readily if thousands of people were asking him about tens of papers a week. There's only a finite supply of real expertise available to spread over a huge number of papers...

I agree that it's time to review peer review. The APEER survey will be looking at peer review from a gender perspective. Who actually gets the privilege (or burden) of becoming a peer reviewer in the first place? What criteria are used to select suitable peers? Despite having the right technical knowledge, it seems many researchers, particularly women, are excluded and are never really considered as peers.

Rachel
on July 5, 2012 5:54 AM

Would it not be possible to keep the peer review process yet make it transparent by publishing the names of the referees? If they are working with integrity, then I'm not sure why they have to be anonymous.

I agree with everything, but what is good for science is not necessarily the same as what is good for scientists. In the suggested model, we do not need journals, not even ones like PLoS One. We could just put our work online in any repository, even on our own websites, and Google would find it. This might be fine for Nobel Laureates or tenured professors, but the rest of us have to be able to authoritatively document that we are actually doing some meaningful science.

It is true that reviewers' demands and conditions often make a paper much worse, forcing it to lose its poignancy. See, for example:

The current system does not weed out irresponsible and unprofessional reviewers. One way to fix the situation would be to stop using anonymous reviewers. People tend to be more responsible, professional and polite when they use their real names. After judging whether the paper is scientifically sound, reviewers' additional comments would be "suggestions for improvement" and not "conditions for publication".