Monday, March 07, 2011

PLoS ONE, Open Access, and the Future of Scholarly Publishing

Open Access (OA) advocates argue that PLoS ONE is now the largest scholarly journal in the world. Its parent organisation — Public Library of Science (PLoS) — was co-founded in 2001 by Nobel Laureate Harold Varmus. What does the history of PLoS tell us about the development of PLoS ONE? What does the success of PLoS ONE tell us about OA? And what does the current rush by other publishers to clone PLoS ONE tell us about the future of scholarly communication?

Our story begins in 1998, in a coffee shop located on the corner of Cole and Parnassus in San Francisco. It was here, Harold Varmus reports, that the seeds of PLoS were sown, during a seminal conversation he had with colleague Patrick Brown. Only at that point did Varmus realise what a mess scholarly communication was in. Until then, he says, he had been “an innocent person who went along with the system as it existed”.

Enlightenment began when Brown pointed out to Varmus that when scientists publish their papers they routinely (and without payment) assign ownership in them to the publisher. Publishers then lock the papers behind a paywall and charge other researchers a toll (subscription) to read them, thereby restricting the number of potential readers.

Since scientists crave readers (and the consequent “impact”) above all else, Brown reminded Varmus, the current system is illogical, counterproductive, and unfair to the research community. While it may have been necessary to enter into this Faustian bargain with publishers in a print environment (since it was the only way to get published, and print inevitably restricts readership), Brown added, it is no longer necessary in an online world — where the only barriers to the free-flow of information are artificial ones.

Physicists, Brown said, have overcome this “access” problem by posting preprints of all their papers on a web-based server called arXiv. Created by Paul Ginsparg in 1991, arXiv allows physical scientists to ensure that their work is freely available to all. “Should not the biomedical sciences be doing something similar?” Brown asked Varmus.

It was doubtless no accident that Brown — who had previously worked with the Nobel Laureate — chose Varmus as his audience for a lecture on scholarly publishing: at the time Varmus was director of the National Institutes of Health (NIH) — the largest source of funding for medical research in the world. He was, therefore, ideally placed to spearhead the revolution that Brown believed was necessary.

Fortunately for the open access movement (as it later became known) Varmus immediately grasped the nature of the problem — aided perhaps by some residual Zen wisdom emanating from the walls of the coffee shop they were sitting in, which had once been the Tassajara Bakery. Varmus emerged from the café persuaded that it would be a good thing if publicly-funded research could be freed from the publishers’ digital padlocks. And he went straight back to the NIH to consult with colleagues to that end.

Again fortuitously, one of the first people Varmus broached the topic with was David Lipman — director of the NIH-based National Center for Biotechnology Information (NCBI). NCBI was home to the OA sequence database GenBank, and Lipman was an enthusiastic supporter of the notion that research should be freely available on the Web. By now Varmus’ conversion was complete.

This conversion was to see Varmus embark on a journey that would lead to the founding of a new publisher called Public Library of Science, the launch of two prestigious OA journals (PLoS Biology and PLoS Medicine), and subsequently to the creation of what OA advocates maintain is now the largest scholarly journal in the world — PLoS ONE.

As we shall see, Varmus’ journey was to prove no walk in the park, and some believe his project lost its bearings on the way. Rather than providing a solution, they argue, PLoS may have become part of the problem.

Certainly PLoS ONE has proved controversial. This became evident to me last year, when a researcher drew my attention to a row that had erupted over a paper the journal had published on “wind setdown”.

Even some of the journal’s own academic editors appeared to be of the view that the paper should not have been published (in its current form at least). As the row appeared to raise questions about PLoS ONE’s review process — and about PLoS ONE more broadly — I contacted PLoS ONE executive editor Damian Pattinson.

The response I got served only to pique my interest: While Pattinson invited me to send over a list of questions, I subsequently received an email from PLoS ONE publisher Peter Binfield informing me that it had been decided not to answer my questions after all.

14 comments:

1. If PLoS has not adequately defined and explained its "high technical standard" bar, can you point to any journal which has done so? What I mean is, PLoS ONE will publish any paper that meets the minimum quality for acceptance into the published record -- by design, this is the minimum bar that must also apply at every other journal. So it seems reasonable to ask where are the relevant explanations from those journals as well.

This is not a trivial thing to do, and I think you will find that nearly every journal relies on the old "definition of pornography" model -- "I know scientific rigor when I see it". This is not necessarily to defend the practice, merely to point out that it is part of the culture of science. Most working scientists would claim to be able to decide whether a paper was sufficiently rigorous for publication, but probably could not describe in detail how they go about making that decision.

2. Why is it widely considered somehow wrong for PLoS ONE to subsidize the flagship journals, but perfectly OK for NPG to maintain a stable of a couple dozen "second tier" (their term!) journals trading on the Nature name and making up revenue by higher-volume publishing?

No, I am not aware of any journal that adequately defines and explains its technical standard bar. And I agree that if one is asking this of PLoS ONE, then one should ask it of all journals.

It does indeed appear that all journals assume every researcher knows scientific rigour when they see it. The problem — aside from the fact that gut feeling (if that is what you are describing) is not very “scientific” — is that building a review system on that assumption makes it somewhat vulnerable to abuse.

I also agree with you that what is acceptable for subscription journals in terms of cross-subsidisation should not be deemed unacceptable for OA journals. The point about author-side publishing fees, of course, is that (in theory at least) they make pricing far more transparent. Or at least they invite questions about costs that tend not to be asked in a subscription environment. That, I believe, is a good thing.

Perhaps the key point here is that the debate about PLoS ONE (and about OA in general) has cast a spotlight on both pricing and peer review practices. In doing so, it has raised uncomfortable questions about both. That too, I believe, is a good thing.

It's so nice to see long-form work on such an important topic -- many thanks!

Much is made of the potential negative incentive of the APC, but how is this so different from the page charges levied by conventional publishers? For that matter, how is the expansion of PLoS ONE any different from the constant new launches (which picked up steam well before the debut of OA) from the incumbents in the market? Along the same lines, I don't think there is sufficient support for the statement, "In a subscription environment the ability to offer cascading peer review is appealing, but not compelling." I think a closer look at the past decades' worth of financial statements of STM publishers would belie that.

In future, I would be tempted to de-emphasize the statements about OA's essential "badness" from the folks who are directly competing with it. Nobody likes to see a comfortable living threatened, but the self-dealing of the comments from Elsevier and Nature is breathtaking in its transparency.

Thanks for your comment. I have heard many different things about page charges, but I am not aware of any study that has been done to establish how common a practice it is, or what a typical charge might be. Without that information I don’t feel qualified to comment. Can you point me to some solid information on this?

I do think that there is a difference between the constant launches of new journals by incumbents and the expansion of PLoS ONE — for the kind of reasons detailed in the piece I wrote.

I don’t fully understand your comment about cascading peer review. What would the past decades’ worth of financial statements from STM publishers tell us specifically about cascading peer review?

I have to admit to being a little intrigued by your advice that I de-emphasise statements from subscription publishers. Does the same apply to statements from OA publishers and OA advocates (who surely have a vested interest in promoting a certain viewpoint, and so are presumably equally susceptible to making self-dealing comments)? Is the self-dealing of an OA publisher less breathtaking than the self-dealing of a subscription publisher? I don’t know, but I suspect not. Indeed, when writing about scholarly publishing, whose statements could one cite without knowing that they are likely to be self-dealing — be they publishers (of any flavour), librarians, or researchers? If I were to exclude citing anyone with a vested interest, would that not leave me in a position where I was not able to cite anyone?

Might it also be possible for someone with a vested interest in promoting a certain viewpoint to nevertheless be right?

I would be interested to know exactly where you sit in this debate Ed. Are you a publisher? Are you a researcher, a librarian perhaps?

Ah, I was sorta hoping no one would point to that Bjoern. The thing is, I'm not sure that work stands up, and I really really need to write a public update, the gist of which is this:

1. the NIH figures were dropped into a presentation before Congress without explanation, and I haven't been able to find any solid basis for 'em.

2. On a listserv I read, I pointed to the same post and someone (I won't quote or say who, having not asked permission) took me to task, saying that in 40 years as a publisher they had hardly ever seen page charges levied. Now, that hasn't been my experience, but the same person was able to point to online policies at two or three very large publishers which indicated that page charges are indeed rare. It could well be that I've happened to publish in some of the few journals that levy such charges. Given the stated policies -- those publishers between them account for maybe 3/4 of all journals -- it's hard to see how the NIH figures make sense.

So, something is fishy about the numbers I came up with, and the best way to deal with it would be to make a list of publisher policies (in the hope that they have blanket policies so there will be no need to list policies journal by journal), and to get in touch with someone at the NIH to track down the basis of their figures.

I haven't had time to do any of that, so I haven't written the necessary update. But you have reminded me that I really must put a warning on that post...

Many thanks for putting this together -- it's an interesting read. I found myself with two comments:

1) You hit on a very interesting question with your comment on 'fast turnaround' on p23. Why does PLoS ONE strive for the fastest possible decision? Authors are clearly attracted to ever faster decision times and the potential sugar rush of quick acceptance, but it's the journal's responsibility to balance speed against quality, and to ensure that they really are publishing valid results. Now that P1 is attracting a vast number of submissions, they could safely gain a lot from taking a little longer over peer review - after all, their peer review process is the key service they're providing to authors whilst getting their reviewers for free, so why not do the best possible job of it?

2) You say on p 39 that "But as we've seen peer review is deeply flawed and getting worse", but don't provide any footnote or reference to where this conclusion comes from. I see statements like this all over the place, but apart from a number of well-repeated anecdotes about instances where peer review has failed, I've yet to see anything that suggests that the system as a whole is either flawed or collapsing. Could you perhaps elaborate? For what it's worth, I'm the ME of a large ecology/evolution journal (which does mean I get to see about 2000 decisions per year), and despite scepticism on my part the whole thing seems to work very well.

I agree with you on your first point. We should perhaps note, however, that speed has become an issue because of the “publish-or-perish” pressure on researchers.

Scientists have told me a number of times that in many circumstances (when a tenure meeting is approaching, for instance) speed is more important than outlet. In other words, they have to have a paper published by a certain date. The question then is which publisher is more likely to achieve that for them.

On your second point I will reply with some quotes.

The first is from former editor of The New England Journal of Medicine Marcia Angell. Writing in The New York Review of Books in 2009, Angell said, “It is simply no longer possible to believe much of the clinical research that is published, or to rely on the judgment of trusted physicians or authoritative medical guidelines. I take no pleasure in this conclusion, which I reached slowly and reluctantly over my two decades as an editor of The New England Journal of Medicine.”

The second is from former editor of the BMJ Richard Smith. Citing Angell’s remark last year on the BMJ blog, Smith said, “Sadly I followed the same path and spelt out my disillusionment in my ‘j’accuse’ book The Trouble with Medical Journals. I wrote it in 2004, and since then my pessimism has deepened.”

The third is also from Smith. Writing in The Journal of Participatory Medicine in 2009, Smith said: “The Sixth International Congress of Peer Review in Biomedical Publication was held in September 2009, and dozens of scientific studies were presented on the subject. The First Congress was in Chicago in 1989, when many of the presentations were opinion rather than new data — but that was about the beginning of studies of peer review. Until then, it was unstudied despite being at the core of how science is conducted. Sadly, in my experience, most scientific editors know little about the now large body of evidence on peer review. So paradoxically, the process at the core of science is based on faith rather than experimental evidence.”

Amongst the studies listed in that body of evidence, by the way, is a 2009 study by Michael Callaham, which concludes that the quality of the reviews written by individual reviewers tends to deteriorate over time.

As Science News put it, reviewers “don’t improve with experience. Actually, they get demonstrably worse. What best distinguishes reviewers is merely how quickly their performance falls.”

Thanks for taking the time to respond. I agree that journals do compete for authors, and speed is a factor in where authors send papers. I did take a closer look across the PLoS ONE site and could not find any information on how long they give reviewers. Their total time from submission to first decision is actually the same as ours (33 days, see here and here), but I have a vague memory that they only give reviewers a week (I would be happy to be corrected on this point). Anyway, with the exception of their 'accelerating peer review' tagline, there's not much talk of speed on the PLoS ONE site, so maybe they have de-emphasised this.

The sources on peer review you pointed to were an interesting read -- I'd seen some of the Smith quotes before but not the underlying material. The view does seem gloomy from the medical journals, but the situation there is clouded by the sums of money at stake and the close but difficult relationship between business and academia. I do concede that the peer review process is based on the assumption that all parties are acting with integrity and honesty in the interest of advancing knowledge, so fields where that might not be the case may not be the best place to begin an objective assessment. Sharp reviewers may catch signs that the authors have misrepresented their case or their data, but to be completely sure that everyone is acting in good faith would need a much more intrusive and investigation-like review system. For example, how can you actually be sure that the presented data were ever collected? Checking things like this goes far beyond the scope of peer review.

Callaham's data on reviewers' performance declining over time are very interesting, and this is something I'm keen to look for in our data. However, as long as young and enthusiastic reviewers are constantly entering the system there's no reason to think that the quality of the process as a whole is declining.

Sorry, no firm numbers, but limited anecdata: my wife completed her post-doc about ten years back, and publishing most of her papers cost her (or her P.I. in this case) a non-negligible sum. Nobody thought this was out of the ordinary, and I was led to believe that it was pretty common in the biomedical sciences (perhaps it's different in other fields?).

My comment about cascading peer-review is merely to point out that the larger publishing groups (particularly our friends in the Netherlands) are already making out like bandits through owning great swaths of journals, from the very prestigious to the completely unknown. The system of kicking papers "down" until they find their publishable level was well in place before OA entered the scene and would still be there if every PLoS title ceased publication tomorrow. The issue is not really one of OA providing too much supply; it is a matter of basic economic incentives.

Yes, it is indeed possible (and even somewhat likely) that someone with a vested interest could have something worthwhile to say. I only ask that such statements (particularly those seeking to sway the decisions of official government panels) be treated with the requisite skepticism. As a librarian, my own views are also undoubtedly blinkered in some ways (which is precisely why dialog is so useful).

Richard Poynder has written another timely and important eye-opener about Open Access. Although (as usual!) I disagree with some of the points Richard makes in his paper, I think it is again a welcome cautionary piece from this astute observer and chronicler of OA developments across the years.

(1) Richard is probably right that PLOS ONE is over-charging and under-reviewing (and over-hyping).

(2) It is not at all clear, however, that the solution is to deposit everything instead as unrefereed preprints in an IR and then wait for the better stuff to be picked up by an "overlay journal". (I actually think that's utter nonsense.)

(3) The frequently mooted notion (of Richard Smith and many others) of postpublication "peer review" is not much better, but it is like a kind of "evolutionarily UNstable strategy" that could be dipped into experimentally to test what scholarly quality, sustainability, and scalability it would yield -- until (as I would predict) the consequences become evident enough to induce everyone to draw back.

(4) Although there is no doubt that Harold Varmus's stature and advocacy have had an enormous positive influence on the growth of OA, in my opinion Richard is attributing far too much prescience to Harold's original 1999 E-biomed proposal. [See my 1999 criticisms. Although I was still foolishly flirting with central deposit at the time (and had not yet realized that mandates would be required to get authors to deposit at all), I think I picked out the points that eventually led to incoherence; and, no, PLOS was not on the horizon at that time (even BMC didn't exist).]

(5) Also, of course, I think Richard gives the Scholarly Scullery way too much weight (though Richard does rightly state that he has no illusions about those chefs' motivation -- just as he stresses that he has no doubts about PLOS's sincerity).

(6) Richard's article may do a little short-term harm to OA, but not a lot. It is more likely to do some good.

(7) I wish, of course, that Richard had mentioned the alternative that I think is the optimal one (and that I think will still prevail), namely, that self-archiving the refereed final draft of all journal articles (green OA) will be mandated by all universities and funders, eventually causing subscription cancellations, driving down costs to just those of peer review, and forcing journals to convert to institutional payment for individual outgoing paper publication instead of for incoming bulk subscription. The protection against the temptation to "dumb down" peer review to make more money is also simple and obvious: no-fault refereeing charges.

(8) Richard replied that the reason he did not dwell on Green OA, which he too favors, is that he thinks Green OA progress is still too slow (I agree!) and that it's important to point out that the fault in the system is at the publisher end -- whether non-OA publisher or OA. I continue to think the fault is at the researcher end, and will be remedied by Green OA self-archiving by researchers, and Green OA self-archiving mandates by research institutions and funders.

I agree wholeheartedly with your concept of re-examining OA in the context of PLOS. In my opinion, PLOS is something of a special case; it doesn't really relate to the other part of OA (institutional repositories), which is often included in the OA concept.

PLOS is a scientific journal with a different objective, i.e. to ensure publication of quality material without a use fee. Fine.

Many of the comments raise the well-rehearsed arguments about quality, refereeing, impact factors etc., but, IMO, ignore the elephant in the room: getting published in a quality journal in the first place. I'm not referring to PLOS per se, but to the actuality faced by many researchers: getting published where your peers will see it. In many cases, this means getting published in the 'top' journal in your field, and this means (and I admit I have no specific data to hand to back this up) a journal published by a commercial publishing house.

It is widely perceived that such publishers have quotas for the number of papers published in a given period. There may not be much data on this, but any conversation with a researcher on the topic of getting published includes this belief.

Anecdotally, a recent conversation with a researcher on getting published elicited the information that a new twist has appeared: rejection because the proposed paper 'doesn't fit with our present publishing policy' -- this in a journal which had already published work by the researcher. It is interpreted as: you've already been published by us, wait in the queue to be published again. My interlocutor says it is a widely recognised phenomenon amongst researchers.

Maybe slightly off topic, but since OA isn't by any means universal, there is a long way to go to get all the good work published.