Progress to open access has stalled. After two decades of trying, the proportion of born-free articles is stuck at 20%. Kicking off the Impact Blog’s Open Access Week coverage, Toby Green suggests the solution to our financially unsustainable scholarly publishing system may lie in rethinking traditional processes using internet-era norms. Embracing the principle of “fail fast”, all papers should first be published as freely available preprints to test whether they “succeed” or “fail”, with journals then competing to invite authors to publish. This would reduce the costs of the expensive, straining peer review system while ensuring all papers are available to all readers.

Let’s face it, progress to open access has stalled. No progress has been made over the past year – roughly 80% of all new articles published this year will be paywalled – same as last year. As Open Access Week dawns, let’s take a closer look at why.

No one has been idle these past 12 months. Librarians have been getting tougher with publishers, most notably in Germany and Sweden; publishers have innovated with Read and Publish offers; and, with the EU’s blessing, 13 funders are peddling Plan S. My feeling is that these efforts are the final throes of the tired “Green-Gold-Diamond” approach to open access which seeks a flip to a supply-side funding model from the traditional consumption-side model. A flip that’s flawed because all it does is transfer inequity of access to inequity of authoring; i.e. previously those without funds couldn’t read, now they won’t be able to publish. A flip that’s failed because after two decades of trying, we’re stuck at 20%.

In thinking about this problem, I have come to the conclusion that open access is the wrong target; it's beside the point. The crux of the matter is that scholarly publishing is unsustainable both financially and in terms of human effort. Let me count the ways.

The funds available to pay for publishing research are not growing fast enough to keep pace with the growth in research budgets and, consequently, the number of articles that emerge. The number of articles submitted for publication is growing ~6% per annum; the library and funder budgets that pay for publishing are not.

Publons’ report on peer review shows a system under severe strain: it’s taking longer to find reviewers and they are less likely to complete a review quickly. Peer review costs around US$1,500 per paper; that’s a lot of money if the result is rejection.

It’s kind of ironic that the weakest papers are costing the most to publish. Authors are encouraged to re-submit rejected papers to another journal, sometimes only to be rejected once more, before, finally, the paper finds a home in a title further down the food chain. Every submission and rejection costs money. Elsevier alone rejects over 4,000 papers every working day – that’s an estimated daily cost of US$100,000.

Authors are “double-dipping”: they increasingly post their articles as preprints to share their findings with peers fast, then submit to impact-factored journals to boost their career and grant-winning prospects. With changes between the former and the latter versions being small, we’re paying to publish the same content twice. This is not to cast blame. Authors need the internet-era speed of preprints to counter the analogue-era timescale of formal publishing. They need traditional, impact-factored journals to counter the exclusion of preprints from the reputation economy on which their careers depend.

Until the scholarly publications ecosystem is transformed in line with the digital age, I argue that open access can’t be afforded. So, how to transform it?

I think the answer lies in “digital transformation”, the rethinking of traditional processes using internet-era norms. An example is the process to apply for a British passport. Previously, application involved lots of form-filling, “peer review” in the form of a signature from another passport holder, and other user-unfriendly, bureaucratic, pen-pushing practices. Today’s online system is user-centric and would make any internet start-up proud. It’s undoubtedly a lot less costly for the UK authorities too.

So, inspired by this example, how could we rethink the process of scholarly publishing? One internet-era principle is “fail fast” – if your project fails, you stop and move on in another direction. What if all papers were first published on preprint servers to test whether they “succeed” or “fail”? If a paper succeeds, journal editors would compete to invite the author to publish in their journal, flipping the submission process. If it fails to garner interest, no matter: the paper remains on the preprint server (perhaps to gain attention later as a slow-burner) and the author moves on in another direction.

Let’s assume that half of all preprints succeed in gaining the attention of a journal editor and that, of these, half survive peer review – the saving on the current publishing system would be significant. Cutting even 15% off today’s cost of publishing journal articles would save US$1.5bn. Yet, in terms of getting papers in front of readers, nothing would have changed: all would be available online, just as they are today.
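The back-of-envelope arithmetic above can be sketched in a few lines. The $1,500 per-review figure, the 15% saving, and the US$1.5bn number come from the post; the annual submission volume is an assumption added purely for illustration.

```python
# Rough model of the proposal's savings. Figures marked ASSUMPTION are
# illustrative and do not come from the post itself.

COST_PER_REVIEW = 1_500          # US$ per peer-reviewed paper (from the post)
ANNUAL_SUBMISSIONS = 3_000_000   # ASSUMPTION: rough annual article submissions

# Under the proposal, only preprints that attract a journal editor
# (assumed 50%) enter peer review at all; the rest incur no review cost.
share_reviewed = 0.5
review_saving = ANNUAL_SUBMISSIONS * (1 - share_reviewed) * COST_PER_REVIEW

# The post pegs the overall saving at 15% of today's publishing costs and
# values that at US$1.5bn -- implying a total system cost of ~US$10bn.
claimed_saving = 1.5e9
implied_total_cost = claimed_saving / 0.15

print(f"Review-cost saving alone: ~US${review_saving / 1e9:.2f}bn")
print(f"Implied total system cost: ~US${implied_total_cost / 1e9:.0f}bn")
```

On these (assumed) numbers, halving the volume of papers entering peer review alone saves on the order of US$2bn a year, in the same ballpark as the post’s 15% figure.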

One catalyst is needed: the “reputation economy” (comprising tenure, promotion, and grant-giving committees) must value preprints just as it values articles published in impact-factored journals today. To help this process along, preprint servers need comment fields like those on TripAdvisor and Airbnb. Just as consumers trust other consumers when choosing where to eat and stay the night, readers will trust other readers when choosing what to read next. Perhaps reader comments could be codified and included in altmetric scores?

Nothing comes for free and this proposal implies another change: authors will have to do more to promote their papers. Funders are increasingly looking to measure the impact of the research they fund so this is something authors will have to do more of in any case. There is a danger that those who are already well-known will do better than newcomers (the Matthew Effect) but I would argue that a preprint system open to all offers newcomers a greater chance of breaking through than today’s closed world of peer-reviewed journals.

Once significant costs have been stripped out of the system, it should be possible for libraries and funders to fund both open preprint repositories and open access journals without the need for paywalls or play-walls. But until the costs come down, I fear we’ll remain stuck with the same frustrations we have today, only things will become more heated. Worst of all, I bet I’ll be writing that the number of articles born-free is still stuck at ~20% in 12 months’ time.

Note: This article gives the views of the author, and not the position of the LSE Impact Blog, nor of the London School of Economics. Please review our comments policy if you have any concerns on posting a comment below.

About the author

Toby Green has spent 35 years in scholarly publishing working with commercial, society, IGO and NGO organisations on all types of content: books, journals, databases, A&I services and encyclopedias – always with an eye on the reader experience. He writes this piece in a personal capacity and in the hope that it contributes to thinking about how to find a sustainable and effective way to make scholarship available to all. His ORCID iD is 0000-0002-9601-9130.


8 Comments

I find this a very refreshing view. Thanks for working it out in so much detail and for (re-)sharing it in this digestible format. Re-designing the process in an all-digital (born-digital) way is the way to go.
I would think one of the biggest challenges will be dealing with the legacy systems around research evaluation and tenure, as you also refer to. Do you have more thoughts on that?

Thanks, Yvonne, for your kind comments. As for dealing with legacy systems, I’m under no illusions: changing the way researchers are evaluated for promotion, grants, etc. is going to be challenging, but it is possible, if only juries and panels would realise it. I saw yesterday that researchers in Norway are still judged on whether they publish in ‘category 1’ or ‘category 2’ journals. I can understand that in an analogue age, measuring at the level of the journal was probably the only cost-effective method – but in today’s internet era, measuring at the level of an article is eminently possible and affordable. So, all it takes is for panels and juries to look at article-level ‘altmetric’ assessment tools, and I’m sure change could happen fast.

You’re hitting the nail on the head with the idea of digital transformation. Replicating structures that perhaps were sensible in the print environment, in the current digital environment, is making us miss many opportunities for improvement. From the structure of the documents (our attachment to the concept of “page” and static texts is increasingly indefensible), to the structure of scientific communication (the idea that only journals can “publish” or vet contents for quality).

To me it doesn’t make sense that a manuscript (another term that perhaps should be phased out) may have to sit on the author’s hard drive for months (sometimes years), until a couple of people, who might not even be that interested in the topic of the article, decide whether it is worthy of being published in Journal X. Nowadays anyone can publish a text and make it available to everyone else, so there’s no need to perpetuate the “bug” of limited dissemination inherent in the print environment as if it were a desirable feature.

As you say, succeed or fail fast, or in other words, let each paper stand or fall on its own merits. So yes, preprint first, always. Of course, it will be the responsibility of the readers to accept or disregard the validity of the claims made in the paper (but this was also true before). Also, the review culture should adapt to these changes. The people who read a paper should act as self-appointed reviewers and comment on what they think are the strong and the weak points of the paper. In my experience, this almost never happens at the moment.

Thanks for an interesting post. What do you think of the F1000 model (proprietary, I know) that allows authors to publish what are effectively preprints within 7 days, then have the external peer review done transparently, with peer reviewers’ reports posted alongside the original article and subsequent versions? The F1000 APC for a long article (over 2,500 words) is about US$1,000, with no cost for subsequent versions of the same paper.

That model is interesting but it locks the author into one channel. I would prefer to see preprints on subject-based servers and have journals compete to publish the best preprints. This would oblige journals to deliver better value.

[…] helpful to pause, look back, and take stock. Open Access week is a perfect time to do that. Like Toby A. Green, I wonder what is holding up efforts at a transition to more open scholarship. Sometimes, it feels […]

One possible (unintended) consequence of this approach is that the article is never “published” because the “article” will be a living breathing thing that is constantly evolving as input is added from other researchers. (Or it dies/hibernates in the blockchain until someone revives it.)

I predict journals of the future will only exist as these blockchain platforms and publishers will charge entrance fees to join them. The question then becomes: Who pays for the entrance fees?
