Expensive research journal subscriptions could be on the way out, if the Wellcome Trust has its way. The moneybags UK research foundation has published a report favoring free, so-called open-access journals over those that charge a fee for access. The report reviewed the activities of research institutions that received funding from the trust, and found that it is cheaper, and thus a better use of grants, to place papers in freely available journals.

Meanwhile, the trust feels it's not getting enough bang for its bucks from hybrid publications. These hybrids charge scientists a decent wedge of cash to publish their work, charge readers for journal subscriptions, and in return make only those paid-for individual articles free to read. In other words, the foundation would rather scientists submit their work to open-access journals, which are cheaper than hybrids in terms of publication and subscription costs. "We find that hybrid open access continues to be significantly more expensive than fully open access journals, and that as a whole, the level of service provided by hybrid publishers is poor and is not delivering what we are paying for," the trust said.

Jason Schmitt was working at Atlantic Records when the online site Napster disrupted the music industry by making copyrighted songs freely available. Now, the communications and media researcher at Clarkson University in Potsdam, New York, is pushing for a similar disruption of academic publishing with Paywall, a documentary about the open-access movement that debuts today in a Washington, D.C., theater. "I don't think that it's right that for-profit publishers can make 35%–40% profit margins. The content is provided for them for free by academics," Schmitt, who produced the film, says.

The documentary explores the impact of Sci-Hub, a website that provides pirated versions of paywalled papers for free online, and interviews academics and publishing figures. Schmitt says many large publishers refused to go on camera—although representatives from Science and Nature did—and he is not impressed that several have begun publishing some open-access journals. "Elsevier is as much to open access as McDonald's fast food is to healthy," he says.

Scientific journals should start routinely publishing the text of peer reviews for each paper they accept, said attendees at a meeting last week of scientists, academic publishers, and funding organizations. But there was little consensus on whether reviewers should have to publicly sign their critiques, which traditionally are accessible only to editors and authors.

The meeting—hosted by the Howard Hughes Medical Institute (HHMI) here, and sponsored by HHMI; ASAPbio, a group that promotes the use of life sciences preprints; and the London-based Wellcome Trust—drew more than 100 participants interested in catalyzing efforts to improve the vetting of manuscripts and exploring ways to open up what many called an excessively opaque and slow system of peer review. The crowd heard presentations and held small group discussions on an array of issues. One hot topic: whether journals should publish the analyses of submitted papers written by peer reviewers.

Publishing the reviews would advance training and understanding about how the peer-review system works, many speakers argued. Some noted that the evaluations sometimes contain insights that can prompt scientists to think about their field in new ways. And the reviews can serve as models for early career researchers, demonstrating how to write thorough evaluations. "We saw huge benefits to [publishing reviews] that outweigh the risks," said Sue Biggins, a genetics researcher at the Fred Hutchinson Cancer Research Center in Seattle, Washington, summarizing one discussion.

But attendees also highlighted potential problems. For example, someone could cherry pick critical comments on clinical research studies that are involved in litigation or public controversy, potentially skewing perceptions of the studies. A possible solution? Scientists should work to "make the public understand that [peer review] is a fault-finding process and that criticism is part of and expected in that process," said Veronique Kiermer, executive editor of the PLOS suite of journals, based in San Francisco, California.

The real game changer (Score: 0) by Anonymous Coward on Saturday March 26 2016, @10:36PM

Will probably come when a lot of top shelf universities band together and throw their weight behind some new journals that aren't part of the Nature/Springer/Elsevier/etc oligopoly. Scientists and researchers have enough to worry about that most of them won't risk submitting what they consider groundbreaking work to an unproven online journal.

> risk submitting what they consider groundbreaking work to an unproven online journal.

What the fuck is this bullshit?

What risk? Just publish the shit everywhere. It's fucking information, not a stock investment. The journal(s) who pick up the breakthroughs become more popular. Make the publisher fuckers do some work. That's the real revolution: Stop doing all the work for the publishers and make them your bitches -- they need researchers, not the other way around.

Stop giving grants based on shit that's published and instead on shit that actually leads to successful reproduction and deployment in the field.

Everyone in academia is evaluated by their publications, and it often takes between one and five years before the community as a whole can really judge whether the work is actually groundbreaking. Therefore, people have moved to using the "impact factor" of the journals you publish in as a stand-in for the quality of the work. This will affect your tenure review, your grant applications, and even your ability to move from grad school to post-doc or from post-doc to faculty. The only people who can afford to disregard impact factor are the fully tenured senior faculty, and even they will only do it for their single-author work, so as not to injure their students' prospects.
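For what it's worth, the metric being leaned on here is simple arithmetic. A journal's impact factor for a given year is the citations received that year to items published in the previous two years, divided by the number of citable items published in those two years. A minimal sketch, with hypothetical numbers:

```python
def impact_factor(citations_to_prior_two_years, citable_items_prior_two_years):
    """Impact factor for year Y: citations received in Y to items
    published in Y-1 and Y-2, divided by the number of citable
    items published in Y-1 and Y-2."""
    return citations_to_prior_two_years / citable_items_prior_two_years

# Hypothetical journal: 600 citations in 2016 to its 2014-2015 papers,
# of which 200 were citable items.
print(impact_factor(600, 200))  # 3.0
```

Note that this is a property of the journal, not of any individual paper, which is exactly why it works so poorly as a proxy for the quality of one researcher's work.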

I've noticed that some people publish essentially the same work over and over again, with a differently worded summary and a rather modest extension with additional results. I think it's because a published paper is a meal ticket to the annual conference in their field, which is often held in a nice location like Provence. In that case, the researcher would want to aim for the most prestigious journal for the original work, but could then afford to spread the follow-on work across other journals, even considering some that have just launched and might not last.

Impact factor is complete bullshit. Sorry to be negative. The sheer volume of publications, the scarcity of competent reviewers, and the fact that there is MARKETING involved make my scientific spidey sense suspicious of much I read.

Is it irretrievable? Maybe not, because truly brilliant work will get noticed...though maybe later than society might need.

The problem is that really complicated subjects require a great deal of time in education and research to have a good sense of the boundaries of what is reasonable.

So I have a choice when I review - do I tear it to shreds? Then journals may not ask me to review. But then weak publications might get through. And I'm not getting paid...even though the journals are!!!!

Peer review only works well in one instance - when someone publishes something that can be independently tested. If something is published that cannot be challenged (no equipment, expensive materials, impossible to replicate, dogmatic theory), it increases the inertia of poor science. Especially when money is involved...

And then we get to the murky business of "science by press release" so institutions can give the impression they are doing something, which is cheaper than *actually* doing something.

yeah. the problem is when you're sitting on the hiring committee for a tenure-track position and you have to go through a hundred applications in a couple of hours, because you also have to teach and do your own research and you have a life and stuff. it's at that point that you need just one number per person, and reading one hundred sets of 10-20 papers is not an option. All you can really do is read the recommendation letters, look at the h-index, and check whether the applicant's peers thought they were worth publishing in a widely read journal (i.e. one with a high impact factor).
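The h-index mentioned above is another one-number summary: the largest h such that the researcher has at least h papers with at least h citations each. A minimal sketch of the computation, with made-up citation counts:

```python
def h_index(citation_counts):
    """Largest h such that at least h papers have
    at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still clears the bar
        else:
            break
    return h

# Hypothetical researcher with five papers:
print(h_index([10, 8, 5, 3, 1]))  # 3
```

Like impact factor, it compresses a whole career into one number, which is precisely why it's attractive to a committee with a hundred files and a couple of hours.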

That's why the funders need to make them do it... (Score: 0) by Anonymous Coward on Sunday March 27 2016, @02:51PM

From 2001: http://www.pdfernhout.net/open-letter-to-grantmakers-and-donors-on-copyright-policy.html [pdfernhout.net]

"Foundations, other grantmaking agencies handling public tax-exempt dollars, and charitable donors need to consider the implications for their grantmaking or donation policies if they use a now obsolete charitable model of subsidizing proprietary publishing and proprietary research. In order to improve the effectiveness and collaborativeness of the non-profit sector overall, it is suggested these grantmaking organizations and donors move to requiring grantees to make any resulting copyrighted digital materials freely available on the internet, including free licenses granting the right for others to make and redistribute new derivative works without further permission. It is also suggested patents resulting from charitably subsidized research also be made freely available for general use. The alternative of allowing charitable dollars to result in proprietary copyrights and proprietary patents is corrupting the non-profit sector as it results in a conflict of interest between a non-profit's primary mission of helping humanity through freely sharing knowledge (made possible at little cost by the internet) and a desire to maximize short term revenues through charging licensing fees for access to patents and copyrights. In essence, with the change of publishing and communication economics made possible by the wide spread use of the internet, tax-exempt non-profits have become, perhaps unwittingly, caught up in a new form of "self-dealing", and it is up to donors and grantmakers (and eventually lawmakers) to prevent this by requiring free licensing of results as a condition of their grants and donations."