Researchers boycott publisher; will they embrace instant publishing?

Elsevier, a traditional publisher, has been targeted with a boycott. Can a new …

Many scientists were miffed by the introduction of the Research Works Act, which would roll back the US government's open access policy for research it funds. Some of that annoyance was directed toward the commercial publishers that were supporting the bill. That, combined with a series of grievances about the pricing policies of one publisher, Elsevier, has now led a number of scientists to start a boycott—they won't publish in or review for journals from that publisher.

At the moment, the site where the academics are organizing the boycott is down, but its list of signatories was heavily biased towards math and the physical sciences.

This wasn't the only news from the publishing world, however. The Faculty of 1000 is a site that organizes what's been termed "post-publication peer review." Instead of reviewing publications prior to their being published, the Faculty of 1000 comments on papers in their areas of research after they've been published, adding an additional layer of quality and sanity checking (something that, unfortunately, is often needed).

Now, Faculty of 1000 is launching F1000 Research, which is a different twist on academic publishing. When a manuscript is submitted to F1000R, an editor will provide a basic sanity check and, if it passes, the paper will immediately be published under a Creative Commons license. Only after it's online will the journal arrange for reviewers to perform peer review. Reviewers' scope will be limited to the scientific validity of the results and won't include an evaluation of the paper's significance. Other researchers will be able to attach comments to the paper that will act a bit like informal reviews. F1000R will also host any large datasets associated with the publications.

This approach could run into trouble, given some of F1000R's other goals for the service. For example, they're apparently willing to accept preliminary work, negative results, and thought experiments. Will all of these end up being reviewed? Does anyone even think having their preliminary work formally reviewed is a good idea? There's time to sort out details like that before the service launches later this year, but details like this could be essential for determining how it ends up being used (if it's used at all) by the research community.

A number of people commenting online have compared this service to the arXiv, which hosts works-in-progress for the physics and math communities. Some of the works on the archive are clearly from the fringes of science, and a number of them don't survive peer review. Nevertheless, the arXiv has enhanced the communication among scientists in the fields that use it. If F1000R manages to provide a similar platform for other fields, there are definitely researchers interested in embracing it.

38 Reader Comments

thecostofknowledge.com is up as far as I can see. 2066 signatories. It's not surprising that this is biased towards math and physics, since the idea of the boycott originated in a blog post by a great mathematician less than 2 weeks ago: http://gowers.wordpress.com/2012/01/21/ ... -downfall/

It sounds like they're setting F1000R up to get flooded with too much stuff for everything to get reviewed.

It's really pretty easy for an editor to just say "no, completely unreasonable" and move on. The trickier part is "This is slightly outside my field. The techniques *might* be OK, but I am unfamiliar with them, and I need to spend a week thinking about it."

I'm by no means an expert on the peer review and publishing process, but given some of the articles here on Ars talking about how difficult it is to get something discredited once it's been published (such as HIV deniers, climate change deniers, etc.), it sounds almost like the fiasco that is the US PTO and trying to invalidate a bogus patent once it's been granted patent status, in that it's relatively easy to say "no go," but once it's been published / patented, it's practically impossible -- in the literal sense of both those words -- to get it discredited / invalidated.

With this system, will that translate to the potential for more pseudo-science to make its way into the "gospel" of "published works?" If so, what will this mean, in terms of damage control, when people attempt to cite that flawed or fraudulent work? Again, given that a lot of damage can already be done just by one piece of bogus work getting published, what will happen if it's suddenly possible to have a lot published, and we now have the nutjobs citing published sources, however flimsy, nonsensical, or outright wrong they are, and to not be lying or misrepresenting the facts when they do it?

F1000R is an interesting development, but it is focused on biology and medicine. Mathematics has the arXiv, and economics has blogs and online working papers that are very important for the development of the research. When will we see similar developments in other sciences?

I'm by no means an expert on the peer review and publishing process, but given some of the articles here on Ars talking about how difficult it is to get something discredited once it's been published (such as HIV deniers, climate change deniers, etc.), it sounds almost like the fiasco that is the US PTO and trying to invalidate a bogus patent once it's been granted patent status, in that it's relatively easy to say "no go," but once it's been published / patented, it's practically impossible -- in the literal sense of both those words -- to get it discredited / invalidated.

Within science, this is not really a problem. Scientists are highly skeptical of many papers, and carefully look at the work and the research group. Keep in mind that arXiv, a very similar site, has been around for 10+ years and has not caused these kinds of issues.

Outside of science... I'm not sure if it makes the situation better, worse, or just more of the same. Any crackpot can "publish" on a website and attract a community of true-believers.

Patents and publications are completely different. Patents give you a limited, legal monopoly, while publications give you ... well, nothing really legal. You get citations from other publications, some "street cred" in your field, and some further proof that grant agencies should fund you.

It sounds like they're setting F1000R up to get flooded with too much stuff for everything to get reviewed.

I actually predict the opposite. I think this will be a tremendous flop.

There is a great deal of concern in biology/medicine about being "scooped" by another research group. I don't think there are many experimental biology/medicine groups that will be willing to divulge their research before it's ready for a journal submission.

On top of this, the idea that you'd want to subject a paper to a double review (at F1000 and then at an actual journal) makes my head spin. Getting a paper through peer review can be a terrible headache. Why do it twice: once for play, and then for reals?

I'm by no means an expert on the peer review and publishing process, but given some of the articles here on Ars talking about how difficult it is to get something discredited once it's been published (such as HIV deniers, climate change deniers, etc.), it sounds almost like the fiasco that is the US PTO and trying to invalidate a bogus patent once it's been granted patent status, in that it's relatively easy to say "no go," but once it's been published / patented, it's practically impossible -- in the literal sense of both those words -- to get it discredited / invalidated.

Within science, this is not really a problem. Scientists are highly skeptical of many papers, and carefully look at the work and the research group. Keep in mind that arXiv, a very similar site, has been around for 10+ years and has not caused these kinds of issues.

Outside of science... I'm not sure if it makes the situation better, worse, or just more of the same. Any crackpot can "publish" on a website and attract a community of true-believers.

Patents and publications are completely different. Patents give you a limited, legal monopoly, while publications give you ... well, nothing really legal. You get citations from other publications, some "street cred" in your field, and some further proof that grant agencies should fund you.

EDIT: removed an extra "also" in the second sentence.

My comparison to patents was only insofar as getting the work formally discredited. It's extremely difficult to invalidate a patent once it's been recognized by the patent office. Likewise, other articles on Ars have mentioned that in order to get rid of a published work completely, you have to have the consent of the author. You can continue to publish works discrediting the bogus one, but the bogus one continues to exist as a published work, with all the credibility implied therein.

It was implied that the publishing process itself gave work weight and credibility, to the point that without a published work to support a claim, you "have no proof," as it were. Once there's a published work, you can claim it as a proof, even if it's bogus work and behaves much like this gem from XKCD: http://xkcd.com/978/

My comparison to patents was only insofar as getting the work formally discredited. It's extremely difficult to invalidate a patent once it's been recognized by the patent office. Likewise, other articles on Ars have mentioned that in order to get rid of a published work completely, you have to have the consent of the author. You can continue to publish works discrediting the bogus one, but the bogus one continues to exist as a published work, with all the credibility implied therein.

It was implied that the publishing process itself gave work weight and credibility, to the point that without a published work to support a claim, you "have no proof," as it were. Once there's a published work, you can claim it as a proof, even if it's bogus work and behaves much like this gem from XKCD: http://xkcd.com/978/

Sure, if you have no published work, you have no proof. Having published work does not give you proof, only the possibility of proof. Published papers are not taken as gospel.

I really like the idea of open access science. But as the conversation above suggests, there are a lot of kinks to work out. Is the current model like democracy? Works terribly but it's the best we have? I don't know, but this conversation is important to have.

When a manuscript is submitted to F1000R, an editor will provide a basic sanity check

Uh huh. Given the sheer breadth of research topics, this is an idea that sounds good but is unworkable in practice, and therefore useless.

Plus, so many papers are written by non-native English speakers, with tons of little mistakes and mis-explanations throughout that make review difficult unless one is just glancing through. Unless the editor's 'basic sanity check' involves actually reading the paper, the scientific validity of such papers will be hard to judge. And if the editor is reading the paper, that's review.

Quote:

Reviewers' scope will be limited to the scientific validity of the results and won't include an evaluation of the paper's significance.

Thereby encouraging reproduction, which is a good thing. Only problem is students finding and citing recent, potentially lower quality, papers and overlooking the body of prior work that exists. But the editor will catch all of that right? Never mind the extra work it'll create for reviewers in a pre-publication process.

Quote:

Other researchers will be able to attach comments to the paper that will act a bit like informal reviews. F1000R will also host any large datasets associated with the publications.

And who will moderate these comments? This sounds like a never-ending peer-review process, just post- instead of pre-publication. Comments on sites with a relatively well-educated readership can get out of hand (see: Ars Technica). Senior scientists turn Letters/Opinions pages into bitchy, ego-driven arguments about little things. The let's-think-of-more-ways-to-make-money business model of major publishers is annoying, but this proposed alternative sounds like an even bigger headache for researchers. It sounds like a system that will give even more responsibility than normal to editors, thereby encouraging them to arbitrarily reject papers because 'it's too much work'.

I really like the sound of F1000R and I hope it does well. I think most researchers agree that open-access publication is preferable, all else being equal, and it sounds like this has a shot at tackling the major problems of publication bias into the bargain (negative results going AWOL, successful replications never seeing the light of day, etc.).

That said, I'm very pessimistic. I suspect that as long as researchers' career prospects are dependent on the number of peer-reviewed publications they can get into high-impact journals and the number of citations their work gets, we won't see a large-scale move towards these sorts of services.

Edit: Other commenters seem to imply that this is not an issue that affects all fields - interesting, if true. I can only comment on health-related research.

I'm by no means an expert on the peer review and publishing process, but given some of the articles here on Ars talking about how difficult it is to get something discredited once it's been published (such as HIV deniers, climate change deniers, etc.),

This has nothing to do with peer review and everything to do with people, generally non-specialists, clinging to something that's been discredited because they want to, or need to, believe in it.

The IETF provides a good model for a reasonable process for publishing the results of scientific research. Before the IETF, the practice was to charge for computer technology standards documents at the same kind of rate that is commonly used today for academic papers. Something more or less equivalent to an RFC typically cost something like $30 for a hard copy that was protected by copyright and not supposed to be duplicated with a photocopier. This kind of pricing scheme was used by standards bodies whether they were public like ISO or private like the IEEE.

The IETF broke this insanity. Pretty much universally today, standards related to computer technology are provided online at no cost. The IETF manages the servers that provide access to its publications. It also grades RFCs to distinguish those that are purely informational from those that are at different stages of the standardization process.

It is able to do this work at low cost because all of the work of developing standards is contributed by those who have an interest in creating them. None of the private companies involved have any problem turning over the intellectual property rights to the standards documents to the IETF. There is no reason why all of science could not use this kind of model.

As someone who does a lot of lit searches, I am hugely in favor of this idea. Pay walls can be obscure and mystifying, especially when you work in a cross-disciplinary field (I work in Public Health) and are dipping into medical, biological, psychological, and social topics.

That said, these need to be considered white papers until reviewed. I would argue a large portion of peer-reviewed papers use suspect methods and clever reporting, so that although the data as reported is not wrong per se, it is misleading.

But giving researchers the ability to put their findings out for the world to see without pay walls will help a large number of other researchers. I don't think the media or lay public will be much affected either way. When the media finds it perfectly OK to question solid theories with hundreds of years of research and dozens or more papers adding to their validity, what difference will calling it a white paper or a published paper make? Those who know how to read these papers will probably be able to figure out what to find credible on their own.

Edit: As for negative results, those can be highly useful findings. Learning approaches to a hypothesis that did not work are often just as useful as those that do.

With this system, will that translate to the potential for more pseudo-science to make its way into the "gospel" of "published works?"

Here's your problem, right there. If you take published works to be "gospel", then it's only a matter of time before you end up believing things which aren't true. Peer review isn't magic, it's just the best thing we have.

I think it's great that they are boycotting their publishers if they are unhappy. If a small group of intellectuals can't successfully boycott a service that caters to only a small group of intellectuals, then anyone talking about wider boycotts for other, more prolific industries might as well just go home now.

Most states have deceptive trade practices statutes which include broad restrictions on "deceptive" or "unfair" trade practices. These statutes often include prohibitions against unconscionable practices.

It's time for a comprehensive deceptive practices act to prohibit ANY practices whose primary EFFECT is to deceive, such as labeling in legislative proposals. (I specifically do not think any sort of INTENT should be required -- it's the EFFECT itself I would regulate -- and across all endeavors.)

To label that clause as Private Sector Research because somebody "entered into an arrangement to make a value-added contribution, including peer review or editing" has a clear DECEPTIVE effect. If I make a value-added contribution, such as a postage stamp or a network socket, any publicly funded research is now Private Sector Research. Cool. (For me.)

Even more deceptive, though relatively minor, is that I only have to agree to provide something -- I don't even have to do it!

If we continue the precedent of the last few hundred years of outlawing specific practices, and requiring intent, we'll need HUNDREDS of lawmakers, working FULL TIME, just to keep up with nefarious types dreaming up new ways to deceive. (Not to mention regulating themselves.)

My comparison to patents was only insofar as getting the work formally discredited. It's extremely difficult to invalidate a patent once it's been recognized by the patent office. Likewise, other articles on Ars have mentioned that in order to get rid of a published work completely, you have to have the consent of the author. You can continue to publish works discrediting the bogus one, but the bogus one continues to exist as a published work, with all the credibility implied therein.

It was implied that the publishing process itself gave work weight and credibility, to the point that without a published work to support a claim, you "have no proof," as it were. Once there's a published work, you can claim it as a proof, even if it's bogus work and behaves much like this gem from XKCD: http://xkcd.com/978/

Sure, if you have no published work, you have no proof. Having published work does not give you proof, only the possibility of proof. Published papers are not taken as gospel.

Media attention gave me the opposite impression regarding the retraction process (I think it was that specific study, actually, that I read about before on Ars), and as to how published work is used, a simple Google search will turn up plenty that's blindly quoted and misused to further political aims.

That being said, thank you very much for helping clear up my own misconceptions regarding the process and its effects. "They" always say you learn something new every day, but honestly, it happens so rarely, I'm glad this could be one of them.

grimlog wrote:

sporkwitch wrote:

I'm by no means an expert on the peer review and publishing process, but given some of the articles here on Ars talking about how difficult it is to get something discredited once it's been published (such as HIV deniers, climate change deniers, etc.),

This has nothing to do with peer-review and everything to do with people, generally non-specialists, clinging to something that's been discredited because they want to, need to to believe in it.

Of that I'm well aware, lol; I merely point out that the work remains, and thus is still pointed to.

bonewah wrote:

sporkwitch wrote:

With this system, will that translate to the potential for more pseudo-science to make its way into the "gospel" of "published works?"

Here's your problem, right there. If you take published works to be "gospel", then it's only a matter of time before you end up believing things which aren't true. Peer review isn't magic, it's just the best thing we have.

Didn't mean to say it was, hence the quotes; more that unless it _is_ published, it's taken as fantasy, whereas once published, it lends an air of credibility. That's all. If nothing is published, there's nothing to point to, but once something _is_ published, it provides something to point to, much like religious texts; whether the speaker understands those texts or not doesn't change their presence and use.

Within science, this is not really a problem. Scientists are highly skeptical of many papers, and carefully look at the work and the research group. Keep in mind that arXiv, a very similar site, has been around for 10+ years and has not caused these kinds of issues.

(Where the 'issues' are bad papers getting the credibility boost of publication and then being cited by the non-scientist public to reinforce difficult-to-kill misconceptions or quackery)

arXiv may sound like a similar site, but it is different in that (most) topics in physics and mathematics are one or more of the following: uncontroversial, inscrutable, irrelevant for daily life. I say this in the most complimentary way, as a former physicist. With a few exceptions, arXiv's papers don't attract a lot of media attention or crank citations; it is mostly a tool for people in-field who are well equipped for evaluating the credibility of the paper and the research team.

I do think your concern is a real risk for biology/medical papers, or papers concerning politically sensitive topics. We'll see how it works.

I'm a big believer in peer review, because it is the only way, as a layman, I can assess the validity of research I don't understand. But I also know that when I was working in science, speed and accessibility were more of a concern in my daily work. In short, peer review probably benefits non-experts more than experts.

Rebecca Lawrence (F1000 Research): Thank you for some great comments - we are aware there are a lot of details we need to work out, some of which we will need to test over the coming months until we get it right. That is in fact why we announced our plans prior to launching, so we can engage the relevant communities in just such debate as we work through the issues and identify the best way to deal with them. I would like to respond to a few of the comments made above.

@sporkwitch: You say 'With this system, will that translate to the potential for more pseudo-science to make its way into the "gospel" of "published works?"' I should point out that one of the key advantages of the fact that the peer review of the article isn't arbitrarily stopped is that at any point when someone realises there is something seriously flawed about an article, they can say so against that article, and it will be immediately obvious to anyone who views the article that someone has just spotted an issue with it. This contrasts with now, where such criticism is usually buried in another paper published in another journal much later. So this should in fact reduce the chances of 'pseudoscience' permanently becoming 'gospel'.

Additionally, our intentions are to make it very obvious immediately if an article is awaiting review (i.e. not yet peer reviewed) and to show whether it has received positive or negative comments so you can see at a glance the overall view of the paper without having to delve through all the detailed comments.

@darkvine: You mention getting scooped and publishing twice. I should emphasise that this IS your publication - you don't go and publish it again. You submit it here when you are ready to submit your article (just as you would to any other journal) and immediately you get priority on your findings - this means there is no time for anyone to scoop you, as the priority is immediate. This is where we differ from the likes of arXiv, Nature Precedings (and in fact our own F1000 Posters).

@grimlog: You mention the initial sanity review by the editors - this really is just a quick check to make sure it isn't complete rubbish and it is readable; I am not suggesting it is a check for scientific accuracy - that is what the referees are for, and these would of course then be picked for being knowledgeable in the specific topic of the paper in question. In terms of commenting, we will of course need to monitor this and check it doesn't get out of hand. At the moment, the challenge will be to encourage commenting - it is all out in the open, and so it won't do anyone's career any good being inappropriately disparaging to others. Similarly, submitting a very poor paper as an author which then becomes openly criticised wouldn't do your career any good either.

This is one of the most thoughtful discussion threads I've read anywhere online for a while.

On topic: I can see a lot of potential with F1000. Pros:

- I like the idea of anyone being able to comment on an article; I imagine it would be like commenting on books and other products at Amazon, with allowance made for people to vote on the helpfulness of comments.

- I like the idea of instant access to a work as soon as it is posted, but, as an improvement on arXiv, with a track to having it peer reviewed.

- In some ways I like the public access to the work as soon as it is published. It could help the peer review process if any interested person in the field can make critical comments. Of course, it also almost invites your possibly prominent, often hostile competitors to be the quickest to point out the flaws in a paper. Frankly, this is not always a good thing--sometimes ideas need time to be explored without the naysayers smothering them.

What I would improve:

- I would make the line of demarcation between reviewed and non-reviewed works stronger than it appears to be in the article and discussion. I would even go as far as making a different journal title for the two. I know when I see an arXiv or tech report citation in a paper, I appreciate the ability to quickly know that the cited paper has not been through peer review, recognizing that many such papers are of very high quality.

As a regular reader of/contributor to the arXiv I can tell you that it is really close to ideal right now:

1. Since 2004 it has operated under an endorsement policy, meaning you can submit something only if you've submitted before, or if you are endorsed by someone who has submitted before. This keeps the worst crackpots out. Most people who complain about this have no business submitting to the arXiv in the first place. They can go to viXra.org.

2. Most journals allow you to submit your paper to the arXiv and publish with them, and only a handful (Nature group, Science) want you to wait with the arXiv submission until the paper has appeared in their journals. Still, you can put the published paper on the arXiv immediately after (it's about the news factor for them). In any case, you have the ability to put your research where others can find it free of charge. On the arXiv you update the journal ref field once it's published, and people know your paper has been peer-reviewed (and get the DOI). AFAIK this does not violate the RWA.

3. (Almost) nobody puts "works in progress" on the arXiv, at least not intentionally. The damage to your reputation if you make a silly mistake is just too great. It also means that the quality is typically pretty high, even if not all papers make it through the peer review process in one piece. Priority is also established, because you can make the date of the arXiv submission the same as the date of the journal submission (which is often what counts). In practice, the arXiv submission date establishes the priority, even if you do not submit to a journal immediately (updating your arXiv version generates a new version and will not "rewrite history").

4. Knowing that you are reading things on the arXiv reinforces that you should be careful taking things at face value. And that's good advice whatever the circumstances!

For the life of me, I don't know why more fields don't set up an arXiv version of their own. It's a no-brainer. No need for any time-consuming post-publication review or screening.

I don't believe that F1000R is the clear alternative to publishers like Elsevier. In the computer science community we have the USENIX association, which organizes competitive conferences, gets peer reviewers, and publishes proceedings --- but makes all of the proceedings (including video of talks) freely available.

I don't think that a wild west, out there for all to see approach is the right solution in all cases. Publishers like USENIX fill the same niche as traditional publishers while also offering a review process that people are familiar with and have (some) confidence in.

With this system, will that translate to the potential for more pseudo-science to make its way into the "gospel" of "published works?"

Here's your problem, right there. If you take published works to be "gospel", then it's only a matter of time before you end up believing things which aren't true. Peer review isn't magic, it's just the best thing we have.

Bizzactly. We shouldn't be taking published papers to be The One True Truth. But by the same token, we cannot possibly be expected to run every experiment ourselves or analyse each paper in depth. Peer review provides the only extant means of which I am aware that allows us to say "the chances that this is valid are quite high." If the topic is important for any reason, then you need to actually take the thing off the mental rack and do a deep dive into it.

But overall, I think we require something like peer review as a means of telling us "what is true" without having to research it all ourselves. If my doctor hands me a bottle of pills, I will go read the summaries of a few studies related to those pills. I probably don't have time to do a month's worth of deep analysis on the papers in question, and I am willing to bet I don't have the equipment, funding, or legal clearance required to reproduce the experiments.

As you say; peer review is imperfect, but it is by far the best option we have.

You say 'With this system, will that translate to the potential for more pseudo-science to make its way into the "gospel" of "published works?"' I should point out that one of the key advantages of the fact that the peer review of the article isn't arbitrarily stopped is that at any point when someone realises there is something seriously flawed about an article, they can say so against that article, and it will be immediately obvious to anyone who views the article that someone has just spotted an issue with it. This contrasts with now, where such criticism is usually buried in another paper published in another journal much later. So this should in fact reduce the chances of 'pseudoscience' permanently becoming 'gospel'.

Additionally, our intentions are to make it very obvious immediately if an article is awaiting review (i.e. not yet peer reviewed) and to show whether it has received positive or negative comments so you can see at a glance the overall view of the paper without having to delve through all the detailed comments.

Hi Rebecca!

I am very interested in all of this. I think it has some profound implications for non-scientists such as myself. I have recently given a great deal of thought to "how exactly should I go about determining what is most likely to be 'true' and what is not?"

The following is a snippet from a conversation that occurred in a previous thread.

I’d like to add a bit just to make sure my position here is crystal clear, and won’t be taken out of context. Hopefully this will satisfy the semantic and pedantic arguments.

Goal: To acquire knowledge beyond my areas of expertise.

Foundational assumption: I cannot learn all things about all subjects which are relevant to me through direct experimentation.

Choice: Give up on my goal, or determine a reliable source of knowledge. (I choose finding a reliable source of knowledge.)

<research into various knowledge dissemination mechanisms occurs here>

Conclusion: Because of its inbuilt correction mechanisms, peer review presents a signal-to-noise ratio so significantly above all other candidate knowledge dissemination systems analyzed that it renders those other systems functionally invalid.

Outcome: Given the above, treat peer-reviewed papers as logically equivalent to evidence*.

Addendum: The above is not a perfect method for determining Truth. Some Truth will slip through the cracks, and some UnTruth will be mistaken for Truth. Unfortunately, at this time no individual or organization has been able to present a realistic** alternative methodology for Understanding Things that offers even a comparable signal-to-noise ratio, let alone a superior one.

* Peer reviewed papers are not themselves evidence. Instead, they contain evidence. But for nearly all purposes I have, they’ll do. If issues arise wherein I need to question a peer-reviewed paper for any reason, I then review it in depth, check references and do hard analysis on said paper (and any associated items as required.)

** See foundational assumption.

I remain open to alternatives if/when/where they present themselves. If you have a superior methodology for Understanding Things with a superior signal-to-noise ratio, please share. Self-improvement in this area is one of my goals.

What you are proposing that F1000R's new site could do (assuming it meets the expectations you have outlined here) does indeed look like it may be a "leg up" on the traditional peer review process. We can't know for sure until after the site has been up for a while, but it shows promise.

I think it's long overdue for scientific publications to leave the dark age of closed, anonymous review. Let's hope this initiative has legs, especially with the consolidation of journal publishers into a virtual monopoly charging outrageous prices that hurt both the scientific community and, even more so, the public.

It sounds like they're setting F1000R up to get flooded with too much stuff for everything to get reviewed.

Ah, well the part that was not mentioned in this article is that F1000R is a for-profit, privately owned journal that will charge an author fee for every single thing you want to post. How much do you think scientists are willing to pay to "publish" incomplete data sets or "thought experiments"?

And who will moderate these comments? This sounds like a never-ending peer-review process, just post- instead of pre-publication. Comments on sites with a relatively well-educated readership can still get out of hand (see: Ars Technica), and senior scientists already turn Letters/Opinions pages into bitchy, ego-driven arguments about little things.

I wonder the same thing. Feuds already infect the peer review system (especially in specialties that don't contain many researchers), but it seems easier for them to get out of hand in this model. It gives a rival more power to tarnish the reputation of your work.

It's always a difficult balance. You need the ability to weed out the true crazies, but also the ability to prevent personal gripes from clouding judgement. The last thing we need is more gyres.

A publication that relies on post-publication peer review also seems to me that it might be susceptible to astroturfing. Get enough people willing to "pay to post a comment" (or enough sockpuppet accounts; how does the posting vetting work?) on something and suddenly you have bull**** pseudoscience in the literature backing up AIDS denialism, cell phones causing cancer, intelligent design, vaccines causing autism, etc.

arXiv would seem to have it right in that regard; you have to be invited in, "vouched for." In theory it would be possible to sneak a Trojan horse into such a system (who could then "vouch for" others, thus disintegrating everything), but so far that doesn't seem to have been much of a problem.

As a regular reader of/contributor to the arXiv, I can tell you that it is really close to ideal right now.

Math and, I guess, physics get away with this because there are some very reputable journals that are completely open access. That doesn't help much with older papers, though this probably isn't as much of an issue in other fields as it is in math (I've cited work from as far back as 1955). There's really no good reason for new papers not to be open access, or at least mirrored on arXiv.

The more interesting thing is what F1000R is trying with peer review, though I can't imagine it will work. I guess the ideal would be something like the exchange between Ed Nelson and Terry Tao last year on the inconsistency of the Peano axioms: Nelson posted a paper with a surprising result, it got a lot of attention, Tao guessed where there might be an error, and Nelson confirmed the guess and withdrew the paper. Most papers are not that potentially interesting. I cannot get Tao to even glance at the twelve pages of integrals in my "On a tedious calculation that yields a slight improvement in a constant, correcting a largely forgotten 10-year-old paper on an unsurprising result". (Look for it on arXiv!)

The F1000R thing is certainly interesting, but I'd like some more coverage of the Elsevier boycott. When one of the world's largest publishers has achieved such a state of evil overlordship that scientists are defecting en masse, that sounds like a real story. And it's not even necessarily about the quality of the job they do so much as how much money they charge, which is a great angle to focus on. There's a lot in there about the fight by scientists to make science more accessible.