Any tool that improves the peer review process for published papers is a boon to science. It will be up to the community to utilize these tools and to police the "peers" submitting comments.

However, this will do nothing to solve the problem of pseudo-scientific journals posing as legitimate purveyors of scientific research. The junk published by these non-reputable journals and magazines does much to muddy the waters and spread misinformation to the public.

It still falls to scientists and skeptics to point out bad science and counter pseudo-science in every case where the experimental evidence does not support the findings.

While a lot of that first half made little sense to my brain, I believe I caught the gist.

The last couple of paragraphs resonated, though. I don't think it's ever good to take the stance of "you're wrong and I'm right." I'm all for peer review.

I witnessed firsthand how other designers and engineers can tear apart the flaws of a design. If you're not prepared to take criticism that might ultimately help your idea, then you're doomed to failure anyway. It of course helps if the people criticizing actually know their shit.

I think discussion/review sites like that are an excellent step forward for science.

The current "fund-research-review-publish" process is thoroughly broken. Professors are no longer scientists but managers and fundraisers. Postdocs and graduate students are forced to care more about their funding prospects than the quality of their research. And high-impact journals have a business model that is money-grabbing, elitist, and failing in its core duties of rigorous review and dissemination of results.

Jeff Atwood has already solved the Q&A problem, and is actively trying to solve the public discourse problem. Can a clever mix of the two bring solutions to the publication problem?

Good point. However, in this world (edit: of academic review), I think that you can't have 100% anonymity. You can hide a user's identity from other users, but I think the whole effort would become derailed if you don't perform verification at least on account creation. The incentive for sock puppetry in academia is huge.

Online peer review of scientific publications could be very useful, but it won't address some of the larger problems. The low quality and huge quantity of papers means few will be read by anyone but the authors and friends. Many useful papers never see publication if they don't support a reviewer's conclusions or if they don't cite important previous work by the reviewer and friends. Not obvious how one would cope with irresponsible reviewers in an open setting - just view any public comment stream.

And the cost of avoiding sock puppetry, namely removing anonymity, is low compared to the rest of the internet since scientists already publish their names on their papers.

In my experience review is "double-blind" so that the reviewers don't know who the authors are, and the authors don't know who the reviewers are. The journal will have an editor or two shepherding the process, and they of course know who everyone is.

It is disheartening to get a review back that goes something like this: "Although the idea of an extreme-temperature refrigerant has merit, the authors should look into the possibility of adding pigment, reducing the viscosity, replacing the glycerol-based substrate with a water-based one, and abandoning the thermal resistance so that the substance can be used as ink."

Edit: Apparently double-blind review is less common in other fields than I thought.

There is a real problem with research being locked up behind a paywall. For one thing, even if you are a member of a technical organization specific to your training, going one step outside your area will cost you $20 to $30 a paper, or a few hundred dollars to join yet another organization. When papers were actually on paper, this made sense. But when the journal is online, the reviewers are not paid, and the author pays to get "published", just where is the money going?

To prepare hydrogen peroxide at a concentration in this range to an accuracy of 0.01% means not only accurate pipets and technique but scrupulous exclusion of all transition metals. Getting buffers to picomolar iron doesn't sound like fun.

And the cost of avoiding sock puppetry, namely removing anonymity, is low compared to the rest of the internet since scientists already publish their names on their papers.

I think you need to preserve user-to-user anonymity (at the user's option) if you're opening up for discussion or assigning reviewers from a pool. That's wholly different from putting the authors' names on a paper.

Suppose you're a grad student and you call out a senior tenured professor on shoddy work. Now suppose you apply for a position in the department that senior tenured professor works in. Maybe they will applaud your courage in calling them out, but they may also go the other way and sabotage your application.

"The authors answered this in a follow-up paper, in which they claim that changing the hydrogen peroxide concentration by a tiny amount—a change of 50nM in a solution with a concentration of 120μM—is sufficient to change the way the gold particles form. But in the original paper, they show the same transition taking place in a linear fashion over a concentration range of 120μM, a range that is 2400 times larger."
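The "2400 times larger" figure in that quote is just a unit conversion; a quick back-of-the-envelope sketch (mine, not from the article) confirms the arithmetic:

```python
# Sanity check of the quoted "2400 times larger" figure:
# a 50 nM change compared against a 120 uM concentration range.
change_nM = 50               # claimed concentration change, in nanomolar
range_uM = 120               # concentration range, in micromolar

range_nM = range_uM * 1000   # 1 uM = 1000 nM
ratio = range_nM / change_nM

print(ratio)  # 2400.0 -- the range is indeed 2400 times the claimed change
```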

My field is small enough that you wouldn't need to add the names of the researchers to the manuscript for me to know at least what group was doing the work. And there are precious few enough reviewers that I can guess who my reviewers are just by their comments.

It's the worst, though, when a reviewer who is supposed to be anonymous makes themselves known by asking why you didn't mention certain little-known and unhelpful articles, all written by the same researcher. No lie, I once had to track down a poster from a conference to please a reviewer.

This is a thin line. First, of course you will receive reviews with hints to papers you might assume were written by the reviewer. Actually, if your suspicion is true, it also supports the hypothesis that the reviewer (hopefully) really is an expert on the concrete topic you are publishing about, something that is desirable for both the reviewed and the reviewer, for the sake of sound mutual understanding and a discussion truly at the edge of the state of the art. I have received reviews of some of our work by non-experts, and they were close to useless but still hurt the overall outcome.

Second, reviewers are asked not only to criticize papers but also to give advice and hints to the authors; e.g., it's not enough to tell them that their related-work section has flaws, one should in that case also give hints on where to find further information. And the authors might have missed relevant work, which too often results in wasted scientific effort. For example, in my field (computer science) I have seen numerous reinventions of the wheel (so to speak), because people tend to be agnostic about related work. They engineer a new system and are so proud of it because it took so much effort, but by ignoring existing approaches it is just a replication of them, often packed into its own idiosyncratic terminology. The novelty is nonexistent. Comparison to existing work is nonexistent. What's the value of that?

Hence, in my own experience (on both sides of the process), I have to say it is all too easy to criticize the other party first, when I should have checked whether the problem is on my side. And I can be glad if criticism I receive is backed up by useful help (in this case, pointers to material I missed). And, by the way, sometimes as a reviewer one really doesn't want to do the work for the authors, e.g., by collecting all the relevant work they should have considered, but instead just points them to an article one has published, because that article already contains the important work to be taken into account.

The reviewing process is "economically" mis-designed. If, but only if, a paper I review will be published, then I should take the time to go through it carefully, down to the level of grammatical errors in some cases, and certainly checking the equations. This is a full day of work for ONE paper. (In my field, papers run 20 to 40 pages.) Furthermore, at that level of detail there are always questions that should be the subject of a dialog with the authors.

But of course if I tried to do that with every paper I review, it would be a complete waste of my time, because most of them won't get published in their current form. It would also give the editors many excuses to reject the paper. Plus, I just don't have enough time even if I wanted to. (In my field we don't have stables of grad students we can treat as unpaid labor for this, either.) Not to mention that I don't have the expertise to critically analyze ALL parts of the more interesting papers.

The solution I ostensibly use is to try to guess whether the paper will be accepted and adjust my review effort accordingly. What really happens depends on far more human factors: how interesting I find the paper, how late at night it is and how late my review is, and so forth.

Many of the discussions of peer review seem completely oblivious to the effort involved in doing a really thorough review.

Nice article. I thought you brought out most of the important issues. However, the headline completely misrepresents the content. A more accurate title would have been something like "Online peer review: why we need it and why it might not work".

I completely agree. Perhaps it is time the scientific community instituted a "tick of approval" system for proper research, in a similar way to the Heart Foundation approving foods that meet certain standards.

A little add-on regarding one of the main sources of the peer-review problems discussed here (by the way, there is a lively discussion within the sciences on peer review and its alternatives):

Maybe it is not so much the idea or process of peer review in itself that is flawed, but simply the ill-defined goal of publishing as much as possible. Imho, Reegor (in his/her comment above) pointed this out without quite saying it: there is not enough time for a good review. Why? Because science has adopted quality measures defined for other, economic purposes: quantity matters more than quality. This is true not only of publications; it can now be found in almost every aspect of academic life. For example, in many resource-distribution schemes, i.e., the money flows within a university, the number of successful students in a course or the number of people graduating determines the money an organizational unit (like a department, or further down an individual group) receives. How bad an idea that is when it comes to quality is obvious.

The same is true wherever scientific reputation is measured by the number of publications. This problem is now being recognized more and more, and countermeasures pop up here and there. The German science foundation, e.g., asks researchers to name only up to three of their own most relevant publications in a project proposal, not a list of 100+ papers. But this is just one step in the right direction. As long as we still have colleagues with publication lists of several hundred articles (accumulated, on average, over 20 or so years), one still has to compete with them in some aspects of academic life. A scientific code of ethics exists, i.e., one should only be a co-author if one really put in some work. The problem is that these ethical codes are ignored too often.

By the way, my personal opinion: the ongoing installation and spread of management procedures adopted from industry, or aimed at industrial processes, is one of the major problems here, as could be seen clearly in Germany during the forced transition from the Diplom to bachelor's/master's programs. Academia eaten alive by bureaucracy and its minions... *sigh*

What tyrant of an editor expects referees to go through a first-round submission for grammatical errors? I will point out abuses of the language when they affect the meaning or make something completely nonsensical, but it is usually sufficient to tell the authors to hire a copyeditor.

On the second or third round of review, you are dealing with something much closer to the final text. At that point (after the obviously-not-getting-published papers are screened out), you can flag phrases that bother you. Just remember your comparative advantage: you are a subject-matter expert, and it is a misuse of your time to be an English teacher. Besides, real English teachers can use the money from freelance copyediting.

Really good article and interesting discussion in the comments. I think we all pretty much agree that an ecosystem that includes high quality post-publication peer-review would be strictly better than the status quo.

The challenge is to build the necessary critical mass, the right culture, and an incentive structure that encourages reviews (these last two reasons are why we think DOI assignment for reviews endorsed by the community is such an exciting release).

Always looking for new suggestions and ideas so please sign up and send me your feedback!

-- Andrew

But this suggests at least a bit that we can deal with this structural problem via a technological solution. Like I said before, one core of the problem is that the success measure is quantity, not quality, and hence an inflation of publications (and conferences and journals, etc.). If we answer this by bringing in technology that helps us optimize the review process further, I am afraid the time resources initially freed by such a system would instantly be eaten up by an increase in review work.

Given a multi-user approach like many suggest here, this work will of course be spread over many more people than the 2-5 involved today. Still, the overall workload increases. Technology can't solve the structural problem. As a computer scientist, I am tempted to say this is a typical engineering approach... sadly, it is only too seldom valid.

But this suggests at least a bit that we can deal with this structural problem via a technological solution. Like I said before, one core of the problem is that the success measure is quantity, not quality, and hence an inflation of publications (and conferences and journals, etc.).

I quite agree. The problems people tend to complain about are due to the incentives inherent in the world of academic research. The question is how to change those incentives. Emphasising the value of peer review over the publication of yet another journal article is one way to start.

But the (good) journals are using peer review too, so there is no contrast with any other peer-review principle. As much as I like that you are searching for alternatives with publon, the ones proposed (using even more first-level reviewers, and then reviewers for the reviewers, etc.) generate yet more work. In addition, the scheme is largely based on a web of trust, which seems vulnerable too (as is the current method, of course). And, personally, the profile build-up process publon proposes is a nightmare for all the long-standing senior scientists who have built up their reputation over decades (and are hence, e.g., invited to program committees). Now they should throw all that overboard and start from scratch? Good luck with that one...

If scientists are scared of open review, then we should be scared of scientists' results. The whole publication system seems to be more and more about scientists' egos and less and less about the truth. Open peer review should be the norm of the future, not the exception.

All good points. And yet I believe we can improve the system we have now (without needing to throw everything out and start from scratch). We will see!

How can I use these tools to deal with this problem: There is a particular medical condition that causes much suffering and for which there are no sufficiently effective treatments. There are two contending theories of etiology. I have studied this issue for many years. The dominant theory is being pushed by a dominant personality. I have gone through the literature and found methodological errors in several frequently cited papers supporting the dominant theory. I have written a review paper that sets forth these errors and, in so doing, resolves various conflicts in the published research. I cannot even get attention from the underdogs in this fight because I am not an MD. For example, I sent my paper to one of the underdogs and invited him to be co-author. The initial response was: "Dear Colleague,...About your invitation, I feel very honoured. Probably for me it could be rather easy to publish this paper ...". I accepted and explained that I was not an MD. No further response. I believe I have important insight into a major medical problem. What do I do?