One reason you might seek to get harsh with some authors is if they have a track record of corrigenda and errata supplied to correct mistakes in their papers. This kind of pattern would support the idea that they are pursuing an intentional strategy of sloppiness to beat competitors to the punch and/or just don't really care about good science. A journal might think either "Ok, but not in our journal, chumpos" or "Apparently we need to do something to get their attention in a serious way".

There is another reason that is a bit worrisome.

One of the issues I struggle with is the whisper campaign about chronic data fakers. "You just can't trust anything from that lab". "Everyone knows they fake their data."

I have heard these comments frequently in my career.

On the one hand, I am a big believer in innocent-until-proven-guilty and therefore this kind of crap is totally out of bounds. If you have evidence of fraud, present it. If not, shut the hell up. It is far too easy to assassinate someone's character unfairly and we should not encourage this for a second.

Right?

I can't find anything on PubMed that is associated with the last two authors of this paper in combination with erratum or corrigendum as keywords. So, there is no (public) track record of sloppiness and therefore there should be no thought of having to bring a chronic offender to task.
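For anyone who wants to reproduce that kind of check, the search can be scripted against NCBI's E-utilities esearch endpoint. A minimal sketch in Python; the author name "Smith J" is a placeholder, and the exact combination of field tags is my assumption about a reasonable query, not a prescription:

```python
from urllib.parse import urlencode

# NCBI E-utilities search endpoint; db=pubmed restricts it to PubMed.
EUTILS_ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def erratum_search_url(author: str) -> str:
    """Build an esearch URL looking for errata/corrigenda tied to an author.

    The [Title] tags are one plausible way to catch correction notices;
    "published erratum"[Publication Type] is another field worth trying.
    """
    term = f"{author}[Author] AND (erratum[Title] OR corrigendum[Title])"
    return EUTILS_ESEARCH + "?" + urlencode(
        {"db": "pubmed", "term": term, "retmode": "json"}
    )

# "Smith J" is a hypothetical author used purely for illustration.
print(erratum_search_url("Smith J"))
```

Fetching that URL returns a JSON payload whose `esearchresult.count` field gives the number of matching records; zero hits is the "no public track record" situation described above.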

On the other hand, there is a lot of undetected and unproven fraud in science. Just review the ORI notices and you can see just how long it takes to bust the scientists who were ultimately proved to be fraudsters. The public revelation of fraud to the world of science can be many years after someone first noticed a problem with a published paper. You also can see that convicted fraudsters have quite often continued to publish additional fraudulent papers (and win grants on fraudulent data) for years after they are first accused.

I am morally certain that I know at least one chronic fraudster who has, to date, kept one step ahead of the long (read: short and ineffectual) arm of the ORI law despite formal investigation. There was also a very curious case I discussed for which there were insider whispers of fraud and yet no findings that I have seen yet.

This is very frustrating. While data faking is very high-risk behavior, it is also high-reward behavior. And the consequences are far from inevitable. Some people get away with it.

I can see how it would be very tempting to enact a harsh penalty on an otherwise mild pretext for those authors that you suspected of being chronic fraudsters.

But I still don't see how we can reasonably support doing so, if there is no evidence of misconduct other than the rumor mill.

Before I get into this, it would be a good thing if the review of scientific manuscripts could be entirely blind, meaning that the authors do not know who is editing or reviewing their paper (the latter is almost always true already) and that the editors and reviewers do not know who the authors are.

The reason is simple. Acceptance of a manuscript for publication should be entirely on the basis of what is contained in that manuscript. It should rely in no way on the identity of the people submitting the manuscript. This is not true at present. The reputation and/or perceived power of the authors is hugely influential on what gets published in which journals. Particularly for what are perceived as the best or most elite journals. This is a fact.

The risk is that inferior science gets accepted for publication because of who the authors are and therefore that more meritorious science does not get accepted. Even more worrisome, science that is significantly flawed or wrong may get published because of author reputation when it would have otherwise been sent back for fixing of the flaws.

We should all be most interested in making science publication as excellent as possible.

Blinding of the peer review process is a decent way to minimize biases based on author identity, so it is a good thing.

My problem is that it cannot work, absent significant changes in the way academic publishing operates. Consequently, any attempt to conduct double-blinded review that does not address these significant issues is doomed to fail. And since anyone with half a brain can see the following concerns, if they argue this Nature initiative is a good idea then I submit to you that they are engaged in a highly cynical effort to direct attention away from certain things. Things that we might describe as the real problem.

Here are the issues I see with the proposed Nature experiment.
1) It doesn't blind their editors. Nature uses a professional editorial staff who decide whether to send a manuscript out for peer review or just to summarily reject it. They select reviewers, make interim decisions, decide whether to send subsequent revised versions to review, select new or old reviewers and decide, again, whether to accept the manuscript. These editors, being human, are subject to tremendous biases based on author identity. Their role in the process is so tremendously powerful that blinding the reviewers but not the editors to the author identity is likely to have only minimal effect.

2) This policy is opt-in. HA! This is absurd. The people who are powerful, and thus expected to benefit from their identity, will not opt in. They'd be insane to do so. The people who are not powerful (who are, as it happens, exactly the people calling for blinded review so their work has a fair chance on its own merits) will opt in but will gain no relative advantage by doing so.

3) The scientific manuscript as we currently know it is chock full of clues as to author identity. Even if you rigorously excluded "we previously reported..." statements and managed to even out the self-citations to a nonsuspicious level (no easy task on either account), there is still the issue of scientific interest. No matter what the topic, there is going to be a betting gradient for how likely different labs are to have produced the manuscript.

4) The Nature policy mentions no back-checking on whether the blinding actually works. This is key; see the above comment about the betting gradient. It is not sufficient to put formal de-identification in place. It is necessary to check with reviewers about the real practice of the policy to determine the extent to which blinding succeeds or fails. And you cannot simply brandish a less-than-100% identification rate either. If the reviewer merely thinks that the paper was written by Professor Smith, then the system is already lost, because that reviewer is being affected by the aforementioned issues of reputation and power even if she is wrong about the authors. That's on the tactical, paper-by-paper front. Over the longer haul, the better-known labs are generally going to be submitting more actively to a given journal, and thus the erroneous assumption will be more likely to accrue to them anyway.

So. We're left with a policy that can be put in place in a formal sense. Nature can claim that they have conducted "double blind" review of manuscripts.

They will not be able to show that review is truly blinded. More critically, they will not be able to show that author reputational bias has been significantly decoupled from the entire process, given the huge input from their editorial staff.

So anything that they conclude from this will be baseless. And therefore highly counterproductive to the overall mission.

"These results reveal an innate brain circuit that can turn an animal's water-drinking behaviour on and off, and probably functions as a centre for thirst control in the mammalian brain."

Somebody like me immediately thinks to himself "subfornical neurons control drinking behavior? This is like the fifth lecture in Psych 105: Introduction to Physiological Psychology."

Let's do a little PubMed troll for "subfornical drinking". Yeah, we've known since at least the 1970s that the subfornical control of drinking behavior is essential, robust and mediated by angiotensin II signalling. We know how this area responds to blood volemia and natremia, and how the positioning relative to the third ventricle and the function of the circumventricular organ vis a vis the blood-brain barrier permits this rapid response. We know, from electrophysiological recordings and genetic deletions, that the signalling works through the AT1 receptor subtype to excite subfornical neuronal activity. Cholinergic mechanisms have likewise been identified as critical components via pharmacological experiments. Mapping of activated neurons has been used to identify related circuitry. The targets of subfornical neurons are known and their involvement in drinking behavior has likewise been characterized. Extensively. We know that electrical stimulation of these neuronal populations activates drinking in water-sated rats, for goodness' sake! We know there are at least three subpopulations of SFO neurons involved and something about the neurochemical signalling complexity.

The new work by Oka and colleagues simply repeats the above-mentioned electro-stimulation experiment from 1983 using optogenetic stimulation. Apart from this, maybe, we have an advance* in that they identified ETV-1 vs VGAT (GABA transporter) markers of two distinct subpopulations of neurons which have opposite effects on the motivation to consume water.

That's it.

This paper is best described as a very small, incremental advance in understanding of thirst and drinking behavior, albeit tarted up with the pizzazz of optogenetic techniques.

Yet it was published in Nature.

Someone really needs to introduce the editorial staff of Nature to PubMed.
__
*BTW, a Nature editor confirms this microscopic incremental advance is what is new about this paper.

One thing it does is keep a lid on people submitting a priority place holder before the study is even half done. I could see this as a positive step. Anything to undermine scooping culture in science is good by me.

First time submitted to JN. Submitted revision with additional experiments. The editor sent the paper to a new reviewer and he/she asked for additional experiments. In the editor's words, "he has to reject the paper because this was the revision."

This echoes something I have only recently heard about from a peer. Namely that a journal editor said that a manuscript was being rejected due to* it being policy not to permit multiple rounds of revision after a "major revisions" decision.

The implications are curious. I have not yet ever been told by a journal editor that this is their policy when I have been asked to review a manuscript.

I will, now and again, give a second recommendation for Major Revisions if I feel like the authors are not really taking my points to heart after the first round. I may even switch from Minor Revisions to Major Revisions in such a case.

Obviously, since I didn't select the "Reject" option in these cases, I didn't make my review thinking that my recommendation was in fact a "Reject" instead of the "Major Revisions".

I am bothered by this. It seems that journals are probably adopting these policies because they can, i.e., they get far more submissions than they can print. So one way to triage the avalanche is to assume that manuscripts that require more than one round of fighting over revisions can be readily discarded. But this ignores the intent of the peer reviewer to a large extent.

Well, now that I know this about two journals for which I review, I will adjust my behavior accordingly. I will understand that a recommendation of "Major Revisions" on the revised version of the manuscript will be interpreted by the Editor as "Reject" and I will supply the recommendation that I intend.

Is anyone else hearing these policies from journals in their fields?
__
*Having been around the block a time or two, I hypothesize that, whether stated or not, those priority ratings that peer reviewers are asked to supply have something to do with these decisions as well. The authors generally only see the comments and may have no idea that that "favorable" reviewer who didn't find much fault with the manuscript gave them a big old "booooooring" on the priority rating.

It's become apparent to me that there is a group of reviewers who all display the same phenotype when it comes to their reviews. They all i) are quick to agree to review manuscripts in our common sub-sub-field, ii) submit their reviews on time, and iii) will recommend acceptance or minor revisions for all manuscripts. All.

On time? Suspicious that.

Did I mention that this bloc of reviewers are all strongly linked to one particular well-known member of our sub-sub-field? Former trainees, co-authors etc.