Had an interesting category of thing happen on peer review of our work recently.

It was the species of reviewer objection where they know they can't lay a glove on you but they just can't stop themselves from asserting their disagreement.

It was in several different contexts and the details differed. But the essence was the same.

I'm just laughing.

I mean, why do we use language that identifies the weaknesses, limits, or necessary caveats in our papers if it doesn't mean anything?

Saying "...and then there is this other possible interpretation" apparently enrages some reviewers, who cannot accept that this possibility is not seen as a reason to prevent us from publishing the data.

Pointing out that these papers over here support one view of accepted interpretations/practices/understanding can trigger outrage that you don't ignore those in favor of these other papers over there and their way of doing things.

Identifying clearly and carefully why you made certain choices generates the most hilariously twisted "objective critiques" that really boil down to "Well I use these other models which are better for some reason I can't articulate."

Do you even scholarship, bro?

I mostly chuckle and move on, but these experiences do tie into Mike Eisen's current fevers about "publishing" manuscripts prior to peer review. So I do have sympathy for his position. It is annoying when such reviewer intransigence over non-universal interpretations is used to prevent publication of data. And it would sometimes be funny to have the "Your caveats aren't caveatty enough" discussion in public.

These reviews are the private, academic version of "someone is wrong on the internet." https://xkcd.com/386/

You are just wrong. Your foundation is wrong, leading you to use the wrong methods, so your data is bad, and you're a bad person and should feel bad.

Thing is, I used to feel that there were objectively "right" and "wrong" ways to conduct scientific experiments...but so often there isn't, and a substantial part of my training has been me learning and relearning this concept in every new scenario. I am not sure everyone learned this, though. Especially if you come from an insulated lab tradition (BSD pedigree or Uni that hires its own trainees), it can make your preferred methods and interpretations seem like The One True Way and all others appear to be Sloppy Science.

Woe betide the lab that gets their manuscript caught in the crossfire between a BSD lab dominating the literature with their approach and a Small Town Grocer who published a methods paper on the "proper" technique.

I can appreciate, to some degree, the fact that people within an interpretive tradition do not want to rehash arguments 10, 20 years old* with every new paper. But the fact is, most of "the way it has to be done" arguments weren't really settled in any objective sense. The powerful simply won and everyone else had to toe their line. Whether their approaches made sense for all situations or not.

I think the most fundamental problem with Review-by-Orthodoxy is that people have stopped thinking about the data - which I can't understand at all. Just as you look at your own data and think about what it does and does not mean, you should look at the data in a grant or paper you are reviewing. There is no excuse for, imo, categorical dismissal with "this is meaningless and cannot be interpreted because it wasn't collected in this way, isn't perfect, isn't 'pretty', etc." I hope that when I review a paper or grant, I am asking myself "what do these data mean, as collected, presented and analyzed?"

But then, as I think I may have alluded to, I have a scientific orientation that seems to be interested in things that few other people are, and uninterested in stuff that gives lots of people a huge science....charge. Maybe I just wasn't trained properly in the first place....

*I do have one example of where I came sort of sideways at an orthodoxy that was fought over bitterly starting about 20 yrs prior. I approached it from "that makes no sense vis-à-vis the human condition, doesn't matter one whit for my intended purposes and I'm doing it the simple way". Oh, the traditionalists did not like it one bit, Sam-I-am. Hammered my grants. Hammered my manuscripts. We got through eventually, though. And within a few years, whaddyaknow? Even some of the biggest participants in the old "angels on the head of a pin" debate from back in the day were taking what looked suspiciously like my approach (in different context) and bragging about their NewModel.

I think this is mostly a problem due to inexperienced reviewers (I was one once). You feel like you need to do a good job, which means being rigorous. It's easy to rigorously point out all the flaws in an imperfect paper. But when you're confronted with a good paper it feels like you're not doing a good job unless you find *something*, *anything* wrong. So you do, and feel proud of your effort, even though in reality you were just a nitpicking asswipe.

A good editor (and this is where good editing at better journals is key) ameliorates this problem by recognizing when a reviewer's complaints are unwarranted. Good editors will say something like "Reviewer #2 raises some excellent points. Please address all of them in your response letter, with particular attention to items 1, 3, and 12."

I have seen dumb reviews happen in grant review panels too. Hopefully the other primary reviewer(s) are willing to correct an overly enthusiastic criticism (although it is impossible to undo the damage already wrought to the application's reputation. This is why I hate that people who didn't even read an application get to vote on it). But not always.

This job trains us to find flaws. It's tough sometimes to remember to point out when things are good. But it's important to do so!

"Pointing out that these papers over here support one view of accepted interpretations/practices/understanding can trigger outrage that you don't ignore those in favor of these other papers over there and their way of doing things."
-This is one of the areas where glamor pubz are most toxic. I've seen 3-4 society level journals outweighed by one CNS pub in a reviewer's (and PI's) eyes.

"But when you're confronted with a good paper it feels like you're not doing a good job unless you find *something*, *anything* wrong."
-Especially if it's been handed off to you by your PI. You feel like you need to demonstrate effort. I've learned to just nitpick typos in the supplemental figure legends...

I'd just like to say that it is not always inexperienced reviewers who are screwing you over in the review process. I reviewed a paper for a fancy journal recently and literally didn't find a single thing wrong with it. It was exhaustive and the data were high quality. Reviewer 2 had even more praise than me for the paper and corrected only a few typos. Reviewer 3, however, made the grand statement that the paper lacked mechanism and physiological relevance. The editor privately asked me whether Reviewer 3's comments had changed my opinion (they had not), but the paper was still rejected. My interpretation of this interaction is that Reviewer 3 must have been someone far more important and influential than me or Reviewer 2.

We all complain and whine about the review process for its fundamental biases of all kinds, but we keep suffering helplessly, thereby maintaining a status quo where the rich get richer. Like capitalism, it favors only the rich, and in science, 'the rich' are the already established and influential scientists.

This calls not only for blogging (even though this one is awesome), but for a radical act. And the only act I can think of within the system (because, after all, we like doing science) is boycotting every journal that doesn't have a double-blind review process.

My guess is that a month of drastically reduced submission rates would do the trick. The problem is that a boycott only has an effect when it's massive. And scientists, as we all know, are a bunch of individualists, so it will be hard to unite. But how about trying? How about starting here?

^Double-blind review is very hard to implement. Also, I'm not sure it would really help that much, since I don't think most harsh reviewers really care who they're commenting on; they're just generally truculent.