Query for manuscript reviewers

August 19, 2013

Sometimes you get a manuscript to review that fails to meet whatever happens to be your minimal standard for submitting your own work. Something that is clearly way below the mean for your field, and certainly below this journal’s typical threshold.

Nothing erroneous, of course.

More along the lines of too limited in scope rather than anything egregiously wrong with the data or experiments.

Does this make you sad for science? Angry? Or does it motivate you to knock out another LPU of your own?


13 Responses to “Query for manuscript reviewers”

Reframed this way, whose job is it to enforce a journal’s level of innovation and breadth of story? You may think that the work is technically sound, but wish it had the nebulous “more”. Should it be your decision to reject on those terms, or the editor’s?

My view is that my job is primarily to verify that the experiments are well done, and well referenced in the context of previous results. The editors may be looking for something short and to the point, or they may want a long, 34-supplementary-figure behemoth. I tend to leave those sorts of concerns to the editorial decision, and focus on the quality of the science presented.

My goal as a reviewer is to lay out the issues for the editor clearly. I figure that it’s my place as a reviewer to inform the editor of all of the issues with the paper. This includes experimental validity, theoretical logic, correct citations, and scope. (Of course, appropriate scope depends on the journal.) But in the end, it’s the editor’s decision what to do with the paper.

Sure, it makes me mad sometimes, particularly when bad papers get into good journals, but Zen Faulkes is right. Life’s too short. I was always taught to protect your own quality and let the chips fall where they may.

PS. I don’t hesitate to say that it needs a nebulous “more”. I do my best to explain what that “more” is. But if the paper is a complete story, I also say that it is a complete, if small and uninteresting, story. Again, IMHO this is an editorial call, not a reviewer call.

Seriously, though, if a piece is well done for what it is, but what it is is boring and unimportant? I review it and say it “does not contribute significant new material to the field.” To the editor, privately, I might say it’s “boring and unimportant”.

I agree with DB and others here who say it is ultimately the editor’s decision. I have reviewed papers that were very narrow in scope, but if their conclusions are supported by the data, the data are novel, everything is technically sound, etc, I can’t recommend rejection based on whether I think it is too incremental. I will mention to the editor that it is on the LPU side, and is most likely low impact. It does not make me sad – instead, it’s more encouraging to see some folks able to publish their work. Too many times I have seen papers packed with fucktonnes of supplementary, extraneous data that were obviously a product of reviewer abuse. It needs to stop – it’s just unnecessary expense, which is the last thing anyone needs when budgets are so tight right now.

It makes me happy. Unpublished science, no matter how incremental or boring, was a waste of time and money. If no one knows about it, it might as well never have been done.

So I’m glad to see people share their boring results. I’m glad that there are journals and editors willing to publish them. Of course, if a lab does nothing but produce boring stuff, then I might question why it is funded. But that’s another discussion.

I had one of these recently. The authors had shown biological phenomenon X in cell culture in a paper 5 years ago, then 2 years ago in a mouse, then last year in a pig, and now they come along with the same old thing in peripheral blood mononuclear cells (PBMCs) presumably drawn from some poor grad student in the lab. To them, it was “hey hey, look here, song and dance, we have proof in humans”. To me it was a big “so what”. I would have been more surprised if they had shown the phenomenon was absent in humans. As is, it fell 100% into the category of properly done but boring-as-shit science.

As others here have noted, I deferred to the editor on suitability for publication, noting the above in my comments to them, and giving the authors a couple of things to chew on (more info needed on informed consent, blah blah). The editor rejected it outright. I guess that’s why they pay ’em the big bucks.

“My view is that primarily my job is to enforce that the experiments are well done, and well referenced in context to previous results.”

“I agree with DB and others here who say it is ultimately the editor’s decision. I have reviewed papers that were very narrow in scope, but if their conclusions are supported by the data, the data are novel, everything is technically sound, etc, I can’t recommend rejection based on whether I think it is too incremental.”

High quality journals explicitly ask their peer reviewers to address the importance and broad interest of manuscripts.

Recommending a different journal would be my way as well.
But what about good science buried so deep in badly designed graphs and non-fluent text that it takes a lot of effort and time to extract it? Do you ask the authors for improvement, or do you just mention it to the editors and let them decide?

I reviewed a paper about a mildly interesting new technique; while decent work, the scope was small and it was waaaay too long for the content. I told the editor this, and it turns out the journal had been thinking of creating a new short-format Technical Notes section. They asked the authors to condense the paper into this format instead, and it then became the first Technical Note to be published in the journal.

Many of the journals I review for explicitly ask in their review questions about the impact on the field, and one even states that this is sufficient reason for rejecting a paper. Maybe this is unique to my field. Of course I see politics play into it. I recently rejected a paper for being incremental, and the editor came back to me to review the resubmitted manuscript. The comments from the other reviewers, which I got to see on the resubmission, agreed with me that this work should not go to even a society-level journal, but it’s a big name so I guess they get the benefit of the doubt, and the paper will likely be cited based on name recognition.