Seriously, have you tried counting the number of figures in a JN paper lately? My impression is that whatever would have been supplemental is now part of the paper. IMO, this is worse than supplemental because half the time I'm like, "what does this figure have to do with anything?" and it takes longer to figure out what the main findings were.

Just because the reviewers ask for more experiments doesn't mean they have to end up in the paper. The data can be mentioned in the text, and indeed J. Neuro demands that for two-bar-only data figures. You can also submit the data in the rebuttal and say in the text 'data not shown.' If the thing they asked for was stupid and/or the results were negative/uninterpretable, it shouldn't be in the paper. But you can still show it to them.

That, MB, is pretty funny and a full reveal of the inanity of citation counting. Yes, almost by definition if you use an experimental model that a boatload of other people in your field use, you will likely get more citations to your paper. Therefore "higher impact". But very likely, the *true* scientific importance and impact is higher for the less-common model.

What's wrong with following the old engineer's adage of KISS (keep it simple, stupid) with manuscripts? Extra data that doesn't really help the story (like much of what reviewers ask for, often just because someone else did it to them in the past) just makes papers harder to understand, and people tend to simply ignore all the extraneous data anyway. When I read a paper I just want the authors to get to the damn point. I would rather read a smaller, easier-to-read paper in a lower-tier journal than one from these journals that make the authors stuff crap into the paper 'til it looks like a haggis that's about to burst.

If referees are asking for additional data, it would have to be a *profoundly* stupid request for it not to go in somewhere. It's the old grade-school rule: if one person asks, three others didn't bother.

Was the SM change meant to reduce referee demands for additional experiments? Or was it just an acknowledgment that SM is part of the paper and should be refereed as such? If so, then why not just have an in-paper appendix, so the main findings stay clear?

It was eliminated because supplemental wasn't being peer reviewed in the same way that the regular material was.

I like doing away with it because it allows a complete story without being constrained by the number of figures, or by text limits imposed by print, or by a separate unreadable manuscript full of God-knows-what. Fine, 'data not shown' is bad, but plenty of journals allow actual data in the text (or, remember tables?) even if you bastards lard it up with unreadable, parenthetical lines reporting degrees of freedom, etc.

If it's a problem, it's the reviewers' fault. "I didn't submit my paper to get reviewed, I submitted it to be published."

Dr. B., the "endless experiment requests" are not "due to unlimited space." They are due to reviewers who are holding your work to a higher standard for what constitutes a J Neurosci paper than that to which you think it should be held.

Dave - no limits on figures, tables, the length of Results text, or the number of references. Intro and Discussion sections are pretty strictly limited, though.

I like it. Serious papers get published in J Neurosci - the kind that have an audience that is not happy just reading the surface bits and respecting the eternal peace of data interred in a supplement.

Kind of weird to blame JN for not doing enough to rein in our bad behaviour.

Reviewers ask for more experiments out of habit or because they think that's what reviewing is...that's why most of these requests are lazy, poorly thought out, or at best irrelevant.

Reviews seem to fall into four categories:
1. (25%) Mostly positive reviews that are accepts with edits and maybe some explanations.
2. (25%) Critical, constructive, and insightful reviews.
3. (40%) Harping demands for shitty experiments x, y, and z; douchey musings about "impact" and "clearing the bar" and their own "high standards" in their reviews.
4. (10%) Batshit.

I think type 3 is about 40% of reviewers...it's a cultural problem of and by scientists.

"They are due to reviewers who are holding your work to a higher standard for what constitutes a J Neurosci paper than that to which you think it should be held."

Don't like J. Neurosci? Just submit to eNeuro instead! Reviewers and editor must concur on a single final list of revisions to be communicated to the author, and this policy should weed out unreasonable or unjustified critiques.

I'm curious, for people who have been submitting to JN for a while (~10 years), has there actually been a noticeable change in reviewer/editor requests, such as asking for more new data, that matches up in time with the change in supplemental material policy? I've only been part of submission there since 2010, and in terms of reviewers requesting new data, it's been mixed.

They are due to reviewers who are holding your work to a higher standard for what constitutes a J Neurosci paper than that to which you think it should be held.

@Grumble: there are "standards," and then there's "but what about ____?" I'd say the latter by far accounts for most extra experiment requests. Is there a "standard" for how many brain regions need to be processed in order for the results in the target region to be believable? How many post-manipulation euthanasia time points? How many drug doses and drug types? I'd love to know what J Neuro's "standards" are, and whether they at all reflect even a basic understanding of how much time and money these "standards" cost, and whether those costs are adequately compensated for by a paper in a journal whose impact factor has done nothing but decline in the last 7 years.

Oh, and my last ms that was rejected by JN ended up in a journal with an IF over 9, with no new experiments.

Generally the bar is increasing at all journals. Maybe more so at JN. The problems are many...one being that the reviewers rank your paper first, before writing their comments. Even if you get lots of comments, with a high ranking you have a chance to resubmit (and pay another fee). If you have a low rank, yet easily addressable comments, you can get rejected. You as an author don't get to see this ranking, however, making it hard to know where you stand in the revision process.

The other issue here is that competition from other respectable neuroscience/neurobiology journals is heating up. There are journals now that aren't quite as crazy about expectations, that have impact factors several points higher than JN's, and that frankly I read more often now. With that said, JN is the gold standard for a society publication in our field, so many of us will keep trying.

It is also the job of the Senior and Review editors to regulate the review, though, and not just assume that a reviewer's demands always have to be upheld. They should be weighing in more to dampen insane requests. My fear is that they are too overworked, and simply don't have time, given the number of papers, to make these careful decisions. The eNeuro solution proposed here should be more commonplace. I hear it is working well over at eLife.

I commonly get demands for extra experiments in submissions to fairly pedestrian journals that are out of step with 1) the scope of work viewed as a typical R01 Aim and 2) the number of papers expected as acceptable productivity. Obviously we're talking the same overall population of peers doing the grant and paper reviewing.

I've only published in JN once and the paper was accepted with minor revisions (no new experiments) so I guess we lucked out. I get a feeling that many labs send their rejected Neuron or Nature Neuro papers to JN, which may contribute to the perception of there being a "higher standard" now, especially if these people are also reviewing JN papers…

@Dr B: I'm not denying that reviewer requests can get out of control. But sometimes a paper simply does not demonstrate a hypothesis conclusively enough for the kind of journal it's been submitted to. In your case, that might not have been so -- but still, the reviewer requests, whether reasonable or not, probably had nothing to do with whether the journal allows supplemental data -- and everything to do with reviewers' perceptions of what makes a J Neuro paper.

@Ben Saunders: I've been publishing in J Neuro for 20+ years (i.e., since before there even WAS such a thing as online supplemental material!), and reviewing for them for 10+, and I haven't noticed any obvious difference in "more data, please" requests before vs. after the change in supplemental materials policy.

Then again, I don't recall ever actually being asked to do another experiment when submitting to J Neuro. Maybe that's because I have some idea about what sort of dataset is likely to make the cut. Or I've just been lucky, or both.

"I've been publishing in J Neuro for 20+ years (i.e, since before there even WAS such a thing as online supplemental material!), and reviewing for them for 10+, and I haven't noticed any obvious difference in "more data, please" requests before vs after the change in supplemental materials policy."

Perhaps this has to do with perceived BSD-ness. It would be interesting to see an analysis of the number of extra experiments asked for per publication as a function of senior author h-index.

I suspect that a lot of this variability is down to the editor in charge of your particular field. I haven't noticed any uptick in requests for additional experiments, but then I've also had editors override a reviewer with a comment like "we decided that additional experiments were beyond the scope of the paper".

"House of Mind: I get a feeling that many labs send their rejected Neuron or Nature Neuro papers to JN, which may contribute to the perception of there being a "higher standard" now, especially if these people are also reviewing JN papers…"

My perception has been that the JN no-supp policy has made the NN, and especially Neuron, rejects more obvious. No idea if that has changed perception of what "should" be a JN paper.

First time submitting to JN. Submitted a revision with additional experiments. The editor sent the paper to a new reviewer, who asked for additional experiments. In the editor's words, "he has to reject the paper because this was the revision."
Never submitting to JN again.

SM is very useful for videos and raw Excel data, etc. I do look at those data myself in other journals.

Ben Saunders: "My perception has been the JN no supp policy has made the NN, and especially Neuron, rejects more obvious. No idea if that has changed perception of what "should" be a JN paper."

You may be right. I haven't been in science long enough to know what JN's reputation was before 2009, but I do feel like it's getting harder to publish there - even if the IF is not as high as other society journals like Neuropsych or Biol Psych. However, I know people who prefer getting a paper in JN rather than Biol Psych/Mol Psych etc. because it is perceived as more prestigious (even if the IF is not as high)…. Does this "hierarchy" make sense? Where do you usually send your papers?

@HOM: "Does this "hierarchy" make sense? Where do you usually send your papers?"

I wouldn't send a paper to Biol Psych unless you threatened me with bodily harm. That this rat's ass of a journal has such a ridiculously high IF is a disgrace to science (or maybe just to the idea that IFs mean anything). Of the papers in my subsubsubfield that I trust and that I think have been most transformative, I can't think of a single one that's been in Biol Psych. I'd love to see someone explain why this silly excuse for a journal has an IF of 10 -- probably it publishes a lot of reviews, or something.

Well JNeuro does actually allow supplementary material to be submitted during revision (but not during initial submission). It's just not hosted at the JNeuro website; it has to be hosted elsewhere (e.g., a database or the author's website). I've recently had two papers accepted there, and for both we included supplemental material in our revised submission that was important for acceptance. They were a little funny about how to provide the supplemental materials to the reviewers during the revision stage; they even gave us a hard time when we wanted to include a link to a large data set provisionally submitted to a public repository. It was annoying, and it does seem a bit archaic as we're heading into the days of big data, whether it's genomic, proteomic, or even mathematical modelling. But the studies weren't quite Neuron/NN, and not really appropriate for BP/NPP, so there weren't really a whole lot of other places to go in that realm. Maybe I'm wrong, but I think a couple of JN papers will look good on the CV come TT-faculty search time...