Hard-charging, early-career Glam neuroscientist Kay Tye made an interesting claim on the twitters recently.

Wow, that sounds like the cheapest and shortest revision I've ever heard of. Ours are typically several (and I mean 2-4) orders of magnitude greater than that! I can't tell if you are being sarcastic because 1 week of work sounds like a fantasy I have never experienced before. https://t.co/Qmmhy28ZYU

The message she was replying to indicated that a recent request for manuscript revisions was going to amount to $1,000, making Kay's costs anywhere from $100,000 to $10,000,000. Big range. Luckily she got more specific.

Never hit 10M but have gone over 1M for sure. If you include salaries.

The bog-standard NIH "major award" is the R01, offered most generically in the 5-year, $250,000-direct-costs-per-year version. That's $1,250,000 for a five-year major (whoa, congrats dude, you got an R01! You have it made!) award.

Dr. Tye has just informed us that it is routine for reviewers to ask for manuscript (one. single. manuscript.) revisions that amount to $1,000,000 in cost.

Ex-NIGMS Director Jeremy Berg cheer-led (and possibly initiated) a series of NIH analyses and data dumps showing that something on the order of 7 (+/- 2) published papers were expected from each R01 award's full interval of funding. This launched a thousand ships of opinionating on the "efficiency" of NIH grant awards and how it proves that one grant for everyone is the best use of NIH money. It isn't.

I have frequently hit the productivity zone identified in the NIGMS data...and had my competing renewals criticized severely for lack of productivity. I have tripled this on at least one interval of R01 funding and received essentially no extra kudos for good productivity. I would be highly curious to hear from anyone who has had a 5-year interval of R01 support described as even reasonably productive with one paper published.

Because even if Dr. Tye is describing a situation in which you barely invest in the original submission (doubtful), it has to be at least $250,000, right? Add that to $1,000,000 in revisions and you end up with, at best, one paper per interval of R01 funding. And it takes you five years to do it.
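To make the arithmetic explicit, here is a minimal back-of-envelope sketch; the $250,000 original-submission figure is the floor guessed at above, not a measured cost, and the $1,000,000 revision figure is Dr. Tye's:

```python
# Back-of-envelope: one Glam paper vs. a full modular R01.
ANNUAL_DIRECT = 250_000          # modular R01 direct costs per year
YEARS = 5
r01_total = ANNUAL_DIRECT * YEARS          # full five-year interval

original_submission = 250_000    # floor estimate for the initial submission
revisions = 1_000_000            # the revision cost Dr. Tye describes
one_glam_paper = original_submission + revisions

# The single paper consumes (at least) the entire five-year award.
print(f"R01 total: ${r01_total:,}")        # R01 total: $1,250,000
print(f"One paper: ${one_glam_paper:,}")   # One paper: $1,250,000
```

The point of the sketch is that even under the most charitable assumptions, the one manuscript exactly exhausts the whole award.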

NIGMS (and some of my fellow NIH-watchers) have been exceptionally dishonest about interpreting the efficiency data they produce, and slippery as otters about resulting policy on per-PI dollar limitations. Nevertheless, one interpretation of their data is that $750,000 in direct costs per year is maximally efficient. Merely mentioning that an honest interpretation of their data ends up here (and reminding everyone that the NIGMS policy for greybeard insiders was in fact to be about $750,000 per year) usually results in the sound of sharpening stone on steel farm implements and the smell of burning pitch.

Even that level of grant largesse ("largesse") does not pay for the single manuscript revisions that Professor Tye describes within a single year.

I have zero reason to doubt Professor Tye's characterization, I will note. I am familiar with how Glam labs operate. I am familiar with the circle jerk of escalating high-cost "necessary" experimental demands they gratify each other with in manuscript review. I am familiar with the way extremely well funded labs use this bullshit as a gatekeeping function to eliminate the intellectual competition. I am perhaps overly familiar with Glam science labs in which postdocs blowing $40,000 on single fucked up experiments (because they don't bother to think things through, are sloppy or are plain wasteful) is entirely routine.

The R01 does not pay for itself. It does not pay for the expected productivity necessary to look merely minimally productive, particularly when "high impact publications" are the standard.

But even that isn't the point.

We have this exact same problem, albeit at less cost, all down the biomedical NIH-funded research ranks.

I have noted more than once on this blog that I experience a complete disconnect between what is demanded in peer review of manuscripts at a very pedestrian level of journal, the costs involved, and the way the R01s that pay for those experiments are perceived come time for competitive renewal. Actually, we can generalize this to any new grant as well, because very often grant reviewers are looking at the productivity on entirely unrelated awards to determine the PI's fitness for the next proposal. There is a growing disconnect, I claim, between what is proposed in the average R01 these days and what it can actually pay to accomplish.

And this situation is being created by the exact same super-group of peers. The people who review my grants also review my papers. And each others'. And I review their grants and their manuscripts.

We need to hold each other accountable for fantasy thinking about how much science costs. R01 review should return to the days when "overambitious" meant something and was used to keep proposed scope of work minimally related to the necessary costs and the available funds. And we need to stop demanding an amount of work in each and every manuscript that is incompatible with the way the resulting productivity will be viewed in subsequent grant review.

We cannot do anything about the Glam folks; they are lost to all decency. But we can save the core of the NIH-funded biomedical research enterprise.

I think it is time for clear guidance from NIH on this issue and some enforcement by way of mutual reinforcement of the review culture.

NIGMS data are a decent place to start, but I'm always happy to see NIH expand that on a per-IC or per-study-section basis if necessary. X number of pubs per Y direct costs per year. Use the interquartile range; refer to it frequently. "This study section has, in the past 5 years, given fundable scores to competitive renewals that average Z publications, and anything lower than N pubs has not been renewed." Etc.

NIH RePORTER shows Tye had at least $2.4M in direct costs as PI for 2018. Her career funding total is >$6.3M DC since becoming independent in 2012. That's over 25 modular annual budgets ($250k ea.), or around five whole regular R01s.

By Berg's metric of 7 papers per grant (or 1.75 papers per $250k modular annual budget), Tye should have 44 papers originating from this funding. PubMed shows a total of 47 career papers since 2012, with around 20 of these as last author, not counting reviews and editorials. In other words, 0.8 papers per $250k modular annual budget.

For comparison my own rate is ~2.8 papers per $250k modular annual budget sustained over the past 15 years. And you're damn right this number is going in my next departmental performance review!
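The arithmetic in the two comments above can be reproduced directly as a sketch. All figures are the commenter's own, taken from RePORTER and PubMed and not independently verified, and the 1.75 papers-per-budget rate is the commenter's reading of Berg's metric:

```python
MODULAR_BUDGET = 250_000                  # one modular annual budget, direct costs

career_direct = 6_300_000                 # Tye's career DC total, per the comment
budgets = career_direct / MODULAR_BUDGET  # number of modular annual budgets

berg_rate = 1.75                          # papers per budget, per the comment
expected = budgets * berg_rate            # papers Berg's metric would predict

last_author = 20                          # last-author papers found on PubMed
observed_rate = last_author / budgets     # papers actually produced per budget

print(round(budgets, 1), round(expected), round(observed_rate, 1))  # 25.2 44 0.8
```

The commenter's own ~2.8 papers per budget, held against the observed ~0.8, is the contrast being drawn.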

Ola, you are identifying where I get extra mad at Berg and other fans of this supposed efficiency analysis because they make me, of all people, have to defend Glam. There is just no comparing apples to oranges on paper count when you get into the heady atmosphere of a Glam lab. Perhaps lots of projects could potentially find a diamond in the rough now and again. But these day-in, day-out operations with a goal of "publish in Glam" first, and anything to do with specific science questions second, COST MONEY. And a lot of it. For relatively few citable works, because six papers are buried in the supplement. And Glam labs aren't only funding this stuff on NIH RPG-equivalent direct costs. There are T32 and F32 and HHMI and endowed chairs and foundation grants and god knows what other resources being poured in as well.

There is just no comparing apples to oranges on paper count when you get into the heady atmosphere of a Glam lab.

I know that you've brought up the negative consequences of Glam too many times to count. Should reviewers be critiquing productivity per grant (or funding level), and if so, would citations per grant be a better metric? And if this is not something reviewers should consider, is it something Program should?

Or NIH could just chuck the Investigator criterion and have reviewers judge the rest of the grant at hand like NSF does (where Investigator is largely +/- are they qualified to do the work). Of course, that would require walking back from the recent people-not-projects approach.

Um, no? But yes, refusing to account for all the costs that go into a product attributed to NIH support and using that flawed analysis to support policy is a pet peeve of mine.

The only thing that makes sense is all pubs / all support.

Agreed.

where Investigator is largely +/- are they qualified to do the work

This is a potentially confusing conflation of Investigator criterion with the assessment of productivity. Or maybe I'm not being clear about the use of productivity in the specific case of competing renewals (where a list of pubs produced is included and therefore becomes highly focal) vs generally when the PI is assessed?

There is one very specific thing I would like to add to this discussion. I am a bit old school, but let me know what you think of my argument. Productivity should be about ACTUALLY ANSWERING THE QUESTION THAT YOU SET OUT TO SOLVE. Hardly anyone does this.

We always used to say "the Dean can't read, but he can count" to explain promotions in the universities. What we meant was because the Dean is very busy, primarily doing Evil, he/she appoints a legion of Evil Minions to the committees, who can't think critically about science.

This approach then crept into study sections. First, people just counted papers. Later on (and I was guilty of this as a younger reviewer), we counted papers in Glam journals as being more important than, say, the house journal of a sub-field, and we gave the Glams extra weighting, so that was good for Kay Tye and her ilk (I am, I should say, a great admirer of her work!).

So it was about Impact. Unfortunately this got a bit out of hand over the years to the point where people (especially Glam Labs) could propose anything in the grant, then do more or less whatever they felt like with the latest Trendy Technical Toy, publish in a Hot Journal and get the grant renewed - independent of whether they solved or even addressed the question.

Three separate times in my career I have proposed a grant that was funded, posing a very specific but quite difficult question, and then we were lucky enough to be able to actually solve it, and write it up in ONE or at most TWO papers that were very comprehensive and (I think) elegant. Of course, these have become highly cited, even though they were in The Journal of Neuroscience and not in a Super Hot Journal. Renewals of the grants were unsuccessful because of "lack of productivity", and so I simply moved on, had another idea and repeated the process. I didn't emote about it. You just have to accept it and do something else.

I think you can see the absurdity of this situation though and the problems it has created. Framing a distinct hypothesis and then actually testing it has become a lost art in the days of "omics", "opto" and sundry other fads. Hypercompetition and over-complexity are rampant, especially within the neurosciences, and as a result many technologically driven scientists really don't know their arse from their elbow when it comes to actual problem solving skills.

What I propose is that study sections need to engage their brains to evaluate productivity. The frontal parts, I mean, not the lizard bits that you generally need to keep a lab running and to sequester or steal resources from others. The question has to be: "did they solve the problem?" and "was it actually important and interesting?" and "is it being cited already?" (this is where the modern metrics come in). A certain amount of experience and mature judgement goes into this process but it beats counting.

I am happy to discuss this with anyone. No need for anonymity here. I am a complete nobody of course - but many of my papers have been cited hundreds or thousands of times, which means that what I have written was important to someone, independent of how many tweets or likes I don't get. I do stand by my opinions and I am happy to defend this position. It may be Old School, but it is also fundamentally the most sound way to evaluate productivity.

This is a potentially confusing conflation of Investigator criterion with the assessment of productivity. Or maybe I'm not being clear about the use of productivity in the specific case of competing renewals (where a list of pubs produced is included and therefore becomes highly focal) vs generally when the PI is assessed?

No, this is just me being a junior noob that has not yet submitted a renewal or served on an NIH panel.

I think I have seen just about every way you can imagine productivity used to assess a continuation/renewal grant. Naturally we are all very frustrated when the aspects on which we have done well are not sufficient to carry the day. Naturally we are all very frustrated when those irrelevant metrics favored by those idiot bean counters appear to cost us and benefit those other, clearly inferior applications.

it is also fundamentally the most sound way to evaluate productivity.

Wrong. Citations are a function of the size of the population of researchers working in a given area of research, which follows all sorts of trendiness at times. Some of this is the sort of technological bling you seem to abhor. Citations can also be about fame and power more than real advance. There are issues in my field that are totally important but are seemingly of current interest to only a single lab or two. So in point of fact they might be expected to receive few citations, but each paper is a damn gem of advance...because nobody else is doing it. Conversely, there are papers in well populated areas that get plenty of citations due to the mass of jack russells humping the same leg...but ultimately their advance is incremental and entirely replaceable.

There is no such thing as "the most sound way" to evaluate productivity.

I never insult anyone, although I am opinionated and straightforward in expressing my thoughts, so people do take offense at times. I am sure you can relate 🙂

Your excellent point about citations and size/trendiness of the field is a fair one, but you miss the fact that within the context of a study section that represents a sub-field the playing field is usually level there, so it works.

Your point about "niche" papers in areas that are sparsely populated is a very good one. We can all think of a few of those, that are "sleepers" but eventually they are massively cited.

Saying flat out that I am "wrong" is a bit emotional, isn't it? My idea is quite reasonable. I do find it is a bit tiresome to be always shouted down by Prof., Dr., Mr., Ms. or Mrs. Shouty-Person, but I have had a lifetime of it, so one gets used to it, although it is why I avoid media.

My proposal is largely the correct way to think about this issue, but there are always some additional criteria to consider, as you correctly point out. Ultimately reviewers have to gauge "impact" in the same way that the famous American judge once defined Obscenity: "I don't know how to define it, but I know it when I see it."

😉

Thanks for letting me contribute to this in a small way. I am going to vanish again now.

Conversely, there are papers in well populated areas that get plenty of citations due to the mass of jack russells humping the same leg...but ultimately their advance is incremental and entirely replaceable.

Are you back to bashing glam again?

Seriously though, it sounds like you are suggesting that reviewers follow a Potter Stewart test ("I know it when I see it") on what constitutes adequate productivity. Maybe a more charitable way to put it is 3 blind persons measuring an elephant. I fear that subjective criteria favor the "haves" and invite implicit biases against the "have nots."

but you miss the fact that within the context of a study section that represents a sub-field the playing field is usually level there

Not really. Grants that use humans, NHPs, rats and mice are all reviewed together in many of the study sections on which I have participated, and the citations differ tremendously on this alone. Rodent researchers tend to ignore human and NHP data that are inconvenient to their attempts to claim their models are all that is needed. NHP researchers have their own little weirdnesses, not least of which is a chip on the shoulder about the aforementioned ignoring by rodent folks. Human research avoids citing animal research where possible.

Saying flat out that I am "wrong" is a bit emotional, isn't it?

No.

My idea is quite reasonable. I do find it is a bit tiresome to be always shouted down

I said why I thought you were wrong and you agreed with me, at least in part. And yet you still feel compelled to take this de-legitimization strategy of claiming I am being emotional or shouting you down. You should probably think about why you take this approach to disagreement.

Thanks for letting me contribute to this in a small way.
The true strength of this blog is always in the comments. Thanks for stopping by.

Are you back to bashing glam again?

That implies that I stopped, so...? But no, the tendency for scientists to crowd around the same problem using the same techniques and approaches is not unique to Glam.

Seriously though, it sounds like you are suggesting that reviewers follow a Potter Stewart test ("I know it when I see it") on what constitutes adequate productivity.

I think we can all see where this leads to bias and subjectivity and resentment. Oh right. That IS what we have when it comes to comments on productivity in grant review.

I fear that subjective criteria favor the "haves" and invite implicit biases against the "have nots."

The self-reinforcing nature of what constitutes "productivity" is a danger and this is, imo, an argument for not defining only one single measure of productivity as the be-all. I would, however, like more explicit definition of the different aspects, if you see what I mean.

I'm okay with people arguing over whether one Science paper equals X J Neuro papers. I'm okay with people arguing over whether the scientific advance in one paper from Professor Markov equals the six papers from Professor Li. And I'm okay with people arguing that the papers cover every dang thing proposed in the original application so it is more awesome than the one from the PI who only completed about half. But I'm not okay with panel review that says five papers from this grant are great productivity and 10 papers of equivalent scope, importance and impact are terrible productivity for this other grant. Consistency equals fairness. And for that, I believe, we need to be talking the same language and from the same space of consideration.

Regardless of JIF or whatever other glam metric you want to use, can we all just agree that >$6M direct costs in 6 years to a junior investigator is too fucking much? An average PI with a continuously funded modular R01 would survive for 25 years on that money! FFS NIH, spread the love!

Wait, 1 million dollars to do a revision?! Wtf is wrong with her field?!

In my mind:
If a paper is that deeply flawed, then that is embarrassing. Hey, we all make mistakes, but definitely time to scrap it.

If the $1M in requests is superfluous crap that some reviewer wants, then tell them no.

If the editor is insisting, take it to the next glamor mag down the street.

This level of kowtowing to referees/editors is absurd... I am not in a position to pass judgment on a $2.4M-DC/yr scientist, but at some point we need to have the integrity to say no and use that $1M for the trainees' next paper(s).

"There is just no comparing apples to oranges on paper count when you get into the heady atmosphere of a Glam lab."

Part of the point of RCR and other attempts to measure productivity and impact is to query whether huge, expensive, glam papers and labs really provide additional benefit proportional to their much higher cost (from the funders' point of view) or if it's just manufacturing prestige for star-belly sneetches. These are not mutually exclusive. Funders also care about prestige and having stars to point at (all the better if they are nice young people and not middle-aged sex creeps under investigation). I am sympathetic to the argument that there are things that don't come out in any metrics (and they don't) that might make this worthwhile. What I can't stand is the narrative that inevitably goes with it: that vertical ascenders are carrying scientific discovery forward while the rest of NIH-land trails in their wake, doing the controls they forgot to do and otherwise "filling in the details."

So maybe it's worth it for a funder to have a small stable of pedigreed show ponies and a much larger population of draft horses who do most of the work. But you can't rail against glam if that's the case, because it's the same damn thing. Glam is based on incentives that in large part are created by funders, and I think funders are correct and justified in considering whether it's an incentive they should provide, given where the money comes from and what it is supposed to be for.

DM: This is obviously a difficult topic and as you point out there is no ideal way to do this. Thanks for your points, I think the ensuing discussion has been illuminating.

At the end of the day there is always going to be some degree of balancing "quantity" against "quality". I agree completely that this has to be done consistently or not at all.

The main issue I wanted to air is that many fields have senior scientists (Boomer generation) who have been funded repeatedly over many cycles, yet a really careful examination of the publication record reveals that they have NEVER actually solved the problem proposed. Although they publish, the papers are not really answering the question. Advances are mainly a question of incremental methodological advances that don't really move the field. These people suck up a lot of resources and just obstruct the process of genuine problem solving.

[…] There is a growing disconnect between what is proposed in the average US National Institute of Health grant application and what it can actually pay to accomplish, argues pseudonymous biomedical researcher DrugMonkey. (DrugMonkey blog) […]

The main issue I wanted to air is that many fields have senior scientists (Boomer generation) who have been funded repeatedly over many cycles, yet a really careful examination of the publication record reveals that they have NEVER actually solved the problem proposed. Although they publish, the papers are not really answering the question. Advances are mainly a question of incremental methodological advances that don't really move the field. These people suck up a lot of resources and just obstruct the process of genuine problem solving.

This, I hope you realize, is entirely in the eyes of the beholder. All of us can point to those guys over there who have never really done anything useful and suck up a lot of NIH resources. All of us who have been PIs (and I hope this is not news to you) have detractors who would point at our body of work and insist that we are tremendous wastes of resources who never do anything to solve anything really important.

One person's shit mountain is another person's gold.

The demonstrated strength of the NIH system is its breadth and diversity, its investigator initiated approach and the resulting tolerance it has for hits and misses.

I may have my issues with Boomers but there are plenty of people in younger generations who will arrive at the end of their careers and have the *next* generation whining just as you are. This is a feature, not a bug. Top-down heavily controlled science that is dictated by the few is bound to be less effective. Because the few can (and are) wrong, blinded and biased.

"Wait, 1 million dollars to do a revision?! Wtf is wrong with her field?!"

I can tell you how it happened in one instance for a glam journal where I happened to be the reviewer who demanded all the extra work. The initial submission was from a glam lab with a nicely crafted story that was certain to excite an editor, but the science was riddled with so many fatal methodological flaws that it was a completely unconvincing piece of garbage. I stated exactly what set of experiments would need to be done to convince me, which were so extensive that I expected the editors to reject it and for the authors to move on somewhere else. Instead, I got the revised manuscript back over a year later (and I would guess well over 1M was spent in the process). I'm shocked it wasn't done well the first time.

I feel this is a common glam lab strategy. After your first try the editor will always give in to an appeal since you have such a potentially awesome and exciting story. Then it isn't too hard to beat your reviewers into submission since you now have a list of exactly what experiments and issues need to be taken care of to satisfy the eager editor. And you also need a million bucks.

Relatively recent grad here (got my BS in biology in 2016). Bounced around between various ecological jobs, recently started in biomedical, intending on grad school very soon to study the evolution of cancer suppression (Peto's paradox kind of stuff). This piece and all of the comments (while some of it is a bit lost on me) have caused me some anxiety about beginning my career. I've known for years now that there's a lot of politics and other b.s. in science, but wow, this is nauseating. I just want to make decent money, answer some interesting questions, and contribute to the bettering of society. Is that too much to ask? Should I get out now and find another career path before it's too late?

I just want to make decent money, answer some interesting questions, and contribute to the bettering of society. Is that too much to ask? Should I get out now and find another career path before it's too late?

Without knowing your baseline understanding and your expectations it is hard to say. I would say that if anything you read on this blog about the careerist aspects of grant-funded academic science in the US (assuming this limitation generically applies to your plans) shocks and/or nauseates you, you may have some deeper thinking to do.

"answer some interesting questions and contribute to bettering of society" is the easy part - it is hard to be even minimally viable in academic science and not have this be true, imo.

"decent money"- most grad students get paid above minimum wage and postdocs upwards from there. some of us thought that was decent money during training. others complain vociferously that they are basically indentured servants. ymmv.

"career path"- do you want to be an old skoole hard-money-supported Prof with a continuously and generously funded lab for a decades-long career? This may not be high-probability. Do you want to be paid for your whole working life to do something at least vaguely related to science? better odds.

One thing to keep in mind is that many glam papers really have 3 or 4 or more non-glam papers hiding inside them. A Nature paper might have 4 figures, but each of them has 20 panels, and then there are another 20 or more supplemental figures. That is certainly the case in Kay Tye's field.

Imagine submitting a normal (non-glam) paper, and the reviews come back asking you to do about 1 month worth of work. If you feel the reviewers' demands are reasonable, you might very well decide to invest that 1 month of effort rather than move on to another, perhaps less-good journal.

Now let's say you submit 5 papers at once, and each review asks for 1 month of work. Now you have 5 months of work to do. Well, submitting 5 papers at once is not too far off from what someone in Kay's field does when submitting a single glam paper. So it's not unreasonable to expect it to cost $100,000+ just for the revisions.
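The scaling in this comment can be sketched with a hypothetical monthly lab cost; the $20,000-per-month figure below is an assumption chosen to be consistent with the $100,000+ estimate, not a number from the comment:

```python
MONTHLY_COST = 20_000     # hypothetical fully loaded cost of one month of lab work
MONTHS_PER_REVIEW = 1     # revision work demanded per ordinary paper

# One ordinary paper's revision: one month of work.
ordinary_revision = MONTHS_PER_REVIEW * MONTHLY_COST

# One glam submission bundles roughly five ordinary papers' worth of work,
# so the reviewer demands scale up roughly fivefold.
PAPERS_PER_GLAM = 5
glam_revision = PAPERS_PER_GLAM * MONTHS_PER_REVIEW * MONTHLY_COST

print(f"${ordinary_revision:,} vs ${glam_revision:,}")  # $20,000 vs $100,000
```

The multiplier, not the monthly rate, is the argument: bundling five papers into one submission bundles five revisions into one revision.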

Personally, I think the whole thing is ridiculous. I don't think that the quality of the work in 1 glam paper is necessarily any better if it's all together in 1 paper rather than 5 individual papers. For us readers, it's like watching someone parade around in luxury brand clothes: it just means that person spends a lot of time thinking about what others perceive. Well, science is not about what others perceive. The ridiculous fashion of one era gives way to the ridiculous fashion of the next, but objective reality remains. In the long run, it doesn't matter if the papers that uncover that reality are compressed into Nature papers like fat men in speedos, or laid out in more conventional articles.