An email from the CSR of the NIH hit my inbox a few days ago, pointing to a number of their Peer Review Notes, including one on the budget bump that we are about to enjoy.
Actually that should be "some selected few of us will enjoy" because

“While $2 billion is a big increase, it is less than a 10 percent increase, and a large portion of it is set aside for specific areas and initiatives,” said Dr. Nakamura. “Competition for funding is still going to be intense, and paylines will not return to historic averages . . .

Yeah, as suspected, that money is already accounted for.

The part that has me fired up is the continuation after that ellipsis, and the header item that follows.

So make sure you put your best effort into your application before you apply.”

Counterproductive Efforts
“We know some research deans have quotas and force their PIs to submit applications regularly,” said Dr. Nakamura. “It’s important for them to know that university submission rates are not correlated with grant funding. Therefore, PIs should be encouraged to develop and submit applications as their research and ideas justify the effort to write them and have other scientists review them.”

As usual I do not know if this is coming from ignorance or a calculated strategy to make their numbers look better. I fear both possibilities. I'm going from memory here because I can't seem to rapidly find the related blog post or data analysis, but I think I recall an illustration that University-total grant submission rates did not predict University-total success rates.

At a very basic level Nakamura is using the lie of the truncated distribution. If you don't submit any grant applications, your success rate is going to be zero. I'm sure he's excluding those cases, because otherwise there would be a nice correlation.

But more importantly, he is trying to use university-wide measures to convince the individual PI what is best for her to do.

Wrong. Wrong. Wrong.

Not everyone's chances at that institution are the same. The more established investigators will probably, on average, enjoy a higher success rate. They can therefore submit fewer applications. Lesser folk enjoy lower success rates and therefore have to keep pounding out the apps to get their grants.

By extension, it takes very little imagination to understand that, depending on your ratio of big important established scientists to noobs, and somewhat on subfields, the apparent University-wide numbers are going to swamp out the information that each individual PI actually needs.

In short, this is just another version of the advice to young faculty to "write better grants, just like the greybeards do".

The trick is, the greybeards DO NOT WRITE BETTER GRANTS! I mean sure, yes, there is a small experience factor there. But the major driver is not the objective quality but rather the established track record of the big-deal scientist. This gives them little benefits of the doubt all over the place as we have discussed on this blog endlessly.

I believe I have yet to hear from a newcomer to NIH grant review who has not had the experience, within 1-2 rounds, of a reviewer ending his/her review of a clearly lower-quality grant proposal with "....but it's Dr. BigShot and we know she does great work and can pull this off". Or similar.

I have been on a study section round or two in my day and I am here to tell you. My experience is not at all consistent with the idea that the "best" grants win out. Merit scores are not a perfect function of objective grant quality at all. Imperfectly prepared or boring grants get funded all the time. Really exciting and nearly-perfect grants get unfundable scores or triaged. Frequently.

This is because grant review hinges on the excitement of the assigned reviewers for the essence of the project. All else is detail.

You cannot beat this system by writing a "perfect" grant. Because it may not be perfect for all three reviewers no matter how well it has been prepared and how well vetted by whatever colleagues you have rounded up to advise you.

Nakamura should know this. He probably does. Which makes his "advice" a cynical ploy to decrease submissions so that his success rate will look better.

One caveat: I could simply be out of touch with all of these alleged Dean-motivated crap apps. It is true that I have occasionally seen people throw up grant applications that really aren't very credible from my perspective. They are very rare. And it has occasionally been the case that at least one other reviewer liked something about an application I thought was embarrassingly crappy. So go figure.

I also understand that there are indeed Deans or Chairs that encourage high submission rates and maybe this leads to PIs writing garbage now and again. But this does not account for the dismal success rates we are enjoying. I bet that magically disappearing all apps that a PI submitted to meet institutional vigor requirements (but didn't really mean to make a serious play for an award) would have no perceptible effect on success rates for the rest of us. I just haven't ever seen enough non-credible apps for this to make a difference. Perhaps you have another experience on study section, DearReaders?

Finally, I really hate this blame-the-victim attitude on the part of the CSR and indeed many POs. There are readily apparent and demonstrable problems with how some categories of PIs' grants are reviewed. Newer and less experienced applicants. African-American PIs. Women. Perhaps, although this is less well-explicated lately, those from the wrong Universities.

For the NIH to avoid fixing their own problems with review (for example the vicious cycle of study sections punishing ESI apps with ever-worsening scores when the NIH used special paylines to boost success rates) and then blame victims of these problems by suggesting they must be writing bad grants takes chutzpah. But it is wrong. And demoralizing to so many who are taking it on the chin in the grant review game.

And it makes the problems worse. How so? Well, as you know, Dear Reader I am firmly convinced that the only way to succeed in the long term is to keep rolling the reviewer dice, hoping to get three individuals who really get what you are proposing. And to take advantage of the various little features of the system that respond to frequent submissions (reviewer sympathy, PO interest, extra end of year money, ARRA, sudden IC initiatives/directions, etc). Always, always you have to send in credible proposals. But perfect vs really good guarantees you nothing. And when perfect keeps you from submitting another really good grant? You are not helping your chances. So for Nakamura to tell people to sacrifice the really good for the perfect he is worsening their chances. Particularly when the people are in those groups who are already at a disadvantage and need to work even harder* to make up for it.

__
*Remember, Ginther showed that African-American PIs had to submit more revisions to get funded.

Huh????!!!!???? It's fuckin maddening to me. The NIH moans and moans about needing more money, and then when it comes (in the form of big government directives which people like Collins have been pushing for btw) they moan and moan about increased apps. Historical data show that an increased budget leads to an increase in apps, so it's to be expected. But they all seem totally unprepared.

"CIHR strongly encourages applicants to submit only their most competitive application(s). We thank you for your understanding."

It had little effect, as there were over 4300 registrations (vs a typical level of 2600), and one scientist at U of Toronto submitted 7. One reason for this application pressure is that the prior competition wasn't run, so people had no opportunity to apply.

Watch out, virtual review is the next virus in line and it's a real turkey.

There IS apparently lots of money for Alzheimer's research, though. I will be interested to see what those paylines are later this year (or is that funding also dedicated?).

The idea that any of us submit multiple grants primarily because our Deans force us to is, I agree, totally bizarre. Could Nakamura have been trying to tell faculty, in a completely misguided way, "Look, you really don't have to do this, and tell your Dean I said so!"? ...nah, I think he was trying to reduce the total app number.

Nakamura's statement about "counterproductive efforts" was aimed directly at deans and department chairs who explicitly reward grant submissions, and not at PIs. At my medical school, there are some departments that give salary supplements for each grant that is submitted, whether or not it is funded. Since quality doesn't matter, there is a personal financial incentive to crank out junk that doesn't have a prayer of getting funded. I can't say for sure since I have never read grants from those departments, but I wonder.

Surely Nakamura can't have thought that a gentle admonishment on a blog post would have any effect on deans' behavior. And he didn't have to throw in that condescending line "to put your best effort into your application." Really...

Back in the day when I was reviewing grants on behalf of the big guy who paid me, I recall reading a grant application in which a department chair + junior faculty (co-PI situation) seriously wrote about a half a paragraph in the Facilities and Resources section, "The Molecular Core Facility in the School of Medicine and Bla de Bla has a spectrophotometer, centrifuges, etc." My recommendation was to write, "It is difficult to assess what 'etc' refers to given the material presented in the grant application. Presumably the resources necessary to carry out the experiments exist at University of --, but they are not explained here," and to give it a crappy score.

I realize that this is incredibly snarky (and unnecessarily bitter? I dunno ... what if you are submitting a grant to someone who never heard of you and all they have is expertise and the application in effing front of them?) and I don't know whether big guy took my snark to CSR (there were other ways to kill that grant), but lil' ole me never hung out with Chair Bigwig at the Society Conference, so I don't know anything except what is in front of me plus the science I have learned. I know what instrumentation is necessary to carry out the experiments in the Research Plan. I suppose it could have been incumbent upon me to look up the Molecular Core Facility in the School of Medicine at Bla de Bla in order to determine what "etc" meant, but there's an unlimited amount of space in the Facilities and Resources section for Dr. Bigwig to explain to me (and trust me I am not motivated enough to verify this, except maybe to look in the publication record or letters of support to see that there is indeed access to a particle accelerator) what effing facilities and resources are effing available to do what is effing written in the Research Plan.

Oh it must be nice to be so comfortable as to write "etc." in a grant app ...

@ E-rook
No, the budget for a 250k/yr grant for 4 years is $1m, no more. The dept might budget it overall so that the first year spend is say 230 and the final year is 270, with inflationary rises and rolled over funds, but the overall total has to round out to 1m.
I think what he was talking about is the efforts of several institutions to restore, or reverse, the commonplace cuts listed as "continuing resolution", typically 5-10% depending on institute. That lost money will be restored up to the full budget as listed on the award notice.
So really, they're just going to pay what they said they would in the first place.

Ola, I am sorry, I think that is what I meant. My timeframe in academia is relatively short (about the timeframe of an R01), so I haven't experienced many cycles of institutional budgeting.

I was under the assumption that most institutions enforced a 3% annual increase on a 5-year grant (or at least made exceptions to that rule only with good cause). I only know what I have experienced, so I would be interested to learn from diverse experiences. My understanding, from being part of discussions for multi-year funded proposals and noncompetitive and competitive renewal applications, was that ever since "Sequestration" (or thereabouts), most noncompetitive renewals were being cut by a percentage closely matching the annual inflation increase.

So the new funds in 2017 that are available from the slightly higher budget will really go to meeting commitments (e.g., five year grants, maybe the first 2-3 years' worth of renewals had decreases) that have already been made by the ICs.

This situation (meeting commitments already made), relative to cuts on noncompeting renewals ... is better than them being cut.

I think we are seeing the same thing from slightly different perspectives.

With respect to the Commenters and DM discussing Alzheimer's disease funding ... I haven't had time to dig into the details. I think the DHHS renewal budget altered the Congressional set aside for HIV/AIDS research ... maybe the money is being found for the second phase of BRAIN Initiative or something ... I know that NIMH published a new Strategic Plan (which I confess I haven't had time to read), along with Insel's departure... there may be something in there that prompts people to submit toward a particular disease area. My goal is to read it this weekend come heck or high water...I have no financial incentive to, but I am deeply curious as to what is going on.

bit of a tangent, but regarding the (supposedly) biased reviewing of certain apps:
Here in Germany, it is an open secret that apps from well-funded places (e.g. national labs) get reviewed less favorably, so as to direct the funding to 'where it is actually needed'.

Most institutions include a 3% inflation in the budget internally, but NIH pays the budget you requested (minus whatever they have taken off the top). For modular budgets, the 3% is irrelevant. For non-modular, it will often get included. But at some institutions (like mine) you have to do internal budgeting even on modular grants. At my institution it is a major pain in the a$$ because you have to include the 3% inflation for the internal (meaningless) budgeting even on a modular grant, which can often mean that you end up with too many modules in the last year and too few in the first. Most people shift non-salary budgets around so that the modules are set for the five years.

And most budgets were cut by much more than the 3% annual inflation. Cutting one module out of a 250k grant is a 10% cut. Cutting two modules is 20%.
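The budget arithmetic in the last two paragraphs can be sketched as follows. This is an illustrative assumption-laden toy, not anything from an actual award: the $25k module size is the standard NIH increment, but the 3% rate and the year-one figure are made up for the example.

```python
# Illustrative sketch of the modular-budget arithmetic described above.
# NIH modules are $25k increments of annual direct costs; the 3%/yr internal
# inflation rate and the year-one figure below are assumptions.
MODULE = 25_000

def modules_needed(direct_costs: float) -> int:
    """Smallest whole number of modules covering a direct-cost figure."""
    return -(-int(direct_costs) // MODULE)  # ceiling division

# Internal budgeting with 3%/yr inflation drifts across module boundaries,
# giving more modules in later years than in year one:
year1 = 230_000
internal = [year1 * 1.03 ** y for y in range(5)]
mods = [modules_needed(cost) for cost in internal]
print(mods)

# Module cuts as a percent of a $250k/yr (10-module) grant:
print(100 * 1 * MODULE / 250_000)  # one module cut  -> 10.0 percent
print(100 * 2 * MODULE / 250_000)  # two modules cut -> 20.0 percent
```

With these made-up numbers, year one rounds up to 10 modules while year five rounds up to 11, which is exactly the "too many modules in the last year and too few in the first" mismatch described above.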

DM - since you are tweeting about this, I thought I should expand on the circumstances that give rise to rewards for grant submissions. At my uni, this is something that only happens in clinical departments, where it is common to have an "incentive" portion of salary. In many (but not all) departments, this part of the salary (added to the base salary) was determined by the clinical revenue that the faculty member brought in. The reward for grants and grant submissions is a well-intentioned move to reward faculty in clinical departments who forgo clinical hours (and thus bonuses) to do research. Just as financial rewards (both in and out of academia) create a perverse incentive for over-treatment, so do rewards for grant submissions create incentives to just submit more applications. There are some departments that deal with both problems by giving bonuses to all faculty when the department overall does well financially (clinical revenue plus salary support from grants), i.e. group rather than individual incentives. None of the above applies to basic science faculty, whose salaries are, however, determined in mysterious and opaque ways.

Why wouldn't Chairs and Deans direct their faculty to throw as many grants at NIH as possible? There's nothing but upside for them, and there aren't any negative consequences for them if they deluge CSR with proposals. The way they look at it, grant submissions appear as if by magic, and sometimes they bring in indirect costs that help pad their budgets; if they don't, the Chairs and Deans never notice because they have no investment in the proposal. By pegging 3 or more months of faculty salary to grants, they incentivize throwing spaghetti at the wall, because they know that faculty have bills to pay and staff to pay and will work their tails off to keep their paychecks and their research staff employed. With full salary coverage there is less pressure to bring in grants and money to the department/university.

I had a dean tell me a few years ago during a lean period that I should just write better grants and move into a more fundable research area. The dean had never written a grant or run a research program in his life.

Incentives need to be targeted at the problems. If your problem is that your faculty are submitting crap and not getting funded, then providing incentives for them to submit more grants is going to be counterproductive. On the other hand, if your problem is that your faculty are not submitting grants, but that when you get them submitted they get funded (even when the faculty "think they are not ready"), then incentives to submit are a good idea. Incentives should be targeted based on data. And YMMV.

Logically, one could measure the proportion of grants funded. If you are submitting 20 grants a year and none get funded, then you're probably submitting crap.

What we did was directly measure the proportion of funded grants in various categories (people who submit lots of grants, people who submit one, people we forced to submit even though they still wanted to keep revising, etc.). What we found was that the likelihood of getting funded was not different across any of those categories. (We also found that the likelihood of getting funded was higher than the national average.) We concluded that the problem was getting people to submit, not that they were submitting crap. We instituted new incentives and new pressure to submit, and our overall funded faculty levels have increased significantly.
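The within-department comparison described above can be sketched in a few lines; the category names and every count below are fabricated purely for illustration, not this commenter's actual data.

```python
# Hypothetical sketch: compare the funded proportion across submitter
# categories, as in the departmental analysis described above.
# All records below are made up for illustration.
from collections import defaultdict

# (PI category, funded?) for each submitted application -- fabricated data.
applications = [
    ("many_submissions", True), ("many_submissions", False),
    ("many_submissions", True), ("many_submissions", False),
    ("one_submission", True), ("one_submission", False),
    ("pushed_to_submit", True), ("pushed_to_submit", False),
]

totals, funded = defaultdict(int), defaultdict(int)
for category, was_funded in applications:
    totals[category] += 1
    funded[category] += was_funded  # bool counts as 0/1

rates = {c: funded[c] / totals[c] for c in totals}
for category, rate in rates.items():
    print(f"{category}: {funded[category]}/{totals[category]} funded ({rate:.0%})")
```

In this toy dataset every category comes out at 50%, i.e. no difference in funding likelihood by submission habit, which is the pattern the comment reports finding in real departmental data.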

The real issue was whether our low funding at the time was due to people not spending enough time getting good grants in (so incentivize mock study sections, writing help, etc) or due to people not sending enough grants in (so incentivize submissions). Our data was pretty clear in our department. (And in fact, our submissions were getting funded well above the national average.)

At some level, what we wanted to do was to figure out why each person who was not well funded was having trouble. But it's hard to get data on each individual.

Yes, we should work at fixing NIH. (And you know I've been a strong and vocal advocate for that, both in my comments on this blog and elsewhere.) However, we can't change NIH from within our department. So the question was What can we do?

(I was addressing the issue raised earlier about whether it was a good idea to incentivize submissions and pointing out that it depended on what your problems were.)

Crap can be defined as something that is not interesting to a particular reviewer while they are reviewing it. One reviewer's crap can be another reviewer's treasure. Since you don't have control over who gets to review your grant, your best option is to assume that your reviewer doesn't automatically understand the significance of your special snowflake project and to take the time to explain it in simple language (you know, like the educators that we're supposed to be). I'm much more impressed when a grant from an area I'm not very familiar with educates me on a new subject than when a grant in an area I am familiar with assumes I already know the field.

Follow the old engineers' adage: keep it simple, stupid (KISS) and you'll do OK.

The grant I just received (fairly prestigious from a non-federal source) was "not discussed" at NIH in a study section with people that had the same background as the reviewers that loved my grant that got funded. So, my ideas were crap to some while gold to others. While I would not advocate sending out grants like holiday cards, a reasonable approach (and particularly being able to send the same grant to different places) may be beneficial or even pay off. Plus, you then get even more summary statements that you can use to improve your grant.

Speaking of pronouncements (sorry to get off topic), I would be very interested in hearing DM's take (as well as everyone else's) on the new "rigor and transparency" information that must now be included in NIH grants. On the surface, it seems like a reasonable idea, but I feel like the bigger problem is on the back end in the publications.