Do you always get about the same score on your NIH grant?

February 11, 2014

This question is mostly for the more experienced of the PItariat in my audience. I’m curious whether you see your grant scores as being very similar over the long haul.

That is, do you believe that a given PI and research program is going to be mostly an “X %ile” grant proposer? Do your good ones always seem to be right around 15%ile? Or, for that matter, in the same relative position vis-à-vis the presumed payline at a given time?

Or do you move around? Sometimes getting 1-2%ile, sometimes midway to the payline, sometimes at the payline, etc?

It strikes me today that this very experience may be what reinforces much of my belief about the random nature of grant review. Naturally, I think I put up more or less the same strength of proposal each time. And naturally, I think each and every one should be funded.

So I wonder how many people experience more similarity in their scores, particularly for their funded or near-miss applications. Are you *always* coming in right at the payline? Or are you *always* at X %ile?

In a way this goes to the question of whether certain types of grant applications are under greater stress when the paylines tighten. The hypothesis being that perhaps a certain type of proposal is never going to do better than about 15%ile. So in times past, no problem, these would be funded right along with the 1%ile AMAZING proposals. But in the current environment, a change in payline makes certain types of grants struggle more.

My applications at NIH get anything from triage to near-miss (my present funding comes from NSF and from a foreign peer-reviewed grant). Recently, I even got a near-miss -A0 and a triaged -A1, after having duly addressed the reviewers’ concerns!

IMO the peer review at NIH is totally chaotic and subject to distributing academic welfare rather than to promoting innovation.

Ok but honest noob question: when is a triage a sign that an idea is just really stupid, and when is it just chance? Did a triage on my A0 K99 mean that I shat the bed, or was that the idiocy of the reviewers to not recognize my young and innovative genius? Because I had been operating under the assumption that I shat the bed, and my resubmission reflects that. But you folks alleging that scoring is just random…well, it gives one pause.

I accept that I am entering a career where I have to ignore feedback and be self-reinforcing to the point of obstinacy. But how obstinate should one be? Should I have been raised by people/a culture that made me think that I deserve things simply because I want them? Because that ship has sailed.

Like others are saying, my scores are all over the place. I can’t predict at all how things will turn out. Proposals that I think are great, and which colleagues have read and think are great, get triaged. Proposals full of half-ass shit thrown together just to get something submitted do fine. It’s like backwards world most of the time. Even scores for the same proposal can be remarkably inconsistent. The same proposal gets ones from some people and fives from others, for the same criteria. Even shit that doesn’t change, like me and the institution, gets different scores from proposal to proposal. It’s like they’re rolling dice half the time.

As a reviewer, I get it. Everything is relative. Your great proposal will suffer if the other couple in my stack happen to be better. Or your crappy proposal might seem fine if the other ones hugely suck. There is no external standard, especially among ad hoc reviewers.

I have learned to just not obsess. It’s a lottery ticket. And most importantly, I suck up to every potential reviewer that I meet. I am now mister ‘everyone is awesome!’, and it’s helped more than any grantsmanship or real science.

Scores are all over the place. I think the grants in the biggest trouble in the current climate are the ones that could be great but whose perceived impact just isn’t as high as that of the major powerhouse or super-technical labs.

This is kind of a silly question. I mean, what do you think the answer is going to be? No one is ever going to get the same score every single time. Particularly when the percentile system is as borked as it currently is. A bunch of folks get a 2.0 (from the 1,2,3 or 2,2,2 “spread”) and you all cluster round the same percentile, and then a bunch of folks get closer to a 3.0 because they got a 2,2,3 or a 2,3,3, and they all cluster round the next percentile. There’s no way you’re going to get the same percentile every time or anywhere near it.
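The clustering described in that comment is easy to see with a little arithmetic. Below is a minimal sketch, assuming the usual NIH convention of averaging the panel’s 1–9 criterion scores and multiplying by 10 to get the overall impact score (the function name `impact_score` and the three-reviewer triplets are illustrative, not an official formula for any particular panel):

```python
from statistics import mean

def impact_score(reviewer_scores):
    """Illustrative NIH-style overall impact score: the mean of the
    panel's 1-9 scores, multiplied by 10, so the result runs 10-90."""
    return round(mean(reviewer_scores) * 10)

# The "spreads" from the comment: different reviewer triplets that
# collapse onto the same score, or land one cluster apart.
print(impact_score([1, 2, 3]))  # 20 -- same as a flat 2,2,2
print(impact_score([2, 2, 2]))  # 20
print(impact_score([2, 2, 3]))  # 23
print(impact_score([2, 3, 3]))  # 27
```

Because many distinct reviewer combinations collapse onto the same rounded score, applications pile up at a handful of score values, and the percentile conversion then lumps those piles together — which is why small, essentially random differences in one reviewer’s number can move an application a whole percentile cluster.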

Grant writer perspective: younger PIs have more random scores; more established PIs have more uniform scores. I think a lot of the review is affected by name recognition. Also, significance matters a whole lot. I have seen some mediocre grants get close to funding even though their approach was terrible, just because the reviewers thought they were asking an important question.
Resubmitting triaged grants is a bad idea. The reviewers see where the grant was scored before, and I almost think they get blinders on when they see the ND on the first submission. I have had a handful of grants go from triage to funding, but none in the last two years.

@ Pinko Punko. So, are you saying then that NIs/ESIs are largely fuckked (despite the supposed extra percentile pickups)? I can’t get a straight answer from any of several POs as to whether ESIs are reviewed in a separate pile or the same pile as the rest of the “established” PI grants, but my guess is the latter, since my scores are completely skeet-shot scattered, and it’s complete crap trying to “address” the reviewers’ critiques.

There’s no way you’re going to get the same percentile every time or anywhere near it.

Then assertions* that “the NIH won’t fund [basic, clinical, human, this, that, t’other] type of research” seem flawed. Such assertions seem to be saying that no matter how awesome a grant you write, there will be a floor on how good your score can be. I am exploring the idea, stipulating this is true, that perhaps a moving payline over time could cause a particular score floor to all of a sudden be out of the money.

*Note, I think the assertion is stupid on its face, and a few minutes of keyword searching on RePORTER can quickly dissolve any such claims.

Well, mildly related… My first and only R01 went in when I was a research a$$ prof (yeah, what was I thinking?) and got 30%ile, with kudos for significance and the main criticisms being subtle experimental design issues. I got a real job at a non-R1 institution and reassembled the grant as an AREA R15, thinking the assembled special emphasis R15 study section might dig the idea that I was giving good undergrads some experience at patch clamp electrophys (hell, one actually got some recordings!). It bought me a ticket to triagesville on significance 😦

I’ve gotten 3 scores so far, all have been within 8 pts of each other (overall priority score), so I’ve been fairly consistent. The one with the best score was funded, but I daresay it was a generous “pickup” during council.

No discernible pattern for me. Last year I had two that were within a point on the impact/priority score and a third that was triaged. Year before, 1 triaged and one not-at-all-fundable score. Year before, just triage. Have an ESI R01 reviewed soon, hoping to at least dodge triage.

My best score (13) was from a special emphasis panel. Triaged on all 4 of my other apps (so far) at regular SSs. I only have the emotional capacity for two more cycles of triaging (and bizarre & ad hominem criticisms) before I go pursue my dream as a well-educated ski lift operator.