In his latest column at ASBMB Today, Steve McKnight attempts to further his assertion that peer review of NIH grants needs to be revamped so that more qualified reviewers are deciding what gets funded.

He starts off with a comment that further reveals his naivete and noobitude when it comes to these issues.

Reviewers judge the application using five criteria: significance, investigator, innovation, approach and environment. Although study sections may weigh the importance of these criteria to differing degrees, it seems to me that feasibility of success of the proposed research plan (approach) tends to dominate. I will endeavor to provide a quantitative assessment of this in next month’s essay.

The NIH, led by then-NIGMS Director Berg, already provided this assessment. Ages ago. Try to keep up. I mention this because it is becoming an obvious trend that McKnight (and, keep in mind, many of his fellow travelers who don't reveal their ignorance quite so publicly) spouts off his ill-informed opinions without the benefit of the data that you, Dear Reader, have been grappling with for several years now.

As reported last month, 72 percent of reviewers serving the HHMI are members of the National Academy of Sciences. How do things compare at the NIH? Data kindly provided by the CSR indicate that there were 7,886 reviewers on its standing study sections in 2014. Evaluation of these data reveals the following:

48 out of 324 HHMI investigators (15 percent) participated in at least one study section meeting.
47 out of 488 NIH-funded NAS members (10 percent) participated in at least one study section meeting.
11 of these reviewers are both funded by HHMI and NAS members.

These 84 scientists constituted roughly 1.1 percent of the reviewer cadre utilized by the CSR.
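For what it's worth, the quoted figures are internally consistent; a quick sketch checking the arithmetic, using only the numbers from the quote above (the 11 dual HHMI/NAS reviewers are subtracted once, by inclusion-exclusion):

```python
# Sanity-check the reviewer numbers quoted from McKnight's column.
hhmi_reviewing, hhmi_total = 48, 324   # HHMI investigators on study sections
nas_reviewing, nas_total = 47, 488     # NIH-funded NAS members on study sections
overlap = 11                           # reviewers counted in both groups
csr_reviewers = 7886                   # CSR standing study section reviewers, 2014

# Inclusion-exclusion: count each dual-status reviewer only once.
unique_elite = hhmi_reviewing + nas_reviewing - overlap

print(f"{hhmi_reviewing / hhmi_total:.0%}")    # 15%
print(f"{nas_reviewing / nas_total:.0%}")      # 10%
print(unique_elite)                            # 84
print(f"{unique_elite / csr_reviewers:.1%}")   # 1.1%
```

So the 15 percent, 10 percent, 84, and 1.1 percent figures all follow from one another; the dispute below is about what, if anything, they mean.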

This tells us nearly nothing of importance. How many investigators from other pertinent slices of the distribution serve? ASBMB members, for example? PIs from the top 20, 50, 100 funded Universities and Medical Schools? How many applications do NAS / HHMI investigators submit each year? In short, are they over- or under-represented in the NIH review system?

Anyway, why focus on these folks?

I have focused on the HHMI investigators and NAS members because it is straightforward to identify them and quantify their participation in the review process. It is my belief that HHMI investigators and NIH-funded members of the NAS are substantively accomplished. I readily admit that scientific accomplishment does not necessarily equate to effective capacity to review. I do, however, believe that a reasonable correlation exists between past scientific accomplishment and capacity to choose effectively between good and poor bets. This contention is open for debate and is — to me — of significant importance.

So confused. First, the supposed rationale: these elite scientists are readily identifiable among a host of well-qualified folks, so that's why he used them for his example, aka the Street Lamp excuse. Next we get a ready admission that the entire thesis he's been pursuing since the riff-raff column is flawed, followed immediately by a restatement of his position based on..."belief". While admitting it is open to debate.

So how has he moved the discussion forward? All that we have at this point is his continued assertion of his position. The data on study section participation do exactly nothing to address his point.

Third, it is clear that HHMI investigators and NIH-funded members of the NAS participate in study sections charged with the review of basic research to a far greater extent than clinical research. It is my belief that study sections involving HHMI investigators and NAS members benefit from the involvement of highly accomplished scientists. If that is correct, the quality of certain basic science study sections may be high.

Without additional information this could be an entirely circular argument. If HHMI and NAS folks are selected disproportionately for their pursuit of basic science (I believe they are, Professor McKnight. Shall you accept my "belief" as we are expected to credit yours? Or perhaps should you have looked into this?) then of course they would be disproportionately represented on "basic" study sections. If only there were a clinically focused organization of elite good-old-backslappers-club folks to provide a suitable comparison of more clinically focused scientists.

McKnight closes with this:

I assume that it is a common desire of our biomedical community that all sources of funding, be they private or public, find their way to the support of our most qualified scientists — irrespective of age, gender, ethnicity, geographical location or any other variable. In subsequent essays, I will offer ideas as to how the NIH system of grant award distribution might be altered to meet this goal.

Nope. We want the funding to go to the most important science. Within those constraints we want the funding to go to highly qualified scientists, but we recognize that identifying "the most qualified" is a fool's errand. Other factors come into play. Such as "the most qualified who are not overloaded with other research projects at the moment". Or, "the most qualified who are not essentially carbon copies of the three other folks funded in similar research at the moment".

This is even before we get into the very thorny argument over qualifications and how we identify the "most" qualified for any particular purpose.

McKnight himself admits to this when he claims that there are lots of other qualified people but he selected HHMI/NAS out of mere convenience. I wonder if it will eventually trickle into his understanding that this mere convenience pollutes his entire thinking on this matter?

Another lovely quote was "Second, even if all HHMI investigators and NIH-funded NAS members were to participate in NIH study sections, they would constitute only 9 percent of the full roster (707 out of 7,886). The only way to change this percentage would be to reduce substantively the total number of reviewers. I will address this concept in next month’s essay."

From this I get the impression that McKnight feels that only those very accomplished scientists (e.g., the HHMI/NAS old boys'/girls' club types, average age probably over 60) are truly capable of identifying significant vertically ascending science that should be funded. This is just an ever-so-very-slightly more politically aware continuation of the riff-raff screed of yore.

I wonder if next month's article will talk about how the NIH should move to a review system where the proposals should just have a title and a biosketch (and maybe an abstract for appearances) and do away with the research plan entirely. That way a reduced review cadre of truly elite scientists (707) can quickly sift through the chaff and find the important people... er... projects to fund.

I do so look forward to McKnight showing that his NAS/HHMI reviewers pay no attention to the Approach criterion and only promote the most highly Innovative applications in maverick decisions that differ from the riffraff on the exact same panels.

What Philapodia said. It is foolishness to take him at his word when he alleges his focus on HHMI/NAS is because of "convenience." It is because of his implicit assumption that those are the only people who should be reviewing science.

Notable is the way in which he implies that HHMI policies are superior:

"The HHMI employs a smaller review team to disburse its funds ($20 million disbursed per reviewer); the NIH employs a much larger team ($1.3 million disbursed per reviewer)."

Assuming his numbers are accurate (as opposed to reflecting his belief of $$ per reviewer), his implication is clear: the NIH is putting the distribution of its precious funds in the hands of a wider number of [crappy riffraff] reviewers.

Let me reach with my riffraff brain for the vertically ascending inference: the NIH is funding things it oughtn't because some shiteasse at a state school who is interested in boring, unimportant things gave it a good score. We could turn a blind eye to this nonsense when funding was less of a chore to obtain for Good People Who Deserve It. Now, however, it is time to put away childish things and focus our limited federal funds on the kinds of science HHMI and the Glam Mags think are important.

Therefore, we must sharply reduce the ratio of shiteasse-riffraff: Distinguished Member of NAS/HHMI reviewers in NIH study sections, to the benefit of Truth, Discovery, All Mankind, and Steve McKnight.

(NB, DM, that clinical research is Not Vertically Ascending and thus probably not deserving of funds. The lack of HHMI and NAS members on clinical study sections tells us this. Clinicians are riffraff too - in fact, they are the definition of Amateur Scientists. /sarcasm)

The more I think about it, "I assume that it is a common desire of our biomedical community that all sources of funding, be they private or public, find their way to the support of our most qualified scientists" is the most revealing and the most fundamentally misplaced statement McKnight has made so far.

He really doesn't understand what the NIH is supposed to be doing. At all.

"I wonder if next month's article will talk about how the NIH should move to a review system where the proposals should just have a title and a biosketch (and maybe an abstract for appearances) and do away with the research plan entirely."

I've been advocating for this forever, but...

"That way a reduced review cadre of truly elite scientists (707) can quickly sift through the chaff and find the important people... er... projects to fund."

...that would be a disaster. If reviewers are a broad cross-section of scientists, as they are now, then the biosketch-and-an-abstract structure could work -- not as a replacement for the entire NIH funding system, but as a type of application that mid- and late-career researchers could apply for.

"I wonder if next month's article will talk about how the NIH should move to a review system where the proposals should just have a title and a biosketch (and maybe an abstract for appearances) and do away with the research plan entirely"

I say we hand out grants based on who is hot and who is not. Photos are all that is required. Problem solved.

Timely article, and actually does a semi-decent job of presenting the options. Poll results are interesting, as 75% voted for more 'staff scientist' positions. I voted for 'reduce the number of postdocs', which so far has received only 5% of the vote. I voted this way because I know that staff scientist positions on a large enough scale to make a difference are a fantasy and, thus, not a solution. As hilariously alluded to in the following passage:

.....Petsko says that funding agencies could step in and enforce change, by demanding that universities direct a portion of their overhead payments — money given to the university rather than the lab — towards creating more staff-scientist positions.

Fucking unbelievably naive. Cue NIH response: we don't have the power or desire to tell institutions what to do. End of story. Next solution, please.

"I assume that it is a common desire of our biomedical community that all sources of funding, be they private or public, find their way to the support of our most qualified scientists — irrespective of age, gender, ethnicity, geographical location or any other variable. "
-Wow. "Qualifications" is just another term for glamour and PI pedigree, right? I mean, you'd actually have to read the proposal beyond the CV to get at actual qualifications, right?
Also, doesn't the NIH have a geographical, ethnic and gender mandate? As well as ESI...
Is he arguing these programs are inhibiting his meritocratic paradise?

Also, doesn't the NIH have a geographical, ethnic and gender mandate? As well as ESI...

policies, yes. not sure what you mean about the mandate. Zerhouni put into place the policy that ESI *success rates* should equal those of established investigators. AFAIK, the other issues come up only in terms of the makeup of study sections, and there are some geographically targeted funding opportunities, but there is no mandate that the University of East SouthWest Dakota get as many grants as Harvard, if that is what you mean. I am not sure there is any gender mandate anymore, but if there were I imagine it would be in terms of keeping the success rates close.

Is he arguing these programs are inhibiting his meritocratic paradise?

Of course he is arguing that any type of affirmative action to combat the old and established biases favoring straight white males safely ensconced in the most heavily funded and most active research institutions is inhibiting awards to the most qualified scientists.

This individual wants to change the principles that have guided biomedical research in the US for decades; namely, the concept that you are only as good as your next project. The NIH has always funded ideas. That may be good or bad, but it has worked better than anywhere else in the world.

I know the European system, and it is very different. What you did in the past, or your connections, matters much more than what you propose to do. I think our system forces you to be sharp, and it worked at identifying the best science until the budget was strangled.

HHMI funds very few people, and only after they have gotten the CNS trifecta; it is extremely political (primarily focused on pedigree); and it is unclear whether the trumpeted accomplishments of its funded researchers amount to more than multiple publications in top glamour journals that actually have little long-term impact in the field. Nothing against their model if it merely complements the NIH mission with very limited efforts.

Dudes, you need look no further than your friendly purveyor of clumpy oil and pipe pieces to your north (Winter is Coming). Read up on the CIHR "Foundation scheme" and then calculate how long it will take for it to infest the arteries of the NIH. It has three mind-boggling stages. The first is precisely the scenario envisaged by Philapodia. I predict this will be further reduced to insertion of your h-index as eligibility to apply. The second stage, assuming you are invited to apply after the aforementioned filter, artificially breaks up each element of concept, approach and expertise into character-limited sections (a page or two) meant to adequately summarize what might be the equivalent of 3 R01s. These two stages are reviewed virtually, with very directed reviewer questions that leave reviewers confused. The last stage is a review of the reviewers! This is face to face and is meant to identify outlier reviews. The Stage 3 reviewers are not meant to read the application. This is not a joke.

These 84 scientists constituted roughly 1.1 percent of the reviewer cadre utilized by the CSR.

Replace black with female and the effect is the same. There is no effin' reason whatsoever to highlight the lack of inclusion of demigods on study section, because there is no evidence whatsoever that they're "better" reviewers. There is, however, very good reason to highlight the lack of POC, URM and wimmin, who are (in my limited experience) often much better reviewers, using objective measures such as actually reading the effin' grant and bothering to write more than stock critiques™.

The line that is really misguided is "accomplished scientists". Essentially, he is making the assumption that past accomplishments = good reviewer. I can tell you from experience, this is not the case. In my opinion there is no overt predictor on a CV of what makes someone a good reviewer. Although receiving competitive funding in the past is usually requisite, how well someone reviews and identifies the best science depends on the amount of time they spend reviewing.

Here's a crazy thought. If only we had a journal that we could look at, from the folks who are in the National Academy of Sciences, where they could publish work they deem worthy. Things that are new and exciting to them. Things that would ignite people's curiosity but may be untraditional or unfunded.

The Academy members could use the platform not to dole out favors, but to elevate the discourse. Yes.....if there were such a journal we could look to it as an example of how NAS members reach across disciplines to demonstrate they are about scholarship, not cronyism. And THEN we could have faith that they should be in charge of not just a journal, but are up to the task of also doling out money for research impartially. Wouldn't that be great?

Wait....they do have a journal don't they? What's it called? PNAS? Humm....let me google 'cronyism and PNAS' and see what happens....

While it is all good to call out this drivel for what it is, are we sure that the powers that be do not see it otherwise? Given the outsize influence that the HHMI/NAS crowd wields with the NIH, I really worry when the well-heeled decide that the system is broken (McKnight) and the down-trodden need help (Alberts/Tilghman/Kirschner et al.) The entire power of the American scientific enterprise lies in its democracy. Despite study section conservatism and the "bunny hopping" mentality, good science gets funded and does have strongly positive outcomes.

We had one of these dudes recently serve ad hoc for two cycles, as there is definitely already (and has been for a long time) a strong push within CSR to try to get them to serve as charter members. He was simply incorrigible in his insistence on giving exceptional scores to exceptional scientists whose grant applications were far from exceptional. During discussion, he just kept saying, "You *know* that Professor Whitey McDoodenstein is going to keep doing great science." Obviously, he will not be asked to become a charter member, because he refused to do the reviewing job properly.

McKnight can gibber all he wants, but the reasons there aren't more of these dudes on study sections are (1) most of them refuse to serve and (2) most of the ones who agree to serve just give handjobbes to their buddies, and so are useless and get booted. Of course, what McKnight really wants is for all his buddies to get more handjobbes, but fucke him.

" He was simply incorrigible in his insistence on giving exceptional scores to exceptional scientists whose grant applications were far from exceptional."
-So what's the point of an "exceptional" grant application if it's sort of understood that the actual experiments proposed in the grant are, at best, a rough outline of what will actually be done? Or did you mean exceptional in some other way?
I think part of the anger towards these guys is that they're affronted by the hoop jumping the rest of PIs do, which makes them clueless and out of touch, but we should remember that it doesn't make hoop jumping any more meaningful.

I've heard multiple people argue for merit-based awards that would fund the "person not the project". How would that scheme be any different from the ones these BSDs on SS seem to be espousing?

The cool thing about study sections/grant panels is that the performance of the individual guest reviewer is apparent to the rest of the committee - especially their biases, attitudes, arrogance, empathy, etc. This is an essential social element of peer review, because we entirely depend on the integrity of the peers, and the correlation between productivity, track record, etc. and the ability to review effectively has a coefficient of 0.5 or less.

This is also why much of the review burden is placed on a smaller fraction of scientists than you'd expect. Indeed, good reviewers, by definition, are still able to cast aside a poor personal experience with a "guest" reviewer when that person's application comes up.

Of course, there is a significant advantage to regularly serving on a study section in terms of experience with grants and exposure to (sometimes) great writing, new science and techniques, as well as some level of respect for your own reviewing "chops". However, some of the best grant reviewers I know struggle with their own grant support. This might relate to impostor syndrome, hypercriticality of their own work, etc. I don't know.

@jmz4gtu: "I've heard multiple people argue for merit-based awards that would fund the "person not the project". How would that scheme be any different than the one's these BSDs on SS seem to be espousing?"

If you have a *diverse* group of scientists evaluating persons(-not-projects), then presumably their opinions would reflect that diversity. If you invited a bunch of BSD glamhounds to be reviewers, the reviews would be skewed towards favoring those sorts of schmucks. But it doesn't have to be that way. If *you* were tasked with evaluating a set of applicants based on past work, would *your* critiques necessarily be skewed towards favoring BSDs?

NIH could also take steps towards mitigating BSD bias by making it explicit that where results are published is not to be used as a criterion for evaluation. Among other things, they could make it a rule that critiques have to describe the impact of an applicant's prior work in terms of how it specifically led to other work. In the same way that NIH requires specific criteria to be used for evaluation now, an appropriate set of criteria could be developed for evaluating people's records. Those specific criteria now mitigate BSD bias; there is no reason why they couldn't be equally effective when they involve evaluating past performance.

This is the conversation no one wants to have...we like to assume everyone is participating in good faith to make some version of the "best" scientific review process given the circumstances. We can acknowledge there is going to be a spectrum of opinions and biases, but still believe that everyone is advocating for something they think is going to improve or at least sustain the system.

This is not the case. McKnight is a gift because he is so incompetent at applying the veneer of good faith argument to his transparent cronyism (always cut in before getting out the rollers, dude).

There has been steady success since the 1970s at slowly weakening the old boys club. Of course it remains, but it has been substantially weakened. What CPP describes (and what Jim describes CIHR is doing in Canada) is not breaking grant review by mistake, it's the counterattack from the old boys: purposefully eliminating scientific review in favor of cementing the old boys club as official policy.

I have been following this discussion, but have been staying on the sidelines for reasons that may be obvious. Now, I do feel compelled to respond to some of the issues raised.

First, there have been individuals pointing out some of the limitations of Steve McKnight's analytical approach, but they have not had much success. One point of particular concern is the use of the NAS as a surrogate for scientific accomplishment, followed by the conclusion that these folks are well represented on basic science study sections but not on more clinical ones. Other groups, such as the IOM or the American Society for Clinical Investigation, would seem equally justifiable (whatever that means) and were suggested.

Second, I largely disagree with mH in that I think Steve McKnight is sincerely concerned about the quality of review and the allocation of NIH resources and is not just interested in favoring his cronies. Rather, I think he is trying to understand why some of his highly accomplished peers are struggling with funding when they haven't in the past. He is examining the review system as a cause and not considering seriously enough that the level of competition is now much higher than it used to be and that the days of submitting a mediocre proposal and expecting to get a fundable score are over, no matter who you are. I, personally, find this lack of consideration of possible explanations more troubling than a simple charge of cronyism.

Finally, thanks, DM for pointing out the earlier analyses of the importance of approach in NIH scoring. It is frustrating that such analyses get overlooked when people who have not been thinking about these issues get engaged in them.

"He is .... not considering seriously enough that the level of competition is now much higher than it used to be and that the days of submitting a mediocre proposal and expecting to get a fundable score are over, no matter who you are. "

DH, I think this is why there needs to be visible commentary by the people who have been talking and thinking about this for a long time. McKnight and others like him most likely do not read DM's or your blog (unless he's actually posting as CPP!), and there are probably fewer than 1,000 scientists who actively follow your blogs, so how are they to know about this unless we bring the discussion to them? Seems like you have generated a lot of good data for a paper in C/N/S that brings these discussions more to the forefront.

Datahound, who knows about McKnight's inner feelings, and honestly who gives a flying fucke? Fact #1 is that the only reason he gives a shitte about any of this is because his buddies are starting to feel some pain. And Fact #2 is that his supposed solutions are specifically designed to favor his buddies.

Whether he is delusional and thinks he is "fixing the system" doesn't mean fucke all. Every delusional self-interested zealot thinks they are "fixing the system". McKnight is no different than a right-winger bleating about voter fraud and championing voter ID laws that will have the effect of disproportionately disenfranchising poor non-white voters. Maybe those fuckewittes really believe shitttons of Mexicans are voting and this is why no Real American can get elected president. But who cares what these fuckers think? What matters is what they say and do.

"I think he is trying to understand why some of his highly accomplished peers are struggling with funding when they haven't in the past."

His a priori assumption is that OF COURSE his peers' proposals should be funded. He is not seeking understanding of this, he assumes it is because the process is flawed and is seeking any means to correct the situation: get the money to the previously successful and to hell with the riff-raff (all the people whose names he doesn't know).

This leads to his dismissal of the idea that the Center for Scientific Review should care about the "details of the proposed research plan" and instead should care whose name is at the top. He has made it clear what names those should be in earlier posts, before he learned not to dismiss out of hand anyone he hasn't known since the 80s. That, to me, is flat out cronyism. The fact that he may cherish the delusion that his generational peers are "best" does not make it less so.

If McKnight held deep and honest concerns over "quality" in peer review, I'd expect to find more essays he may have written over the years on how grant review could be improved to ensure fairness for women, minorities, new investigators, etc., who have all fought uphill against obvious bias over large parts of his career.

Nope. Looks like what lights his fire is the temerity of mere study sections denying funding to the Greats who deign to submit proposals.