About this Author

Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship on his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases.
To contact Derek email him directly: derekb.lowe@gmail.com
Twitter: Dereklowe

May 18, 2011

Funding People, Not Projects?

Posted by Derek

Tim Harford (author of The Undercover Economist and The Logic of Life) has a new book coming out, called Adapt. It's about success and failure in various kinds of projects, and excerpts from it have been running over at Slate. The first installment was a look at the development (messy and by no means inevitable) of the Spitfire before World War II (I'd also add the de Havilland Mosquito as another example of a great plane developed through sheer individual persistence). And the second one is on biomedical research, which takes it right into the usual subject matter around here:

In 1980, Mario Capecchi applied for a grant from the U.S. National Institutes of Health. . .Capecchi described three separate projects. Two of them were solid stuff with a clear track record and a step-by-step account of the project deliverables. Success was almost assured.

The third project was wildly speculative. Capecchi was trying to show that it was possible to make a specific, targeted change to a gene in a mouse's DNA. It is hard to overstate how ambitious this was, especially back in 1980. . .The NIH decided that Capecchi's plans sounded like science fiction. They downgraded his application and strongly advised him to drop the speculative third project. However, they did agree to fund his application on the basis of the other two solid, results-oriented projects. . .

What did Capecchi do? He took the NIH's money, and, ignoring their admonitions, he poured almost all of it into his risky gene-targeting project. It was, he recalls, a big gamble. If he hadn't been able to show strong enough initial results in the three-to-five-year time scale demanded by the NIH, they would have cut off his funding. Without their seal of approval, he might have found it hard to get funding from elsewhere. His career would have been severely set back, his research assistants looking for other work. His laboratory might not have survived.

Well, it worked out. But it really did take a lot of nerve; Harford's right about that. He's not bashing the NIH, though - as he goes on to say, their granting system is pretty similar to what any reasonable gathering of responsible people would come up with. But:

The NIH's expert-led, results-based, rational evaluation of projects is a sensible way to produce a steady stream of high-quality, can't-go-wrong scientific research. But it is exactly the wrong way to fund lottery-ticket projects that offer a small probability of a revolutionary breakthrough. It is a funding system designed to avoid risks—one that puts more emphasis on forestalling failure than achieving success. Such an attitude to funding is understandable in any organization, especially one funded by taxpayers. But it takes too few risks. It isn't right to expect a Mario Capecchi to risk his career on a life-saving idea because the rest of us don't want to take a chance.

Harford goes on to praise the Howard Hughes Medical Institute's investigator program, which is more explicitly aimed at funding innovative people and letting them try things, rather than the "Tell us what you're going to discover" style of many other granting agencies. Funding research in this style has been advocated by many people over the years, including a number of scientific heroes of mine, and the Hughes approach seems to be catching on.

It isn't straightforward. You want to make sure that you're not just adding to the Matthew Effect by picking a bunch of famous names and handing them the cash. (That's the debate in the UK after a recent proposal to emulate the HHMI model.) No, you're better off finding people with good ideas and the nerve to pursue them, whether they've made a name for themselves yet or not, but that's not an easy task.

Still, I'm very happy that these changes in academic funding are in the air. I worry that our system is sclerotic and less able to produce innovations than it should be, and shaking it up a bit is just what's needed.

Another layer to this situation is that what's considered "too avant garde" or "too pedestrian" today will be different tomorrow, whereas the decision to fund or not fund a grant is a definite event that takes a lot of time and effort to undo.

I would also caution against tilting the criteria of fund-worthiness too far toward "novelty" (whatever that means on any particular day). Otherwise, a lot of work that fills in important blanks, which is almost universally considered non-novel, will go neglected in favor of the new and shiny. I wish I knew what the right balance would be. Perhaps we'll know it when the level of dissatisfaction is minimal, a non-zero figure to be sure.

In theory, academic institutions take this approach in their tenure decisions -- giving a long-term position to the people they think will be most productive over the long haul.

I believe a graduate student group at Harvard Medical School once sold a T-shirt with the names of scientists rejected for tenure there who went on to win a Nobel Prize. In a similar vein, while HHMI is a great force in biomedicine, it would be interesting to see how well it has done at choosing winners.

"...a sensible way to produce a steady stream of high-quality, can't-go-wrong scientific research..." is an oxymoron. If it's research, you can't entirely know where it's going, and it can always go wrong. That's the irreducible conundrum of funding research.
If you insist on the "can't go wrong," you get Development (at best) but not Research.

Are taxpayers, Congress, and others willing to risk failure? Although the rare successes are trumpeted as reasons to invest in risky research endeavors, are we prepared for the failures? (Many of us may be, but I don't think the general public would be.)

Who's innovative? Most tenure positions are given to sycophants who pursue politics with an almost religious zeal. The most successful academics I know did the least work in the labs. They merely cherry pick ideas from their students and post-docs.

As the lab increases in size, the academic 'gets richer' and seemingly more intelligent.

Any academic who does not pursue the interests of the senior faculty will not get tenure. I think the solution is to abolish the tenure system and develop an R&D system outside the university system.

Right now any individual with an innovative idea in the hard sciences is at the mercy of generally evil academics who pursue a narrow agenda.

I am an "Angel Investor" in Silicon Valley. Over the years our organization has developed a few basic principles, and one of the most important is: Invest in the people, not the product. Of course, the propositions we see are way over on the D side of the R/D continuum, but it's hard for me to believe that the "people" get less important, rather than more, as you move to the R end of the line.

HHMI has funded some excellent people from the very beginning of their careers (Young Investigator/Early Career Award grants). They also push institutions not to force researchers into spending their time in the classroom, making that a condition of grants.

Their total spend (~$35m/yr) is around NIH's paperclip budget, so that's not a real solution.

I call BS on the whole article. He asked NIH for money, he got it (!!!!), he did some work related to the described aims, and then won the Nobel prize. Ergo the NIH funding system works. If you're going to split hairs about which bits of the proposal were better than others, at least let us see the "pink sheets" in which the reviewers liked Aims 1 & 2, but trashed Aim 3. As it stands, the available evidence (hearsay) doesn't add up to much.

Maybe when I win the Nobel prize (haha!) I can look back in my memoir and claim that I once had a grant where the reviewers didn't like Aim 3, but I got it anyway so who cares. This whole story is a non-event.

Be honest - has anyone ever had a grant where the reviewers liked all 3 aims?

Pushing against CR @#5:
When you submit an R01 to the NIH today, you really have to have 60% or more of the data already in hand. So while "success is (was) almost assured" isn't technically accurate, the phrase "success is assuredly likely" is very accurate. And to hit that sweet spot with regularity, it helps if your proposal is derivative rather than exploratory.

And that doesn't even get at the COP (circle of patronage) that forms within study sections composed of established individuals awarding grants to their known peers... rinse and repeat reciprocally.

@stakx I don't think the answer should be as reductionist as 'success' or 'failure.' Look at the report from Battelle that was published last week on the economic impact of the Human Genome Project. The past year has seen genomics come under a lot of fire from people claiming it was a waste of money and achieved nothing. Except it turns out that every dollar invested by taxpayers generated $141 in the wider economy. Sure, you can quibble about the exact number, perhaps it's a third of that, maybe even a tenth, but either way that's a very significant positive impact for the nation even in the absence of a bunch of disease cures.

Critics of the report say that the methods used to calculate these numbers, despite being common practice in such studies, are flawed. For example, some of the costs of the project--such as the salaries of those working on it--are counted as benefits.

"What they did is conventional and reasonably done, for what it is," says economist Bruce Weinberg at Ohio State University in Columbus. "But at a deeper conceptual level, it's not very consistent with economic logic. All those guys who wound up sequencing the genome? Those aren't the benefits, those are the costs of sequencing the genome."

Interesting post, especially considering the previous post on the hypersensitive BS detector. It can be quite difficult to identify the truly innovative person or idea, especially early on. As a result almost all organizations & people I know fall back on the shorthand of 'big-time papers' and/or 'big-time mentors'. This attitude is pervasive in all the hiring environments I've been in - with very mixed results.

Stuart Kauffman and John Holland have shown that the ideal way to discover paths through a shifting landscape of possibilities is to combine baby steps and speculative leaps. The NIH is funding the baby steps.

Now we can argue whether this is included as a sop to the other side, but it's an argument I agree with. You can't get big leaps without a pool of people making incremental improvements in the supporting science and technology. That's in addition to Harford's main argument for giving the right people enough freedom to make the leap.

See also: The Multiple Discovery theory of scientific discovery vs the Great Man theory. It's not a particularly new argument, or unique to a particular science. This one just has a big pile of money at stake.

This isn't about what is 'required' today by NIH reviewers, but rather a comment on what research is. If success is in any way 'assured,' then it is not research.

And your 60% is quite an exaggeration. Yes, preliminary data is required, but 60%? Come on. Circle of patronage? Sorry your grants were not funded; it is tough out there (I know from experience). Just try harder, and have a better idea.

An idea I've tossed out over the years is to have a private foundation fund rejected NIH/NSF grant proposals that show high variance in their reviewer ratings. No rewriting needed: just ship us the exact proposal along with the reviewer comments, and we'll pick the high-variance ones we like.

The theory behind this is that reviewer variance might be a proxy for "innovative & not-too-crackpotty." Of course if the reviewers at the funding agencies start taking our foundation's behavior into account, we get a tricky game-theory problem. So we'd want to be a small enough funding source that we don't have a big overall effect on their upstream behavior.
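The triage rule described above is simple enough to sketch in a few lines. The proposal IDs, scores, and the `top_n` cutoff below are all invented for illustration (no real review data is implied); the point is just that the spread of reviewer scores, not their average, drives the ranking.

```python
# Hypothetical sketch of the "high-variance" triage rule.
# Scores follow the NIH convention (1 = best, 9 = worst); all data invented.
from statistics import pvariance

proposals = {
    "P-001": [2, 2, 3],  # reviewers agree: solid but unfunded
    "P-002": [1, 8, 9],  # reviewers split: possible lottery ticket
    "P-003": [5, 5, 6],  # consensus middle-of-the-pack
    "P-004": [2, 3, 8],  # one dissenting champion or detractor
}

def high_variance_picks(scores_by_id, top_n=2):
    """Rank proposals by the spread (population variance) of reviewer scores."""
    ranked = sorted(scores_by_id.items(),
                    key=lambda kv: pvariance(kv[1]),
                    reverse=True)
    return [pid for pid, _ in ranked[:top_n]]

print(high_variance_picks(proposals))  # the most-disputed proposals first
```

Note that P-001 and P-002 have similar mean scores in some scenarios, yet only the disputed one surfaces; averaging, which is roughly what a funding cutoff does, would treat them alike.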