Where did all the risk-takers go?

Has the system driven risk-takers out of scientific research?

In a recent letter to The Guardian, a very prominent group of scientists made the case for allowing more mavericks in science. I have a lot of sympathy with them, in part because they are right. The scientists who make the largest contributions are also the ones who do stuff no one else thought of, or work on a problem that everyone else thought was uninteresting. They are the ones willing to break the rules to do an experiment that they thought was absolutely necessary.

The impact of risky research cannot be overstated, so a plea to nurture risk-takers seems obvious. But, exactly who are we pleading with here? Most governments are largely hands-off when it comes to disbursing funds for scientific research. The people who decide if Jane Scientist gets some money are John and Jenny Scientist. So, why are scientists themselves so unwilling to give money to truly risky ideas?

Paying for risk

I don't think there is any single answer to this, but let me outline a few ideas that define the problem. My first successful grant application was, as the head of the grant committee told me afterward, funded because they always try to fund a couple of things that they think won't work. Now, every time I tell this story, everyone says "Oh, that's great, that is what we need more of." But, there is a difference between throwing 10 to 20 thousand dollars at a young researcher like me to try a few quick experiments and awarding a major grant of 200 to 500 thousand dollars, putting the success of a PhD student on the line in the process.

Part of the problem is the changing nature of science. Back when a lot of the great maverick breakthroughs occurred, a working scientist was, well, a working scientist. Millikan did his oil drop experiment himself. Feynman actually did much of his own work. Most scientists may have had an assistant or two, but they spent much of their time at work doing science for most of their careers.

That is very different from today, when most senior scientists do not go into the lab and do not perform experiments themselves. And that is a huge shame: to develop years of experience doing experimental work, only to stop once you've trained an assistant or two. Today's mavericks can't be mavericks because they have delegated responsibility for science to trainees who are too busy developing skills to perform extraordinary experiments.

Now, the scientists among you will be saying "but I still have great and radical ideas while not in the lab. I simply cannot get them funded." Why do other scientists refuse to fund truly radical ideas? Unfortunately, one person's radical idea is another's pedestrian approach that only involves sticking old science in the microwave and re-heating it. It is only in retrospect that we can see which ideas turned out to be truly important and changed a field forever. Trying to make that judgement in advance when evaluating grants is an impossible task.

Weighing risks

The challenge of identifying the truly novel ties into the problem of peer review and criticism in science. Peer review and, in particular, open criticism are what drive science forward. My science is made more robust by having to produce data to refute criticism leveled at me. As the stories of Shechtman, H. pylori, and arsenic life tell us, this process is both frustrating—it takes a long time for truly radical results to get absorbed into the mainstream—and highly effective at rooting out mistakes in science.

But the key word above is open. I must be aware of criticism before I can answer it, and I have to be willing to accept that people will try to find flaws in my results. This is what makes the recent movement toward post-publication peer review so exciting: science will move faster.

(Post-publication peer review is simply having the broader scientific community look for problems after a paper is published; typically, only two or three peer reviewers get a chance to look before. With more eyes, problems become apparent more quickly. It's what recently identified problems with some high-profile stem cell work.)

But, in the case of deciding what science to do—which scientists shall we give money to this year?—this style of criticism runs into problems for a very simple reason: the results simply aren't in yet. Because of this, your grant typically relies on some preliminary results and a healthy dose of scientific reasoning.

Essentially, a grant application with reasoning that is immune to criticism is one that explores nothing new and takes no risks. The more radical the approach, the riskier the project, and the easier it is to find fault. The risky ones are the projects that want to head into the unknown and find gold in them thar hills. But, most hills don't have gold in them, and it is damned easy to argue that the hill you are headed for has no gold. Not only that, the plain in front of the hill has nothing to recommend it, so even the journey to the hills will be uneventful and boring.

So, there are plenty of grounds to attack any risky grant proposal. And, for the truly radical idea, there is no defense available. You cannot prove that gold is likely because it really is new territory. When funding rates are 25 percent or lower (and in some areas funded by the NIH, rates are less than 10 percent), a single reviewer who thinks your idea is too risky can be the difference between funded and unfunded. The same is true if the reviewer says that your work is dull, so scientists have to walk a fine line that disguises the risks and makes the dull look interesting. The important thing is to not have any significant complaints to answer—because, even if you can answer them, they make a lasting impression and reduce your chances of getting funded.

A waste of time

What this all means is that the vast majority of peer review for science funding is a waste of everyone's time—at least in terms of deciding who should or should not get funding. But, while the criticism that comes from grant reviewing is not useful for deciding who gets funded, it does help the scientist: the criticism leads to more carefully designed experiments.

The whole grant evaluation procedure, though, stems from a mistaken belief. Most scientists believe that they can recognize a good idea when they see one. I am not convinced of that. Indeed, the vast majority of working scientists will tell you of great ideas they had that failed to get funding because someone else didn't recognize them. They will also tell you in exquisite detail how a rival lab went on to publish exactly what they proposed to do. We all recognize this effect: we all rate our own brain as above average.

Not that this is a deliberate choice. The government gives funding agencies a relatively free hand. Go forth, fund great science, they say. Sometimes they say go forth, fund great cancer research, or quantum physics, or whatever. But generally, they let us scientists decide and often ask that we take some risk and fund curiosity-driven research. The funding agencies themselves tell us that they want some projects that are relatively risky and exploratory. The scientists who review proposals like innovative proposals.

Why don't we end up with this risky research, then? To an extent, it goes up in a puff of self-inflicted pressure. We all know that not everyone agrees on what is innovative and risky. So, when you are asking for money, you err on the side of caution. The pressure to be conservative is the pressure to survive, since researchers are only as good as their last successful grant application. So most scientists are suppressing their ambitions a bit in order to maximize their chance of occasional funding. Or, perhaps more accurately, they break their ambitions up into tiny digestible nibbles so that they have many rather conservative applications that are headed toward something innovative.

Maybe the editorial writers are overreacting then? Maybe innovative and risky science is still being done? If so, it just isn't being done or presented as a single giant leap. Rather, it is broken up and presented piecemeal as small, rather bland appetizers.

Even if that isn't happening, the question, of course, is how do we modify the system to allow more risk? The simple answer, I believe, is that we can only do this if we also accept that some truly dreadfully bad ideas get funded. By this, I mean that to accept the risk of the truly, unexpectedly great, we also accept the risk of funding perpetual motion machines. I don't think anyone is willing to go there, to be honest. And that might just mean we are stuck with the system we've already got.

Chris Lee / Chris writes for Ars Technica's science section. A physicist by day and science writer by night, he specializes in quantum physics and optics. He lives and works in Eindhoven, the Netherlands.