From top to bottom of the profession, scientists are forsaking their chosen vocation in greater numbers than ever before, in favour of a more dependable and less stressful source of income. What is the basis of this stress and uncertainty, which so severely depletes the ranks of that indispensable community who seek to further humanity's understanding of nature, and thereby our ability to manipulate nature for the greater good? At the sharp end, it is the members of those ranks—scientists themselves—via the convention of apportioning funding by peer review of grant applications.

Only at the sharp end, of course: I certainly do not lay blame at scientists' feet. In fact, I don't really lay blame anywhere: the issue is that the prevailing system evolved in a different time, and in circumstances to which it was well suited, but has signally failed to adapt—indeed, has shown itself intrinsically non-adaptable—to present conditions. What is needed is a replacement system, which solves the problems that everyone in science agrees exist today but which still distributes funds according to metrics that all constituencies agree are fair.

The basic obstacle to doing this is that the overall merit of the contemporary peer-review system is apparently a local maximum: numerous tweaks have been proposed, but all have resisted adoption because they do more harm than good. But is it a global maximum: is it, as Churchill described democracy, the worst option except for all the others, or could a radical departure rank more highly by all key measures? Here I sketch a possible option. I am not sure it ticks all the boxes (though I do quite like it), but I do claim it shows sufficient promise as a candidate that the scientific community should no longer acquiesce in the current system on the assumption that nothing better is possible.

First, briefly: what's so wrong with peer review of grant applications these days? Two words: pay line. Peer review evolved when the balance between supply and demand of public research funds was such that at least 30% of applications could be funded. It worked well: if you didn't really know how to design a project, or how to communicate its value to your colleagues, or how to perform it economically, these failings would emerge and you would learn how to avoid them until eventually those colleagues would recommend to the government that you be given your chance. But these days, the corresponding percentage is typically in single digits. Does that mean you just have to be really good? I wish.

What it actually means is that you have to be not only really good but also really persistent, and moreover—and this is by far the worst aspect—really, really convincing in your argument that the project will succeed. What's so bad about that? Simply that some projects are (much) easier than others, and the hard ones tend to be those that determine the long-term rate of progress of a discipline, even though they have a significant failure rate. As such, a system that overwhelmingly neglects high-risk high-gain work hugely slows scientific progress, with catastrophic consequences for humanity. Also, cross-disciplinary research—work drawing together ideas not previously combined, which historically has also been exceptionally fruitful—is almost impossible to get funded, simply because no research panel ("study section", in NIH vernacular) has the necessary range of expertise to understand the proposal's full value.

I claim that this would be largely solved by a system based on peer recognition rather than peer review. When a scientist first applies for public research funds, his or her career would be divided into five-year periods, starting with the past five years (period 0), the coming five (period 1), etc. Period 1 is funded at a low, entry-level rate on the basis of simple qualifications (possession of a doctorate, number of years of postdoctoral study, etc), and without the researcher having provided any description of what specific research is to be undertaken. Period 2's funding level is determined, as a percentage of total funds available for the scientist's discipline of choice, again without any description of what work is planned to be performed, but instead on the basis of how well cited was his or her work performed in period 0.

This decision is made at the end of period 1 year 4, based on all citations since period 0 year 2 (so a total of eight years) to papers published in period 0 year 2 through period 1 year 1 (five years, approximating the interval when work done during period 0 will have been published). Citations are weighted according to whether one is a first/senior/middle author; self-citations are not counted; only papers reporting new research that depended on research funds are counted. Consideration is given to seniority and level of funding during the relevant period, according to a formula applied across the board rather than by discretion. Funding for period 3 is determined similarly, at the end of period 2 year 4, on the basis of work performed during period 1, and so on. Flexibility is incorporated concerning front-loading of funds to year 1 of a given period, to allow for large capital expenditures.
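The weighting rules just described can be made concrete in a short sketch. To be clear, this is only an illustration: the essay specifies that author position matters, self-citations are excluded, and only funded, original research counts, but the particular weight values, field names, and the share formula below are my own assumptions, not part of the proposal.

```python
# Illustrative sketch of the peer-recognition funding metric described above.
# The author-position weights (first/senior = 1.0, middle = 0.5) are assumed
# values; the essay says only that citations are weighted by author role.

AUTHOR_WEIGHTS = {"first": 1.0, "senior": 1.0, "middle": 0.5}  # assumption

def citation_score(papers, evaluee):
    """Score a researcher's papers from the evaluation window
    (period-0 year 2 through period-1 year 1), counting citations
    accrued over the following eight years."""
    score = 0.0
    for paper in papers:
        # Only papers reporting new research that depended on funding count.
        if not paper["reports_new_funded_research"]:
            continue
        # Self-citations are excluded.
        cites = [c for c in paper["citations"] if c["citer"] != evaluee]
        score += AUTHOR_WEIGHTS[paper["author_role"]] * len(cites)
    return score

def funding_share(score, all_scores, discipline_pool):
    """Next period's funding as a proportion of the discipline's total pool,
    relative to all applicants' scores (a simplifying assumption: the essay
    says funding is a percentage of the pool, not how the percentage is set)."""
    total = sum(all_scores)
    return discipline_pool * (score / total if total else 0.0)
```

A seniority- and prior-funding-based adjustment, applied "across the board rather than by discretion" as the essay requires, would multiply each score by a formula-driven factor before the shares are computed.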

This improves on the current system in many ways. Zero time is spent preparing and submitting (and re-submitting…) descriptions of proposed research, and zero money on evaluating such proposals. Bias against high-risk high-gain work is greatly reduced, both by the lack of peer review and also because funding periods exceed the currently typical three years. Significance of past work is evaluated after an appropriate period of time, not by such "first-impression" measures as the impact factor of journals where one has just published. One can also split one's application across multiple disciplines, with the funding level from each discipline prorated accordingly, removing the bias against cross-disciplinary research. Finally, one has a year at the end of a period to plan what work one will do in the next period, in full knowledge of what resources will be at one's disposal.
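The cross-disciplinary split is simple arithmetic. A minimal sketch, assuming the applicant declares what fraction of the application falls under each discipline (the fraction and award figures are invented for illustration; the essay states only that each discipline's contribution is prorated):

```python
def split_funding(fractions, awards):
    """Combine one researcher's funding across several disciplines.

    `fractions` maps each discipline to the declared share of the
    application assigned to it (must sum to 1); `awards` maps each
    discipline to what the researcher's citation record would earn
    there if the whole application sat in that discipline. Each
    discipline contributes its award scaled by the declared fraction.
    """
    if abs(sum(fractions.values()) - 1.0) > 1e-9:
        raise ValueError("fractions must sum to 1")
    return sum(awards[d] * f for d, f in fractions.items())
```

For example, an application split 60/40 between two disciplines draws 60% of its first-discipline award and 40% of its second-discipline award.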

Researchers are of course free to seek additional funds from elsewhere (and indeed, some public funds could still be apportioned via the traditional method). Thus, this need not even be a particularly massive dislocation: it could easily be phased in.