How we review peer review

By Alison McCook | December 20, 2007

In case you have never attended an open house workshop at the NIH's Center for Scientific Review, in which people who participate in NIH peer review gather to discuss how the process is going and could improve, here's how it typically goes:
Tuesday morning (December 18), about 80 stakeholders such as study section leaders in the Biomolecular group (about one-sixth of the entire CSR) gathered in a large auditorium of the Natcher Building on the NIH campus to consider two questions.
1. What will be the most important questions and/or enabling technologies you see forthcoming within the science of your discipline in the next 10 years?
2. Is the science of your discipline, in its present state, appropriately evaluated within the current study section alignment? Suggestions?
(Note: Concurrent with the CSR review, the NIH is conducting an agency-wide review of peer review. You can weigh in (http://www.the-scientist.com/news/display/54009/) on some of the changes the agency is considering.)
The attendees divided into three groups: Biophysics and Cell Biology (BCB); Biochemistry, Molecular Genetics, Genetics, and Evolution (BMGE); and Computations, Modeling, Large Data Sets, Theory, Genomics, and Proteomics (CMTP). Each group had only one hour to debate each question - with a handful of participants doing most of the talking, as in most group discussions - and then feverishly distill the debate into three essential points, tweaking the wording as they went. Everyone then filed back into the auditorium, where study section chair co-facilitators had about five minutes each to present their group's three points, followed by approximately 15 minutes for the audience to ask questions or react.
Amazingly, the groups often appeared to come to some consensus. The word of the day was "integration" - of tools, multidisciplinary approaches, datasets, experimentalists with computational modeling, knowledge about single molecules/cells/organs, etc. There was much talk about the importance of training people to use bioinformatics tools to evaluate data that have already been accumulated, rather than simply adding more data to the pile.
There were definitely points with no consensus, however. Are NIH peer reviewers mentoring applicants - that is, providing guidance on how to improve an application - or simply delivering what amounts to a yes-or-no vote? Should they do more mentoring, or less? During the BMGE discussion of question 2, an attendee asked: What happens if peer reviewers recommend a particular experiment, the applicant goes to the trouble of doing that experiment, and the grant still isn't approved? Reviewers can't mentor applicants "in good conscience if they can't guarantee the application will get funded," he argued.
Around lunchtime, CSR Director Toni Scarpa reviewed the changes the agency was already implementing, partly as a result of the previous open houses held throughout the year: shortening the review cycle, reducing the number of initial review groups (IRGs), and conducting more reviews electronically.
Last year, the CSR received 80,000 grant applications, reviewed 52,000, recruited 18,000 reviewers, and conducted 1,800 review meetings.