Stakeholder Perceptions of Peer Review at the National Institutes of Health Center for Scientific Review

Mary Ann Guadagno,1 Richard K. Nakamura1

Objective To identify best practices for the successful peer review of grant applications and areas for improvement at the National Institutes of Health (NIH) Center for Scientific Review (CSR), an evaluation study was guided by 2 questions: (1) To what extent are current CSR peer review practices optimal for achieving its mission? (2) What are the areas of success and improvement in the quality of peer review?

Design Pilot assessments were conducted to develop a short “Quick Feedback” survey instrument with 4 statements measuring key features of peer review, each rated on a 7-point Likert-type scale ranging from “strongly agree” to “strongly disagree,” and an open text box for comments. During 1 grant cycle in each of 2015-2016 and 2016-2017, surveys were sent to 10,262 and 10,228 reviewers, respectively, in all CSR study sections. In 2015, a survey was sent to 916 NIH Program Officers (POs), and in 2016 a replication survey was sent to 905 POs. During 2015, 27 focus groups were conducted with 4 stakeholder groups, and 10 personal interviews were completed with NIH Institute Directors. Focus group participants were selected from NIH databases to ensure diversity. Interrater reliability between coders was 95.8%.
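The interrater reliability figure above can be illustrated with a minimal sketch, assuming it was computed as simple percent agreement between 2 coders assigning thematic codes to the same comments (the abstract does not specify the metric, and the codes below are invented for illustration):

```python
def percent_agreement(coder_a, coder_b):
    """Share (as a percentage) of items on which both coders assigned
    the same code. Raises if the coders rated different item counts."""
    if len(coder_a) != len(coder_b):
        raise ValueError("Coders must rate the same items")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100.0 * matches / len(coder_a)

# Hypothetical codes assigned by 2 coders to 8 focus-group comments:
coder_a = ["burden", "expertise", "logistics", "burden",
           "other", "expertise", "burden", "logistics"]
coder_b = ["burden", "expertise", "logistics", "burden",
           "other", "expertise", "other", "logistics"]

print(percent_agreement(coder_a, coder_b))  # 7 of 8 agree -> 87.5
```

Percent agreement is the simplest reliability measure; chance-corrected statistics such as Cohen's kappa are also common in qualitative coding but require the full code distribution to compute.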

Results The 2015-2016 reviewer survey yielded a response rate of 47.1% (4832 of 10,262), and the 2016-2017 reviewer survey yielded a response rate of 47.0% (4807 of 10,228). The 2015 PO survey had a response rate of 38.0% (348 of 916), and the 2016 replication PO survey had a response rate of 37.0% (335 of 905). Nonrespondents did not differ substantially from respondents. In both years, reviewers responding to the “Quick Feedback” surveys reported a high level of satisfaction with the peer review process: more than 80% either “strongly agreed” or “agreed” that panels did a good job with scoring and discussion and that CSR did a good job with the quality of rosters and assignments (Figure). Program Officers were less favorable than reviewers in both years, with only 43% to 57% responding favorably. POs’ dissatisfaction with review meetings centered on insufficient reviewer expertise generally and on technical and logistical challenges at meetings specifically. Focus group results supported these findings. Areas for improvement included reducing the burden of peer review for all stakeholders, resolving technical and logistical issues during meetings, clearer communication, and more guidance on preparing applications.

Figure. The 2015-2016 reviewer survey yielded a response rate of 47.1% (4832 of 10,262), and the 2016-2017 reviewer survey yielded a response rate of 47.0% (4807 of 10,228). The 2015 Program Officer (PO) survey had a response rate of 38.0% (348 of 916), and the 2016 replication PO survey had a response rate of 37.0% (335 of 905). IAM indicates internet-assisted meeting; VAM, video-assisted meeting.

a Strongly agree or agree refers to a rating of 1 or 2, respectively, on the 7-point Likert-type scale.

b IAM reviewers were not included in 2016.

Conclusions A comprehensive evaluation using systematic surveys, focus groups, and interviews yielded useful, real-time stakeholder suggestions for improving peer review practices. Areas of success and stakeholders’ suggestions for improvement are being addressed by CSR leadership.

Conflict of Interest Disclosures: Both authors are federal employees at the National Institutes of Health. Survey research was conducted as part of their federal employment responsibilities.

Funding/Support: Focus groups and personal interviews were funded by the NIH Evaluation Set-Aside Program (14-5725 CSR), administered by the Office of Program Evaluation and Performance of the National Institutes of Health.

Role of the Funder/Sponsor: The funder had no role in the design and conduct of the study; the collection, management, analysis, and interpretation of the data; or the preparation, review, or approval of the abstract.