JAMA Forum: When Publicity Preempts Peer Review

About 2 months ago, the media was full of news stories about a study called the Systolic Blood Pressure Intervention Trial (SPRINT). This media attention wasn’t because of the publication of a peer-reviewed article. It was because the National Institutes of Health held a media briefing about ending the trial early.

At the time, we knew that SPRINT was a randomized controlled trial of more aggressive targets for blood pressure control in adults aged 50 years or older with hypertension. Participants also had an increased risk of cardiovascular events but were excluded if they had diabetes, a history of stroke, or polycystic kidney disease. Patients were randomly assigned to treatment with a target of 120 mm Hg or 140 mm Hg systolic pressure. The main outcomes of interest were cardiovascular events and kidney disease.

The study was supposed to be completed in 2017, but the data and safety monitoring committee judged the benefits so clear that it halted the study early. It seemed unethical to continue the study, with some participants assigned to the higher blood pressure target, when the lower one was so clearly beneficial.

Although this was certainly good news and the results would be of interest to many, I was concerned, as were many others, about how the announcement was made. At this point, the trial’s results had not been fully analyzed by the study team. More importantly, their meaning and context had not yet been subject to peer review.

We live in an age in which the barriers to information dissemination are disappearing. This does not mean, however, that all information should be disseminated immediately. Peer review exists for a reason.

When the results were announced, we were missing a lot of important information. What were the specific inclusion and exclusion criteria? Were there any adverse outcomes? What did the study population look like? What were the absolute changes in outcomes? Were there limitations?

None of this could be gleaned from the press briefing or understood from the news stories that covered it.

It’s hard to take the media to task for any of this. If a major study ends early and the funders hold a briefing, then reporters are going to cover it. But releasing the results of a study in this manner is problematic because it doesn’t allow for the careful reasoning and full information necessary to place it in context.

Some argued that the data and all relevant information should have been released broadly immediately. Although this would have allowed us a fuller picture of what was actually discovered, I’m not sure that’s the right answer either.

“Official” publication after peer review serves many purposes. The first is that peer review allows for an unbiased judgment about the validity of the results. It also allows multiple parties to judge whether the conclusions drawn from the results are warranted. It forces an acknowledgement of limitations. It ensures that a description of the methods is complete. It also allows the work to be placed in context, with accompanying editorials, if warranted.

All of these actions are important. They push researchers to be more careful in their work, hold them accountable, and help ensure that their work is reproducible. Reproducibility, in particular, has been a growing concern recently.

Just recently, the final results of SPRINT were published, along with 2 editorials. Multiple revisions were requested by the editors in order to improve the manuscript. We now know that those in the group with the treatment target of 120 mm Hg systolic pressure experienced a significantly lower rate of myocardial infarction, acute coronary syndromes, stroke, heart failure, or death than those in the group with the higher treatment target (1.65% versus 2.19% per year). This means that, by my crude calculation (the authors use a more sophisticated method), the number needed to treat is 185.

We also now know that serious adverse events, including hypotension, syncope, electrolyte abnormalities, and acute kidney injury or acute renal failure, occurred more frequently in the lower-target group (4.7% versus 2.5%). This means the (crude) number needed to harm is 45.
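The crude arithmetic behind these two numbers is simple enough to sketch. A minimal calculation from the event rates quoted above (this is the back-of-the-envelope version, not the more sophisticated time-to-event method the authors use):

```python
# Crude number-needed-to-treat (NNT) and number-needed-to-harm (NNH),
# computed from the event rates quoted above. This is an illustrative
# back-of-the-envelope calculation, not the authors' method.

def number_needed(rate_higher: float, rate_lower: float) -> int:
    """NNT (or NNH) = 1 / absolute risk difference, rounded to a whole person."""
    return round(1 / abs(rate_higher - rate_lower))

# Primary outcome: 1.65% vs 2.19% per year in the lower- vs higher-target groups.
nnt = number_needed(0.0219, 0.0165)

# Serious adverse events: 4.7% vs 2.5%.
nnh = number_needed(0.047, 0.025)

print(nnt, nnh)  # 185 45
```

The absolute risk difference for the primary outcome is only 0.54 percentage points per year, which is why the NNT (185) is so much larger than the NNH (45) despite the relative benefit being clear.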

It’s likely that people at high risk will choose the increased risk of 1 of these adverse events to avoid a major cardiac issue or death, and that’s a reasonable choice. But a full understanding of the benefits and risks revealed by SPRINT is necessary for anyone to make treatment decisions for individual patients. None of that was possible until now.

The time from the media briefing to full publication was only 2 months. But most submitted reports of medical findings take much longer to review, accept, edit, and publish. The model of peer review and publication hasn’t changed as much as many would like, given advances in technology.

Harlan Krumholz, MD, editor of the journal Circulation: Cardiovascular Quality and Outcomes, argued in a recent editorial aptly titled “The End of Journals” that journals need to innovate. They are too slow, expensive, and limited. They remain stuck on print-model word counts. They are too narrowly defined by their editors and by those they assume are their readers, and not enough by the scientific community as a whole.

He has a point. Journals have sometimes been slow to adapt to the digital world, although there are exceptions. This piece, for instance, is part of JAMA’s commitment to adapt to the new world through the JAMA Forum. It’s certainly not the only one of JAMA’s new efforts, and this journal is not the only publication changing to fit the Internet age.

New technology doesn’t remove the need for many other functions of editorial oversight, however. Just because information can move faster doesn’t mean it always should. Peer review, and the roles journals play in releasing scientific information to the public, may not always make us happy. But they surely still serve a purpose.

***

About the author: Aaron E. Carroll, MD, MS, is a health services researcher and the Vice Chair for Health Policy and Outcomes Research in the Department of Pediatrics at Indiana University School of Medicine. He blogs about health policy at The Incidental Economist and tweets at @aaronecarroll.

About The JAMA Forum: JAMA has assembled a team of leading scholars, including health economists, health policy experts, and legal scholars, to provide expert commentary and insight into news that involves the intersection of health policy and politics, economics, and the law. Each JAMA Forum entry expresses the opinions of the author but does not necessarily reflect the views or opinions of JAMA, the editorial staff, or the American Medical Association.