Striking study results, little reliability

Dramatic findings in medical research often don't hold up, an analysis shows.

If a medical study seems too good to be true, it probably is, according to a new analysis.

In a statistical analysis of nearly 230,000 trials compiled from a variety of disciplines, study results that claimed a "very large effect" rarely held up when other research teams tried to replicate them, researchers reported in Wednesday's edition of the Journal of the American Medical Assn.

"The effects largely go away; they become much smaller," said Dr. John Ioannidis, the Stanford University researcher who was the report's senior author. "It's likely that most interventions that are effective have modest effects."

Ioannidis and his colleagues came to this conclusion after examining 228,220 trials grouped into more than 85,000 "topics" -- collections of studies that paired a single medical intervention (such as taking a nonsteroidal anti-inflammatory drug for postoperative pain) with a single outcome (such as experiencing 50% pain relief over six hours). In 16% of those topics, at least one study in the group claimed that the intervention made patients at least five times more likely either to benefit or to be harmed, compared with control patients who did not receive the treatment.

FOR THE RECORD: In a correction published Oct. 27, 2012, the Los Angeles Times noted that this article as originally published said follow-up studies of the drug Avastin showed it did not help breast cancer patients live longer with their disease without getting worse. Those studies did find small improvements in progression-free survival but did not show significant improvements in overall survival or quality of life.

In at least 90% of those cases, the team found, including data from subsequent trials reduced those odds.

The analysis revealed several reasons to question the significance of the very-large-effect studies, Ioannidis said.

Studies that reported striking results tended to be small -- often with fewer than 100 subjects, among whom fewer than 20 medical events occurred. With such small sample sizes, Ioannidis said, large effects are more likely to be the result of chance.

"Trials need to be of a magnitude that can give useful information," he said.
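The role of chance can be sketched with a quick simulation. The parameters below are hypothetical and are not drawn from the JAMA analysis: even when the true odds ratio is a modest 1.25, small trials occasionally produce estimates of 5 or more purely by luck, while large trials essentially never do.

```python
import random

random.seed(42)

def simulated_odds_ratio(n_per_arm, p_control, true_or):
    """Run one two-arm trial and return its estimated odds ratio."""
    # Convert the true odds ratio into a treatment-arm event probability.
    odds_c = p_control / (1 - p_control)
    odds_t = odds_c * true_or
    p_treat = odds_t / (1 + odds_t)
    events_t = sum(random.random() < p_treat for _ in range(n_per_arm))
    events_c = sum(random.random() < p_control for _ in range(n_per_arm))
    # Haldane-Anscombe correction (+0.5) avoids division by zero.
    a, b = events_t + 0.5, n_per_arm - events_t + 0.5
    c, d = events_c + 0.5, n_per_arm - events_c + 0.5
    return (a / b) / (c / d)

def frac_large_effects(n_per_arm, trials=4000):
    """Fraction of simulated trials that report an odds ratio of 5+."""
    return sum(simulated_odds_ratio(n_per_arm, 0.10, 1.25) >= 5
               for _ in range(trials)) / trials

small = frac_large_effects(50)    # ~100 subjects per trial in total
large = frac_large_effects(1000)  # a much bigger trial
print(f"small trials claiming OR >= 5: {small:.1%}")
print(f"large trials claiming OR >= 5: {large:.1%}")
```

A few percent of the small trials report a "very large effect" that isn't there, while the large trials never do -- which is why a five-fold effect from a tiny study deserves skepticism.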

What's more, the studies that claimed a very large effect tended to measure intermediate outcomes -- for example, whether patients who took a statin drug lowered the level of LDL, or "bad," cholesterol in their blood -- rather than the incidence of disease or death itself, outcomes that are more meaningful in assessing medical treatments.

The analysis did not examine individual study characteristics, such as whether the experimental methods were flawed.

The report should remind patients, physicians and policymakers not to give too much credence to small, early studies that show huge treatment effects, Ioannidis said.

One such example: the cancer drug Avastin. Clinical trials suggested the drug might double the time breast cancer patients could live with their disease without getting worse. But follow-up studies found only small improvements in progression-free survival and no significant improvements in overall survival or patients' quality of life. As a result, the U.S. Food and Drug Administration in 2011 withdrew its approval to use the drug to treat breast cancer, though it is still approved to treat several other types of cancer.

With early glowing reports, Ioannidis said, "one should be cautious and wait for a better trial."

Dr. Rita Redberg, a cardiologist at UC San Francisco who was not involved in the study, said devices and drugs frequently get accelerated approval on the basis of small studies that use intermediate end points.

"Perhaps we don't need to be in such a rush to approve them," she said.

The notion that dramatic results don't hold up under closer scrutiny isn't new. Ioannidis, a well-known critic of the methods used in medical research, has written for years about the ways studies published in peer-reviewed journals fall short. (He's perhaps best known for a 2005 essay in the journal PLoS Medicine titled, "Why Most Published Research Findings Are False.")

But the scope of the JAMA analysis sets it apart from Ioannidis' earlier efforts, said Dr. Gordon Guyatt, a clinical epidemiologist at McMaster University in Hamilton, Canada, who was not involved in the work.

"They looked through a lot of stuff," he said.

Despite wide recognition that big effects are likely to disappear upon further scrutiny, people still "get excited, and misguidedly so" when presented with home-run results, Guyatt said.

He emphasized that modest effects could benefit patients and were often "very important" on a cumulative basis.