Failed Experiments Move Science Forward

Researchers don’t dream of negative studies, but experiments that don’t go as expected and trials that yield negative results are critical for moving science forward. To highlight this important part of the research process, we asked research scientists to speak about their own experiences with “failure.” Our first contributor is Michele Heisler, a health services researcher who develops and tests health system-based interventions.

There is a certain moment that every researcher who develops and evaluates health care interventions both eagerly anticipates and dreads. It is the moment that comes after years of planning and honing the intervention to be tested, after securing funding, recruiting study participants, conducting the intervention, ensuring it is being conducted as envisioned, meticulously gathering and recording data, and desperately trying to keep people involved in the study. After years of hard work, you and maybe some members of your research team gather in a room around a single computer. The code is written. All the relevant data is entered. You push the run button, the computer whirs, and output appears on the screen. You take a deep breath and lean over to review it.

And there it is. Often the answer comes down to a single regression. You peer down and sigh. It didn’t work. The patients who received your amazingly crafted, brilliant, sure-to-succeed intervention did not do any better on your primary outcome than patients who got the usual care or some other approach that you didn’t think would work as well. All those years of work and in a single minute you realize you have what is called a “negative study.” Your hypothesis was wrong. You can’t blame it on poor fidelity to your imagined intervention: you carefully assessed fidelity, and it was delivered as intended. You can’t blame it on lack of engagement: most of the participants engaged in the intervention as well as you could have hoped. It just didn’t work any better than the alternative.

The first time this happened to me, I felt crushed by a sense of failure. How could it not have worked? A similar intervention had worked beautifully with patients with a different health condition. I had felt so sure this would be an effective intervention. And all those years spent on this just to find it didn’t work? I gave myself a bracing self-lecture on why randomized controlled trials, and the principle of equipoise that justifies assigning patients to different treatments in those trials, are so very important. I reminded myself that I was an objective researcher seeking truth, not an advocate for certain approaches until they were rigorously tested—and even then I should continue to question and challenge. But it was only after I sulked for a while and then buckled down to try to make sense of the results and write them up that I began to see the importance of these “negative” findings. Happily, we had gathered qualitative data from participants about their views and experiences that we were able to scour. As I delved into the data, the reasons the intervention didn’t work as we had hoped became clear. They were fascinating and unexpected, yet made so much sense in retrospect. My team and I became excited about the lessons learned from the failure of the intervention, the reasons it failed, and how we needed to change and adapt our approach to incorporate these lessons.

We want things to work. We believe in our ideas. Until they are rigorously tested, we just know our brilliant interventions will work as we imagined. I still dread the moment when truth about success or failure is irrevocably flashed on the computer screen. I now firmly believe, though, that the lessons from the failures may be as crucial as the “successes” to inform interventions that will improve health.

Want more on this subject? Read the second contribution to the series, in which surgical oncologist Anees Chagpar explains why she considers her non-significant and negative studies to be important parts of her publication history.

A version of this article entitled “Failure moves research forward” was originally published by ResearchGate.
