Academic spin: How to dodge & weave past research exaggeration

Yesterday’s uplifting emphasis at the quadrennial medical editors’ scientific meeting was bad research (“Bad research rising”). This morning’s motivational agenda focused on measuring some of the main techniques for jazzing up research results.

It started with making research look more independent by not declaring authors' commercial conflicts of interest (COI). Kristine Rasmussen told us that in Denmark, doctors have to apply for permission to collaborate with industry. That enabled Rasmussen and her colleagues to study whether or not doctors who did clinical trials were declaring their commercial relationships with drug manufacturers.

Out of 171 trial authors, 11% did not disclose a COI with the trial sponsor or drug manufacturer, and another 26% did not disclose that they had a commercial relationship with the manufacturer of another drug for the same use. Lively discussion ensued. From the audience, Leslie Citrome remarked that some academic departments are involved in so many industry trials that they should now be regarded as contract research organizations rather than academia.

Later, we heard from Serina Stratton that out of 313 trials studied, 36 required sponsor or manufacturer approval of the text before publication and 6 had outright gag orders. That led to some inevitable questions: why aren't all academic institutions protecting researchers and trial participants from industry restrictions on academic freedom, and why aren't potential participants being warned about this before they agree to be in a trial?

Isabelle Boutron at the 7th International Congress on Peer Review and Biomedical Publication, Chicago, 9 September 2013

On to what data are reported: Jamie Kirkham found that 77% of the systematic reviews they looked at had identified at least one trial suspected of holding back data on harms. Scary.

Soon it was Isabelle Boutron's turn to tackle researchers' spin about their trials. She defines spin as the use of techniques to make the intervention studied look more beneficial than it was. You can do this by distracting people from important non-significant results with data on less important outcomes, or by using words that make exaggerated claims the data don't actually support.

Today she told us about a trial her team ran to see whether readers get caught out by spin. They rewrote abstracts to take the spin out and randomized 300 people to receive either the spun or the cleaned-up version. Not surprisingly, they found that the spin succeeded in leading readers to believe the intervention was more beneficial than it was.

So how can we as readers protect ourselves from this fate? Understanding more about concepts like statistical significance is important: it's one of the main tools of the trade. I've written some more tips here: "They would say that, wouldn't they?" Don't rely on an abstract: you really do need to check the data and the fine print. Check whether there's a systematic review on the subject: that can help you see how the result fits in with other research. And be wary if the authors don't tell you the range of other opinions on the topic, or if most of their references come from their own team. "Even if I say so myself" isn't usually a good basis for supporting someone's interpretation.

We're about halfway through this conference now, and it's been keeping a cracking pace. Research on the accessibility of research and post-publication discussion about research lie ahead.

ABOUT THE AUTHOR(S)

Hilda Bastian

Hilda Bastian was a health consumer advocate in Australia in the '80s and '90s. Controversies riddled with ideology and vested interests drove her to science. Epidemiology and effectiveness research have kept her hooked ever since.




Scientific American is part of Springer Nature, which owns or has commercial relations with thousands of scientific publications (many of them can be found at www.springernature.com/us). Scientific American maintains a strict policy of editorial independence in reporting developments in science to our readers.