
A sweeping analysis of more than 5,000 papers in eight leading medical journals has found compelling evidence of suspect data in roughly 2% of randomized controlled clinical trials in those journals.

Although the analysis, by John Carlisle, an anesthetist in the United Kingdom, could not determine whether the concerning data were tainted by misconduct or sloppiness, it suggests that editors of the journals have some investigating to do. Of the 98 studies identified by the method, only 16 have already been retracted. [See update at end.]

…has developed and refined a novel use for the statistical analysis of baseline data to identify instances where sampling in clinical trials may not have been random, suggesting the trial was either not properly conducted or was inaccurately reported. Essentially, Carlisle's method identifies papers in which the baseline characteristics (e.g., age, weight) exhibit a distribution that is either narrower or wider than would be expected by chance, resulting in an excess of p values close to either one or zero.

For the new paper, Carlisle used a p value cutoff of one in 10,000 — in other words, an extremely narrow, or wide, distribution of characteristics.
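The intuition behind the approach can be illustrated with a toy sketch. This is not Carlisle's published implementation; the function name, the use of Stouffer's method to combine p values, and the example inputs are all assumptions chosen for illustration. The core idea is the same: under genuine randomization, p values from baseline comparisons should be roughly uniform, so a combined p value extremely close to zero or one is a red flag.

```python
from statistics import NormalDist

def flag_trial(baseline_pvalues, cutoff=1e-4):
    """Toy illustration of the idea behind baseline-distribution checks.

    Combines per-variable baseline p values with Stouffer's method and
    flags trials whose combined result is too extreme in either tail.
    Carlisle's actual method differs in its details.
    """
    nd = NormalDist()
    eps = 1e-12  # clamp to avoid infinities at exactly 0 or 1
    z = [nd.inv_cdf(min(max(p, eps), 1 - eps)) for p in baseline_pvalues]
    combined_z = sum(z) / len(z) ** 0.5
    combined_p = nd.cdf(combined_z)
    # Flag if the baseline variables are collectively too different
    # (combined p near 0) or too similar (combined p near 1) for chance.
    suspicious = combined_p < cutoff or combined_p > 1 - cutoff
    return suspicious, combined_p

# Mid-range p values, consistent with proper randomization: not flagged.
flagged, p = flag_trial([0.42, 0.63, 0.18, 0.55, 0.77])

# Baselines that match suspiciously well (p values all near 1): flagged.
flagged2, p2 = flag_trial([0.999, 0.998, 0.9995, 0.9999, 0.997])
```

The one-in-10,000 cutoff mentioned above corresponds to `cutoff=1e-4` here: only distributions that would arise by chance less than once in 10,000 trials are flagged in either tail.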

For the latest effort, a look at studies published between 2000 and 2015, Carlisle also included JAMA and the New England Journal of Medicine. Those journals had 12 and nine potentially problematic papers, respectively, and fewer of those (one from each journal) have been retracted than in the anesthesiology journals. Carlisle writes:

It is unclear whether trials in the anaesthetic journals have been more deserving of retraction, or perhaps there is a deficit of retractions from JAMA and NEJM.

As Andrew Klein, editor of Anaesthesia (which was itself one of the included journals), tells Retraction Watch:

Different journals will retract at different rates and speeds (depending on many factors, including how long any investigation takes).

Carlisle contacted the editors of all eight journals, and all responded "swiftly and positively," Klein said. JAMA editor in chief Howard Bauchner told Retraction Watch that he informed Carlisle that he would read the paper and consider next steps. He said he still believes in a process, and that while allegations should be investigated, he cannot draw conclusions based on allegations alone. And the New England Journal of Medicine said it could not comment until it had a chance to discuss the supplemental data from the study, which goes live today.

Klein said he is meeting with editors of all of the anesthesiology journals today to discuss the findings:

No doubt some of the data issues identified will be due to simple errors that can be easily corrected such as typos or decimal points in the wrong place, or incorrect descriptions of statistical analyses. It is important to clarify and correct these in the first instance. Other data issues will be more complex and will require close inspection/re-analysis of the original data.

The results of the new analysis suggest that the Carlisle Method would have identified problems in papers by two other anesthesiologists — Joachim Boldt and Scott Reuben — with large numbers of retractions, Klein said. The method was not yet available when those problems came to light.

…dishonest authors could employ techniques to produce data that would avoid detection. We believe this would be quite easy to achieve although, for obvious reasons, we prefer not to describe the likely methodology here.

Klein tells Retraction Watch:

By publishing the Carlisle Method, we are aware that we may be providing dishonest authors a chance to analyse the tool and an opportunity to adapt their methods to avoid detection. However, the analogy I use is plagiarism detection software, which is freely available. Despite this, every week we still receive many submissions which contain significant plagiarism and are therefore rejected immediately. Overall, as we have stated before, trying to replicate 'nature' and the random distribution of patient characteristics is extremely difficult and the Carlisle Method is designed to pick up non-random distributions.

Klein said it was not clear whether other journals would have the same rates of potentially problematic data, since they have not yet been analyzed.

According to Loadsman and McCulloch:

With the proven utility of applying the method to previous studies we have no doubt more authors of already published RCTs will eventually be getting their tap on the shoulder. We have not yet heard the last word from John Carlisle!