Reason: This article has been retracted at the request of the authors, because of a data entry error, which is fundamental to the study findings. As background, this was a clinical study where a specific variable was tested in a large database. The process involved merging a variable (presence or absence of syncope) from one electronic source with an alternate electronic database of patients with pulmonary arterial hypertension and assessing associations and outcomes. In proceeding to design a follow-up study to this work, Dr. Le went back to the original source file to abstract new data. In doing this, she identified a ‘cut-and-paste’ error in which the column of syncope data was transferred incorrectly and syncope/no syncope variables were assigned to the wrong subjects. This led to a critical error that was then carried forward, and a fundamental misclassification of syncope in the final study group. This error fundamentally affects the results, which now do not fully support the conclusions.

The paper has been cited once, according to Thomson Scientific’s Web of Knowledge.

We’ve tried to contact Le and Kane for more information, and will update with anything we find out.

The Journal of the American College of Cardiology, like many journals, struggles with limited resources that allow errors to pass through, according to an April 17 editorial from Anthony N. DeMaria, the journal’s editor-in-chief. In “Scientific Misconduct, Retractions, and Errata”, DeMaria wrote:

If we can believe that over 10% of investigators are aware of scientific misconduct, either we as editors have been extraordinarily discerning of such transgressions during the review process, or we have occasionally been duped. This perhaps would not be surprising given the limited arsenal available to us to identify misconduct.

Kane’s group voluntarily came forward to admit their errors, and there was no misconduct reported.

The study had concluded:

In a broad spectrum of clinical PAH patients, syncope is infrequent, is associated with markers of right heart dysfunction, and is strongly and independently predictive of poor outcome.

Cardiology Today featured the research on April 18, 2011 during its coverage of the International Society for Heart & Lung Transplantation 31st annual meeting and scientific sessions. They quoted Le:

Presyncope/syncope is associated with markers of increased disease severity in newly diagnosed PAH patients. However, it was not predictive of unadjusted survival.

James Young, a Cardiology Today section editor, wrote a brief editorial to accompany the online article:

To me, the interesting aspect of this data was validation of something that has been repeatedly mentioned by astute clinicians of yesteryear: the relationship of presyncope and syncope to severity of PAH. It was an elegant analysis of an important registry and raises the question of a pathophysiologic link of syncope/presyncope to worsening PAH and not just something that is a consequence of PAH.

Essentially the same issue that destroyed Potti. Clinicians and data are a very dangerous mix. The very people who most dislike statistical analysis do it and publish. Would they get a biostatistician to do surgery? Seems unlikely. A cardiologist doing data analysis? Exactly the same thing.

Not so. Potti was destroyed by a repeated series of efforts to deceive, which included inflating statistical significance. The retractions followed in-depth analysis by others. In this case it sounds like a genuine error of data entry with no intent to deceive, and the problem was self-reported by the original authors. Nothing really links the two cases other than that the perps were both MDs.

Did you actually read the Baggerly and Coombes paper? They did not start off trying to deceive people. They started off trying to address an issue. They furnished their data to B & C until issues arose.

“7. Discussion. 7.1. On the nature of common errors. In all of the case studies examined above, forensic reconstruction identifies errors that are hidden by poor documentation. Unfortunately, these case studies are illustrative, not exhaustive; further problems similar to the ones detailed above are described in the supplementary reports. The case studies also share other commonalities. In particular, they illustrate that the most common problems are simple: e.g., confounding in the experimental design (all TET before all FEC), mixing up the gene labels (off-by-one errors), mixing up the group labels (sensitive/resistant); most of these mixups involve simple switches or offsets. These mistakes are easy to make, particularly if working with Excel or if working with 0/1 labels instead of names (as with binreg). We have encountered these and like problems before.”

It is clear that Potti began with data handling errors. They then attempted to cover them up.

Maradit-Kremers is a clinical epidemiologist. She doesn’t do surgery. My point is, this wasn’t a bunch of clinical cardiologists who decided to play with a big spreadsheet and a statistics program. This WAS a team with biostatistics experience. Just a mistake I suspect, but not a mistake due to lack of research qualifications.

“The Journal of the American College of Cardiology, like many journals, struggles with limited resources that allow errors to pass through, according to an April 17 editorial from Anthony N. DeMaria, the journal’s editor-in-chief. In “Scientific Misconduct, Retractions, and Errata”,…”

There’s nothing in this editorial that suggests that JACC is struggling with “limited resources”, unless you understand “resources” to mean the “limited arsenal available to us to identify misconduct”. As far as tangible resources are concerned, my impression from working with scientific society-sponsored cardiology journals in Spain and elsewhere is that the sponsoring societies are very well endowed in terms of funding.

So perhaps what some of the better-funded medical journals do (or fail to do) about peer review and misconduct depends more on their policies and priorities rather than on potential limitations in available knowledge and skills. It’s hard for editors to deal with retractions, but not all journals can claim “limited resources” as a reason for inaction or procrastination.

“Limited resources” is a very useful excuse. It has general utility, and can garner some sympathy points too.
“Blame the government” might trip off the tongue next. Perhaps it might be better if journals with “limited resources” closed, or pooled their resources and merged with other journals, and stopped polluting the scientific literature. I know that cardiology has many aspects to it, but there are so many journals. Eventually the information has to be useful and fit into a medical doctor’s brain (contrary to many reports, they do have brains).

Cut-and-paste errors and partial sorts are common mistakes in a spreadsheet, and are terribly difficult to spot afterwards. As a Biostatistician, I try to caution researchers about this, because I have discovered similar errors on several occasions. It would be quite possible for a subtle error of this sort to slip past me unnoticed, because I generally have no familiarity with or access to the original data that would allow me to spot discrepancies.

Use a database if you can – REDCap is becoming a very good choice for this.
Avoid all spreadsheet cut/paste operations if at all possible.
When sorting a spreadsheet, verify that ALL columns are selected for sorting (Excel’s automatic selection expansion can fail if there are empty columns between data columns).
Consult a statistician about how to set up a good database in the first place.
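To make the advice above concrete, here is a minimal sketch (hypothetical subject data, in Python rather than a spreadsheet) of why a keyed merge beats a positional column paste: when the two sources are in different row orders, pasting the column across assigns values to whatever subject happens to share the row, while a join on the subject identifier keeps each value with its owner.

```python
# Hypothetical data: a patient registry and a second source in a DIFFERENT row order.
registry = [
    {"subject_id": 101, "age": 54},
    {"subject_id": 102, "age": 61},
    {"subject_id": 103, "age": 47},
    {"subject_id": 104, "age": 70},
]
syncope_source = [
    {"subject_id": 103, "syncope": True},
    {"subject_id": 101, "syncope": False},
    {"subject_id": 104, "syncope": True},
    {"subject_id": 102, "syncope": False},
]

# WRONG: positional paste, the spreadsheet-style column copy.
# Row 1 of the registry gets row 1 of the source, regardless of subject.
wrong = [dict(r, syncope=s["syncope"]) for r, s in zip(registry, syncope_source)]

# RIGHT: join on the subject identifier so each value follows its key.
by_id = {s["subject_id"]: s["syncope"] for s in syncope_source}
right = [dict(r, syncope=by_id[r["subject_id"]]) for r in registry]

# Subject 101 had no syncope in the source file; the positional paste
# hands it subject 103's value instead.
print(wrong[0])  # {'subject_id': 101, 'age': 54, 'syncope': True}  (misassigned)
print(right[0])  # {'subject_id': 101, 'age': 54, 'syncope': False} (correct)
```

This is the same discipline a proper database or a statistician's merge-by-ID enforces automatically, and it is exactly what a cut-and-paste between sorted-differently files silently violates.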

Kudos to the authors for noticing this and owning up to it. It happened to me once; luckily I detected it before publication, but it’s a terrible feeling as the realization dawns.

Excel is especially dangerous because it allows you to select only some columns and then reorder or shift them, which can lead to misaligned rows; this is something that SPSS, for example, just doesn’t allow you to do (with good reason)!