ACC News Story

I have always found it an interesting experience to be at a crossroads and find an article whose title jumps out and addresses the precise subject that is causing my discomfort. When I saw Chris’s recent commentary in CardioSource WorldNews (“Why Is It So Hard to Change Our Minds?” from the August issue), I immediately thought of the clinical trials I had designed or participated in whose results were ignored—or worse—by my colleagues.

Designing a clinical trial to investigate a topic that the cardiology community largely considers closed or answered is hazardous, and not to be taken lightly. In spite of being physicians and scientists, we do not readily change “truths” learned over a career, or that somehow support us financially or intellectually. In this environment, it took real nerve for Judith Hochman, MD, to start the Occluded Artery Trial (OAT).

As some may remember, OAT tested the hypothesis that late opening of an occluded infarct artery was beneficial. Before we started the trial, about half of all post-MI patients in the United States and Canada in whom an occluded infarct artery was found too late for myocardial salvage had the artery opened. The trial was a tough, multi-country, multi-year slog, but we finished. The interventional community was surprised to find that there was no difference between leaving the artery closed and opening it in stable patients.1 The ACCF/AHA Guidelines Committee made late opening of an occluded infarct artery in a stable patient a Class III recommendation. We then used data from the National Cardiovascular Data Registry® to see if clinical practice changed post-OAT.2 And it did not change one bit. So, an NIH-sponsored randomized clinical trial run by a respected team obtained a result that went against the grain, reduced revenue, and caused referring physicians anguish. Not opening the artery? This was too much to handle. Our community of cardiologists, physicians, and scientists wrongly chose not to change practice. A little starch in the spine is good when the data support an unpopular course, but the starch was not universally applied. Unfortunately, sometimes minds will change only when reimbursement goes away—a sad truth.

More recently, I have found myself at the center of a really unpopular trial. In 1999, a patient asked me if he should receive ethylene diamine tetra-acetic acid (EDTA) chelation therapy; I said, “Of course not.” I then committed the cardinal sin of looking into the literature, deciding I had no basis for my answer, and 3 years later “winning” a $30 million grant, awarded through a Request for Applications from the National Center for Complementary and Alternative Medicine (NCCAM) and the National Heart, Lung, and Blood Institute (NHLBI), to carry out the Trial to Assess Chelation Therapy (TACT). A stellar team—including the Duke Clinical Research Institute for stats and data, superb NHLBI and NCCAM Program Officers, Brigham and Women’s Hospital for events adjudication, a great Data and Safety Monitoring Board, and an alliance of conventional cardiologists and chelation practitioners—set out to find out whether, in post-MI patients, EDTA chelation reduced a combined cardiovascular endpoint. This trial was a 10-year odyssey. Most of us on the team did not realize that, for many mainstream cardiologists and academic opinion leaders, an anti-EDTA chelation bias had a quasi-religious fervor. At every step we were dogged by self-appointed anti-chelation watchdogs filing Freedom of Information Act requests for our protocols and finances, publishing inflammatory half-truths in internet pseudo-journals, dragging in the Office for Human Research Protections (OHRP), and getting the attention of journalists who should have known better. The OHRP was consummately professional and the trial continued. We delivered 55,222 placebo or EDTA infusions in 1,708 patients, followed patients for a median of 55 months, adjudicated events, and wrote up our results.3

This is where it gets even stranger. We found that EDTA chelation significantly reduced combined cardiovascular events by 18% overall and by 39% in diabetics. If we had been testing a new antiplatelet agent we would have been heroes, but this was chelation therapy. During the presentation at the American Heart Association meeting, the discussant incorrectly stated that we had changed our primary endpoint, and we were excoriated for a change in sample size—while another investigator whose trial had changed sample size sat cheerfully at the presenters’ table. The Program Chair also felt compelled to state that the AHA does not support chelation therapy. Publishing our results has also been a journey, although I can say that the JAMA editors evaluated the data and our responses dispassionately and on their merit.

But enough complaining. Have I learned anything? I think so. Change does not come easily. All of us think we are scientists, but we apply scientific reasoning best when the results of the experiment suit our biases—or our pocketbooks. We can be brutal when a good experiment shows the “wrong” result. I also learned that there are some real professionals out there who will look at an unexpected result skeptically, but evaluate data dispassionately. The personal lesson for me has been to learn to be humble about unlikely hypotheses and unexpected results.