Review

This is the second post in a series* prompted by an essay by statistician Stephen Simon, who argued that Evidence-Based Medicine (EBM) is not lacking in the ways that we at Science-Based Medicine have argued. David Gorski responded here, and Prof. Simon responded to Dr. Gorski here. Between that response and the comments following Dr. Gorski’s post, it became clear to me that a new round of discussion would be worth the effort.

Part I of this series provided ample evidence for EBM’s “scientific blind spot”: the EBM Levels of Evidence scheme and EBM’s most conspicuous exponents consistently fail to consider all of the evidence relevant to efficacy claims, choosing instead to rely almost exclusively on randomized, controlled trials (RCTs). The several quoted Cochrane abstracts, regarding homeopathy and Laetrile, suggest that in the EBM lexicon, “evidence” and “RCTs” are almost synonymous. Yet basic science or preliminary clinical studies provide evidence sufficient to refute some health claims (e.g., homeopathy and Laetrile), particularly those emanating from the social movement known by the euphemism “CAM.”

It’s remarkable to consider just how unremarkable that last sentence ought to be. EBM’s founders understood the proper role of the rigorous clinical trial: to be the final arbiter of any claim that had already demonstrated promise by all other criteria—basic science, animal studies, legitimate case series, small controlled trials, “expert opinion,” whatever (but not inexpert opinion). EBM’s founders knew that such pieces of evidence, promising though they may be, are insufficient because they “routinely lead to false positive conclusions about efficacy.” They must have assumed, even if they felt no need to articulate it, that claims lacking such promise were not part of the discussion. Nevertheless, the obvious point was somehow lost in the subsequent formalization of EBM methods, and seems to have been entirely forgotten just when it ought to have resurfaced: during the conception of the Center for Evidence-Based Medicine’s Introduction to Evidence-Based Complementary Medicine.

Thus, in 2000, the American Heart Journal (AHJ) could publish an unchallenged editorial arguing that Na2EDTA chelation “therapy” could not be ruled out as efficacious for atherosclerotic cardiovascular disease because it hadn’t yet been subjected to any large RCTs—never mind that there had been several small ones, and abundant additional evidence from basic science, case studies, and legal documents, all demonstrating that the treatment is both useless and dangerous. The well-powered RCT had somehow been transformed, for practical purposes, from the final arbiter of efficacy to the only arbiter. If preliminary evidence were no longer to have practical consequences, why bother with it at all? This was surely an example of what Prof. Simon calls “Poorly Implemented Evidence Based Medicine,” but one that was also implemented by the very EBM experts who ought to have recognized the fallacy.

There will be more evidence for these assertions as we proceed, but the main thrust of Part II is to begin to respond to this statement from Prof. Simon: “There is some societal value in testing therapies that are in wide use, even though there is no scientifically valid reason to believe that those therapies work.”