“Ridiculing RCTs and EBM”

Last week Val Jones posted a short piece on her BetterHealth blog in which she expressed her appreciation for a well-known spoof that had appeared in the British Medical Journal (BMJ) in 2003:

Parachute use to prevent death and major trauma related to gravitational challenge: systematic review of randomised controlled trials

Dr. Val included the spoof’s abstract in her post linked above. The parachute article was intended to be humorous, and it was. It was a satire, of course. Its point was to call attention to excesses associated with the Evidence-Based Medicine (EBM) movement, especially the claim that in the absence of randomized, controlled trials (RCTs), it is not possible to comment upon the safety or efficacy of a treatment—other than to declare the treatment unproven.

A thoughtful blogger who goes by the pseudonym Laika Spoetnik took issue both with Val’s short post and with the parachute article itself, in a post entitled #NotSoFunny – Ridiculing RCTs and EBM.

Laika, whose real name is Jacqueline, identifies herself as a PhD biologist whose “work is split 75%-25% between two jobs: one as a clinical librarian in the Medical Library and one as a Trial Search Coordinator (TSC) for the Dutch Cochrane Centre.” In her post she recalled an experience that would make anyone’s blood boil:

I remember it well. As a young researcher I presented my findings in one of my first talks, at the end of which the chair killed my work with a remark that made the whole room of scientists laugh, but was really beside the point…

This was not my only encounter with scientists who try to win the debate by making fun of a theory, a finding or …people. But it is not only the witty scientist who is to *blame*, it is also the uncritical audience that just swallows it.

I have similar feelings with some journal articles or blog posts that try to ridicule EBM – or any other theory or approach. Funny, perhaps, but often misunderstood and misused by “the audience”.

Jacqueline had this to say about the parachute article:

I found the article only mildly amusing. It is so unrealistic, that it becomes absurd. Not that I don’t enjoy absurdities at times, but absurdities should not assume a life of their own. In this way it doesn’t evoke a true discussion, but only worsens the prejudice some people already have.

Jacqueline argued that two inaccurate prejudices about EBM are that it is “cookbook medicine” and that “RCTs are required for evidence.” Regarding the latter, she made reasonable arguments against the usefulness or ethics of RCTs for “prognostic questions,” “etiologic or harm questions,” or “diagnostic accuracy studies.” She continued:

But even in the case of interventions, we can settle for less than a RCT. Evidence is not present or not, but exists on a hierarchy. RCT’s (if well performed) are the most robust, but if not available we have to rely on “lower” evidence.

BMJ Clinical Evidence even made a list of clinical questions unlikely to be answered by RCT’s. In this case Clinical Evidence searches and includes the best appropriate form of evidence.

where there are good reasons to think the intervention is not likely to be beneficial or is likely to be harmful;

where the outcome is very rare (e.g. a 1/10000 fatal adverse reaction);

where the condition is very rare; [etc., for a total of 6 more categories]

In asserting her view of another inaccurate prejudice about EBM, Jacqueline took Dr. Val and Science-Based Medicine to task:

Informed health decisions should be based on good science rather than EBM (alone).

Dr. Val: “EBM has been an over-reliance on “methodolatry” – resulting in conclusions made without consideration of prior probability, laws of physics, or plain common sense. (….) Which is why Steve Novella and the Science Based Medicine team have proposed that our quest for reliable information (upon which to make informed health decisions) should be based on good science rather than EBM alone.”

Methodolatry is the profane worship of the randomized clinical trial as the only valid method of investigation. This is disproved in the previous sections.

The name “Science Based Medicine” suggests that it is opposed to “Evidence Based Medicine”. At their blog David Gorski explains: “We at SBM believe that medicine based on science is the best medicine and tirelessly promote science-based medicine through discussion of the role of science and medicine.”

While this may apply to a certain extent to quack[ery] or homeopathy (the focus of SBM) there are many examples of the opposite: that science or common sense led to interventions that were ineffective or even damaging, including:

As a matter of fact many side-effects are not foreseen and few in vitro or animal experiments have led to successful new treatments.

At the end it is most relevant to the patient that “it works” (and the benefits outweigh the harms).

Furthermore EBM is not -or should not be- without consideration of prior probability, laws of physics, or plain common sense. To me SBM and EBM are not mutually exclusive.

Jacqueline finished by quoting a few comments that had appeared on the BMJ website after the parachute article. Some of them (not all, I’m happy to report) revealed that their authors lacked a sense of humor. Another argued that “EBM is not RCTs.” Still others argued that RCTs are valuable for precisely the reason illustrated by Jacqueline’s examples listed above: that even some seemingly safe and effective treatments—based on science or common sense or clinical experience—have eventually been shown, when subjected to RCTs, to behave otherwise. No one at SBM would argue the point.

Science-Based Medicine is Not Opposed to Evidence-Based Medicine

I am confident in asserting that we at SBM are in nearly complete agreement with Jacqueline regarding how EBM ought to be practiced. We are, I’m sure, also in agreement that many objections to EBM are specious. Among these, soundly criticized on this site, are special pleadings and bizarre post-modern arguments. The name “Science-Based Medicine” does not suggest that we are opposed to EBM. What it does suggest is that several of us consider EBM to be incomplete in its gathering of evidence, incomplete in ways that Jacqueline herself touched upon. I explained this in a series of posts at the inception of SBM in 2008,† and I discussed it further at TAM7 last summer. As such, Managing Editor David Gorski invited me to respond to Jacqueline’s article. I am happy to do so because, in addition to clarifying the issues for her, it is important to review the topic periodically: The problems with EBM haven’t gone away, but readers’ memories are finite.

Let me begin by asserting that everyone here agrees that large RCTs are the best tools for minimizing bias in trials of promising treatments, and that RCTs have repeatedly demonstrated their power to refute treatment claims based solely on physiology, animal studies, small human trials, clinical judgment, or whatever. I made the very point in my talk at TAM7, offering the Cardiac Arrhythmia Suppression Trial and the Women’s Health Initiative as examples. We also agree that there are some situations in which RCTs, whether for logistical, ethical, or other reasons, ought not to be used or would not yield useful information even if attempted. Parachutes are an example, but there are subtler ones, e.g., the efficacy of pandemic flu vaccines or whether the MMR vaccine causes autism. As we shall see, however, the list of exceptions offered by Jacqueline and BMJ Clinical Evidence is neither a formal part of EBM nor universally accepted by EBM practitioners.

To reiterate: The most important contribution of EBM has been to formally emphasize that even a high prior probability is not always sufficient to establish the usefulness of a treatment—parachutes being exceptions.

EBM’s Scientific Blind Spot

Now, however, we come to an important problem with EBM, a problem not merely of misinterpretations of its tenets (although such are common), but of the tenets themselves. Although a reasonably high prior probability may not be a sufficient basis for incorporating a treatment into general use, it is a necessary one. It is, moreover, a necessary basis for seriously considering such a treatment at all; that is, for both scientific and ethical reasons it is a prerequisite for performing a randomized, controlled human trial. Rather than explain these points here and now, I ask you, Dear Reader, to indulge me by following this link to a post in which I have already done so in some detail. I’ll wait here patiently.

……………….

Are you back? OK. Now you know that we at SBM are in total agreement with Jacqueline that EBM “should not be without consideration of prior probability, laws of physics, or plain common sense,” and that SBM and EBM should not only be mutually inclusive, they should be synonymous. You also know, however, that Jacqueline was mistaken to claim that EBM already conforms to those ideals. It does not, and its failure to do so is written right into its Levels of Evidence scheme—the exceptions that she offered, including those quoted from BMJ Clinical Evidence, notwithstanding. You know all of this because you’ve now seen several examples (there are many more) from that wellspring of EBM reviews, Jacqueline’s own Cochrane Collaboration. (There is another, more subtle reason for prior probability being overlooked in EBM literature, but it is an optional exercise for the purposes of today’s discussion).

EBM and Unintended Mischief

The problems caused by EBM’s scientific blind spot are not limited to the embarrassment of Cochrane reviews suggesting potential clinical value for inert treatments that have been definitively refuted by basic science, although that would be sufficient to argue for EBM reform. The Levels of Evidence scheme has resulted in dangerous or unpleasant treatments being wished upon human subjects in the form of RCTs, cohort studies, or case series even when existing clinical or scientific evidence should have been more than satisfactory to put such claims to rest. The Trial to Assess Chelation Therapy (TACT)—the largest, most expensive, and most unethical trial yet funded by the NCCAM—was originally justified by these words in an editorial in the American Heart Journal in 2000, co-authored by Gervasio Lamas, who would later become the TACT Principal Investigator:

The modern standard for accepting any therapy as effective requires that there be scientific evidence of safety and efficacy in a fair comparison of the new therapy to conventional care. Such evidence, when widely disseminated, leads to changes in clinical practice, ultimately benefitting patients. However, the absence of a clinical trial does not disprove potential efficacy, and a well-performed but too small “negative” trial may not have the power to exclude a small or moderate benefit of therapy. In other words, the absence of evidence of efficacy does not constitute evidence of absence of efficacy. These concepts constitute the crux of the lingering controversy over chelation therapy…

Such an argument, with its obvious appeal to the formal tenets of EBM, was made and accepted by the NIH in spite of overwhelming evidence against the safety and effectiveness of Na2EDTA chelation treatments for atherosclerotic vascular disease, including the several “small” disconfirming RCTs, comprising approximately 270 subjects, to which Dr. Lamas alluded. It was also accepted in spite of its violating both the Helsinki Declaration and the NIH’s own policy stipulating that preliminary RCTs should demonstrate efficacy prior to a Phase III trial being performed.

A 2006 Cochrane Review of Laetrile for cancer would, if its recommendations were realized, stand the rationale for RCTs on its head:

The most informative way to understand whether Laetrile is of any use in the treatment of cancer, is to review clinical trials and scientific publications. Unfortunately no studies were found that met the inclusion criteria for this review.

Authors’ conclusions

The claim that Laetrile has beneficial effects for cancer patients is not supported by data from controlled clinical trials. This systematic review has clearly identified the need for randomised or controlled clinical trials assessing the effectiveness of Laetrile or amygdalin for cancer treatment.

Why does this stand the rationale for RCTs on its head? A definitive case series led by the Mayo Clinic in the early 1980s had overwhelmingly demonstrated, to the satisfaction of all reasonable physicians and biomedical scientists, that not only were the therapeutic claims for Laetrile baseless, but that the substance is dangerous. The subjects did so poorly that there would have been no room for a meaningful advantage in outcome with active treatment compared to placebo or standard treatment—as we have recently seen in another trial of a quack cancer treatment. The Mayo case series “closed the book on Laetrile,” the most expensive health fraud in American history at the time, only to have it reopened more than 20 years later by well-meaning Cochrane reviewers who seemed oblivious of the point of an RCT.

A couple of years ago I was surprised to find that one of the authors of that review was Edzard Ernst, a high-powered academic who over the years has undergone a welcome transition from cautious supporter to vocal critic of much “CAM” research and many “CAM” methods. He is now a valuable member of our new organization, the Institute for Science in Medicine, and we are very happy to have him. I believe that his belated conversion to healthy skepticism was due, in large part, to his allegiance to the formal tenets of EBM. I recommend a short debate published in 2003 in Dr. Ernst’s Focus on Alternative and Complementary Therapies (FACT), pitting Jacqueline’s countryman Cees Renckens against Dr. Ernst himself. Dr. Ernst responded to Dr. Renckens’s plea to apply science to “CAM” claims with this statement:

In the context of EBM, a priori plausibility has become less and less important. The aim of EBM is to establish whether a treatment works, not how it works or how plausible it is that it may work. The main tool for finding out is the RCT. It is obvious that the principles of EBM and those of a priori plausibility can, at times, clash, and they often clash spectacularly in the realm of CAM.

I’ve discussed that debate before on SBM, and I consider it exemplary of what is wrong with how EBM weighs the import of prior probability. Dr. Ernst, if you are reading this, I’d be interested to know whether your views have changed. I hope that you no longer believe that human subjects ought to be submitted to a randomized, controlled trial of Laetrile!

When RCTs Mislead

Finally, for the purposes of today’s discussion, let me reiterate another point that must be considered in the context of establishing, via the RCT, whether a treatment works: When RCTs are performed on ineffective treatments with low prior probabilities, they tend not to yield merely ‘negative’ findings, as most physicians steeped in EBM would presume; they tend, in the aggregate, to yield equivocal findings, which are then touted by advocates as evidence favoring such treatments, or at the very least favoring more trials—a position that even skeptical EBM practitioners have little choice but to accept, with no end in sight. Numerous such examples have been discussed on this website.
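The aggregate pattern described above can be sketched with a quick simulation. This is an illustrative model only, not a reanalysis of any real trial: under the null hypothesis (a truly inert treatment), p-values are uniformly distributed on [0, 1], so about one trial in twenty will come out “significant” at p < 0.05 by chance alone.

```python
import random

# Under the null hypothesis (an inert treatment), p-values are
# uniformly distributed on [0, 1], so roughly 5% of trials will be
# "significant" at p < 0.05 purely by chance.
random.seed(42)  # fixed seed so the sketch is reproducible

n_trials = 1000
p_values = [random.random() for _ in range(n_trials)]
positives = sum(p < 0.05 for p in p_values)

print(f"{positives} of {n_trials} trials of an inert treatment "
      f"came out 'positive' at p < 0.05")
```

Run enough small trials of an inert treatment and a steady trickle of spurious “positives” is guaranteed; the aggregate literature then reads as equivocal rather than negative, which is precisely the opening that advocates exploit.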

The first sentence that I ever posted on SBM, a quotation from homeopath David Reilly, was a perfect illustration of this misunderstanding:

Either homeopathy works or controlled trials don’t!

Dr. Reilly was correct, of course, but not in the way that he supposed. If there is anything that the history of parapsychology can teach the biomedical world, it is the point just made: human RCTs, as good as they are at minimizing bias or chance deviations from population parameters, cannot ever be expected to provide, by themselves, objective measures of truth. There is still ample room for erroneous conclusions. Without using broader knowledge (science) to guide our thinking, we will plunge headlong into a thicket of errors—exactly as happened in parapsychology for decades and is now being repeated by its offspring, “CAM” research.

Conclusion

These are the reasons that we call our blog “Science-Based Medicine.” It is not that we are opposed to EBM, nor is it that we believe EBM and SBM to be mutually exclusive. On the contrary: EBM is currently a subset of SBM, because EBM by itself is incomplete. We eagerly await the time that EBM considers all the evidence and will have finally earned its name. When that happens, the two terms will be interchangeable.

110 thoughts on “Yes, Jacqueline: EBM ought to be Synonymous with SBM”

Actually, Laika reminds me a lot of, well, me about two years ago. When this whole SBM thing got started, Kim may recall that we had some rather epic e-mail “discussions” (arguments, actually) in which I made the same sorts of points Laika did. I was laboring under a bit of a delusion, namely the idealized way that EBM should work. Ultimately, what Kim and some of the original SBM bloggers persuaded me of is that that’s not how EBM does work in practice. By relegating prior scientific knowledge to the lowest level of evidence and the RCT as the highest, EBM does in practice very often devolve into methodolatry, a.k.a. the profane worship of RCTs as the source of all medical truth. Indeed, Tom Jefferson is an excellent example of that.

Also, by completely discounting prior probability, as Kim so aptly describes, nothing is off the table for clinical trials, not even pseudoscience as ridiculous as homeopathy, and, as John Ioannidis has shown, the more improbable a hypothesis on prior scientific probability, the more likely RCTs examining that hypothesis will produce false positives. It’s not 5% as one would expect from setting statistical significance at the p=0.05 level; the percentage of false positives is much, much higher.
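The point about improbable hypotheses and false positives can be made concrete with a back-of-envelope Bayesian calculation. The numbers below (a significance threshold of 0.05, statistical power of 0.80, and the range of priors) are illustrative assumptions, not figures taken from Ioannidis’s papers:

```python
# Among trials that come out "positive", what fraction reflect a real
# effect? alpha = 0.05 (false-positive rate under the null) and
# power = 0.80 are assumed, conventional values.

def positive_predictive_value(prior, alpha=0.05, power=0.80):
    """P(effect is real | trial is 'positive'), by Bayes' theorem."""
    true_pos = power * prior          # real effect, and the trial detects it
    false_pos = alpha * (1 - prior)   # no effect, but the trial is 'positive'
    return true_pos / (true_pos + false_pos)

for prior in (0.5, 0.1, 0.01, 0.001):
    ppv = positive_predictive_value(prior)
    print(f"prior {prior:>6}: {ppv:.1%} of 'positive' trials reflect a real effect")
```

At a prior of 50%, a “positive” trial is usually trustworthy (about 94%); at a prior of 1%, which is generous for something like homeopathy, roughly six out of seven “positive” trials are false positives.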

No, EBM must become synonymous with SBM. EBM advocates think that it is, and in words it is, but in practice it is most definitely not.

I was not quite ready to have my brain blown this morning. Not knowing the history of EBM, I assumed it was synonymous with SBM. I’ve discussed that I read SBM with my doctor and she replies using the words “evidence based medicine.” It makes me wonder if there is in fact a philosophical gulf between our meanings or if she’s using the term as a synonym. I’ve been reading daily since I learned of your blog and want to thank you for taking the time to help inform the willing to learn.

By relegating prior scientific knowledge to the lowest level of evidence and the RCT as the highest, EBM does in practice very often devolve into methodolatry

+++++++++++++

I think that EBM needs to make more room for basic science.

It drives me potty to have to have the discussion about whether or not we must have a section on homeopathy (invariably “has not been found to work, or been found not to work”) in various guidelines – to which I always ask whether they are going to include the eating of custard creams (my personal prescription for any ill).

However, you are mistaken if you think that it is “prior scientific knowledge” that is relegated to the lowest level of evidence. In fact, basic science isn’t really ‘in’ the evidence hierarchy, but in some sense informs the whole hierarchy.

What was actually relegated to the lowest level is “authority-based” medicine, aka guidelines produced by GOBSAT describing their own practice without reference to any other evidence (these are now “Good Practice Points”) and individual reminiscence of “a case I once saw.” And deservedly so. To confuse this with “prior scientific knowledge” is a mistake.

From a layman’s perspective, a most interesting window into “things I most definitely do not understand”. Oh well, maybe someday if I keep slogging away reading this blog I will come closer to comprehension…or perhaps in another life.

you are mistaken if you think that it is “prior scientific knowledge” that is relegated to the lowest level of evidence…What was actually relegated to the lowest level is “authority-based” medicine…

Would that this were true, but it isn’t. Here is the entire text of the lowest level:

Expert opinion without explicit critical appraisal, or based on physiology, bench research or “first principles”

Clearly, “physiology, bench research or “first principles”” is the only reference to basic science in the EBM Levels of Evidence scheme. That it is considered to be no more valuable than authority-based medicine is unfortunate.

(not my university, just a nod to the way in which hierarchies of evidence are taught and popularised – something like a continuum from “most subject to systematic and/or random bias” through to “least subject to bias”). Obviously, I missed (or didn’t take on board) the official version of EBM levels of evidence since I had not cottoned on to their inclusion of basic science at the lowest level. (Perhaps there is more than one model knocking around…)

Clearly, “physiology, bench research or “first principles”” is the only reference to basic science in the EBM Levels of Evidence scheme. That it is considered to be no more valuable than authority-based medicine is unfortunate.

Yep. Under the EBM paradigm, the known implausibility of homeopathy based on how it violates well-established laws of physics and chemistry and contravenes very well-established science is of no more value in determining whether homeopathy “works” clinically than mere expert opinion.

I have to say I had no previous knowledge of EBM or that it is different to SBM. It shocks me that anyone would ever put one above the other when it comes to evidence and basic science.
If you ever find a discrepancy between the two you have either screwed up or made history.

I look forward to a comparative intervention that will answer that question.

Remember that the whole point is to improve patient care.

FDA definition of fraud quoted in Sept/Oct 93 NCAHF Bulletin Board: The deceptive promotion, advertisement, distribution or sale of articles, intended for human or animal use, that are represented as being effective to diagnose, prevent, cure, treat or mitigate disease (or other conditions), or provide a beneficial effect on health, but which have not been scientifically proven safe and effective for such purposes. Such practices may be deliberate, or done without adequate knowledge or understanding of the article. (Quoted from a letter from M L Frazier – Director, State Information Branch 6/18/93).

However this nerd balked at
“Although a reasonably high prior probability may not be a sufficient basis for incorporating a treatment into general use, it is a necessary one.”
Holy Shmoly, and I thought sufficient evidence was enough to overturn any prior opinion that is not dogmatic, or have you reinvented that part of mathematics? I admit “reasonably high” is so fuzzy that it’s hard to make anything stick, but the point is that it is stuff like this that has some of us shaking in our boots. We worry that you aren’t competent to help with the calculations. Come friend, let’s be in this together.
Nerd goes wild, accusing docs “Which one of you will deliver us real priors instead of fuzzy chin-scratching stuff. Then we can incorporate them. Alternatively, we summarize the evidence, and you provide your own prior to get your own posterior.” Sadly, for most meta-analyses, obtaining a Bayes factor is near impossible. If we could get it, we could show how various priors map to posteriors. I agree some discussion of the prior, even if fuzzy, is warranted, but it may be very controversial, and hey, we’re mostly nerds – not qualified. Who will we obtain our priors from? Formally, that is difficult.
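The “provide your own prior to get your own posterior” mechanics are straightforward once a Bayes factor is in hand; as the commenter says, obtaining one is the hard part. A minimal sketch, using a purely illustrative Bayes factor of 10 rather than one from any actual meta-analysis:

```python
# Posterior odds = Bayes factor x prior odds. The Bayes factor of 10
# below is an illustrative number, not taken from any real analysis.

def posterior_probability(prior_prob, bayes_factor):
    """Convert a prior probability and a Bayes factor into a posterior."""
    prior_odds = prior_prob / (1 - prior_prob)
    post_odds = bayes_factor * prior_odds
    return post_odds / (1 + post_odds)

# The same "supportive" evidence (BF = 10) moves a plausible hypothesis
# toward certainty but barely dents a wildly implausible one.
for prior in (0.5, 0.01, 1e-6):
    post = posterior_probability(prior, 10)
    print(f"prior {prior:g} -> posterior {post:.4g}")
```

The same supportive result moves a 50% prior to about 91%, a 1% prior to only about 9%, and leaves a homeopathy-grade prior essentially where it started.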

Second part:
General worry is that docs take license to do something, almost anything, to help patients, ignoring evidence – I do not accuse the experts here of that, but it sounds fishy sometimes. Docs want that license I think – admit it.
Behold: We wonder what to do for patient. Four things:
1) We treat with might-work-drug-without-statistical-evidence and patient gets better. Patient thanks us. We get paid. We may do it again.
2) We treat, no improvement. At least we tried. We get paid. We might do it again.
3) Do nothing. Patient gets better. No thanks. No money.
4) Do nothing, patient still bad. Patient hates us or dies. No money.
We see that it is very difficult to do nothing, even for perfectly scientific docs – and how many of those are there?

If there is not good evidence, maybe it is just an experiment on a human subject. That would be called research.

So where does all the debunking of positive “alternative” studies come in? Every time we rummage through them looking for arcane defects we reinforce the impression that all will be well for these methods if only they can produce enough good quality studies. It implies (wrongly) that we are required to accept the conclusions of studies if we cannot find fault with them.

Why bother with all that if we are going to reject such studies on other grounds?

PS The one Dana produced comparing individualised homeopathy and antidepressants looks to be of good quality, from a quick examination. I predict that we will be seeing lots and lots of such studies and that they will further challenge what we previously thought was a secure evidence base for medical practice, providing also a clear divide between proper medicine and that which we deem “quackery”.

My interpretation is that both treatments are probably working as placebo *under the conditions of the study*, but confidence in such drugs will be undermined if the findings are replicable in other quality studies.

An excellent and cogent analysis, and yet I still feel a little unsatisfied. I struggle with the subtle but important difference between EBM and SBM myself, and I can’t quite get over the sense that SBM replaces the weakness of EBM with a different weakness.

I agree that EBM can be SBM if it is applied in the way that Jacqueline describes it, and I agree that this often isn’t the case. I also agree that it is irrational and inappropriate to insist on RCT evidence before dismissing therapies like homeopathy that require jettisoning basic physics and chemistry to be true. Just as CAM advocates accuse us of methodolatry for denying the validity of their “clinical experience” and requiring better quality evidence, so they also hypocritically encourage methodolatry by insisting that we cannot dismiss their claims without definitive RCTs, which as Dr. Gorski points out are almost never definitive, especially in proving a negative. We need not be the caricature they paint us as nor give in to their disingenuous pleas for a “fair trial.”

Which all leads to the “But….” There is some validity to the argument that deciding what to test and what not to test on the basis of current knowledge (aka prior probability) introduces a tremendous bias into the process of discovery and development of new interventions. As Dr. Bradford-Hill put it, “What is biologically plausible depends upon the biological knowledge of the day.” While most improbable new ideas ultimately fail, some are actually correct, and our understanding of what is probable is not always accurate.

Similarly, Bayesian attacks on the worship of significance testing always strike me as spot on in their critique of statistical methodolatry, and then they fail to be convincing when they move on to “solve” the problem by introducing a “fudge factor” for the prior probability of a hypothesis. Our use of statistical methods is often inappropriate and overzealous, or just plain wrong. But it is as popular as it is not only because of the veneer of mathematical certainty it imparts to research findings but also because of the salutary recognition that we need correctives for our subjectivity and bias.

So what’s the answer? Where’s the perfect balance between healthy open-mindedness and a postmodernist relativism that refuses to judge at all? Do we try to practice reasonable EBM and accept that some nonsense will make its way into, but hopefully not through, the filter of the evidence hierarchy? Or do we more aggressively apply our a priori judgements to new ideas, thus wasting less time and fewer resources testing the implausible but also making it harder to promote and develop surprising but actually good new ideas?

As a skeptic I spend a lot of my time trying to convince people that their understanding and analysis of “the facts” is unreliable and that they should accept the verdict when their commonsense and what they think they know is proven wrong by clinical evidence. I’m reluctant to then turn around and say that in case X no such evidence is worth developing because my understanding of the facts shows X to be impossible. Science is needed because we are all vulnerable to the cognitive errors and biases that lead to poor judgement, even those of us who know we are. Isn’t too much prior plausibility consideration just an invitation to trust our usually pretty good but sometimes totally mistaken preconceptions?

a [foreign] word that doesn’t occur on this web post, as far as I can tell, when searching the post right now.

Because, likely, philosophy terms like epistemology are dirty words [too intellectual, in USA circles — I mean. I don’t mean to sound like a snob, but ‘American typical thought’ — likely outside of the skeptic population but not necessarily — is quite blunted].

But, here-right-now, I’ll be pedantic [and I will leave the shores of the US of A to indeed engage my higher mental faculties]:

I’ve a college degree that I graduated from in New York State [ironically] which has the motto “love of learning is the guide of life” aka PBK [an admirable ideal!].

So, learnedly, I’d like to say:

there is evidence and there is science.

I know you, Dr. A., already know this because I heard you speak [live] at the recent TAM SBM conference.

It is not enough to be ‘empirical’ and ‘in evidence’.

The larger, more rigorous issue is:

is it scientific.

Both are, formally, empirical.

But only the latter possesses high cognitive functionality, in my opinion [it discerns].

“The scientific” represents a consensus or the process of establishing such, but evidence belongs to a larger kind [e.g., it includes anecdote].

So, there is this essential question: “evidence, but of what kind?”

“Kind” here concerns the context of measurement.

I seek here to sound like the kind of philosophical kill-joy who deflates the balloons of the naive-and-confident.

I don’t include you, Dr. A., in that group.

But, my larger question is:

“are we going to THINK?”

I am completely in agreement, as Dr. A. has said:

“EBM by itself is incomplete. We eagerly await the time that EBM considers all [ALL!!!] the evidence.”

“I can’t quite get over the sense that SBM replaces the weakness of EBM with a different weakness.”

It cannot be weak if it brings additional evidence to the table and is inclusive of RCT data.

But I have some uneasiness about some aspects. While worthy ideas, the concepts of prior plausibility and Bayesian analyses can also look like too pat quasi-scientific solutions for an awkward problem, i.e. that “impossible” methods sometimes seem to work better than placebo within RCTs (mainly, and very importantly for the understanding of the phenomenon, with subjective clinical outcomes).

I have previously pointed out that in practice, over time, as more studies are performed and they are of a generally better quality, the results of RCTs do tend to gravitate towards what other knowledge would predict.

EBM can surely be predicted over time to give the results expected for an inactive treatment. How could we assert otherwise? It merely needs multiple studies, performed and analysed by neutral automata, with every study published whatever it shows. Our argument is not with the RCT, but with how some people give undue significance to the inevitable occasional “positives”.

We skeptics have emphasized many times that it is the replicability of results in quality studies, especially from different researchers, that gives them force. We thus already have reason to be wary of individual RCTs, or ones from the same source. What alternative method consistently reaches high standards in these respects?

I prefer a direct, explanatory approach to one that can look contrived and even seems to undercut the irrefutable logic of the clinical trial process, but only when the results don’t suit. I know that may not be intended.

“Which all lead to the “But….” There is some validity to the argument that deciding what to test and what not to test on the basis of current knowledge (aka prior probability) introduces a tremendous bias into the process of discovery and development of new interventions. As Dr. Bradford Hill put it, “What is biologically plausible depends upon the biological knowledge of the day.” While most improbable new ideas ultimately fail, some are actually correct, and our understanding of what is probable is not always accurate.”

I tend to agree. I do think that RCTs have their weaknesses, namely the inability to decide complex medical problems (beyond this drug versus that drug, or simple surgical techniques). However, I’m skeptical of prior plausibility being introduced as a solution because it introduces bias into the scientific process. If forced to choose, I’d probably go with EBM, partly because its proponents seem more skeptical of mainstream practices in medicine than what I’ve seen here (depression drugs or Tamiflu, for example). Of course “evidence” automatically takes into account scientific evidence (what other kind of evidence is there? anecdotes?). I have no interest in CAM personally, but I do believe it should be studied, only because so many people are using these practices.

What bothers me about EBM most is that, in the real world, it is very expensive and very slow. And kills.

Say we have a disease, which to avoid confusing it with real cases, I will call Tumples.

SBM can get on with providing a possible working treatment for Tumples based on general principles. A few tests are done, it appears to work well, and it becomes the standard treatment. In the meantime it can discard some other treatments as unlikely ever to be worth bothering with. Meanwhile EBM is still complaining that the SBM treatment has no basis in real science and needs more evidence first, and starts testing the suggested treatment and all other alternatives.

If, 20 years later, EBM can prove some nasty side effect of the SBM treatment in a small but significant number of cases, it crows that SBM has failed and that EBM is intellectually superior.

Meanwhile, of course, many people have been cured of Tumples. They have SBM to thank, not EBM, which has cured almost no people in its application of scrupulous purity.

Great!

We have too many illnesses and conditions to waste our time checking out ones that have no chance of working. All those tests of homeopathy have been stopping useful research into possible working treatments.

EBM seems to me to appeal to philosophers of medicine. It has spotless integrity and logic. Meanwhile in the real world we mostly get by without that, using rather more practical considerations of time and cost.

I would like to see EBM advocates show how EBM actually is cheaper, quicker, and saves more people (that is, how that makes it more useful), rather than getting wounded about assaults on their purity.

“I’m skeptical of prior plausibility being introduced as a solution because it introduces bias into the scientific process.”

But plausibility is an essential part of both scepticism and science.

If you propose a treatment like homoeopathy for which there is no known mechanism, and hence zero plausibility, then the evidence in support must be overwhelming. Otherwise scientific paradigms would blow about like chaff in the wind.

For an everyday example: If I told you I saw a dog run across main street today, you would probably accept my word for it. If I told you I saw an elephant cross main street, you would probably require some corroborating evidence like the circus being in town. If I told you I saw an alien cross main street, you would probably require not only photographic and video evidence but the alien himself.

A marginally positive trial of homoeopathy would be analogous to being handed a grainy blurred photograph of an alien spacecraft. Not enough.
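The dog/elephant/alien scale maps directly onto Bayes’ rule in odds form. Here is a minimal sketch of why a marginally positive trial cannot rescue a claim with near-zero prior plausibility; all the numbers are my own illustrative assumptions, not data from this discussion or any real trial:

```python
# Illustrative sketch: Bayes' rule in odds form. Every number here is
# an assumption chosen for illustration, not data from a real trial.

def posterior(prior, likelihood_ratio):
    """Posterior probability via: posterior odds = likelihood ratio * prior odds."""
    prior_odds = prior / (1 - prior)
    post_odds = likelihood_ratio * prior_odds
    return post_odds / (1 + post_odds)

# Treat a "marginally positive" trial as evidence with a modest
# likelihood ratio in the claim's favour (assumed to be 4 here).
lr = 4

# A biologically plausible drug: one positive trial is fairly persuasive.
print(round(posterior(0.30, lr), 2))    # -> 0.63

# A homeopathy-like claim with near-zero plausibility: the same trial
# barely moves it (the grainy photo of the alien spacecraft).
print(round(posterior(0.0001, lr), 4))  # -> 0.0004
```

The same arithmetic captures the analogy: the lower the prior, the more overwhelming the likelihood ratio must be before belief is warranted.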

“I have no interest in CAM personally, but I do believe it should be studied, only because so many people are using these practices.”

Yes, that is the only reason.
Just to point out that those who use CAM do so with disregard to the consideration of prior probability. Presumably this is the reason why you don’t use CAM.

“I’m skeptical of prior plausibility being introduced as a solution because it introduces bias into the scientific process.”

BillyJoe:
“But plausibility is an essential part of both scepticism and science. ”

“If you propose a treatment like homoeopathy for which there is no known mechanism, and hence zero plausibility, then the evidence in support must be overwhelming. Otherwise scientific paradigms would blow about like chaff in the wind.”

Yes, I understand that (the more outlandish the claim, the higher the standard of evidence), but there’s nothing in EBM that prevents that concept. It’s just not the only concern. For some doctors, prior plausibility = my personal anecdotes. For example, with vertebroplasty, after it had been shown not to work, many doctors genuinely believed it had worked, based on their personal experiences of success. They just couldn’t believe the research results. (There’s been a lot of talk about cognitive dissonance and vertebroplasty; I’ll see if I can find something.)

Vertebroplasty, cognitive dissonance, and evidence-based medicine: What do we do when the ‘evidence’ says we are wrong?
R. Douglas Orr, MD, 2010 (good commentary below too)

I think Mark P hit the nail on the head that the main disadvantage is that EBM is much slower and more expensive (although I don’t know about killing people!). The standard for “proof” is much higher (aka methodolatry). I’m a cautious person and would have to accept a fairly high standard of evidence (probably RCT, if it’s possible) before taking a drug or surgery. I also happen to be a bit of an idealist, rather than a realist, so that’s another one of my biases.

The risk of SBM, though, is getting it wrong because all of the evidence isn’t in (first, do no harm), wasting money and risking lives with unproven procedures and drugs. Another risk is that the medical field is constantly changing its recommendations, and people start to disbelieve them.

So, advantages and disadvantages to go around. I can understand the concerns on both sides.

“Yes, that is the only reason.
Just to point out that those who use CAM do so with disregard to the consideration of prior probability. Presumably this is the reason why you don’t use CAM. ”

I don’t think the average person who uses CAM gives a crap about either EBM or SBM. I doubt they even know what prior plausibility is, or look at studies at all. Then again, most people who use conventional medical treatment don’t either. My main reason for avoidance of homeopathy is that I can’t think of a single darn reason why it should work, despite my lack of medical knowledge. So yes, that’s true.

“I don’t think the average person who uses CAM gives a crap about either EBM or SBM. I doubt they even know what prior plausibility is, or look at studies at all.”

That is what I meant by “with disregard to the consideration of prior probability”. Perhaps I should have said “oblivious to”. Sorry

“My main reason for avoidance of homeopathy is that I can’t think of a single darn reason why it should work, despite my lack of medical knowledge. So yes, that’s true.”

That is prior probability. Welcome to SBM.

“there’s nothing in EBM that prevents that concept.”

According to the author of this article, prior probability is not part of EBM. In fact, his contention is that EBM specifically excludes consideration of prior probability. I am presently reading some of the references to get a handle on this, because my understanding was that SBM was merely an extension of EBM, not antithetical (that is probably too harsh a word) to it. Indeed, if you read the referenced articles on p values and hypothesis testing, you will see that his contention seems to bear out.

“For some doctors, prior plausibility = my personal anecdotes.”

Unless I’ve completely misread this, personal experience is not part of the definition of prior probability. If it is, it should not be because it is almost useless as a form of evidence.

“For example, with vertoboplasty, after it had been shown not to work, many doctors genuinely believed it had, based on their personal experiences with success.”

Yes, case in point.

Personal experience, while persuasive, is much too unreliable to use as evidence that a treatment is effective.

There is a radiologist in Australia who spent half his life doing vertebroplasties. He featured on the news late last year, complaining bitterly when the objective evidence did not bear out his subjective impressions of the usefulness of this procedure.

Billyjoe:
“That is what I meant by “with disregard to the consideration of prior probability”. Perhaps I should have said “oblivious to”. Sorry ”

I know what you meant. I was agreeing, but going a step further. My point was that not only do they not look at prior probability, they don’t look at RCTs either. However, people like Mr. Ullman (unlike the average CAM user, who doesn’t care) desperately try to find research evidence, explain the (bogus) physiological reasons, and use personal experience (anecdotes) to defend homeopathy.

Me:“My main reason for avoidance of homeopathy is that I can’t think of a single darn reason why it should work, despite my lack of medical knowledge. So yes, that’s true.”

BillyJoe: “That is prior probability. Welcome to SBM.”

That’s why I wrote it was true (that I was considering prior probability). This is not inconsistent with EBM.

Billyjoe:
“According to the author of this article, prior probability is not part of EBM. In fact, his contention is that EBM specifically excludes consideration of prior probability. I am presently reading some of the references to get a handle on this, because my understanding was that SBM was merely an extension of EBM, not antithetical (that is probably too harsh a word) to it. Indeed, if you read the referenced articles on p values and hypothesis testing, you will see that his contention seems to bear out.”

This is from the blog post from here that Dr. Atwood linked to, from the Figure: Oxford Centre for Evidence-based Medicine Levels of Evidence (May 2001). Specifically, this standard is to be considered below that of clinical trials, but still present:

“Expert opinion without explicit critical appraisal, or based on physiology, bench research or “first principles””

The argument isn’t that EBM doesn’t consider prior probability; it’s that clinical trials are given MORE weight than the above. But which clinical trials are performed does (or should) consider prior probability. If multiple large RCTs show a benefit to a given intervention, I’m not sure that either clinical experience or biological plausibility should be given weight above that. With CAM, Ioannidis makes good points about sometimes expecting positive results merely due to chance, but that is why RCT results have to be REPLICABLE. If there really is no reason to be concerned about the lack of basic science behind homeopathy, for example, well-done, large-scale RCTs will back that up.

To me, “expert opinion without explicit critical appraisal” means anecdotes from doctors’ experiences with many patients. If you read the cognitive dissonance-vertebroplasty journal article I linked to, doctors were considering their own expert opinion ABOVE THAT of two major clinical trials. EBM was meant to oppose exactly that scenario. If it fails to address CAM concerns adequately, then I can agree that that is a weakness. But the strength is that it emphasizes controlled trials above expert anecdotes. There are many who argue (on this site in the last few weeks, in fact) that if an RCT contradicts established medical opinion, or “standard of care”, then it can be ignored.

Billyjoe:
“Unless I’ve completely misread this, personal experience is not part of the definition of prior probability. If it is, it should not be because it is almost useless as a form of evidence.”

EBM is a reaction to exactly that problem in medicine. Personal experience shouldn’t be part of prior probability (if by that you mean “current knowledge”), but in practice, sometimes it is. In addition, prior probability is limited by the biological knowledge of the day, which can in fact be incomplete.

Regarding Bayesian statistics, my understanding is that the factors that go into such analyses are subjective. But I am unqualified to surmise about stats, as I am still learning. I do wonder why it has not replaced frequentist stats (again), even though it was proposed about 300 years ago.

Mostly, I agree with mckenziedvm in that I have a vague sense that SBM is replacing one of EBM’s weaknesses with another. I have to say that I learn a whole heck of a lot from Dr. Atwood’s posts, though.

“EBM favors equivocal clinical trial data over basic science, even if the latter is both firmly established and refutes the clinical claim.”

“[While] EBM correctly recognizes that basic science is an insufficient basis for determining the safety and effectiveness of a new medical treatment, it overlooks its necessary place in that exercise.”

SBM: prior probability (PP) is a necessary but not sufficient ingredient – RCTs are also needed.
EBM: PP is neither sufficient nor necessary – even a marginal effect demonstrated in an RCT trumps zero PP.

I think I have that right.

“If you read the cognitive dissonance-vertebroplasty journal article I linked to, doctors were considering their own expert opinion ABOVE THAT of two major clinical trials. EBM was meant to be an opposition to exactly that scenario… the strength is that it emphasizes controlled trials above that of expert anecdotes.”

Yes, no disagreement here.
I haven’t read that article, but I remember the story from last year – just shook my head when the radiologist insisted on the elevation of his personal experience above the results of the RCTs.

“As Steven Goodman and other Bayesian advocates have argued, we do not now avoid subjective beliefs regarding clinical hypotheses, nor can we. There is not a legitimate choice between making subjective estimates of prior probabilities and relying exclusively on “objective” data, notwithstanding the wishes of those who zealously cling to the “frequentist” school.”

And:

“Bayesian analysis does not depend exclusively on unique or known prior probabilities, or even on ranges that a group of experts might agree upon. The power of Bayes’ Theorem is to show how data from an investigation alter any prior probability to generate a new, “posterior probability.””

You can actually use an arbitrary prior probability, say 50%. It doesn’t matter, because every trial you do will adjust it towards its real value. For example, if you arbitrarily set the prior probability of homeopathy at 50%, every trial that you do (and analyse with Bayesian statistics) will presumably produce a posterior probability that tends towards zero (where common sense says it should reside).

You can also choose several prior probabilities and calculate the respective posterior probabilities using Bayesian statistics.

So, yes, it is subjective (or arbitrary) but it doesn’t matter. With each trial it approaches its real value.

Pretty neat hey?
(I’m on a learning curve, though, so don’t take what I say as gospel)
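The convergence claim above can be sketched numerically. This is a toy illustration under invented assumptions (the trial likelihoods and the sequence of results are made up for the example), not an analysis of any real data:

```python
# Toy sequential Bayesian updating. The likelihoods and the trial
# results below are invented for illustration only.

def update(prior, positive, p_pos_if_works=0.8, p_pos_if_inert=0.05):
    """One Bayes-rule update of P(treatment works) given a trial result."""
    if positive:
        num = p_pos_if_works * prior
        den = num + p_pos_if_inert * (1 - prior)
    else:
        num = (1 - p_pos_if_works) * prior
        den = num + (1 - p_pos_if_inert) * (1 - prior)
    return num / den

# Start from an arbitrary 50% prior and feed in results typical of an
# inert treatment: mostly negative trials, with one chance "positive".
p = 0.5
for result in [False, False, True, False, False, False]:
    p = update(p, result)

print(round(p, 4))  # -> 0.0066: the arbitrary starting prior has been washed out
```

Note how the single chance “positive” temporarily pulls the estimate back up, which is exactly why replication matters; further negative trials wash it out again.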

However, however….vitamin C can prevent and treat scurvy regardless of whether any observer has identified the existence of vitamin C, let alone its role in the human body.

As far as homeopathy goes, there’s no good evidence that it works, therefore no mechanism to explain. It is neither contributor nor challenge to existing knowledge of how the world works.

That is not the case with all things for which we don’t yet understand the mechanism, such as the effect of touch on the development of babies in neonatal care. We have some empirical evidence that massage is good for growth and development in preterm/low-birth weight babies. We don’t have an understanding of why this should be so but going about looking for the mechanism of action would contribute to our knowledge of the world.

Plonit,
(I’m not quite sure what you are arguing against here, so you have me at a bit of a disadvantage.)

“However, however….vitamin C can prevent and treat scurvy regardless of whether any observer has identified the existence of vitamin C, let alone its role in the human body.”

Well, lemons treated scurvy (vitamin C came later). When James Lind conducted his trial he divided his patients into 6 groups, one of which had their diets supplemented with oranges and lemons. James Lind must have decided that the prior probability of the effectiveness of oranges and lemons in the treatment of scurvy was high enough to include them in his trial. He didn’t do a Bayesian analysis, of course, but then the effect was not marginal either. In fact, with only 12 patients, this would these days be called a pilot study, with an effect so great that a subsequent RCT would probably be unethical.

“As far as homeopathy goes, there’s no good evidence that it works, therefore no mechanism to explain. It is neither contributor nor challenge to existing knowledge of how the world works.”

“That is not the case with all things for which we don’t yet understand the mechanism,”

The existence of a mechanism would increase the prior probability, but no matter what prior probability you use, whether arbitrary or informative, the use of Bayesian statistics would produce posterior probabilities that tend towards the real values with each successive trial.

This is an interesting conversation. May I just suggest that one issue with SBM is that there’s a tendency to presume that all that is considered SBM is always objective. This is not always the case. To take a recent discussion here as an example, in doing studies regarding mental illness (or chronic pain) there are (almost) always subjective elements that can’t be made objective (at this point in time at least).

First there’s the actual nature of how we classify and decide what is and isn’t a mental illness or mood disorder. Putting aside purely neurological disorders that are diagnosed by objective tests for the moment since those aren’t usually what’s being studied – it’s more often depression, bipolar, anxiety disorders, etc. The DSM is very “postmodern” in many, many ways – this doesn’t mean it’s not useful or necessary, it just means it’s a construct (I’m in no way implying all mental illness is a construct, I’m simply pointing out that the DSM is – see current controversies regarding the creation of the DSM-V).

Secondly there’s the nature of data collection – this includes both subjective reporting by the patient and the more (but not entirely) objective reporting by the psychiatrist (who is using the DSM as his guideline) or, in the case of chronic pain, the pain specialist. Add in the powerful role placebo can play, since both are at least partially about subjective experience and reporting, and we’ve moved quite far away from any possibility of the purer kind of objectivity of an in vitro study of a virus, for instance. (Of course, the results of in vitro studies often don’t play out in the real world and real bodies as neatly as they do in test tubes – complexity is a bitch.) SBM is always our best attempt at objectivity using the scientific method; it’s not infallible, particularly when we’re applying it to areas that involve a high level of subjectivity in the first place. Now, I’m by no means implying that SBM is in any way useless and that we shouldn’t apply the scientific method to studying these areas. We just need to be aware of what we’re studying and the current limitations of SBM in some areas, and not ourselves become worshipers who have a blind faith in SBM.

Equally, when it comes to something like mental health, where we still have very incomplete knowledge and cultural and identity constructs are actually a part of the whole ball of wax and subjectivity is important, we need to be cautious about being reductive and acknowledge the nature of our classifications and the subjectivity inherent in some studies. After all, mental health is ultimately about the patient’s experience as well as their functionality. We need to be cautious about being reductive simply because we’re uncomfortable with uncertainty and we feel that science can provide us with certainty or a level of objectivity that simply doesn’t exist due to the very nature of what’s being measured and studied. This is in no way a dismissal of neurobiology, an area where we’ve increased understanding dramatically since we’ve had the tools to look at the brain (though there are still limitations to our tools and studying people in a clinical setting). It’s simply a call to recognize that when we study the brain and mind, particularly in terms of subjective experiences and people’s real lives and interactions with the world, that a bit of humility and recognition of our own cognitive quirks is necessary. (Including the very natural human desire for certainty and to affirm our world view.)

James Lind must have decided that the prior probability of the effectiveness of oranges and lemons in the treatment of scurvy was high enough to include it in his trial.

+++++++++++

Scurvy was regarded as a disease of putrefaction, and acids were deemed ‘preservative’ by analogy to their use in preservation of food. Wrong on both counts (acids in general don’t work, and scurvy is not a ‘disease of putrefaction’). The failure of vinegar in treatment of scurvy demonstrated the falsity of the theory. The choice of citrus fruits was accidental – they were known to be acidic, it’s just lucky that they also contained vitamin C. It goes to show that a wrong theory can contribute to the development and testing of an empirically observed treatment effect.

Before Lind’s experiments there was prior plausibility (theory about putrefaction and acid as preservative). After Lind’s experiment that plausibility disappeared (because acids in general were now known to be ineffective). Thus the only basis for endorsing citrus fruits for scurvy were the results from controlled empirical observation, until such time as a mechanism could be postulated.

Anyhow, if there were a level playing field for homeopathy and other treatments, not only in the sense of demanding RCT evidence but also in restricting use to clinical trials until such time as efficacy and safety were demonstrated, the problem would largely go away (as the profitable market in homeopathic remedies would no longer be available). So, when homeopaths demand “equal treatment”, I think we should embrace that: demonstrate in good trials that your product works for particular indications and you can make it available (either OTC or by prescription); otherwise you can’t.

Sometimes the science is plausible but the evidence says “No, it doesn’t work.” Then you go with the evidence.

Sometimes the evidence says “The treatment works”. The science says “The treatment shouldn’t work” or “We don’t know why the heck that would work.” In which case you need to either investigate the evidence to see if it is faulty or you need to investigate the science to see what mechanism is not considered or not discovered.

Is this correct?

Question. Within these systems, how does a doctor treat a patient with a condition that does not have good studies to demonstrate an “evidence based approach”?

# Plonit on 07 Feb 2010 at 10:02 am
“Scurvy was regarded as a disease of putrefaction, and acids were deemed ‘preservative’ by analogy to their use in preservation of food. Wrong on both counts (acids in general don’t work, and scurvy is not a ‘disease of putrefaction’). The failure of vinegar in treatment of scurvy demonstrated the falsity of the theory. The choice of citrus fruits was accidental – they were known to be acidic, it’s just lucky that they also contained vitamin C. It goes to show that a wrong theory can contribute to the development and testing of an empirically observed treatment effect.”

Perhaps there was anecdotal evidence as well. From this account, I would wonder if the link to look at citrus was color related. Green limes, etc. That is just a guess.

” The first clue to the treatment of the disease occurred during Jacques Cartier’s arrival in Newfoundland in 1536. The French explorer was advised by the native Indians to give his men, who were dying from an epidemic, a potion made from spruce tree needles. The foliage, rich in vitamin C, cured most members of Cartier’s crew.

Although numerous indications began to appear that linked scurvy with diet, this knowledge had to be rediscovered many times until the nineteenth century.”

Here are a couple of links regarding the controversies regarding DSM-V (which reflect ongoing controversies within psychiatry, including those regarding drug company influence in creating biomythologies – it doesn’t get much more “postmodern” than this – and definitions of mental disorders that aren’t purely biological).

One, by the head of the Task Force for DSM-IV, looks at why psychiatric diagnosis is not (yet) a purely objective and biologically based process.

“The DSM-V goal to effect a “paradigm shift” in psychiatric diagnosis is absurdly premature. Simply stated, descriptive psychiatric diagnosis does not now need and cannot support a paradigm shift. There can be no dramatic improvements in psychiatric diagnosis until we make a fundamental leap in our understanding of what causes mental disorders. The incredible recent advances in neuroscience, molecular biology, and brain imaging that have taught us so much about normal brain functioning are still not relevant to the clinical practicalities of everyday psychiatric diagnosis. The clearest evidence supporting this disappointing fact is that not even 1 biological test is ready for inclusion in the criteria sets for DSM-V.”

My point in posting this isn’t to denigrate neurobiological research in any way. I’m not only fascinated by cognitive science and neuroscience but greatly appreciative of how it’s blown some culturally based prejudices in psychiatry out of the water. However, I find the constant and somewhat naive equation of neuroscience with psychopharmacology problematic (whether done by non-expert medical professionals, the media, or anyone else), particularly in light of drug company influence on psychopharmacology and the strong-arm tactics used by the industry to try to manipulate academic research. This is in no way meant as an indictment of everyone working in this field. Psychopharmacology is important, too important to be run and corrupted by commercial interests that have their own “postmodern” and unscientific agenda.

Within these systems, how does a doctor treat a patient with a condition that does not have good studies to demonstrate an “evidence based approach”?

+++++++++++

You have to massively widen the participation in clinical trials.

Thus, your default should be that “X [undertested treatment] should not be done outside the context of clinical trials”, but you also make the proper testing of X part of the normal course of clinical practice. That is, any clinician treating a condition for which there is not good evidence surrounding treatment should have access to a database of clinical trials and should be able to enroll their patient in one. That should be the norm of how we ‘consume’ medicine: our experiences of treatments should contribute to systematic knowledge most of the time, rather than rarely. We need to get away from uncontrolled, undisseminated experiments (which is basically what the clinician is doing when working without good evidence) by levelling the ethical playing field. We should be as honest with patients about areas of uncertainty in ordinary practice as we are in formal clinical trials. Of course, that honesty would also educate patients on the need for clinical trials and incentivise their participation in them.

A nice description of how that might work in practice is included in Testing Treatments

Question. Within these systems, how does a doctor treat a patient with a condition that does not have good studies to demonstrate an “evidence based approach”?

+++++++++

You massively increase clinical trials and make them the normal response to areas of uncertainty in health care. A nice example of how this might work is included in Testing Treatments, Chapter 8, Blueprint for a Revolution

That’s an interesting idea, but I think there would be substantial difficulty in its implementation. The amount of effort required by the treating physician and/or his staff in a clinical trial is quite significant. It wouldn’t be just a trivial add-on to their normal treatment.

You could try to minimize the extra work for the doc, but I fear that would greatly increase the risk of bias and unreliable data, making the overall results uninterpretable.

michelleinmichigan,

I agree with your summing up. I also think we have to distinguish between a given treatment and the proposed mechanism for its action. The example of lemons and scurvy is a good example. The proposed mechanism (acidity) was wrong, but the treatment itself still worked.

Another good example may be the energy healing being discussed in the other thread. We can say with virtual certainty that Reiki practitioners can’t heal people by manipulating invisible energy fields with their hands. However, that doesn’t mean that Reiki sessions can’t possibly be beneficial by other mechanisms (positive social contact, reducing stress, improving the patient’s emotional state, and other sorts of ‘placebo’ effects).

As for how a doc treats a patient when there’s no well-established evidence based approach, she just has to do her best with whatever information is available to her. I don’t see what other choice there could be. Furthermore, I’d argue that such cases are where science-based medicine is most critical. If there’s no RCT-proven treatment, I’d say that the most scientifically plausible treatment is the way to go.

The amount of effort required by the treating physician and/or his staff in a clinical trial is quite significant. It wouldn’t be just a trivial add-on to their normal treatment.

++++++++

The alternative is to enroll people in uncontrolled, undocumented, undisseminated trials (aka “standard of care” based on inadequate evidence), usually without their knowledge or consent – and with no benefit to the wider population. Therefore, the additional effort required to give patients the opportunity to enroll in clinical trials to address areas of uncertainty in healthcare is both worthwhile and ethical. The issue is then how to incentivise participation in clinical trials.

We could, I suppose, shake our heads at those doctors who still want to believe that vertebroplasty works for the pain from osteoporotic vertebral fractures.

Or, we could wonder why those subjected to sham vertebroplasty had a mean 4.3 point reduction in pain scores on a 10 point scale at three days, and a 50 per cent reduction in opioid use at one month. Remember these patients supposedly had uncontrolled pain of a mean 16 weeks duration!

And those are mean scores! On the data supplied, some patients reported a 7.7 point reduction in pain at three days.

Are, then, the vertebroplasty doctors partly right? “It works”, but not in the way they thought?

Even if certain well-known illusions are contributing to the results, there is plenty of room herein for clinically meaningful placebo benefits in an otherwise disabling condition.

Vertebroplasty, probably even in any sham form, is too invasive to be employed for its placebo benefits. But a now considerable body of data like this might warrant a softer reaction generally to medical activities that are not well proven, just in case they are bringing genuine benefit to a few suffering people.

One of the worst features of EBM may yet prove to be the tyranny it has had over much everyday medical practice at a time when there are a great many conditions where we lack entirely effective and safe treatments and some patients need any help they can get.

We also take it for granted that the placebo controlled trial can decide whether treatments “work” or not. That may only be true when there is no subjective element to the outcomes being measured. When there is such an element clinical studies can have a different meaning relating to HOW treatments may work, i.e. whether they have intrinsic medical activity.

It is a pity there was not a waiting list (no treatment) arm to the NEJM study.

One of the worst features of EBM may yet prove to be the tyranny it has had over much everyday medical practice at a time when there are a great many conditions where we lack entirely effective and safe treatments and some patients need any help they can get.

+++++++++

Is using a potentially ineffective, unsafe treatment helping a patient? It could be, but it could also be doing harm. I would say this is only ethical if you are honest about the uncertainty surrounding the treatment.

pmoran – “We also take it for granted that the placebo controlled trial can decide whether treatments “work” or not. That may only be true when there is no subjective element to the outcomes being measured. When there is such an element clinical studies can have a different meaning relating to HOW treatments may work, i.e. whether they have intrinsic medical activity.”

Yes, well said. I’m starting to think that we may need another term to use alongside “placebo effect”. There seems to be a “social effect” that’s a part of many CAM treatments, and also many mental health interventions by doctors. The mere act of being listened to or cared for – being treated – seems to offer many people some measure of relief. And then there’s the interesting idea that some people are prone to the placebo effect while others are immune to it (certainly a confounding factor for studies if it is an idea that holds weight).

Plonit – “Is using a potentially ineffective, unsafe treatment helping a patient? It could be, but it could also be doing harm. I would say this is only ethical if you are honest about the uncertainty surrounding the treatment.”

You might be surprised to know that some chronic pain patients ask for a treatment even when they know it’s a placebo and the evidence doesn’t support it working. (Talking outside of CAM here; obviously we’re aware that many people seek out CAM treatments that are purely placebos.) Clearly high-risk procedures or surgeries are unethical to perform, but there are plenty of other dramatic rituals that can serve the same purpose. Arthroscopic knee surgery was another area where the actual surgery was shown to be no better than placebo surgery. The placebo effect isn’t just active in woo or CAM; it’s also active in regular medicine.

Plonit, I don’t know the answers, although I suppose I sometimes adopt tentative stances for the sake of debate.

I do sense that placebo potential is critical to what constitutes acceptable medical practice in societies with pluralist applications of medicine, and especially in areas where suffering outlasts the available EBM-endorsed methods.

If placebo treatments are benefiting some people, and are on offer, and in widespread use, then we are faced with a standard cost/risk/benefit judgment of the type that we insist applies whenever anyone points out the risks of many drugs.

It is not clear to me where “but it only works AS placebo” fits into this kind of scenario. I also suspect medicine is far too complex a field and blurry at its edges for absolutes to apply.

We have always known that medical practice is something more than the mere application of science, and we may now be very close to scientifically measuring and characterising those aspects that were previously often described as the “art of medicine” — or at least getting an impression of its strength and scope.

pmoran – “We also take it for granted that the placebo controlled trial can decide whether treatments “work” or not. That may only be true when there is no subjective element to the outcomes being measured. When there is such an element clinical studies can have a different meaning relating to HOW treatments may work, i.e. whether they have intrinsic medical activity.”

Well said. It appears that the subjective and experiential aspects of specialties like chronic pain and psychiatry – which are both specialties that have been considered slightly suspect by doctors and medical researchers in other fields historically – still make many people uncomfortable. After all, both pain and depression are all in your head! On the other hand, a broken arm or cancer is indisputable and can be proven to be real or fake.

The thing is, even if we can and do develop purely biological tests for depression or pain, the measure of effectiveness of treatment will continue to have aspects of subjectivity, because both chronic pain and mental health are always going to be at least partially about the patient’s experience (the other aspect is their functionality, which is a social and contextual measure, so also not as simple as a broken arm).

I think that SBM and EBM are complementary and iterative processes, where SBM tries to interpret, generalize, and apply EBM findings and EBM tests the assumptions and practices of SBM.

EBM tests specific questions and tries to determine how the answers are distributed across different populations with the meta-analysis of multiple RCTs being one of the best ways of answering specific questions. SBM tries to understand those answers in the context of basic science and in a more generalized environment than EBM’s question. The experience of SBM trying to understand EBM answers leads to new and more specific EBM questions.

In a practical sense EBM answers population questions and SBM tries to apply those answers to a given patient or situation.

I have responded to RCT or review results at times by changing my practice, and at other times by shaking my head in wonder at how a given study’s findings could possibly be real.

“We could, I suppose, shake our heads at those doctors who still want to believe that vertebroplasty works for the pain from osteoporotic vertebral fractures.

Or, we could wonder why those subjected to sham vertebroplasty had a mean 4.3 point reduction in pain scores on a 10 point scale at three days, and a 50 per cent reduction in opioid use at one month. Remember these patients supposedly had uncontrolled pain of a mean 16 weeks duration! ”

I do wonder. And then I wonder, too, whether acupuncture as placebo is less invasive and cheaper (in general).

Although I’m against CAM, I can’t say what I would do if I were in chronic pain that medical science was not able to help – I can’t imagine anything worse. My husband had a friend (in his 30s) who committed suicide because of chronic, can’t-get-out-of-bed back pain.

Although I’ve seen evidence both ways, I also wonder whether a treatment would work as well if you’re honest with the patient that it’s a placebo. Some initial trials of both acupuncture and vertebroplasty showed benefits, presumably because there was no adequate sham placebo. Patients weren’t fully blinded.

One argument is about scientific validity. This is pure science, and difficult to achieve even in physical sciences, much less biologic sciences. The RCT is the most rigorous method in medical science for achieving scientific validity. However, scientific validity is not necessary for all deductions.

The second argument is about how we translate scientific observations (meaning reproducible observations by independent observers, but without experimental manipulation to show cause and effect) and RCTs into a medical decision. Making a decision for a patient is more like predicting the weather than determining scientific validity by sampling statistics. For that decision, the doctor must utilize information from various sources (including the patient’s values) to suggest the optimum intervention. RCTs randomize individual factors which may have some bearing on a priori probabilities. In formulating a strategy for a patient, you cannot ignore those factors. As you consider more and more individual factors, the patient becomes unique, and the decision then becomes clinical art rather than science. Basically, the good clinician can formulate a fuzzy value for the a priori probability and make the best decision from that judgment. There are a few instances in medicine where accurate a priori probabilities can be obtained from history, laboratory values, and physical exam findings (appendicitis is one, but the reliability of the observer remains an issue). For most cases the a priori probabilities are a fuzzy estimate based on the clinician’s experience. Good clinicians usually beat any Bayesian inference based on available scientific data.
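The updating JMB describes is, at bottom, Bayes’ theorem. A minimal sketch (in Python; the prior, sensitivity, and specificity below are entirely hypothetical numbers, chosen only for illustration) of how a clinician’s a priori probability of a condition is revised by one test result:

```python
# Bayes' theorem applied to a single diagnostic test result.
# All numbers below are hypothetical, chosen only for illustration.

def posterior(prior, sensitivity, specificity, test_positive=True):
    """Posterior probability of disease after one test result."""
    if test_positive:
        p_result_disease = sensitivity        # P(+ | disease)
        p_result_healthy = 1 - specificity    # P(+ | no disease)
    else:
        p_result_disease = 1 - sensitivity    # P(- | disease)
        p_result_healthy = specificity        # P(- | no disease)
    numerator = prior * p_result_disease
    denominator = numerator + (1 - prior) * p_result_healthy
    return numerator / denominator

# A clinician's "fuzzy" prior of 20%, updated by a fairly good test:
print(posterior(prior=0.20, sensitivity=0.90, specificity=0.85))
# a positive result raises 0.20 to about 0.60; a negative one drops it below 0.03
```

The catch, of course, is exactly the one JMB raises: the formula is only as good as the prior fed into it, and for most patients that prior is a fuzzy clinical estimate rather than a measured quantity.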

So for some issues (that have undergone multiple RCTs), we may insist on scientific validity, such as recommendations for screening mammography (scientific validity leads to the most effective screening strategy, but not the most efficient … that is Evidence Informed Medicine).

For other issues (such as when making a decision about what to recommend to a patient), the distinction between EBM and SBM becomes academic. Both observations and scientifically validated hypotheses must be used in the decision.

“Good clinicians usually beat any Bayesian inference based on available scientific data.” I’m skeptical, and that claim came right after describing a good clinician as, I thought, essentially trying to be a perfect Bayesian.
Do I think most clinicians are good at determining priors, determining the weight of evidence, and determining the loss function? No, even though it is one of their main tasks. Not a good situation.

Zoe – “Although I’ve seen evidence both ways, I also wonder whether a treatment would work as well if you’re honest with the patient that it’s a placebo.”

The answer is ‘yes’. I doubt it would for everyone, but it certainly does for some people. You can be quite honest with a patient that something doesn’t work, but if they believe it works for them, or will work, it often does. CAM is a perfect example of this. I suspect that a placebo probably wouldn’t work – or is at least less likely to work – if someone has prior knowledge that it’s a placebo and no prior affirming experience. Subjective experiences, being what they are, are quite easy to manipulate. However, from an ethical standpoint, I consider it preferable to teach people how to change their own subjective experiences rather than making them reliant upon someone else (be it a CAM or medical practitioner).

I’m increasingly coming to believe we also need to consider what I’d call a “social effect”. By this I mean the relief people experience from being taken care of, from having compassionate contact with another human being who takes their suffering seriously and is kind. This is particularly relevant in areas of medical practice like depression and chronic pain, where those suffering often get dismissed as malingerers or not taken seriously (and where the people around them may not provide any emotional support, or may even be part of the problem).

pmoran said “One of the worst features of EBM may yet prove to be the tyranny it has had over much everyday medical practice at a time when there are a great many conditions where we lack entirely effective and safe treatments and some patients need any help they can get.”

+++++++++

plonit said “Is using a potentially ineffective, unsafe treatment helping a patient? It could be, but it could also be doing harm. I would say this is only ethical if you are honest about the uncertainty surrounding the treatment.”

plonit – I think you are jumping to the worst-case scenario. There are a lot of variables that could lead to different treatment plans. Some examples:

Someone is found to have a congenital defect. There is no strong evidence that the defect raises the risk of a future health problem, BUT the studies are weak and contradictory. It is scientifically plausible that it may increase risk. How should that be presented to the patient (or patient’s parent)? Should preventative measures be taken, and what is the balance of cost, risk, and benefit in this case?

You have a patient with a condition with no EB treatment. There is a well-tested drug, physical therapy, or occupational therapy (government-agency approved for another condition) that suggests itself as plausible in helping your patient. How should this be presented to the patient? What are your recommendations?

You have a patient with symptoms that you cannot match to a condition. Extensive test results show they have a particular degenerative health condition, but not at a level you would consider treating yet. The patient’s symptoms are not tied to their condition by any of the available studies, but it is scientifically plausible that they could be, and there is anecdotal evidence. Do you recommend treating the condition, waiting, or seeking other avenues?

I think JMB’s response answers these questions best, BUT my question is then: are doctors and other health care providers trained in this process? Because when working with medical professionals, I get a variety of different approaches, ranging from JMB’s approach to an “if there is not good evidence then we tell the patient there is NO evidence, and if we have time we might mention it’s not been studied well” approach. And sadly, the latter are often the people who tell me they are following an evidence-based approach. (That is purely anecdotal, of course.)

And in the States how does this all tie into the push for EBM that I see talked about in relation to health reform here?

I do like the idea of clinical trials that you posted in your “Revolution” link, although, not having any background in science, I find it hard to foresee how they would play out. I like the idea that it could generate studies based on the needs of patients and doctors rather than the rather academic or profit-motivated approach that we seem to have now (that is my uninformed opinion). But, once again, how do you deal with rare conditions that are not going to generate a lot of subjects even in a large population area?

Mckenzievmd, “…I can’t quite get over the sense that SBM replaces the weakness of EBM with a different weakness.”

I agree. I don’t think the problem is with EBM or SBM but with human interpretations which in the extreme come down to mindless adherence to “rules” or “recipes” either because of the practitioners’ ignorance, lack of intelligence, laziness or fear of the risk of making a mistake.

As long as medicine consists of humans treating humans, there will always be serious problems with interpreting and implementing even the very best systems developed to obtain and evaluate evidence and to apply conclusions to real patients. Although I have not seen the way many practitioners do this, my guess is that the major problem with EBM is that some practitioners scrupulously base decisions on the latest study regardless of its quality or of whether it has been consistently replicated; and I suspect that the major problem with SBM will be that some practitioners reject very good evidence because they believe it contradicts scientific theories.

With laypeople, what is scary is the total lack of education in so many who think that a PC and an Internet connection make them competent medical researchers when they don’t have a clue about the most basic things: the differing reliability of kinds of studies such as epidemiological ones and RCTs, the difference between in vitro and in vivo, the need to read the entire article and not just the abstract, the need to look at the entire body of evidence rather than just a part of it, or the fact that most researchers don’t publish studies demonstrating things that don’t work (although most in their field usually hear about them). That can only be overcome by intense education.

Then there is the way the mainstream media uses medical “news” to fill space although most of the “news” is usually a report on a controversial or preliminary study from which nothing can be concluded, something most readers fail to notice.

Regarding claims made, it is my belief that those making them are the ones who have to present solid evidence demonstrating that they are accurate, or at the very least plausible, before anyone else should take them seriously. If Cochrane states that “more studies are needed” when in fact the evidence indicates that there are no good studies indicating that something works, that there is no reason to believe that it might, and every reason to believe that it won’t, then that should simply be viewed as a limitation of Cochrane, not a reason that EBM should be replaced with SBM.

And practitioners who use drugs and therapies rationally (yes, I know that is a relative term, but what isn’t, really?) to treat diseases for which there is presently no good evidence for or against their use should not be ridiculed, as long as they admit to themselves and their patients that there is no solid evidence and that they are merely using their best clinical judgment until something better comes along.

It really boils down to the difference between the science and practice of medicine in the real world.

Mine is that SBM does not remove EBM (clinical trials) from consideration, it merely introduces some much-needed street savvy into its interpretation. It also accepts that lesser evidence often has to suffice in practical medicine, but not usually in opposition to high quality RCTs.

SBM judges where the probabilities lie once ALL the available evidence has been weighed up, including that relating to the real-world reliability of the RCT itself.

EBM, OTOH, operates on a simple scale, using RCTs as the only data and also making the very big assumption that its reviewers know how to eliminate all invalid studies. This is why the Cochrane collaboration and others can regard the absence of solid RCT data for homeopathic remedies as leaving its many questions in neutral territory “needing further research”.

So I don’t see how SBM can be weaker, or that it is a matter of choice for different contexts, if the role of doctors is to always apply the best evidence available.

OTOH I do suspect that there is always an unspoken “for this particular practical purpose” when considering the *adequacy* of evidence within medicine practice. There will always be extreme contexts where normally weak kinds of evidence can suffice for action or for the prompting of further research, if the stakes are high enough.

Mine is that SBM does not remove EBM (clinical trials) from consideration, it merely introduces some much-needed street savvy into its interpretation. It also accepts that lesser evidence often has to suffice in practical medicine, but not usually in opposition to high quality RCTs.

SBM judges where the probabilities lie once ALL the available evidence has been weighed up, including that relating to the real-world reliability of the RCT itself.

That’s more or less the way I view SBM. It’s SBM that takes into account all the available scientific evidence, including basic science.

EBM, OTOH, operates on a simple scale, using RCTs as the only data and also making the very big assumption that its reviewers know how to eliminate all invalid studies. This is why the Cochrane collaboration and others can regard the absence of solid RCT data for homeopathic remedies as leaving its many questions in neutral territory “needing further research”.

Indeed.

Although EBM bills itself as taking into account all the evidence, in practice EBM in essence ignores basic science, relegating it to being no more important than “expert opinion.” What this means is that the utter ridiculousness of homeopathy on a purely scientific level, where for homeopathy to be true a whole lot of very well-supported and well-established science from multiple disciplines (physics, chemistry, biology, etc.) would have to be not just wrong but spectacularly wrong, bothers Cochrane EBM mavens not at all. It affects their conclusions not at all. All that matters is RCTs, the protestations of EBM mavens that EBM takes into account all the evidence notwithstanding. If RCTs for a treatment, even homeopathy, don’t exist, Cochrane concludes that “more research is needed” or “high quality RCT evidence is lacking” even when multiple branches of science cry out that no more research is needed to show that homeopathy is about as close to impossible as science can show.

SBM is what EBM could be if it actually did take into account all available evidence.

1. “Someone is found to have a congenital defect. There is no strong evidence that the defect raises the risk of a future health problem, BUT the studies are weak and contradictory. It is scientifically plausible that it may increase risk. How should that be presented to the patient (or patient’s parent)? Should preventative measures be taken, and what is the balance of cost, risk, and benefit in this case?”

Exactly as you’ve presented it. Your child has congenital condition X. Some people think X is associated with Y, but the evidence is weak and contradictory (admit uncertainty). Here is the thinking behind possible preventative interventions, but we don’t know the benefits (because we are not sure of the association) – again, admit uncertainty. There is a current study on a) the natural history of people with condition X, and b) interventions to prevent associated health problem Y. Would you like your child to participate in one or both of these? Followed by a discussion of what participation would mean (regular follow-up, possibly enrolling in a clinical trial) and what the benefits (and risks) would be.

2. “You have a patient with a condition with no EB treatment. There is a well-tested drug, physical therapy, or occupational therapy (government-agency approved for another condition) that suggests itself as plausible in helping your patient. How should this be presented to the patient? What are your recommendations?”

Again, exactly as you’ve presented. You have condition X. At present there is uncertainty about how to treat X. Drug Z works for another condition and has a mechanism that leads us to believe it may be helpful also for your condition. There is a current study of Drug Z for treatment of condition X. Would you like to enroll in a clinical trial of Drug Z for condition X?

3. “You have a patient with symptoms that you cannot match to a condition. Extensive test results show they have a particular degenerative health condition, but not at a level you would consider treating yet. The patient’s symptoms are not tied to their condition by any of the available studies, but it is scientifically plausible that they could be, and there is anecdotal evidence. Do you recommend treating the condition, waiting, or seeking other avenues?”

Exactly as you’ve stated. You have condition A, but seemingly at a very early stage. There are studies that show benefit from treatment B in later stages of condition A, but we wouldn’t normally consider treating yet, as there is no evidence of efficacy at this early stage. However, there is a current study that addresses the timing of intervention with treatment B for condition A. Would you like to participate?

or

There is no current trial in progress, but we can upload the clinical question and see if there is a trial planned in the near future.

Note that a study doesn’t have to be an RCT; it could simply be surveillance of the natural history of a disease (in the absence of a treatment). All of the above depend upon a) admission of uncertainty about the benefits/risks of treatments and b) a systematic way of making study participation accessible to patients and their clinicians (ideally a national or international register of trials in progress, with criteria for enrollment, and a way of putting clinical questions not addressed by current trials out in the open so that researchers can address themselves to these questions).

In a small way, guidelines produced by NICE already do set out research recommendations for under-researched areas, and all clinical guidelines should do this.

I don’t know how any of it relates to healthcare reform in the US (I’m in another country) but I would imagine that the scenarios I outline above are only really feasible in an integrated, national health system.

Dr. Gorski, “Although EBM bills itself as taking into account all the evidence, in practice EBM in essence ignores basic science, relegating it to being no more important than ‘expert opinion.'”

That is the point I was trying to make. In practice the theory upon which EBM is based got lost and I suspect that the same thing will happen with SBM because the problem is not the theory or the method used but educating people to understand it and to use it correctly. While we can and should always try to come up with better methods, we are always going to have to deal with the imperfections of humans using them.

“I don’t know how any of it relates to healthcare reform in the US (I’m in another country) but I would imagine that the scenarios I outline above are only really feasible in an integrated, national health system.”

Sadly, that is probably why I was thinking “Sounds like a good idea, but how would that work out with our doctors and insurance?”

That is the point I was trying to make. In practice the theory upon which EBM is based got lost and I suspect that the same thing will happen with SBM because the problem is not the theory or the method used but educating people to understand it and to use it correctly. While we can and should always try to come up with better methods, we are always going to have to deal with the imperfections of humans using them.

No, I’m afraid you’re mistaken here, at least as far as EBM goes. EBM by design when it lists levels of evidence relegates basic science and expert opinion to the very lowest level, beneath all forms of clinical evidence, even case series. This is not a “misapplication”; it is inherent in the design of EBM in its current form that science is listed as far below any form of clinical evidence.

Whether a similar fate could befall SBM, well, it’s certainly possible, but we have to start somewhere in correcting the shortcomings of EBM, and I consider the risk worth taking, given how EBM has lent legitimacy to quackery by refusing to consider basic scientific principles that are very well established in other disciplines.

“If RCTs for a treatment, even homeopathy, don’t exist, Cochrane concludes that “more research is needed” or “high quality RCT evidence is lacking” even when multiple branches of science cry out that no more research is needed to show that homeopathy is about as close to impossible as science can show.”

If RCTs don’t exist, then other evidence SHOULD be considered, and EBM acknowledges this. Isn’t it beyond the scope of Cochrane to consider anything but the studies in the database? IOW, does Cochrane by itself equal EBM?

Also, what mechanism do you propose if multiple well designed RCTs point to the efficacy of a treatment, but science doesn’t back up the studies? Chance? Is .05 not enough?

The other question is: if evidence-based medicine considers evidence above science, does science-based medicine consider science above evidence? Is there a hierarchy in SBM? If 40 good RCTs on homeopathy showed a positive effect, would you then adopt homeopathy into mainstream medicine, even if you didn’t know the mechanism? (hypothetical question, of course!)

If 40 good RCTs on homeopathy showed a positive effect, would you then adopt homeopathy into mainstream medicine, even if you didn’t know the mechanism? (hypothetical question, of course!)

Hypothetical indeed.

I’ll say what I always say when asked this very question. (Surely you don’t think this is the first time I’ve heard this question, do you?) I would begin to consider that homeopathy might have something to it if an amount of evidence were amassed that is at least on the same order of magnitude as the huge amount of high-quality evidence from multiple basic sciences showing that homeopathy can’t work. Trials that show a variable effect barely greater than that of a placebo aren’t nearly enough. The effect would have to be dramatic and unquestionable, as well as replicated multiple times by multiple groups using a variety of methods. After all, extraordinary claims require extraordinary evidence. If such extraordinary evidence were produced, then I would reconsider. But the evidence would have to be extraordinary and as compelling as the evidence that shows that homeopathy doesn’t work.

I choose homeopathy as an example of the problems with EBM because it is so ridiculous from a scientific standpoint and because everything we know about basic science says that it can’t work the way homeopaths claim. No one’s claiming that there won’t be gray areas or treatments that appear to work in RCTs but whose mechanism we don’t know. The difference is that such treatments can always be assumed to work through biochemical interactions that we can potentially understand and that are at least orders of magnitude more plausible than homeopathy, which basically postulates a magical memory of water that has never been observed and defies what we know about water.

That’s why we’ve discussed ideas for a prior plausibility scale. Homeopathy would score very, very low on such a scale, indisputably. A lot of other modalities we could argue about where on such a scale they would fall. In any case, no one’s saying to downplay RCTs or to throw them out. SBM is considering all scientific evidence, not just RCTs, and, taking all the evidence into account, we can reasonably say that homeopathy is not worth further study unless someone can produce evidence as compelling as what I describe above in my example.

EBM by design when it lists levels of evidence relegates basic science and expert opinion to the very lowest level, beneath all forms of clinical evidence, even case series. This is not a “misapplication”; it is inherent in the design of EBM in its current form that science is listed as far below any form of clinical evidence.

+++++++++++

Only if you think that EBM is some kind of unalterable TM product of Sackett’s Oxford CEBM – which is how you are treating it here. But practical applications of scientific principles are subject to development and change. EBM levels of evidence are quite commonly taught as a continuum of “most subject to systematic and/or random bias” through “least subject to bias” and without ranking of the basic science in that hierarchy (an example was given above – I could give more). Obviously, no one thinks that one can make the leap from basic science principles to applications without some in vivo trials or observations of those applied principles in people, so it’s not clear what the beef is here really – other than that highly implausible treatments get too much of a free pass (rather than desiring that untested but scientifically plausible treatments get taken up more widely than they already do).

It seems to me that one of the pressing issues is not the performance of trials on highly implausible treatments, but the use of public funds for such trials and the misrepresentation of the results. There is more than one way to skin a cat, and a regulatory level playing field for treatments would sort out many of these issues. Firstly, prioritize plausible treatments for public funding – obviously you’ll need a mechanism for that, and it will surely be criticized, but hey ho, that’s life. Secondly, demand that safety and efficacy of treatments are demonstrated, with trials paid for by the purveyors in the same way as many other medications. Thirdly, register ALL trials and demand publication of negative results to avoid publication bias (that goes for everyone). Fourthly, only license for prescription/sale those medications that have been found to work. All other medications to be used only in the context of clinical trials (which incentivises the system of widespread participation in clinical trials, for all interventions, that I describe upthread).

All that’s required is to accept alternative medicine on the same terms as real medicine and demand the same standards of it. It would wipe out homeopathy in the blink of an eye, insofar as they wouldn’t be able to market their product until it was shown to work, and since it can’t actually work… there is no market, right? Unless you think it can be shown to work for a specific indication, but that would require overturning all scientific principles, so…

@Fifi: the “social effect” you refer to was first (and, in my view, best) articulated in “Persuasion and Healing” by Frank and Frank.

It’s the first book (1st edition from the mid-sixties) to look at common factors across all psychotherapeutic modalities, whether formally construed as such or with the potential to tap into the relevant similarities.

“I think JMB’s response answers these questions best, BUT my question is then, are doctors and other health care providers trained in this process? Because when working with medical professionals, I get a variety of different approaches, ranging from JMB’s approach to a “if there is not good evidence then we tell the patient there is NO evidence and if we have time we might mention it’s not been studied well” approach. And sadly the latter are often the people who tell me they are following an evidence based approach. (that is purely anecdotal, of course).

And in the States how does this all tie into the push for EBM that I see talked about in relation to health reform here?”

Thank you for the compliment. In answer to your question about the training of doctors, I cannot answer definitively for the current medical curriculum, but 25 years ago when I was in academic medicine, all students were trained in the basics of statistical decision making (typically in a brief course of epidemiology), and more extensively on assessment of medical literature (usually in so-called journal clubs). However, there is a difference between answering questions on a test correctly, and applying the information in practice.

My research interest was medical decision making. From studies in medical decision making, you could always find one or more clinicians who could outperform any probabilistic decision analysis for an individual patient, or a statistical classification method for a group of patients. How common are those physicians? Depending on the specific problem, somewhere between 10 and 60% of physicians (a guess, not a measured figure) will do better using the expanded information of SBM in a decision than the algorithmic approach of EBM. What the algorithms of EBM do improve is the performance of the bottom half of physicians. When algorithmic strategies of EBM are applied in hospitals, overall care tends to improve because of the lift given to less skilled clinicians. It does not improve the outcomes of a capable clinician.

Be aware that being a capable clinician requires the ability to make observations and solve problems. Very smart people may have weaknesses in these areas. I remember a medical school valedictorian who was well versed in the medical literature and very articulate, but was weak in problem solving. There are numerous prominent medical researchers who are lousy clinicians.

The emphasis on EBM in healthcare reform is a double-edged sword. Application of EBM can improve the performance of the less skilled clinicians, but could seriously hamper the more capable clinicians. A really good application of EBM would be to provide algorithms for care in the emergency room that would significantly decrease the expense of defensive medicine.

What is a very ominous development is that the application of EBM can be used as a guise of science for rationing medical care. You cannot find very many randomized clinical trials of medical interventions past the age of 75. Unless such trials are organized with better measures of physiologic age (how long the subject would be expected to live) instead of chronologic age, RCTs with an endpoint of death will not show successful intervention. Basically all treatments except for supportive care could be withdrawn in patients over 75 because of the ‘lack of scientific evidence’. A major reason foreign healthcare systems are more cost effective than the United States is because of the decreased care offered to elderly patients. The life expectancy of an 80-year-old person in the United States is better than in Europe, even though the life expectancy at birth is lower in the United States than in Europe.

The current healthcare reform bills establish a federal EBM board that is independent of Congress (like the Federal Reserve Board), but is empowered to set rules for private health insurance, Medicare/Medicaid, and DOD healthcare systems (like Veterans Administration hospitals). When the EBM board slashes medical care for the elderly, then we will see $500 billion saved in ten years from Medicare. Of course it is based on science (in the form of EBM). I guess when they claim it is science, they can deflect the criticism that they are rationing healthcare (or represent a death panel).

Another ominous development is the way EBM advisory panels have strayed from the original empirical methods of EBM (that are pure empirical science and probability theory) and are incorporating decision analysis tools that introduce value judgments (not Bayesian strategy). Since most doctors are not familiar with decision analysis, they think that it still represents EBM. The classic example is the recent recommendations of the USPSTF. If you read their articles listed on the webpage for colorectal cancer screening, they even state that they have moved beyond EBM to “Evidence Informed Medicine”. That overlooked step beyond EBM allowed them to recommend the most cost effective mammogram screening strategy, similar to what is used in Europe (with considerable cost savings).

EBM can weed out ineffective interventions, but because of problems with statistical power inherent in biological science, it may also terminate some effective treatments.
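The statistical-power problem mentioned above can be illustrated with a small Monte Carlo sketch (effect sizes and trial sizes are made-up assumptions, not data from any real trial): an underpowered two-arm comparison will routinely miss a genuine effect, which is how an effective treatment can end up labeled “no evidence of benefit.”

```python
import random

def detection_rate(n, effect=0.2, sims=2000, seed=1):
    """Fraction of simulated two-arm trials (n subjects per arm) in which a
    real effect of the given size reaches one-sided z > 1.645 (alpha = 0.05).
    Both arms draw from unit-variance normal distributions."""
    random.seed(seed)
    hits = 0
    for _ in range(sims):
        control = [random.gauss(0.0, 1.0) for _ in range(n)]
        treated = [random.gauss(effect, 1.0) for _ in range(n)]
        diff = sum(treated) / n - sum(control) / n
        se = (2.0 / n) ** 0.5  # standard error of the difference, known variance
        if diff / se > 1.645:
            hits += 1
    return hits / sims

print(detection_rate(n=25))    # small trial: the real effect is usually missed
print(detection_rate(n=400))   # larger trial: the same effect is usually found
```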

So EBM has pluses and minuses. SBM may not be as pure in theory as EBM, but in practical application it is usually more accurate. But EBM is less dependent on the skill of the healthcare provider.

In JMB’s comment, EBM is being misrepresented as simply following an algorithm. It simply isn’t. It is applying the available evidence to the clinical situation, and always demands use of clinical judgment. He is also confusing the questions about efficacy (does it work) which EBM seeks to answer, and cost-effectiveness (how should health care resources be used).

“In JMB’s comment, EBM is being misrepresented as simply following an algorithm. It simply isn’t. It is applying the available evidence to the clinical situation, and always demands use of clinical judgment. He is also confusing the questions about efficacy (does it work) which EBM seeks to answer, and cost-effectiveness (how should health care resources be used).”

Perhaps the misunderstanding is in the location difference.

Plonit, I’m not sure if you know that in the states doctors and their office staff spend a significant amount of time proving that care is needed to insurance companies. In fact, most patients assume it is the doctor’s office’s job to convince the insurance companies to pay for treatment.

Right now because of the financial crisis my insurance companies have basically doubled the number of denials of payment I normally receive (same policy). So for us, regardless of the clinician’s approach, the kind of EBM an insurance company might use controls the care we receive (or can afford). Are the insurance companies going to allow for scientific plausibility or not?

I wonder if/how this is different from a national healthcare system (one payer system)? How does the payment plan work with (or against) doctor recommendations?

Within the states I have heard the term EBM most often tied to talk of finding cost savings in health care and improving health outcomes. So now I am wondering if perhaps EBM comes with less baggage outside the U.S.

In whatever system you have, it’s reasonable not to want to pay for ineffective, or even harmful, treatments.

Patients may prefer doctors who provide ineffective or harmful treatments over doctors who decline to treat. The classic demand for antibiotics to treat viral illness would be a good example of that (doctors who don’t write desired prescriptions are disliked by patients). In a free market system, that may incentivise prescribing of ineffective treatments (since both patients and doctors prefer to do something rather than do nothing, even when doing nothing may be the rational thing) and the existence of a countervailing pressure to withhold ineffective or harmful treatments is probably a good thing.

The issue surely comes when the external pressure is to withhold an effective but expensive treatment. I think we would all acknowledge that there must logically be a point at which the price of a treatment is too high, if we are also committed to any kind of health equity. Of course, defining that point is always difficult – and individuals will always identify the correct limit as somewhat above the cost of meeting their own health needs, whatever they happen to be. There are various methods you can use to deal with that issue – and they are used in the US (as I understand it, Medicare sets a limit on the treatment funded) as much as they are anywhere else.

As for how it ought to work in NHS style systems, there’s a nice anecdote from Archie Cochrane that illustrates this. As a medical student, Cochrane attended a rally to campaign for a national health service (this was in the 1930s, prior to the creation of the NHS).

“I decided to go alone with my own banner….After considerable thought I wrote out my slogan: All effective treatment must be free.”

Of course, no treatment is really “free” – what Cochrane means here is free at the point of use, and paid for by state funding ultimately derived from both personal and corporate taxation.

Regardless of how you pay for your health services, does anyone really want to fund ineffective, and possibly also harmful, treatments? That puts up your taxes unnecessarily in an NHS style system, or leaves less money in the pot for effective treatments. That puts up your insurance premiums in a US style system and/or leaves less money in the insurance pot for effective treatments.

And of course, in its time the NHS has funded large numbers of ineffective treatments, but the objective is not to do so. I think this is guided by the understanding that the resources available are finite. If we put a lot of resources into ineffective treatments, that’s money we could have used funding a treatment that does work. Or alternatively could put into education, social care, transport infrastructure, etc….

What are the down sides in practice? Sometimes we stop funding “less effective” (rather than ineffective) treatments, which people may prefer for a variety of reasons. At some point (prior to wanting sterilization!) my Primary Care Trust stopped funding female sterilization in most circumstances. For couples, vasectomy is a much more effective form of permanent contraception (lifetime failure rate is lower, fewer complications, and cheaper). Female sterilization is not even the most effective female contraception anymore, now we have a good range of highly effective LARCs. Of course, female sterilization is still more effective than no contraception, but for a national health service free at the point of use, it is not a very cost-effective method. And it has more harms than alternatives. So, I understand the rationale of the Primary Care Trust, and I’m also personally a bit pissed off.

I can take my pissed-offness in a number of directions. I can realize that this is the one unmet health care need in my life so far and pay for it privately (< £1000) and get over it. I can talk it over with my husband, realize that the Primary Care Trust has rationality on its side and he chooses vasectomy. I can lobby the Primary Care Trust to change their policy with regard to female sterilization or to make an exception in my case. (If someone decided to take them to law using equal opportunities legislation in this case, they might find it hard to defend).

The Primary Care Trust is answerable to the local community it serves, but hopefully people also understand that their individual needs must be balanced against the total needs of the community. One female sterilization = three vasectomies, or thereabouts. If couples have vasectomy in preference to the more dangerous, less effective tubal ligation, that's money that can be more usefully deployed. I didn't sue my Primary Care Trust under the equal opportunities legislation, and my husband did have a vasectomy.

I realize that outside the UK this way of thinking may seem like Communism, being part of the Borg or worse.

Plonit, I’m not sure if you know that in the states doctors and their office staff spend a significant amount of time proving that care is needed to insurance companies. In fact, most patients assume it is the doctor’s office’s job to convince the insurance companies to pay for treatment.

++++++++++++++

This is a very inefficient use of resources (paying two administrators to wrangle about this issue). Wouldn’t it work better if insurers simply said that they would pay for all investigations and treatments deemed effective by some body (AHRQ? or whosoever you decide) up to a certain cost per QALY. That way, clinician and patient know exactly where they are ahead of time. And it massively incentivizes the process of determining the efficacy of investigations and treatments, as well as their cost-effectiveness. Treatments for which you don’t have this information are subject to clinical trials anyway (or ought to be) and so could be paid for differently.
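A cost-per-QALY rule of the sort suggested above is mechanically simple. The threshold and the figures in this sketch are hypothetical, loosely modelled on a NICE-style limit, not anything specified in the comment:

```python
def cost_per_qaly(cost, qalys_gained):
    """Incremental cost of a treatment divided by incremental QALYs gained."""
    return cost / qalys_gained

def covered(cost, qalys_gained, threshold=30000.0):
    """Pay for the treatment only if it comes in under the agreed threshold."""
    return cost_per_qaly(cost, qalys_gained) <= threshold

print(covered(15000.0, 1.0))   # True: 15,000 per QALY is under the threshold
print(covered(90000.0, 1.5))   # False: 60,000 per QALY exceeds it
```

Under such a rule, clinician and patient could check coverage before treatment begins, rather than arguing with the insurer afterward.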

“SBM is considering all scientific evidence, not just RCTs, and, taking all the evidence into account, we can reasonably say that homeopathy is not worth further study unless someone can produce evidence as compelling as what I describe above in my example.”

Oh yes, and the reason I asked was to see if SBM addresses the homeopathy question better than EBM. I’m still not sure, since EBM also considers scientific evidence, in several ways.

My response to your hypothetical would be that if 40 well designed RCTs on homeopathy showed a positive effect, I’d believe there was a positive effect. However, I would not automatically believe that the effect was actually obtained through the principles of homeopathy. Instead, I’d start looking at the possibility that one of the supposedly inert ingredients was actually active, or that the preparation contained active contaminants, or that the original substance isn’t really being diluted out (some things don’t behave as you’d expect when you get to low concentrations), etc. I’d also be looking very hard at the placebo aspects.

I’d need much more and different evidence to believe that homeopathy works according to the principles espoused by Hahnemann. No amount of standard RCT data would suffice for that; basic science would be essential.

As for adopting it into mainstream medicine, that depends. If the hypothetical effect was clinically significant, with a clear and strong benefit-to-risk ratio, then yes. However, research to understand why it was effective would need to continue.

“Right now because of the financial crisis my insurance companies have basically doubled the number of denial of payments I normally receive (same policy) So for us, regardless of the clinicians approach, the kind of EBM an insurance company might use controls the care we receive (or can afford). Are the insurance companies going to allow for scientific plausibility or not?”

Same here. I’ve been arguing with BCBS for 6 months about a recent surgery.

Very interesting JMB. I too have read that EBM was put into effect partly so insurance companies didn’t have to pay for unproven treatments, not sure if that’s true or not.

@qetzal:

“As for adopting it into mainstream medicine, that depends. If the hypothetical effect was clinically significant, with a clear and strong benefit-to-risk ratio, then yes. However, research to understand why it was effective would need to continue.”

Thanks. That’s pretty much what I was thinking, but not nearly so concisely. I was hoping that SBM didn’t put basic science or expert opinion ABOVE that of research evidence (as in vertebroplasty).

Just to be clear, my questioning of EBM use by health care payers is based more on seeing some of our insurance companies use any new model possible to deny as much care as possible in the interest of cutting costs. If I thought this was a matter of an organized effort to use funds efficiently so as to cover more patients with EBM or fund research, I would be more tolerant. But since it appears to be in order to send money home with shareholders and executives, I am pretty peeved.

So my responses aren’t anti-NHS and I have a complete understanding that prioritization (or some call it rationing) is a necessity when looking at health care costs.

I was, in effect, trying to look forward and understand if EBM is a system that will help patients get good care in the U.S. and/or does it have obvious characteristics that insurance companies can bastardize to deny payment. I understand this is a cynical question, but I deal with insurance companies too much. Also I understand that you (Plonit) are probably not going to be able to answer since you don’t have experience in the sport of predicting U.S. insurance company tactics.

I suspect that commercial insurance companies are motivated to use whatever they can to maximise their profits for shareholders. I wouldn’t assume that SBM+EBM would be less vulnerable to any distortions than EBM alone.

The only system I can think of where that wouldn’t happen is one of fierce regulation (such as described above – create national guidelines as to what is good medicine and what is covered, and force insurers to pay out for it) perhaps to the point where you have effectively abolished a free market in insurance. (The only thing they could then “compete” on would be the efficiency of their administration – or perhaps some expensive schemes would pay up to a higher QALY, or something).

Or give doctors complete autonomy to write their own cheques – in which case insurers will just get out of that business model. (And of course, doctors are also commercial interests rather than public servants in the free market model).

There are fairly rigid definitions of EBM used by health policy organizations. In clinical practice, for the application to a single patient, the definition is not as rigorous, and EBM may become synonymous with SBM. However, if the clinician’s first step is to relate the patient’s problems to published RCTs, then they are pigeonholing the patient, and the treatment strategy becomes an algorithm of the published intervention. If the clinician’s first step is to recognize manifestations of a pathologic physiologic process that is leading to an illness, then they are thinking in the realm of SBM that is beyond EBM (or considered the least reliable source of information).

EBM placed in current healthcare reform bills has two primary functions. The AHRQ (which will include the USPSTF) gains increased regulatory power and autonomy to set policy about what medical interventions are effective. They mostly follow the rigid definition of EBM. However, the USPSTF recently used Efficient Frontier Analysis which introduces a function of cost effectiveness in their grading scale. I can see that departure from traditional EBM, and am disturbed about the way the press keeps telling the public this is the best science or EBM.

The second appearance of EBM is more nebulous and lies in the change in status of certain policy groups in the Centers for Medicare/Medicaid Services to become a regulator of healthcare. This policy group would be expected to act based on published research. The examples cited included those studies in which EBM algorithms improved outcomes in hospitals. One of the improved outcomes was shorter hospital stays, which translates to cost savings.

On the subject of healthcare expense of the elderly, that statement came from US government statements about healthcare expenses, and from articles from the BBC, etc. (a google search for “ageism NHS”). DeathRiskRankings.com also provides some interesting comparisons between the US and Europe.

Insurance companies love EBM because it gives them more reasons to deny claims. They also like EBM algorithms, because they can then dictate care.

However, if the clinician’s first step is to relate the patient’s problems to published RCTs, then they are pigeonholing the patient, and the treatment strategy becomes an algorithm of the published intervention. If the clinician’s first step is to recognize manifestations of a pathologic physiologic process that is leading to an illness, then they are thinking in the realm of SBM that is beyond EBM (or considered the least reliable source of information).

+++++++++++

I can’t quite get my head round the distinction you are trying to make here and how it might operate in real life. The first step is always listen to the patient (or client, if your user group are not patients). Listen to their symptoms and concerns. Listen to what outcomes they care about. Then you share with them the information that is known about their situation. You take into account the extent to which the available evidence fits that individual and how far you are having to extrapolate the existing information. And on the basis of what is known (using the best evidence and also the individual characteristics and preferences of the person in front of you) you propose a plan for diagnosis, for treatment. You answer their questions. They tell you what they think of your plan. You negotiate together what to do next.

“I can’t quite get my head round the distinction you are trying to make here and how it might operate in real life. The first step is always listen to the patient (or client, if your user group are not patients). Listen to their symptoms and concerns. Listen to what outcomes they care about. Then you share with them the information that is known about their situation. You take into account the extent to which the available evidence fits that individual and how far you are having to extrapolate the existing information. And on the basis of what is known (using the best evidence and also the individual characteristics and preferences of the person in front of you) you propose a plan for diagnosis, for treatment. You answer their questions. They tell you what they think of your plan. You negotiate together what to do next.

This is practising EBM, right?”

The distinction between EBM and SBM approaches in clinical practice is usually not that important. It does become important when the government is trying to tell us how to treat patients based on EBM.

I don’t claim to be an expert in what defines EBM, but I would suggest that what you describe could represent either Evidence Based Medicine or Science Based Medicine. The only point of distinction would be in your assessment of what is ‘known’ about the situation. If you follow criteria for evidence which emphasize clinical trials over basic science then your process leans toward EBM. Most patients prefer explanations from basic clinical sciences. “Why does my knee hurt? Because the articular cartilage that allows the knee to move easily has ground down.” So the distinction between EBM and SBM really does not apply in the scenario you described. The idea of an algorithm does not apply because you are going directly from history and physical, to diagnosis, to treatment.

If your clinical scenario was more complex, such as acute chest pain in the emergency room, then the distinction (between EBM and SBM) in your approach may become more significant. The history and physical is definitive in fewer patients presenting with acute chest pain, you are faced with a multitude of diagnostic tests that may be applicable, and there are multiple choices for treatment. Your approach could reflect your preference for EBM or SBM.

I would point out that one component of your scenario is a decision from the patient’s preferences. Decisions using the patient’s feelings and attitudes are outside of EBM or SBM. Obviously, the practice of medicine is not limited to EBM or SBM. Communication skills and courtesy as well as empathy are large components of clinical practice.

The issue in healthcare reform in the US is the elevation of EBM as a way of directing medical care. While EBM approaches have the greatest scientific validity, in most cases SBM approaches have better outcomes. Mammographic screening programs in many countries are based on EBM. In the US, they are mainly based on SBM. Better outcomes from the SBM approach are reflected in the better breast cancer survival statistics.

Data about cost inefficiency in US healthcare for the elderly comes from government published statistics. The published statistic is that 27% of Medicare expenditure is for patients in the last year of life. 50 billion dollars a year are spent on patients in the last two months of life. Clearly much of what we spend is not producing the desired result of longer life. There was a segment on the CBS News program “60 Minutes” called “The Cost of Dying” which very nicely presented the problem.

Most patients prefer explanations from basic clinical sciences. “Why does my knee hurt? Because the articular cartilage that allows the knee to move easily has ground down.” So the distinction between EBM and SBM really does not apply in the scenario you described. The idea of an algorithm does not apply because you are going directly from history and physical, to diagnosis, to treatment.

++++++++++

As soon as you get onto the issue of the most effective means of diagnosing the cause of knee pain, and treating it, then you are into the realm of EBM, I’m afraid. Maybe it’s not the grinding down of the articular cartilage? What sorts of diagnostic tools are you going to use – how are you going to ensure that the tests they are having are the most precise available (fewest false positives, false negatives)? Etc…

Decisions using the patients feelings and attitudes are outside of EBM or SBM.

+++++++++++

This is absolutely not true; in fact EBM has famously been represented as a tripartite system involving research evidence, clinical judgment, and patient values. It is neatly summarized by Sackett thus:

“EBM is the integration of clinical expertise, patient values, and the best evidence into the decision making process for patient care. Clinical expertise refers to the clinician’s cumulated experience, education and clinical skills. The patient brings to the encounter his or her own personal and unique concerns, expectations, and values. The best evidence is usually found in clinically relevant research that has been conducted using sound methodology.”

The published statistic is that 27% of medicare expenditure is for patients in the last year of life. 50 billion dollars a year are spent for patients in the last two months of life. Clearly much of what we spend is not producing the desired result of longer life.

++++++++++++++++

How much of this expenditure is for treatments? How much is for nursing care? (EBM is most developed for ‘cure’ and not so much for ‘care’). Also, EBM tells you how effective a treatment is at achieving a particular outcome; it doesn’t dictate the value you place on outcomes. Maybe an additional 2 months at the end of life are ‘worth’ more than any other two months of your life. That is effectively the value judgment that NICE has made with regard to life-extending treatments for terminal illness (they are prepared to spend more per QALY than on other treatments). Arguably, end of life treatments are at the cost of end of life care, but what does that have to do with EBM?

Most patients prefer explanations from basic clinical sciences. “Why does my knee hurt? Because the articular cartilage that allows the knee to move easily has ground down.” So the distinction between EBM and SBM really does not apply in the scenario you described. The idea of an algorithm does not apply because you are going directly from history and physical, to diagnosis, to treatment.

++++++++++
Plonit:
“As soon as you get onto the issue of the most effective means of diagnosing the cause of knee pain, and treating it, then you are into the realm of EBM, I’m afraid. Maybe it’s not the grinding down of the articular cartilage? What sorts of diagnostic tools are you going to use – how are you going to ensure that the tests they are having are the most precise available (fewest false positives, false negatives)? Etc…”

As a patient who had a recent arthroscopic meniscectomy and ACL reconstruction, I was pretty disturbed at the lack of EBM for such surgery, comparing surgical options versus physical repair. The same goes for the efficacy of autograft versus allograft, and hamstring versus patellar tendon. I went ahead with the surgery (after months of trying medical options), but would not be surprised to find that this repair offers a short-term fix, but not a long-term one. Already, reports of the inevitability of osteoarthritis are becoming apparent. Yes, science-wise, a surgical repair makes sense, but that is not enough. Further EBM is absolutely necessary. I’m not sure how much evidence there is for ortho in general, especially since there is so much money to be made.

Oh, and for testing of knee damage, an expert physician’s examination is as effective as MRI for diagnosis. Yet MRIs continue to be performed in most situations, at $2000 a pop. If you have health insurance, that is.

Yes, so you are clearly in the realm of EBM. The doctor should have been honest with you about the uncertainties and, in an ideal world, have asked if you were willing to be enrolled in a trial to reduce that uncertainty. That is also the practice of EBM.

What is “physical repair” (as distinct from surgery)? (Not my field, do you mean just “gets better on its own” or with physical therapy or…?)

EBM is the emphasis of principles of scientific evidence and validity in medicine. The principles of scientific validity (basically British empiricism) are formalized in EBM. The principles of EBM or SBM primarily affect how we assess medical teaching, literature, and experience.

There are different definitions of EBM from different EBM researchers. Much of the difference in definitions reflects whether the researcher is discussing healthcare policy decisions or clinical decisions. The distinction between the approach of EBM vs. SBM is most evident when the approach to healthcare policy decisions is discussed.

If we were to discuss healthcare policy about preoperative MRI of the knee, we would begin by noting that there are no randomized clinical trials demonstrating differences in surgical outcomes when an MRI of the knee is performed before arthroscopic surgery. There are published clinical series showing that some lesions will be missed at arthroscopic surgery, and that the rate of missed lesions is decreased by preoperative MRI. So far the EBM and SBM assessments are the same. Now EBM could stop here (because these are the most important criteria for scientific validity), and the recommendation for a preoperative MRI of the knee would be graded ‘D’ or ‘I’, and insurance would not be required to pay for it. SBM would continue the assessment because of known variations in the pattern of lesions associated with internal derangement (meniscal tears, ligament tears, articular cartilage and bone injuries). Ultimately the clinical experience of orthopedists about the value of preoperative MRI would be considered. Then SBM would make a judgment about the guidelines. EBM could also consider the basic science of the pathology of internal derangement, and the clinical experience of utility, but would discount that information as the least reliable for scientific validity.

Now consider the differences in the clinical application of EBM and SBM to the same problem. It is possible that under the EBM approach, the patient would not have had an MRI of the knee preoperatively. Then the discussion with the patient would have been, “The physical exam findings suggest a tear of the anterior cruciate ligament and the medial meniscus. When we look at the anterior cruciate ligament and meniscus, we can determine some details of the tears, and then decide what is the most effective repair.” Then details of the repairs would be discussed. If the patient had a preoperative MRI, then the discussion might have been (there are many variations in the pathology), “The MRI findings indicate that you have torn your anterior cruciate ligament at its site of attachment to the femur. Most of the ligament is intact, so we would expect that by reattaching the ligament, it should heal well. The MRI also indicates that you have a small tear at the base of the medial meniscus located on the inside of the knee. We can usually repair the tear rather than trim that part of the meniscus, so you should do well after surgery. Of course, tears don’t always show up on the MRI as well as we can see them at arthroscopy, so sometimes we have to do something slightly different.” So the main difference in the clinical application of EBM vs. SBM in this scenario is whether a preoperative MRI is done. Proponents of EBM may make no distinction in the validity of basic science/clinical experience versus RCTs as sources of information when dealing with clinical problems. Therefore, for some researchers there is a distinction between EBM and SBM in the application to healthcare policy, but EBM and SBM are synonymous in the clinical setting.

Not all doctors who order an MRI profit by ordering the test; therefore, the profit incentive cannot account for all of the MRIs ordered. A doctor in the VA system makes no more money for ordering an MRI, yet MRI of the knee is still a common test in the VA system. There is plenty of data showing that when a doctor profits from ordering a test, they are more likely to order it (this is called self-referral in the US). There is also data showing that when a doctor profits from not ordering a test (such as bonuses given to doctors in HMOs for not ordering tests), use of the test decreases.

Now on the separate issue of what constitutes EBM and SBM in clinical practice: neither EBM nor SBM principles constitute a complete description of principles for clinical practice. In Sackett’s description of EBM medical practice, he is describing how principles of EBM can be incorporated into medical practice. EBM and SBM address principles of how we assess medical literature and clinical experience in making medical decisions, not whether we should care about the patient’s feelings.

If strict EBM principles guided healthcare policy, we would significantly reduce expenditures on end-of-life care; however, there would be a cost in some suffering or death we might have prevented or delayed. QALY is used in both EBM and SBM. EBM policies would ration medicine based on scientific validity. Scientific validity (essentially British empiricism) has served us well in determining whether a treatment or diagnostic test works. Healthcare policy is an optimization problem: we need to maximize patient outcomes while minimizing costs. There are other scientific approaches more suitable to problems of optimization than principles focused on scientific validity.
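To make the QALY bookkeeping concrete, here is a minimal sketch of the incremental cost-effectiveness ratio (ICER) that bodies like NICE use when weighing a treatment against a threshold. All numbers are hypothetical, chosen only for illustration:

```python
# Toy cost-effectiveness sketch (hypothetical numbers, not real data).
# The ICER compares a new treatment to the current standard in
# dollars per quality-adjusted life year (QALY) gained.

def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost per QALY gained by switching treatments."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical end-of-life treatment: $60,000 vs $10,000 standard care,
# yielding 0.4 vs 0.2 QALYs.
ratio = icer(60_000, 10_000, 0.4, 0.2)
print(f"ICER: ${ratio:,.0f} per QALY")

# The policy step is then a value judgment, not a validity judgment:
# is a QALY at the end of life worth more than the usual threshold?
threshold = 50_000  # hypothetical willingness-to-pay per QALY
print("fundable at standard threshold:", ratio <= threshold)
```

The point of the sketch is that the evidence assessment ends with the ICER; choosing the threshold is exactly the kind of value judgment discussed above.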

If we make clinical decisions based primarily on information from randomized clinical trials and case series, then we must relate the individual patient to a group of patients, and medicine becomes less individualized and more algorithmic. If we consider RCTs, clinical series, basic science mechanisms, and our own clinical experience, then we will arrive at a more individualized treatment plan. If there are significant successes in genetic determinations of individual diseases, then the emphasis on RCTs and scientific validity will decrease, and medicine will become much more individualized.

“If we were to discuss healthcare policy about preoperative MRI of the knee, we would begin by noting that there are no randomized clinical trials to demonstrate differences in surgical outcomes when an MRI of the knee is performed before arthroscopic surgery. There are clinical series published that show some lesions will be missed in arthroscopic surgery, and that the rate of missing lesions is decreased by preoperative MRI. So far the EBM and SBM assessment are the same. Now EBM could stop here (because these are the most important criteria for scientific validity), and the recommendation for a preoperative MRI of the knee would be graded ‘D’ or ‘I’, and insurance would not be required to pay for it.”

Well, no, the EBM story doesn’t end there. This is one of several prospective studies I’ve seen showing little difference between MRI and clinical diagnosis. If we are going to try to reduce the number of MRIs (or ultrasounds, or c-sections, or CAT scans, or whatever), we have to look for places to do so without compromising patient care. I see little interest in SBM about cost effectiveness. But I’m no expert.

My point in mentioning the cost of MRI wasn’t to bash my doctor. It was to point out that cost effectiveness is relevant. Maybe that $2000 could have been spent elsewhere more effectively.

“The aim of this prospective study was to compare and correlate clinical, magnetic resonance imaging (MRI), and arthroscopic findings in cases of meniscal tear and anterior cruciate ligament (ACL) injuries. MRI scan results and clinical diagnosis are compared against the arthroscopic confirmation of the diagnosis. One hundred and thirty-one patients had suspected traumatic meniscal or anterior cruciate ligament (ACL) injury. Clinical examination had better sensitivity (0.86 vs. 0.76), specificity (0.73 vs. 0.52), predictive values, and diagnostic accuracy in comparison to MRI scan in diagnosis for medial meniscal tears. These parameters showed only marginal difference in lateral meniscal and anterior cruciate ligament injuries. We conclude that carefully performed clinical examination can give equal or better diagnosis of meniscal and ACL injuries in comparison to MRI scan. MRI may be used to rule out such injuries rather than to diagnose them.”
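For readers unfamiliar with the parameters quoted in that abstract, here is a minimal sketch of how sensitivity, specificity, and predictive values fall out of a 2×2 table against the arthroscopic reference standard. The counts are made up (chosen to land near the clinical-exam figures above), not the study’s data:

```python
# Diagnostic accuracy from a hypothetical 2x2 table:
# tp/fp/fn/tn = true/false positives and negatives versus the
# reference standard (here, arthroscopy).

def accuracy_stats(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),  # fraction of real tears detected
        "specificity": tn / (tn + fp),  # fraction of intact knees cleared
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Hypothetical: 86 of 100 tears detected; 27 of 100 intact knees
# falsely flagged.
stats = accuracy_stats(tp=86, fp=27, fn=14, tn=73)
for name, value in stats.items():
    print(f"{name}: {value:.2f}")
```

Note that predictive values, unlike sensitivity and specificity, shift with the prevalence of tears in the population being tested, which is one reason the abstract’s “rule out rather than diagnose” conclusion matters.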

“We have reviewed the important aspects of the history, physical examination, and other diagnostic tools available to help diagnose ACL injuries. We feel that, in the hands of an experienced clinician, greater than 90% of ACL disruptions can be diagnosed at the time of injury. Appropriate evaluation will enable the clinician to advise the appropriate treatment, whether it be operative or nonoperative. We have also briefly outlined the variables that we consider to be the most important in the decision-making process of treatment options after ACL disruption.”

JMB, you seem to be talking about something that we might call RCTBM, or even MA/SRBM. But actually, it is EVIDENCE based medicine, and we use the best available evidence, which is not always RCT evidence (ideally it is, but sometimes it isn’t yet, or never could be). If the insurance companies don’t understand this, then that’s their problem (and yours!), but not EBM’s.

In the examples of methodology that I have read (primarily articles from the Oregon Evidence-based Practice Center, the Cochrane Collaboration, and the USPSTF) I am aware of the formalized routine for information retrieval, review, and inclusion/exclusion of past articles. Obviously, I made no attempt to perform an exhaustive search for articles about the effectiveness of preoperative MRI of the knee. I had simply cited the example of a friend of mine who had recently undergone a newer method of ACL reconstruction. Since the preselection of patients for that type of surgery was dependent on the location of the tear, and since physical exam findings may indicate the grade of the tear (but not the location), I thought that might be a good demonstration of the potential pitfalls of EBM. For the average patient receiving the most common surgical repair of an ACL, the average data would probably not show any benefit (again, I have performed no such exhaustive analysis). However, there may still be individual patients paired with certain orthopedists for whom the preoperative MRI may be beneficial. The differences between the approaches of EBM and SBM in my hypothetical example were an attempt to show how strict EBM (as practiced by the above-mentioned groups) might not be the best method of determining medical policy. I believe that the Oregon group was involved in assessing the literature and calculating estimates such as QALYs per dollar spent for the policy of funding the state Medicaid program.

When I read Sackett’s summary of EBM I can’t find any difference from what we were taught 30 years ago in journal club in medical school. I assumed the difference is the ranking of the sources of information. In journal club, clinical experience was on the lowest rung of reliability if the person was a medical student or in the first 3 months of residency. Clinical experience was rated higher for senior residents. Of course, clinical experience was rated even higher than the RCT if it was that of the physician/professor. So I thought that the main difference between EBM and SBM in clinical practice is in the rating of evidence from clinical experience.

My two main concerns about EBM discussed in healthcare reform in the US are the potential for abuse by certain agencies, and the denigration of clinical judgment. In the recent mammogram controversy created by the USPSTF, the Oregon Evidence

As I started to say, the Oregon Evidence-based Practice Center produced an article that I would say adheres to EBM principles, including the calculation of QALY cost. The USPSTF proceeded to add a review of various computer models, and utilized the data from those models in an efficient-frontier analysis. To me, that went beyond EBM. No one seemed to take notice that the scientific advisory panel was now making value judgments, instead of sticking to the question: does screening mammography reduce breast cancer deaths?

The other issue I had with letting EBM dictate medical practice came from my experience studying computer-aided diagnosis and medical decision making. That was when I observed that a good clinician could usually beat any computer program. That is why I tend to place clinical experience at the same level as RCTs.

I apologize for getting off topic and focusing on how EBM might be abused in healthcare reform in the US. At least I have learned more of what EBM means. I still think that EBM is the logical application of British empiricism in medicine. But British empiricism was designed with the goal of discerning cause and effect in the natural world. The process of deciding what is the best medicine is a slightly different task. A broader methodology, such as the one described here as SBM, is better suited to that task.

By the way, the average reimbursement for an MRI of the knee is somewhere between $600 and $1000. The only patients who pay $2000 are those without insurance who have an MRI at a more expensive facility. Healthcare reform could increase protection for uninsured patients so that they would pay more reasonable rates, but I have seen no such provisions yet.

“The other issue I had with letting EBM dictate medical practice came from my experience studying computer-aided diagnosis and medical decision making. That was when I observed that a good clinician could usually beat any computer program. That is why I tend to place clinical experience at the same level as RCTs.”

But diagnostic accuracy has absolutely nothing to do with deciding – on the basis of the doctor’s personal experience – what treatment to give the patient.
The “personal experience” of even an expert clinician is no match for the results of a properly conducted RCT.

“By the way, the average reimbursement for an MRI of the knee is somewhere between $600 and $1000. The only patients who pay $2000 are those without insurance who have an MRI at a more expensive facility.”

Outrageous!
Medicine in America must be only for the wealthy.
In Australia you can get an MRI of the knee without any insurance cover for about $300 ($266 USD).

Part of the reason for randomization of patients in an RCT is to ensure that the average a priori probabilities of the control and experimental groups are as nearly equal as possible. RCTs usually ignore some variations among subjects in order to have larger numbers for statistical power. Depending on the criteria for inclusion of subjects (symptoms, lab results, age, sex, past medical history), the RCT population may or may not match the patient for whom you are making a decision. If your patient’s a priori probability is a reasonable match to the RCT group average, then the RCT is probably the best source of information on which to base your decision. However, if the patient’s a priori probability varies noticeably from the group average, then the RCT is not a good source of information. You must also consider a priori probabilities for adverse effects. If your only available surgeon has poor results, then an RCT leading to a recommendation of surgery might be wisely ignored.
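The point about a priori probabilities can be illustrated with a toy calculation (all numbers hypothetical): a trial’s relative risk reduction is an average over the enrolled group, but the absolute benefit for a given patient, and hence the number needed to treat (NNT), depends on that patient’s baseline risk.

```python
# Toy illustration (hypothetical numbers): a trial reports a 25%
# relative risk reduction. The absolute benefit depends on the
# patient's baseline (a priori) risk, so the same trial result
# implies very different numbers needed to treat.

def nnt(baseline_risk, relative_risk_reduction):
    """Number needed to treat = 1 / absolute risk reduction."""
    arr = baseline_risk * relative_risk_reduction
    return 1 / arr

for risk in (0.20, 0.05, 0.01):
    print(f"baseline risk {risk:.0%}: NNT = {nnt(risk, 0.25):.0f}")
```

A patient with a 20% baseline risk gains far more from the same treatment than one with a 1% baseline risk, which is why a patient who differs markedly from the trial average may warrant a different decision.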

So I would still argue that the expert clinician can outperform the clinician who insists on relying on published RCTs, when you consider their performance over a series of cases. That does not mean that the expert clinician ignores the RCTs; on average, they would follow the RCT recommendations. In my experience with computer-aided diagnosis, for most test cases the computer result was the same as the expert result. When test cases were less typical, the clinician had better results.

The cost of an MRI of the knee in the US might decrease if there were tort reform (which affects the cost of the MRI machine because of manufacturer liability, not just malpractice premiums) and improvements in administrative costs (such as the burden of obtaining insurance information from the patient, billing the insurance company, and arguing with the insurer about whether the study is justified). It would be interesting to compare the cost of equivalent MRI machines in Australia and the US, and the manufacturers’ justification of the differences in cost (FDA approval is expensive). The variation in cost might be similar to the higher prices we pay for prescription medication. Oh, I forgot to mention the high cost of lobbying Congress in the US.

Now in the US, the uninsured patient usually pays the highest price, partly because some insurance companies bargain for discounts with facilities based on a percentage of the standard fee, so a facility may jack up its price to make it sound like it’s giving a big discount to the insurance company. Another reason prices get jacked up for uninsured patients is that the facility can sell the unpaid bills to a collection agency for a percentage of the charge. Pretty sad state of affairs.

Let’s face it, JMB: if it costs 7 times as much to get an MRI, health care in the US of A is #v&lt;k#d, whatever reasons you may come up with for the difference.

And I don't think you've convinced me that the personal experience of an expert has anything on the result of a properly conducted RCT regarding the effectiveness of a medical treatment. In fact, you didn't even answer that question. We are not talking about diagnosis here, we are talking about treatments. And we are not talking about the application of the result of an RCT to a particular patient; we are talking about the actual result of the RCT, and whether we can trust the opinion of an expert (based on his personal experience) on the effectiveness of a treatment above the result of an RCT. The answer to that question is clearly no.

The reason for conducting RCTs is precisely because the opinion of even an expert (based on his personal experience) is so unreliable as to be nearly useless.