Here’s a summary of some common problems with making diagnoses and evaluating medical interventions. I adapted most of it from Quackwatch and a couple of research articles and put it together as a memo to the other vets in my practice, to lay the groundwork for conversations about specific alternative therapies. Getting people to see the limits of the reliability of their own experience is apparently a necessary step before getting them to see the weakness of the evidence for many alternative therapies. I just thought I’d post it here in case it’s useful to anyone else.

How Reliable is Clinical Experience?

Acquiring clinical experience is a major part of improving one’s medical knowledge and skills. Unfortunately, extensive practical experience can often lead to the erroneous belief that we can reliably determine the safety and efficacy of a particular therapy based on our personal experiences or those of other doctors, especially specialists or experts. “I’ve used it, and I know it works/doesn’t work” is the most common response by doctors or clients to the suggestion that research evidence contradicts what we believe about a given medical practice. Another common response from experienced veterinarians is “I don’t need to run Test X. I know what this is because I’ve seen a hundred cases like it.”

Such personal experiences are very compelling, but science is built on the premise that our imperfect brains can make mistakes and that careful use of appropriate research techniques can compensate for our limitations. Here are a few reasons to be skeptical of one’s own clinical impressions and those of “experts” and to look for guidance in good quality research evidence when it is available.

1. Self-Limiting Disease Many diseases are self-limiting. If the condition is not chronic or fatal, the body’s own recuperative processes usually restore the sufferer to health. Thus, to demonstrate that a therapy is effective, its proponents must show that the number of patients improved exceeds the number expected to recover without any treatment at all. Without detailed records of successes and failures for a large enough number of patients with the same complaint, no one can legitimately claim to have exceeded the norms for unaided recovery.
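To make the arithmetic concrete, here is a small sketch of the comparison the text describes. The numbers are invented for illustration (an assumed 70% spontaneous recovery rate, an assumed series of 20 treated patients); the point is only how one would check a success rate against the untreated baseline.

```python
# Illustrative only: the rates and counts below are assumptions,
# not real clinical data.
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance of seeing k or more
    recoveries among n patients if each recovers spontaneously with
    probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Suppose 70% of cases resolve on their own within a couple of weeks
# (assumed). A therapy is tried on 20 patients and 16 improve.
# How often would 16 or more recover with no treatment at all?
p_value = binom_tail(20, 16, 0.70)
print(f"P(>=16 of 20 recover untreated) = {p_value:.3f}")
```

With these assumed numbers the chance of 16-plus spontaneous recoveries is roughly one in four, so 16 “successes” in 20 treated patients would be entirely unremarkable. That is why an untreated baseline, not a tally of successes, is the relevant comparison.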

2. Waxing and Waning Chronic Disease (also known as Regression to the Mean) Such conditions as arthritis, allergies, and gastrointestinal problems normally have “ups and downs.” Naturally, clients tend to seek therapy during the period of greatest clinical symptoms. In this way, a treatment will have repeated opportunities to coincide with upturns that would have happened anyway.
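Regression to the mean is easy to demonstrate with a toy simulation. Everything here is made up (a symptom score fluctuating randomly around a stable mean of 50, a visit threshold of 65), and no treatment is modeled at all; the apparent “improvement” comes purely from selecting bad days.

```python
# Toy simulation with assumed numbers; no real treatment or data involved.
import random

random.seed(1)
# Daily symptom scores for a stable chronic condition: random
# fluctuation around a mean of 50 (higher = worse).
scores = [random.gauss(50, 10) for _ in range(10_000)]

# Owners seek a "treatment" only on bad days (score > 65). Pair each
# such day with a later, independent day's score.
visits = [(s, random.gauss(50, 10)) for s in scores if s > 65]

before = sum(s for s, _ in visits) / len(visits)
after = sum(t for _, t in visits) / len(visits)
print(f"mean score at visit: {before:.1f}, a week later: {after:.1f}")
```

The mean score at the visit is well above 65 by construction, while the later mean falls back toward 50 with nothing done, which is exactly the upturn a treatment given on bad days will reliably appear to cause.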

3. Placebo Effect Through suggestion, belief, expectancy, cognitive reinterpretation, and diversion of attention, people given biologically useless treatments often experience measurable relief. Some placebo responses produce actual changes in the physical condition; others are subjective changes that make patients feel better even though there has been no objective change in the underlying pathology. Of course, in veterinary medicine, the effect is mostly “by proxy,” in which the owner’s beliefs and desires lead to a report of improvement in the pet’s symptoms when none has actually occurred.

4. Multiple Concurrent Therapies If improvement occurs after a pet has received several interventions, along with other unremarked changes in how the owner manages the sick pet, one or another of the changes often gets a disproportionate share of the credit or blame. Frequently, the latest in a series of interventions or the newest thing tried is credited with improvement even though many things were done.

5. Misdiagnosis Scientifically trained veterinarians are not infallible. A mistaken diagnosis, followed by an irrelevant intervention, can lead to a glowing testimonial for curing a condition that would have resolved by itself. In other cases, the diagnosis may be correct but the time frame, which is inherently difficult to predict, might prove inaccurate.

6. Human Psychology Even when no objective improvement occurs, people with a strong psychological investment in the pet can convince themselves the treatment has helped. And doctors, who want very much to do the right thing for their patients and clients, have a vested interest in the outcome as well. A number of common cognitive phenomena can influence one’s impression of whether a treatment helped or hurt a patient. Here’s a brief list of common cognitive errors in medical diagnosis. Any of these sound familiar?

a. Cognitive Dissonance When experiences contradict existing attitudes, feelings, or knowledge, mental distress is produced. People tend to alleviate this discord by reinterpreting (distorting) the offending information. If no relief occurs after committing time, money, and “face” to a course of treatment, internal disharmony can result. Rather than admit to themselves or to others that their efforts have been a waste, many people find some redeeming value in the treatment.

b. Confirmation Bias is another common reason for our impressions and memories to inaccurately represent reality. Practitioners and their clients are prone to misinterpret cues and remember things as they wish they had happened. They may be selective in what they recall, overestimating their apparent successes while ignoring, downplaying, or explaining away their failures. Or they may notice the signs consistent with their favored diagnosis and ignore or downplay aspects of the case inconsistent with this.

c. Anchoring This is the tendency to perceptually lock onto salient features in the patient’s initial presentation too early in the diagnostic process and to fail to adjust this initial impression in the light of later information. This error may be severely compounded by the confirmation bias.

d. Availability The disposition to judge things as being more likely, or frequently occurring, if they readily come to mind. Thus, recent experience with a disease may inflate the likelihood of its being diagnosed. Conversely, if a disease has not been seen for a long time (is less available), it may be underdiagnosed.

e. Commission Bias results from the obligation toward beneficence, in that harm to the patient can only be prevented by active intervention. It is the tendency toward action rather than inaction. It is more likely in over-confident veterinarians. Commission bias is less common than omission bias.

f. Omission Bias The tendency toward inaction, rooted in the principle of nonmaleficence. In hindsight, events that have occurred through the natural progression of a disease are more acceptable than those that may be attributed directly to the action of the veterinarian. The bias may be sustained by the reinforcement often associated with not doing anything, but it may prove disastrous.

g. Diagnosis Momentum Once diagnostic labels are attached to patients they tend to become stickier and stickier. Through intermediaries (clients, techs, other vets) what might have started as a possibility gathers increasing momentum until it becomes definite, and all other possibilities are excluded.

h. Feedback Sanction Making a diagnostic error may carry no immediate consequences, as considerable time may elapse before the error is discovered, if ever, or poor system feedback processes prevent important information on decisions getting back to the decision maker.

i. Gambler’s Fallacy Attributed to gamblers, this fallacy is the belief that if a coin is tossed ten times and comes up heads each time, the 11th toss has a greater chance of being tails (even though a fair coin has no memory). An example would be a vet who sees a series of patients with dyspnea, diagnoses all of them with CHF, and assumes the sequence will not continue. Thus, the pretest probability that a patient will have a particular diagnosis might be influenced by preceding but independent events.
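The “no memory” claim can be checked directly by simulation rather than taken on faith. This is a coin-toss sketch, not clinical data: condition on a run of ten heads and see how the next toss behaves.

```python
# Simulated fair-coin tosses: after ten heads in a row, the next toss
# is still 50/50. A demonstration of independence, nothing clinical.
import random

random.seed(0)
flips = [random.random() < 0.5 for _ in range(1_000_000)]  # True = heads

next_after_streak = []
for i in range(10, len(flips)):
    if all(flips[i - 10:i]):          # previous ten tosses were all heads
        next_after_streak.append(flips[i])

frac_heads = sum(next_after_streak) / len(next_after_streak)
print(f"streaks of ten heads: {len(next_after_streak)}, "
      f"next-toss heads rate: {frac_heads:.3f}")
```

The next-toss heads rate stays near 0.5 no matter how long the preceding streak. Whether the analogy carries over to a run of CHF cases depends on the cases actually being independent; a local outbreak or referral pattern can make successive cases genuinely correlated, which is a different situation from the coin.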

j. Posterior Probability Error Occurs when a vet’s estimate for the likelihood of disease is unduly influenced by what has gone on before for a particular patient. It is the opposite of the gambler’s fallacy in that the doctor is gambling on the sequence continuing.

k. Hindsight Bias Knowing the outcome may profoundly influence the perception of past events and prevent a realistic appraisal of what actually occurred. In the context of diagnostic error, it may compromise learning through either an underestimation (illusion of failure) or overestimation (illusion of control) of the decision maker’s abilities.

l. Overconfidence Bias A universal tendency to believe we know more than we do. Overconfidence reflects a tendency to act on incomplete information, intuitions, or hunches. Too much faith is placed in opinion instead of carefully gathered evidence. The bias may be augmented by both anchoring and availability, and catastrophic outcomes may result when there is a prevailing commission bias.

m. Premature Closure A powerful error accounting for a high proportion of missed diagnoses. It is the tendency to accept a diagnosis before it has been fully verified, closing off the decision-making process too soon. The consequences of the bias are reflected in the maxim: ‘‘When the diagnosis is made, the thinking stops.’’

n. Search Satisfying Reflects the universal tendency to call off a search once something is found. Comorbidities, second foreign bodies, other fractures, and coingestants in poisoning may all be missed. Also, if the search yields nothing, diagnosticians should satisfy themselves that they have been looking in the right place.

o. Yin-Yang Out When patients have been subjected to exhaustive and unavailing diagnostic investigations, they are said to have been worked up the Yin-Yang. The Yin-Yang Out is the tendency to believe that nothing further can be done to throw light on the dark place where, and if, any definitive diagnosis resides for the patient, i.e., the vet is let out of further diagnostic effort. This may prove ultimately to be true, but to adopt the strategy at the outset is fraught with the chance of a variety of errors.