Why do good people use bad medicines?

Hat tip goes to Epiphenom who wrote about this article on Friday. Dr David Gorski has also written a piece on this paper for Science Based Medicine here.

We’ve been talking a bit about evidence based medicine here at Beyond the Short Coat. In the Hard Conversation posts (1, 2, 3, 4, guidelines here), we’ve noticed a lot of dissenters bring up alternative treatments. Invariably, these treatments aren’t evidence based, and they don’t work. They are generally “treatments” that seem “safe” but have no grounding in basic science. The classic example is homeopathy, whose remedies are generally just sugar water. Sometimes people bring up more controversial, less benign things – chelation therapy comes to mind.

So why in the world would people use treatments like that? Classically we talk about the placebo effect: some people improve if you just tell them you’re giving them strong medicine. Occasionally I bring up the fact that complicated diseases have variable courses. That’s not the whole story, though, and in this paper Tanaka, Kendal, and Laland show us another side of the issue.

The authors created a simple mathematical model to simulate the spread of possible treatments in a population. This isn’t a model of doctor-dispensed or doctor-recommended treatments. It is specifically about “self medicating” – more like over-the-counter meds and traditional therapies, things with a pretty low barrier to access. They start with a population in which everyone is either sick (a “diseased state”) or healthy (a “healthy state”). Healthy people can become sick, and sick people can become healthy. Then they introduce a “treatment”, which can be adaptive (it works), neutral (it does nothing), or maladaptive (it makes things worse).

They assume that sick people will expose others to this treatment, and that people adopt the treatment at a constant rate for as long as they “see” it in use. Notice that I said nothing about the treatment working or not. The authors assumed people have no way of telling whether a treatment works. Don’t be insulted; it’s a pretty reasonable assumption. In fact, that’s the reason we have to do evidence based medicine – people generally can’t tell whether something actually worked or whether they just got better anyway. They also assume that the longer a treatment fails to work, the more likely a sick person is to abandon it.

This model in hand, the authors can ask a lot of questions. They can test a variety of situations. What if the disease is short lived, and rapidly gets better on its own? What if it never gets better? How about if people can catch the disease multiple times? What if the treatment works really well? What if it makes things worse? What if people only expose others to the treatment when they’re sick? What if they expose others to the treatment forever? How fast do people abandon a treatment that doesn’t work?

Their results are interesting, to say the least. By their model, if people can only get the disease once, the disease is relatively short lived, and people only show the treatment to others while they’re sick, then treatments that don’t work – or that actively hurt you – spread better than effective treatments! Why? Bad treatments give longer “exposure times”, which lets more people pick up the treatment.
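The authors’ actual model is more sophisticated, but the core dynamic – demonstrators who stay sick longer get more chances to convert others – can be sketched in a toy simulation. Everything below (the compartment scheme and all the rate values) is my own illustrative assumption, not the paper’s parameterization:

```python
def spread(treatment_speedup, steps=500, dt=1.0):
    """Toy mean-field sketch of the 'bad treatments spread' dynamic.

    All rates are hypothetical illustration values, not the paper's.
    Only sick people who use the treatment demonstrate it; adopters
    cannot tell whether it works, and sick users slowly abandon it.
    A speedup of 1.0 means the treatment is useless; values > 1.0
    mean it genuinely hastens recovery.
    """
    recover = 0.02   # baseline chance per step that a sick person recovers
    infect = 0.01    # chance per step that a healthy person falls ill
    adopt = 0.05     # adoption rate per unit of demonstrator fraction
    abandon = 0.01   # rate at which sick users give up on the treatment

    # population fractions: (sick/healthy) x (uses treatment or not)
    ST, SU = 0.001, 0.009   # sick: treated / untreated
    HT, HU = 0.0, 0.990     # healthy: treated / untreated

    for _ in range(steps):
        demo = ST  # only the treated *sick* demonstrate the treatment
        dST = infect*HT - recover*treatment_speedup*ST + adopt*demo*SU - abandon*ST
        dSU = infect*HU - recover*SU - adopt*demo*SU + abandon*ST
        dHT = recover*treatment_speedup*ST - infect*HT + adopt*demo*HU
        dHU = recover*SU - infect*HU - adopt*demo*HU
        ST += dt*dST
        SU += dt*dSU
        HT += dt*dHT
        HU += dt*dHU
    return ST + HT  # fraction of the population carrying the habit

useless = spread(1.0)     # treatment does nothing to recovery
effective = spread(3.0)   # treatment triples the recovery rate
```

With these toy rates, the useless treatment ends up in a noticeably larger fraction of the population than the effective one, purely because a useless treatment keeps its demonstrators sick – and therefore visible – for longer.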

Now flip it: if the disease is long lived, and people spread treatments all the time instead of just when sick, the model tends to favor effective treatments.

If you can get sick more than once – that too trends towards effective treatments over the long haul.

Interestingly, their model shows that prophylactic treatments spread badly, because they don’t offer a lot of opportunity to get “converts.”

So how much stock can we put in this model? How closely does it resemble real life? Across the board, it’s limited. Not enough of medicine fits in with their initial assumptions. While they discuss CAM in the paper, this model seems to resemble a lower tech, more “traditional medicine” approach. So it may model the spread of modalities in the past better than in the age of the internet.

One situation that does seem to fit is flu season – the flu resolves quickly, and flu “cures” are generally only used while you have the flu. Admittedly, one can catch the flu repeatedly, since new strains circulate each year. Still, the situation fits the model pretty well – and sure enough, there are all manner of non-efficacious flu treatments out there!

How about our topic of choice recently – vaccines? Vaccines are generally pretty easy to come by – yes, technically a doctor should be involved, but outreach is pretty high and the barrier to vaccination is low. It’s a prophylactic treatment, so once you’ve gotten vaccinated you aren’t showing the whole world the joy of vaccination while you’re sick – because you don’t get sick! That actually fits the model pretty well: we have to do all kinds of things to get people vaccinated, and it’s not easy!

Where does this model break down? The easy one is chronic disease. The model predicts that a long disease duration leans away from bad treatments, yet patients with chronic diseases often try a wide variety of non-efficacious treatments. I think the model’s underlying assumptions just don’t fit chronic disease well. In chronic disease, rather than picking up therapies through passive exposure, one is more likely to actively seek them out. Additionally, I think there are psychological components too subtle for this model to capture.

One obvious issue is that this model doesn’t take into account the ways doctors can spread good practices. Nor does it model how other “thought leaders” can spread practices – I’m thinking, of course, of the celebrities, like She Who Shall Not Be Named of the vaccine denialists. These are powerful forces today.

I think this model is an interesting counterpoint to our regular ongoing discussion here. Commenters regularly bring up “treatments” that have “helped children recover”, in an effort to spread their particular brand of “medicine”. They never cite evidence; they never see the need for it. I find it comforting that a model assuming patients know nothing about treatments can produce this behavior. To my mind, that means that maybe, just maybe, science based medicine can make a dent in this situation through appropriate education.

10 Comments on “Why do good people use bad medicines?”

Thanks for this article. I read the one on SBM and was mildly confused. You have made it so the average person can understand — I think. Thanks also for writing this blog. I am a student returning to school to be a PA and I have a strong interest in medicine. Since I do not have all the prerequisite courses I have some trouble understanding. I think your words will help me evaluate the world of medicine when I get to that point in my education/career.

Wow, that’s absolutely fascinating. I’d certainly be interested in seeing if they refine that numerical model. I’d also love to know (since you mention education) if there are any good resources out there for teaching children how to tell good resources from bad, how to think critically, how to tell the difference between good and bad evidence/studies, the scientific method, etc. I don’t exactly trust the school system to get around to teaching this sort of thing by the time I have kids, and I want to be able to take matters into my own hands. For that matter, anything for teaching adults this would be nearly as useful :) I’m glad you’re taking the time to try to make information accessible to the layperson; it’s very sad to see some commenters that refuse to read more of your articles or any sources that you link.

This post also reminded me of why autism treatment testimonials can be quite misleading – as can any anecdote without data. Beyond the fact that autistic people develop and learn (however atypically in rate or path), there are also tendencies toward loss of skills, most notably as infants/toddlers and as teenagers/young adults. Given the tendency for autistic skills to be scattered, combined with those ups and downs, it can explain what confused even my dad about me: he didn’t understand how I could experience certain daily living difficulties that I didn’t have as a child, yet be a college student studying math and science who is generally able to prepare simple meals and talk and such.

I’m curious about the impact that information has on the placebo effect.

Say I’m convinced that eating a banana is going to cure my flu, and sure enough, after eating a banana I feel better. Then you show me a peer-reviewed study that convinces me that eating the banana didn’t help, except as a placebo. That basically destroys the efficacy of bananas for me, right? Should you have shown me that study?

I actually struggled with this when my wife was post-term and looking for ways to get labor going. She tried some things that I viewed as ineffective but harmless, but I didn’t bring up my reservations (you do NOT aggravate someone at 44 weeks…)

The ethics of placebo treatments is something I’m also very interested in. I am considering taking a medical ethics elective at the end of my third year and this topic is something I’ve considered turning into that research project. To be sure, I wouldn’t advocate lying to patients by telling them I have prescribed a medicine when it was in fact only a sugar pill, but I have been considering the ethics in simply letting the patients find their own placebos. Provided, of course, that they do not eschew actual medicine in any dangerous way.

I personally remain skeptical of the value of a placebo effect. With regard to Stepan’s banana anecdote, it’s plainly post hoc ergo propter hoc – a temporal sequence that appears to have a causal basis but may just be coincidence. You might have eaten a banana and then improved simply because your health was going to improve anyway; unless you had something more serious, the banana did nothing.

I think what we attribute to placebo is not a real effect but random statistical variation. We remember the instances we want to, and ignore the data that don’t support our belief.

That’s why many retain a belief in CAM-woo. There is just enough apparent evidence that their therapies work – not because of any causal relationship, but merely because diseases sometimes go into remission. These aren’t miracles, and they certainly aren’t the result of magical potions, spiritual auras, or faith. They just happen, and eventually we’ll figure out why, then use that knowledge to further help others.

@Esther: I appreciate your comment – that’s what I’m going for with this blog, evidence based medicine for everyone :) Let me know if you have specific topics you find confusing; I’m always gathering topics to write about. Good luck with your PA program!

@Katherine: Thanks :) I’m trying to find good resources on critical thinking education myself.
The problem I’m finding is that the best resources are very specialized. That is, I see great resources for critical thinking applied to medicine, or applied to engineering, but much of the best educational material I’ve seen on the subject requires a lot of background knowledge in something else.

I’m not a parent, but from my anecdotal, completely unscientific personal experience (myself, and the patients I’ve worked with), being actively involved in your children’s education seems to be the best way to teach it. I’ll be doing my best to post any good examples I see, and filling as much of the gap as I can with my own meager additions.

So far, the best thing I’ve seen for educating adults is just making them aware of the basic issues. Once you’ve been made aware, the best way to progress is to constantly question your own actions – both in science and out of it, because critical thinking is a skill best practiced everywhere.

As far as critical thinking specifically applied to medicine is concerned, I’ll try to get a lot of that up here, especially once Hard Conversations V&A is done.

As for the commenters, you can take a denialist to water, but you can’t make them think. I just want the people who haven’t decided to have a good resource. The fact that I also provide a forum to show the true colors of the opposition is a bonus.

@Melody: That’s the kind of thought I’d like to come to mind with all of these posts :o). I’ve found your blog quite interesting, and I hope you don’t mind if I add you to my blogroll!

I agree with Michael – perhaps the banana/flu example is less relevant simply because it may not have involved a placebo effect at all.

However, there is ample evidence of a placebo effect in some areas; depression is an interesting one. In depression the placebo effect can be large and persistent.

So if we do believe in placebo effect, the ethics get pretty complicated.

I think the principle that’s at issue here is autonomy. Even if you pick the placebo treatment yourself, can you really give informed consent to a treatment without losing the very belief that produces the placebo effect?

The other issue is an argument I often find specious myself – the slippery slope. Once you legitimize not ruining patients’ “harmless” placebos, you may be in trouble if they later find a harmful alternative treatment.

I’m going to post on this one later too… the pile of future posts is getting larger every day. The base issue is that once you let some voodoo in the door, it’s hard to shut the door on the rest.

Refs on depression and the placebo effect:
“Meta-analysis of the placebo response in antidepressant trials” in Journal of Affective Disorders, doi:10.1016/j.jad.2009.01.029
“The persistence of the placebo response in antidepressant clinical trials” in Journal of Psychiatric Research, doi:10.1016/j.jpsychires.2007.10.004

Teacher librarians across the world are on a mission to increase information literacy (aka information fluency and critical thinking). It’s about not just finding the information you need but evaluating it and using it properly. Check out some information search processes like the Big6 at http://www.big6.com/ and Carol Kuhlthau’s model at http://www.scils.rutgers.edu/~kuhlthau/information_search_process.htm
If your kids’ school doesn’t seem to be doing this sort of stuff, maybe ask them why not. Or chat to their librarian or media specialist.

[…] A paper published last week in PLoS ONE by Mark Tanaka at the University of New South Wales and colleagues in the UK has prompted a number of great blog posts this week. In the article, From Traditional Medicine to Witchcraft: Why Medical Treatments Are Not Always Efficacious, the authors present a mathematical model that could explain why complementary medicines and purely superstitious remedies can, in some circumstances, spread through populations more quickly than treatments known to be effective. The following blogs all covered the study: Science-Based Medicine, CxLxMxRx, Epiphenom, Respectful Insolence and Beyond the Short Coat. […]