Why I Don’t Read Empirical Studies About Psychotherapy

The analogy between anxiety and scurvy does not stand up to scrutiny.

As psychology bends over to scoop up the dollars that manage to slip by pharmaceutical companies and health insurers, the necessity of entry into the medical establishment for access to that money has led to a call for “evidence-based treatment” and “empirically-supported therapies.” Responsiveness to evidence was the great leap forward in medicine. Many people thought, for example, that consuming citrus fruit would prevent scurvy, but it was not until 1747, in the first ever clinical trial, that this was proven. The field of medicine being populated by, well, people, it took several decades for the idea of an evidence-based approach to catch on. A practitioner could avoid all sorts of trouble by doing what Aristotle or some other luminary said to do, as opposed to worrying about what works. In 1799, George Washington’s physicians bled him to death because, even though he was obviously suffering from the lack of blood, it was the accepted treatment.

Unfortunately for those who prefer simple treatments, psychological disorders are rarely simple. The analogy between depression and a virus, or between anxiety and a strain of bacteria, does not stand up to scrutiny; every depression is different and context-specific. Clinical trials for psychological disorders cannot begin by infecting randomly selected people with the disorders, a crucial step in the proof that penicillin works, and it is impossible to construct a double-blind study in which the therapists don't know what sort of treatment they are actually providing. Outcome measures in medical research involve, for example, inspecting blood samples for the presence of the virus or scans for the sizes of tumors. In psychology, outcome studies typically involve asking patients if they feel better, disregarding any motivations the patients might have to answer one way or another.

Most published studies in the social sciences are simply incorrect (Ioannidis, 2005). The reasons have to do in large part with Bayesian probabilities, in that unlikely findings are more publishable, and low pre-existing base rates work against the findings standing up over time. Publishability also underlies other sources of error, especially flexibility in research designs (since researchers are not as careful as they might be if their career goals could wait for them to get it right), researching hot topics (since a larger number of studies is likely to create more false positives), and bias (since positive findings enhance the researcher's career in a way that negative findings do not). To this list, Shedler (2002) adds that clinical researchers are pressured by tenure considerations and then by granting agencies to get results quickly, leading to an examination of shorter and easily coded treatments. He also notes the hostility that researchers often feel toward clinicians who are trained and experienced in ways the researchers are not. How many clinical researchers have put in the 10,000 hours with feedback that it takes to get really good at something? Many researchers respond by scoffing at the idea that there is anything to get good at.

Researchers and even clinicians get recognition for developing something new (usually referred to by a three-letter acronym) rather than for refining what is already known. A good example is the behavioristic development of FAP (functional analytic psychotherapy), which is an exact replica of the psychoanalytic therapy of the 1970s; it disdains psychoanalytic thinking and has to wait for behaviorists to discover on their own what is already known about transference, framing, and intersubjectivity. Research on FAP outcomes will imply that psychoanalytic treatment is not evidence based, even though it will be psychoanalytic principles (in different clothing) that this research supports. (I'm a behaviorist, by the way.)

One of the strangest perversions arising from applying a medical model of validation to psychological treatments is the insistence that empirically-supported treatments follow a manual. Presumably, this is to ensure that the treatment claiming validation is the treatment that was actually provided. The idea is that the therapist's knowledge of psychology, wisdom about life, and empathy for the client are all irrelevant, even though surely these are the three qualities you'd most want in your own therapist.

The push for an “evidence base,” inappropriate to the kinds of problems psychologists address, also leads to a conformity of practice, turning us from chefs into sous-chefs, following treatment regimens as if they were recipes. The danger is that we will lose our ability to innovate and to base our work on understanding rather than on manuals. To remind yourself of the fallibility of research evidence, you might keep in mind that the only treatment relevant to our field that has won a Nobel Prize is the lobotomy.

Which, on the flip side, opens the door to any kind of therapy being "valid" and "standard of care," no matter how insanely wacky, because you would insist there is no standard of care. Evidence-based studies are difficult in just about ANY field, and more difficult in psychology, I'll give you that, but calling them inappropriate shows a lack of desire to approach this scientifically and an anachronistic mindset where you take your classical education and then just do your own thing because it seems right to you.

Not at all. The answer is to find therapies that work (with more sophisticated outcome measures than a depression checklist) and to find out what made them work. This kind of research produced what we know about the working alliance, in-office signs of improvement, and APA's recent statement (in direct contradiction to its push for ESTs) that psychotherapy has demonstrated efficacy.

I appreciate the previous commenter's argument and agree that, in some respects, we know that certain things work well for certain types of disorders. For example, exposure therapy for OCD. However, even in that realm, every person with OCD manifests it in a different way, in a different context, and with the symptoms meaning and representing different things to them. So even there it's not completely clear-cut.

I also sometimes wonder about the flip side of this argument, as the previous commenter points out, which is: how do we make the case that what we are doing is working? How is the therapy I'm doing with someone better than crystal healing or past-life recall therapy (if there is such a thing)? It's a question that I'm often plagued with during times of doubt and, while I don't agree with the push for "evidence based" for the reasons Michael points out, I also find myself taking comfort in it when I feel insecure about my work.

There's plenty of evidence for psychotherapy as a whole without resorting to the stupidity of manualized treatments. Once you realize that, then I suggest looking for evidence from the patient to shape your behavior in sessions. In the same way, there's not much doubt that parenting is better than no parenting, and that parenting without abuse, neglect, or spoiling is better than parenting with those lapses; but after that, parenting has to be tailored to feedback cues from children, and those cues must not be conscious preferences on the part of the child but signs that the child is doing better. Establishing reliability about the interpretation of such signs is difficult but not insurmountable, and it's the *right* problem to investigate.

Michael - thanks for the response which clarifies things for me. I like the parenting analogy - we can make broad statements about parenting but when it comes to how to respond to specific behaviors, we need to know the context, the child, and many other factors to know what is the "right" thing to do.