In an inflammatory new opinion piece for Newsweek, Sharon Begley says, 'Hell yeah!' - "It's a good thing couches are too heavy to throw, because the fight brewing among therapists is getting ugly. For years, psychologists who conduct research have lamented what they see as an antiscience bias among clinicians, who treat patients. But now the gloves have come off."

I for one have begun pumping iron to improve my couch-hurling abilities in preparation for the upcoming sofa melee!

(I made that picture myself!)

Ms. Begley is talking about a new article, set to appear in the journal Psychological Science in the Public Interest, in which Timothy Baker, Richard McFall, and Varda Shoham argue that too many practicing clinical and counseling psychologists ignore the huge amount of research identifying successful and effective ways to do therapy. This debate has been around for a while, and it raises hackles on both sides. I think it's a matter of values, and as is the case in most battles over values, there is probably not an easy solution. At the same time, this debate is absolutely critical for the health of the field and the economic and psychological health of our populace!

Here is how Baker and colleagues begin their article (currently available only in an 'in press' form here): "The principal goals of clinical psychology are to generate knowledge based on scientifically valid evidence and to apply this knowledge to the optimal improvement of mental and behavioral health." This highlights the first values conflict:

♦ Is the goal of clinical and counseling psychology to create knowledge through research and translate that into helping people? OR is the goal to try to help people and later use research to understand how (or indeed whether) that therapy works?

People whose values lie in the first half of that question say, given that we can verify that several empirically supported treatment options exist, why choose an untested product?

People whose values lie in the second half of that question say, given that the therapies being tested were generally drawn from the experience and experimentation of clinicians in the first place, why should we wait for research to ratify each and every approach (whenever it gets around to it)?

There is a second value at work as well:

♦ Can therapy be dismantled, with critical elements isolated, delivered in calibrated doses, and effects reliably measured against meaningful comparisons with other approaches, then reconstructed and implemented by clinicians? OR is the interplay of myriad client and therapist characteristics and behaviors, across countless discrete interactions, in attempts to address multiple co-existing disorders and concerns, too dynamic and complex to be replicated within a laboratory setting?

Simply put, some people think that empirical approaches to testing therapy can inform us about what works best, and some people don't.

Why do I think this is a values issue? Well, in addition to using war metaphors (fight brewing, gloves come off), it is common to see people tackle this issue by creating "straw men," or absurd and extreme examples of their opponents, to attack. For example, Ms. Begley insinuates that millions of psychologists use ridiculous-sounding approaches like dolphin-assisted therapy with their clients. As awesome as that sounds, it is absurdly untrue (where am I going to get a dolphin in Fort Collins, Colorado?). On the other side of the "battle" I frequently hear people assert that researchers are trying to turn therapy into a cookbook-driven series of tricks that a monkey, robot, or child could perform. This is obviously absurd as well.

All of the rhetorical histrionics that this issue attracts distract from the real issue: How can we show that we give our clients effective services? Following closely on the heels of this question is: How can our clients and consumers assure themselves that they are getting effective services?

This reminds me of a fascinating Malcolm Gladwell article, in which he describes "The Quarterback Problem." Indulge me, if you would, in a football sidebar (after all, my 2009 Minnesota Vikings team is clearly the greatest team since the 1972 Dolphins!). Essentially, the quarterback problem is that it is insanely hard to figure out which college quarterback will be a great NFL quarterback. Scouts, coaches, personnel directors, managers, and media figures spend thousands of hours poring over statistics, videotapes, and game performances for practically every eligible college quarterback. The result of their mind-boggling time investment is that no one has any idea who will be an NFL MVP and who will be a pitiable bust. As Mr. Gladwell frames it, the rub lies in the incredible increase in complexity and speed of the pro game compared to the college game. Although they're both playing football, it's not really the same game.

In some ways, although scientists are "doing" therapy in their research, it's not necessarily the same game that practitioners are playing with their clients.

Some people take this notion and run with it, maintaining that research can't tell us which therapies are effective and which are not. That's ridiculous, of course. Even the worst scout for the worst NFL team doesn't tell the team to draft a punter, offensive tackle, or unicorn. They're pretty good at ruling out awful, and even mediocre, talent. Occasionally, an undrafted QB makes a big splash (Kurt Warner and Tony Romo come to mind), but the system - as riddled as it is with disconnects between the performance it's assessing and the performance it's trying to predict - does a brutally good job of getting rid of junk.

You can add to the quarterback problem another complicating factor. When researchers compare bona fide therapies - in other words, therapies most professionals would expect to work - it is fairly uncommon to find notable differences in the outcomes clients achieve. That is to say, the large majority of people report that therapy helps reduce their distress, and research often enough finds that specific approaches to therapy yield similarly good results (typically better than medications). Researchers like Bruce Wampold argue that this is explained by factors that are common to successful therapy ("common factors"), like establishing a good working relationship and the degree to which clients are actively engaged in their own healing.

Unfortunately, for too many practitioners, the values of help-first-research-later and it's-too-complex combine with research showing a good degree of equivalence among therapy approaches to provide an excuse to simply do whatever they feel like doing. I think it's a pretty small number of people, though - not really the "unconscionable embarrassment" Walter Mischel labels it. After all, in contrast to the silly assertion that Ms. Begley makes about clinical psychologists not being exposed to science training, current accreditation standards for clinical and counseling psychologists require significant scientific training. There are some reasons to wonder whether developing a new accrediting board, as Baker and colleagues promote, would be a more efficient way to develop better therapists than pressuring existing boards to support scientific training more rigorously (see one opinion on this here). Although the criteria they lay out are personally appealing to me, many programs already do an excellent job within the existing system.

For example, in our counseling psychology program at Colorado State University, students are required to complete multiple research methods and statistics courses and to conduct empirical thesis and dissertation research projects, among other grounding courses and experiences in the science of psychology.

I teach a course to every one of our doctoral students specifically focused on empirically supported treatments and evidence based practice. All of our students learn what works, the basics of how to implement those approaches, and how to critically consume and integrate findings from emerging research.

However, all these great things come with some boulders of salt.

First, it is far too rare for psychotherapists to evaluate their own effectiveness as therapists. No matter what one's values or how persuaded one is by the common factors debate, there really is no excuse for not using the tools of science to evaluate whether one's clients are getting better!
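One concrete way to do this, without a lab or a grant, is routine outcome monitoring: give clients a standardized symptom measure at intake and periodically thereafter, and check whether their changes exceed what measurement error alone could produce. Here is a minimal sketch in Python of the Jacobson-Truax reliable change index, one common tool for exactly this; the instrument statistics and scores below are invented purely for illustration, not drawn from any particular measure:

```python
import math

def reliable_change_index(pre_score, post_score, sd_baseline, reliability):
    """Jacobson-Truax reliable change index (RCI).

    An RCI beyond +/-1.96 suggests the pre-to-post change is unlikely
    to be explained by measurement error alone (roughly p < .05).
    """
    # Standard error of measurement for the instrument
    sem = sd_baseline * math.sqrt(1 - reliability)
    # Standard error of the difference between two administrations
    s_diff = math.sqrt(2 * sem ** 2)
    return (post_score - pre_score) / s_diff

# Hypothetical example: a symptom inventory where lower scores are better,
# with a normative SD of 7.5 and test-retest reliability of .88 (made-up values).
rci = reliable_change_index(pre_score=28, post_score=15,
                            sd_baseline=7.5, reliability=0.88)
print(f"RCI = {rci:.2f}")  # well beyond -1.96, so a reliable improvement
```

A spreadsheet works just as well; the point is that the arithmetic for checking one's own outcomes is within any practitioner's reach.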

Second, I think it is clear by now that we have amassed a convincing amount of empirical evidence that does, in fact, support using some specific therapies. Training-to-competency in these already-identified approaches should be mandatory, in my mind. I am not convinced, however, that the evidence is solid enough that today's list contains the only therapies psychologists should be allowed to use - after all, that list is ever-growing, and it contains significant gaps in what we know about treating people with certain specific disorders or with multiple disorders, about potentially important cultural or level-of-functioning differences, and about serving people across the lifespan. Psychologists striving to treat difficult cases often need to improvise and innovate based on their expertise and experience, and often the results benefit us as a field.

Advancing the effectiveness of psychotherapy is absolutely critical, and central to researchers, clinicians, and the people they serve. Doing the therapy that comes most easily, regardless of whether there's any evidence that it works, can do real harm. Regarding anyone who sees some ambiguity or gaps in what we know as an angry, irrational Luddite can do real harm, too. Both sides need to consider who receives this harm, though. It's not us psychologists; it's the people we serve.

I just finished my internship at a VA and now have a job as a staff psychologist there. The VA, as everyone probably knows, is really big into empirically supported treatments. I think a big reason for this is the payment system. When patients aren't paying for sessions, they are more than happy to settle into "how was your week" therapy rather than doing the more uncomfortable work of, say, exposure therapy for their PTSD. And the VA, in an attempt at efficient resource usage, discourages this heartily. I wonder whether it is perhaps too easy to do "supportive psychotherapy" indefinitely if a person is in private practice and the client is willing to pay. If the client is happy with the service they're receiving, I don't know if this is really a problem (though this can surely be argued about).

There's also the issue that the ESTs we use (at the VA, at least) are very much about reducing the symptoms of some particular disorder, not so much about working toward a more holistic idea of health, which I suspect Mike addresses in his blog above (which I haven't fully read yet). I'm struck, though, by the general feeling I have working at the VA that health is a luxury a veteran's service doesn't earn them, but relief from symptoms is something the government is willing to pay for.

Dear Serena, thank you for your comment. I agree that we don't pay enough attention to increasing the positive as well as decreasing the negative. Unfortunately, I didn't get into that issue in this blog entry, but it's on deck for future columns.

Thanks again for your comment, and thanks for sharing your experience at the VA.

...for articulating what I've been struggling to get my head around for far too long!

After too many years (decades!) of "how was your week" therapy sessions, I've finally taken the bull by the horns. I realized that the therapists in my vicinity weren't going to get any more rigorous about setting and achieving treatment goals, so I needed to do it myself.

I took a page out of my business experience and created my own roadmap for a) overcoming those things that block my progress toward emotional and psychic wellness; and b) leveraging my strengths so that I can create a life that achieves all those fulfillment states (like "Flow" and "Authentic Happiness") that research is finally starting to understand.

I'm fortunate to have the intellectual ability to do this. But I remain disappointed by the state of the profession, and I just can't understand why clinicians don't seem to be more goal (and measurement!) driven.

Thanks for your comment. It's a real treat to get a client motivated enough to create their own treatment plan! In the end, our job is serving the people who need us, and we need to do our best to provide the greatest benefit at the lowest cost and risk. Heavily goal-oriented and highly organized approaches work for many, many people at many phases of therapy. Because of the support that exists for many of these approaches, I do feel that we should all know how to provide those services. Yet we also need to be able to use our expertise and access to knowledgeable consultants to adapt to each individual client. It's a lot to ask, but our love of helping people is why we do what we do.

I also agree completely that we can do a better job of helping people use the best in them to create meaningful lives. Please look for future columns touching on exactly these issues.

Just want to share an alternative, scientific method with folks who suffer from depression and for whom antidepressants were not effective.

On October 8, 2008, the FDA cleared the TMS (Transcranial Magnetic Stimulation) brain-stimulating device from the NeuroStar company for treating depressed adults for whom one antidepressant has failed to work.

Although there is a logical difference between knowledge values and treatment-effectiveness values, in practice the relationship between them is close enough that debating the relative importance of each is a distraction from the more important practical question: "How should I conduct my practice to effectively improve both?" In particular, the better our scientific knowledge about psychology, the more effective treatments can be; and the more effectively one treats patients, the greater the potential to develop scientific knowledge about effective treatment.

I'm a research engineer (and have formerly taught philosophy of science), and I run into this apparent conflict on a daily basis. How much effort should I spend on getting this particular job done, and how much effort should I spend on trying to understand the basic scientific principles underlying the problem in this job? The apparent conflict is dissolved when I realise that, ultimately, the purpose of improved scientific understanding is so that I, and my professional colleagues, can perform similar tasks in the future more effectively. And if I have an idea that seems to work in practice to get the job done but don't yet know why it works, this provides a good opportunity to add relevant scientific knowledge to the communal corpus. I don't need every decision to be settled by scientific data - sometimes collecting the data and doing the scientific analysis is more costly than using professional intuition (i.e., "guessing").

So I think that the debate is a costly distraction. You don't have the most effective treatment without science and you don't have very relevant science without the empirical case studies provided by successful treatment.

I agree with you that the conflict is not productive, and that the middle ground is formed from a dialogue between scientific knowledge and practical knowledge. I could be wrong that the reason this dialogue so often breaks down into pitched battles is a conflict of values, but I don't think so. Either way, as much as I agree that the scenario you described is ideal (and I'd add that it is not practical for someone dealing with applied problems on a daily basis to learn every empirically supported manualized treatment), I don't agree that the values conflict is not a problem. It's a hurdle that we, as psychologists, should be ideally suited to overcoming. But so far we haven't.

Sorry if this ends up being a repeat comment - I attempted to submit and got an error message indicating that the site was being updated so I am attempting to recreate what I wrote.

Anyway, thank you for writing such a thoughtful commentary on this topic. Listservs and other forums have been abuzz with much more polarizing thoughts since the Newsweek and Baker, McFall, and Shoham articles came out (and well before then for some). That being said, it was nice to see a much more even-handed approach to the debate.

Personally, I stand firmly on the side of the debate that favors using research findings to inform practice decisions, with the caveat that efficacy trials should be only step one of a process that also involves effectiveness trials and vigorous investigations of possible mediators and moderators of treatment outcomes. My personal stance on the matter, however, is no more meaningful than anyone else's. In reality, it comes down to the data. If clinicians who eschew ESTs believe that their approach (be it an eclectic mix of skills or something bizarre like dolphin-assisted therapy) produces better results than EST adherence does, why not test the hypothesis? It would be valuable to test for between-groups differences on outcome measures decided upon a priori, in order to see if such individuals can substantiate their claims.
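For what it's worth, the between-groups test being described is not exotic. Here is a minimal sketch in Python, assuming two groups measured post-treatment on a single outcome chosen a priori; all scores and group labels are invented for illustration, and a real trial would also need randomization, adequate power, and blinded assessment:

```python
import numpy as np
from scipy import stats

# Hypothetical post-treatment scores on one a priori outcome measure
# (lower = less distress); the numbers here are made up for illustration.
est_group = np.array([12, 15, 9, 14, 11, 16, 10, 13, 12, 15])
eclectic_group = np.array([14, 17, 12, 16, 13, 18, 15, 14, 16, 17])

# Independent-samples t-test (Welch's version, not assuming equal variances)
t, p = stats.ttest_ind(est_group, eclectic_group, equal_var=False)

# Cohen's d using the pooled standard deviation, for effect size
pooled_sd = np.sqrt((est_group.var(ddof=1) + eclectic_group.var(ddof=1)) / 2)
d = (est_group.mean() - eclectic_group.mean()) / pooled_sd

print(f"t = {t:.2f}, p = {p:.3f}, Cohen's d = {d:.2f}")
```

If an eclectic approach really does outperform EST adherence, a comparison this simple is all it would take to start showing it.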

Anyway, I again wanted to thank you for your approach to the topic. My wife and I have covered it quite extensively as well on Psychotherapy Brown Bag and the articles have prompted a fairly lively exchange in the comment section. I would be curious to hear your thoughts on those exchanges.