How well can psychotherapists and their clients judge from personal experience whether therapy has been effective? Not well at all, according to a paper by Scott Lilienfeld and his colleagues. The fear is that this can lead to the continued practice of ineffective, or even harmful, treatments.

The authors point out that, like the rest of us, clinicians are subject to four main biases that skew their ability to infer the effectiveness of their psychotherapeutic treatments. These include the mistaken belief that we see the world precisely as it is (naive realism), and our tendency to pursue evidence that backs our initial beliefs (the confirmation bias). The other two are illusory control and illusory correlations – thinking we have more control over events than we actually do, and assuming the factors we’re focused on are causally responsible for observed changes.

These features of human thought lead to several specific mistakes that psychotherapists and others commit when they make claims about the effectiveness of psychological therapies. Lilienfeld’s team call these mistakes “causes of spurious therapeutic effectiveness” or CSTEs for short. The authors have created a taxonomy of 26 CSTEs arranged into three categories.

The first category includes 15 mistakes that can lead to the perception that a client has improved when in fact they have not. These include palliative benefits (the client feels better about their symptoms without showing any tangible improvement); confusing insight with improvement (the client better understands their problems, but does not actually recover); and the therapist’s office error (confusing a client’s presentation in-session with their behaviour in everyday life).

The second category consists of errors that lead therapists and their clients to infer that symptom improvements were due to the therapy, and not some other factor, such as natural recovery that would have occurred anyway. Among these eight mistakes are a failure to recognise that many disorders are cyclical (periods of recovery interspersed with phases of more intense symptoms); ignoring the influence of events occurring outside of therapy, such as an improved relationship or job situation; and the influence of maturation (disorders seen in children and teens can fade as they develop).

The third and final category comprises errors that lead to the assumption that improvements are caused by unique features of a therapy, rather than by factors common to all therapies. Examples here include failing to recognise placebo effects (improvements stemming from expectations) and novelty effects (improvements due to initial enthusiasm).

To counter the many CSTEs, Lilienfeld’s group argue that we need to deploy rigorous research methods: using well-validated outcome measures, taking pre-treatment measures, blinding observers to treatment condition, conducting repeated measurements (thus reducing the biasing impact of irregular everyday life events), and using control groups that are exposed to the therapeutic factors common to all therapies, but not to those unique to the treatment approach under scrutiny.

“CSTEs underscore the pressing need to inculcate humility in clinicians, researchers, and students,” conclude Lilienfeld and his colleagues. “We are all prone to neglecting CSTEs, not because of a lack of intelligence but because of inherent limitations in human information processing. As a consequence, all mental health professionals and consumers should be sceptical of confident proclamations of treatment breakthroughs in the absence of rigorous outcome data.”