Cognitive therapy for people with a schizophrenia spectrum diagnosis not taking antipsychotic medication: an exploratory trial

This study, recently published in Psychological Medicine, marks a departure from previous CBT-for-psychosis studies in that it administered CBT to individuals who are unmedicated. Given the media attention it has attracted (All in the Mind), it is worth looking more closely at the study itself.

Morrison and 11 colleagues examined a small sample of 20 participants with schizophrenia spectrum disorders (in fact 18, one of whom did not complete the therapy, leaving 17). All were outpatients who had not been taking antipsychotic medication for at least 6 months. Morrison et al measured the impact of CBT on a symptomatic outcome measure, the Positive and Negative Syndrome Scale (PANSS), administered at baseline, 9 months (end of treatment) and 15 months (follow-up). Secondary outcomes were dimensions of hallucinations and delusions, self-rated recovery and social functioning. Rather than discard cases with missing data, the authors imputed the missing values in the analyses that follow.

Their main findings were stated as:

"significant beneficial effects on all primary and secondary outcomes at end of treatment and follow-up, with the exception of self-rated recovery at end of treatment. Cohen’s d effect sizes were moderate to large [for PANSS total, d=0.85, 95% confidence interval (CI) 0.32–1.35 at end of treatment; d=1.26, 95% CI 0.66–1.84 at follow-up]."
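A note on the metric quoted above: a within-group (pre-post) Cohen's d can be computed in more than one way; a common convention divides the mean change by the standard deviation of the baseline scores. Here is a minimal sketch of that convention, using made-up numbers (not the trial's data):

```python
# Illustrative within-group (pre-post) Cohen's d, with invented PANSS-style
# scores -- NOT the trial's data. This version divides the mean improvement
# by the standard deviation of the baseline scores.
from statistics import mean, stdev

def cohens_d_prepost(pre, post):
    """Mean improvement divided by the SD of baseline scores."""
    return (mean(pre) - mean(post)) / stdev(pre)

# Hypothetical totals for 8 participants at baseline and end of treatment.
pre  = [78, 85, 92, 70, 88, 95, 74, 81]
post = [70, 80, 85, 66, 79, 88, 72, 75]

d = cohens_d_prepost(pre, post)
print(round(d, 2))
```

Note that other conventions (e.g. dividing by the SD of the change scores) can give noticeably different values, which is one more reason uncontrolled pre-post effect sizes are hard to compare across studies.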

The authors conclude that the

"study provides preliminary evidence that CT is an acceptable and effective treatment for people with psychosis who choose not to take antipsychotic medication. An adequately powered randomized controlled trial is warranted."

Now, I do think this is important because some individuals may see this study and attendant media as a basis to decide not to take antipsychotic medication. I have no qualms about personal choice when based on evidence - so let us examine the evidence.

Joy Division: She's Lost Control

So, what does the study show? The answer lies in the design. It is a pretest-posttest design: there is no control group, so all analyses are within-group. Without a control group of any description, we cannot know whether any change is simply a generalised consequence of the added interaction (rather than anything specific to CBT). And without even a Treatment as Usual (TAU) control group, we cannot exclude the possibility that any change would have occurred regardless of therapy, e.g. through regression to the mean.
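Regression to the mean is easy to demonstrate. In this toy simulation (arbitrary numbers, nothing from the study), no treatment is given at all, yet a group selected for high baseline scores "improves" on retest simply because measurement noise does not repeat:

```python
# Toy simulation of regression to the mean: no intervention at all, yet a
# group selected for high baseline scores shows apparent improvement at
# retest. All numbers are arbitrary and purely illustrative.
import random

random.seed(1)

def noisy_score(true_score):
    # Each measurement = stable true score + independent measurement noise.
    return true_score + random.gauss(0, 10)

true_scores = [random.gauss(60, 10) for _ in range(10_000)]
baseline = [noisy_score(t) for t in true_scores]
retest   = [noisy_score(t) for t in true_scores]

# Select people who scored highly at baseline (e.g. symptomatic enough to enrol).
selected = [i for i, b in enumerate(baseline) if b > 75]
mean_pre  = sum(baseline[i] for i in selected) / len(selected)
mean_post = sum(retest[i]   for i in selected) / len(selected)

# The difference is positive: an apparent "improvement" with no treatment.
print(round(mean_pre - mean_post, 1))
```

Without a control group, a real trial has no way to separate an effect like this from a genuine treatment effect.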

Second, the trial is an open trial, i.e. the outcome measures were not taken blind. All participants were evaluated by members of the team involved in administering the trial. Lack of blinding is the biggest drawback in the evaluation of therapy and is well documented by the meta-analyses in this area (Lynch, Laws & McKenna 2010; Wykes et al 2008).

So, the study suffers from a host of threats to validity: a) regression to the mean, i.e. extreme scores at the start simply move toward the (lower) mean on second testing (with no control group we can't estimate or eliminate this); b) the Hawthorne effect, i.e. any change is due to the special circumstances patients find themselves in (again, with no control group, we can't eliminate this); c) lack of blinding, i.e. those measuring outcome were involved in the study and aware that all patients had received CBT. The authors do not provide detail on who did the assessments, but presumably it was some or all of the 8 therapists administering the CBT and/or those who designed the study.

Losing Control adds up!

To summarise - no control group and no blinding of outcome measures.

Can we see what happens in such designs with medicated patients? Below is a table from a meta-analysis by the CBT guru Aaron Beck looking at effect sizes for pretest-posttest analyses in medicated patients (Rector & Beck 2002; a paper deemed so 'good' the journal republished it in 2012). Morrison et al found d = 0.85 for overall symptoms; Rector and Beck report a much larger mean effect size of d = 1.31 for the pre-post comparison (CBT-RC). So the effect with unmedicated patients is substantially smaller - and consider the sizes in some individual studies here: Pinto's is almost three times that reported by Morrison et al!

This doesn't alter the fact that Morrison et al do report a substantial effect of 0.8+ in unmedicated patients. However, the second thing to note about the table above is the comparison of CBT with a so-called active control (ST-RC), i.e. a condition that controls for the generalised impact of simply interacting with another person, receiving attention and so on - here the controls are supportive therapy, befriending and the like. In pretest-posttest designs, an effect size of 0.63 emerges for something as simple as befriending - not much smaller than the effect Morrison et al attribute to CBT.

The final and key point though concerns the lack of blind evaluation. As Morrison et al note in their discussion:

"CBT for psychosis trials that attempt masking were reported to be associated with a reduction of effect sizes of nearly 60% (Wykes et al 2008)"

Actually, what Wykes et al say is

"There is a tendency for the unmasked studies to be overoptimistic about the effects of CBTp, with effect sizes of 50%–100% higher than those found in masked studies."

60% may be an average, but note the arithmetic: if unmasked effect sizes run 50-100% higher than masked ones, then masked effects are only one-half to two-thirds the size of unmasked ones - a substantial discount either way!

Crucially, the patients themselves may not feel better. At the end of the study, the patients rated themselves as experiencing no recovery (pre to post), although they did report a minor improvement at follow-up. In other words, the non-blind researchers perceived a much greater change than the patients themselves - approximately double the effect size!

To conclude: the use of a pretest-posttest design with no blinding means the authors are unable to draw conclusions about the efficacy of CBT in this study. The effect size is much smaller than in pretest-posttest comparisons with medicated patients. Of the 0.85 reported, how much is attributable to generic effects of interaction (and nothing to do with CBT)? It could be as much as 0.63. And if we then assume, following Morrison et al's own figure, that around 60% of what remains reflects the lack of blinding, what are we left with? An effect size of possibly less than 0.1 - in other words, (next to) nothing! And the patients themselves are not even convinced!
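To make the back-of-envelope arithmetic explicit - using only the figures quoted in this post, and purely as an illustration, not a formal re-analysis:

```python
# Back-of-envelope decomposition of the reported effect, using the figures
# quoted in the post: 0.85 overall (Morrison et al); 0.63 for generic
# supportive contact / befriending in pre-post designs (Rector & Beck); and
# the ~60% inflation from unmasked assessment cited by Morrison et al from
# Wykes et al. Purely illustrative arithmetic, not a formal analysis.
reported_d = 0.85
generic_contact_d = 0.63     # pre-post effect of supportive therapy / befriending
specific_d = reported_d - generic_contact_d

masking_discount = 0.60      # assumed share of the effect due to lack of blinding
residual_d = specific_d * (1 - masking_discount)

print(round(residual_d, 3))  # under these assumptions, well below 0.1
```

Whether these components can simply be subtracted and multiplied like this is of course itself an assumption, but it shows how quickly a "moderate to large" uncontrolled effect can shrink toward zero.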

So their conclusion that the "study provides preliminary evidence that CT is an acceptable and effective treatment for people with psychosis who choose not to take antipsychotic medication" seems unwarranted. Some might say, well, it's called 'exploratory' - but that is no excuse for a design that leaves the data uninterpretable and may even lead some people to change their behaviour (withdrawing from their medication). I also understand that an RCT with blind assessment is being conducted - indeed, much of the press around this study has been about that as-yet-incomplete trial: a bit cart-before-horse! But at the moment, no published data exist to show that CBT reduces psychotic symptoms in unmedicated individuals.

22 comments:

Keith, I have tried to have a go at responding to your points, both the explicit and implicit ones. I do not pretend to speak for my co-authors; I offer only my own opinion.

I would love to respond to your various statements on Twitter too, but you have currently blocked me, which means I can neither read nor respond. You called me a troll (and a stalker!) when I previously challenged you!

Anyway, here are my responses:

1. We sought media attention for the pilot study (implied point)

The media attention was for the ongoing RCT. My memory (which may be wrong) was that the pilot was hardly mentioned.

We clearly described evidence as preliminary and we clearly called for more work to be done. I think the paper makes a reasonable interpretation of the findings, and is appropriately cautious. I encourage readers of your blog to read the paper and judge for themselves:

http://tinyurl.com/8nx4ej9

3. Lack of control group, lack of blinding as threats to validity

I agree very much that the lack of a control group and lack of blinding are threats to interpretation. But if you are suggesting that we did not make this clear in our paper, then you are misrepresenting our paper. Again, I very much encourage readers of your blog to go and read the paper and compare to your account of it:

http://tinyurl.com/8nx4ej9

4. Our ES is smaller than some studies with people taking antipsychotics.

I find it entirely unsurprising that within-group effect sizes are larger in studies where people are also taking antipsychotics. Often in these trials, stability and medication adherence are inclusion criteria. My own view is that people who have decided not to take antipsychotic medication for at least 6 months are, as a group, probably more sceptical of receiving treatment generally. It is very important to note that we only worked with those who had already decided not to take antipsychotics, for whatever reason, and who had done so for at least 6 months. We wanted to see whether it was possible to engage this group in therapy. Most people told us it would be impossible, hence the importance of both our paper and our trial.

You are absolutely correct that meta-analyses conclude the effect size attributable to supposedly inactive approaches is large, and that the specific benefits of CBT are smaller in comparison. But the data-sets in meta-analyses are dominated by large trials with very short treatment windows. Many authors of various meta-analyses (whether carried out by advocates or not), and critics of CBT for psychosis, have given little consideration to this issue.

Whether a specific benefit of CBT for people who do not wish to take antipsychotics (and have not done so for at least 6 months) emerges in single-blind, large-scale RCTs using active control is unclear. Such a trial will cost several million to run. Funders will only invest in definitive trials if cumulative evidence from previous phase trials (Phase I-III) suggests it is a ‘good bet’, and, more importantly, that it is safe:

http://tinyurl.com/bmjgrfv

6. Improvement in self-rated recovery not as large as PANSS ratings.

The effect size for QPR scores is small to moderate at end-of-treatment, but large at follow-up. The effect size for PANSS total scores is much larger at both time-points. It might be that this reflects a difference between self and observer ratings. Or it might be that the self-report measure (QPR) is less sensitive to change than the much older and well-established PANSS. It is also very true that recovery is not the same thing as symptom improvement – many argue the latter does not necessarily equate to the former. Perhaps we have been focusing on the wrong things (symptoms, rather than recovery)? I don’t know the answers to these questions, but further examination would seem useful. We are using the QPR in our RCT, so we can start to examine this in more detail.

7. Not entitled to claim we have preliminary evidence of benefit / acceptability.

I respectfully disagree with this. Imagine we ran the pilot trial and found that everyone either failed to improve or got worse, or that everyone dropped out. Imagine we refused to publish this data, or published it and tried to argue the results were actually evidence of benefit, or attributable to some 3rd variable. People like you would be quite right to haul us over the coals. Our pilot results are preliminary, which we make perfectly clear in the paper, and that is why we are devoting so much of our careers to examining this in a more rigorous way.

Thanks for bringing this study to my attention. I would say that it is a pilot study with many limitations. However, as a pilot study it did demonstrate that they could do CBT in a difficult-to-engage group of patients, although the one patient I referred withdrew / did not complete assessment. I agree with you that the evidence that it is an effective treatment is weak, owing to the limitations, and it is to be hoped that the authors assume a much smaller effect size when doing the power calculation for the study size. I am a bit puzzled as to why they didn't assess treatment fidelity - it is a hassle, but it could still have been done. I have seen Paul speak and he is aware of psychotherapy trials' problems with lack of measurement of adverse effects. Maybe the authors could have been a bit more circumspect about claims of efficacy, but then it's part of the culture of researchers to over-egg things. As to patients taking the wrong message - well, that's always going to be a problem. Patients come and tell you they've researched on the Internet and they only need dietary modification, not meds, etc. Let's wait for the RCT. It's an important group of patients who often don't engage and refuse meds. Even the RCT is likely to involve a select group of patients. It would be an interesting comparison whether a drug would get funding for an RCT on the basis of a non-controlled, non-blinded pilot, but I suppose given the existence of a group of patients that are hard to treat and engage, it's worth giving it a shot. But beware the generalisability of a trial conducted by enthusiasts to other services.

Samei, thanks for your considered response. Re the limitations of studies and whether researchers are aware of them: most probably are (as here) and even acknowledge them (as here, in the paper itself). The question, however, is at what point the limitations make the work uninterpretable. In my opinion, this study passes that point - we cannot know what the outcome measures indicate because of the lack of methodological rigour. As I have said, if it were a study of drugs, or something as left-field as 'Dolphin Assisted Therapy', it would be dismissed for lack of adequate methods.

It is certainly the case that it's "part of the culture of researchers to over egg things" - a well-known phenomenon, and it explains why 10 years ago Beck & Rector (2002) had an effect size of 1.3+ for CBT for psychosis, while Til Wykes & co (2008) have 0.2 for high-quality studies and Lynch et al (2010) have zero for blinded studies. Effect sizes shrink with time because of over-claims arising from publication bias in favour of small positive samples, combined with poor methods in the earlier studies (though I would argue the latter still holds in many later CBT studies - like this one).

Had another thought about the pilot justifying an RCT. I suppose a small blinded controlled study is likely to show no difference unless the effect size is huge. So this type of trial demonstrates the treatment can be delivered and doesn't seem to worsen the condition, after which you need a proper RCT - a drug treatment would probably follow the same path. Maybe they could have used a waiting period before starting CBT, just to demonstrate people weren't getting better anyway, but with this patient group asking them to wait months for treatment is unfeasible, as they'd just drop out of the trial.
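The point that a small blinded study can only detect a huge effect can be made concrete with the standard normal-approximation sample-size formula (my sketch, assuming two-sided alpha = 0.05 and 80% power; nothing from the paper):

```python
# Rough per-group sample size for a two-arm trial, via the normal
# approximation: n ~= 2 * ((z_alpha + z_beta) / d) ** 2.
# Assumes two-sided alpha = 0.05 and 80% power; a crude sketch to show
# why small trials can only detect large effects.
z_alpha = 1.96   # critical value for two-sided 5% significance
z_beta  = 0.84   # critical value for 80% power

def n_per_group(d):
    return 2 * ((z_alpha + z_beta) / d) ** 2

for d in (0.2, 0.5, 0.85, 1.3):
    print(f"d = {d}: about {round(n_per_group(d))} per group")
```

So a blinded trial powered for the d = 0.2 typical of high-quality studies needs hundreds per arm, whereas one powered for this pilot's d = 0.85 needs only a couple of dozen - which is why assuming a smaller effect size in the power calculation matters so much.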

Advocates of CBT have not tended to examine the adverse effects of CBT (naturally), but here is one study documenting adverse effects of CBT in psychosis, recently published by Klingberg et al (2012): http://www.ncbi.nlm.nih.gov/pubmed/22759932

This is an extremely important and timely subject to be researching. It seems to me that the researchers went to great lengths to ensure that the findings not be construed as a suggestion that people cease medication. Antipsychotics often have horrible side effects and a questionable, biased evidence base. While many people benefit greatly from finding a treatment that suits them, for others the disabling side effects outweigh any symptom reduction, so stopping or reducing them is a reasoned and understandable response. Given this, it is vital we learn more about the alternatives to drug-based treatment, because as we all know there are simply no magic bullets when it comes to treating psychosis, and we desperately need options and choices. Yes, this study may have limitations, but it is quite clearly a pilot study designed to justify future, bigger studies.

It is a study that suggests CBT may help those who "choose not to be medicated". However, its design makes it uninterpretable. I am not saying the authors want to encourage people to stop medicating, but it may have this undesired effect - all based on a study with no discernible outcome in favour of CBT. If their RCT (with a better design) shows a clear effect, that may present quite a different scenario, and I would be the first to applaud it. The issue of whether it should be destined for the file drawer or a high-impact journal is therefore a matter of opinion. Nonetheless, it does go to show that low-quality studies (producing spuriously large effects) are readily published in this field... indeed, the overwhelming majority of CBT for psychosis studies have "inadequate methodology" (according to Wykes et al 2008), with only one-third of studies described by her (an advocate of CBT) as even adequate.

Actually, I would argue the lack of blinding is the major flaw - and acknowledging the flaws, which is not apparent in the paper's main take-home conclusions, does not make them go away, in my opinion. A lack of blinding and a lack of a control group make the study uninterpretable. Lack of blinding has been the major downfall of CBT for psychosis studies for a long time, yet such studies continue to appear - even advocates like Til Wykes focus on this in their meta-analysis (aside from ours, of course: Lynch et al).

Thanks very much for this excellent blog, which summarises in a very short space the main problems with the article publishing this study - methodological problems that are endemic in articles purporting to study the effectiveness of psychotherapy. The authors of Morrison et al. (2011) in Psychological Medicine could so easily have formulated their conclusion something like: "The study supports the idea of further study of CT as a treatment for people with psychosis who choose not to take antipsychotics; such a study is in progress at the moment. However, the limitations of this preliminary study are such that it is not possible to make any statement about its effectiveness or otherwise. It gives no warning against conducting a more extensive and costly study with full methodology."

That would have been very different from what they actually wrote, which, despite their qualifications in the main body, is I believe not at all supported by the evidence that they show.

However, they have a point that such a study is a useful preliminary to a bigger one. The outcome might have been much more negative, in which case a major study might have been contraindicated, or at least might have needed a different level of precautions to avoid harming the participants. It would have been useful and fair to bring out that the conclusions were overstated, but it was a very good thing that the study was conducted and published. Henry Strick

Thanks Henry - I agree very much. As you say, this study is no exception in the arena of psychotherapy research; it is a symptom of a more widespread malaise. As I mentioned above, Wykes reviewed the literature and said that two-thirds of published CBT for psychosis studies are 'not methodologically adequate'. The Wykes et al paper (free at the above link) is interesting, as it also clearly shows lack of blinding alone inflating effect sizes by as much as 50-100% (never mind the lack of a control group!).

If the 'exploratory' study got them funding to do a well-controlled study then, as I mentioned above, I look forward to seeing it when peer-reviewed and published (as opposed to prematurely aired on a radio show, or here in a paper that doesn't warrant that conclusion either). Whether it turns out positive or negative, I will blog on it.

Comments on method/interpretation share with the report of the trial an optimistic emphasis on evidence/proof etc. Such eagerness to adhere to a strictly scientific approach - and the optimism - sounds naive for this simple social-psychological experiment or, strictly speaking, experience. Both are flawed by their medicalising perspective and positivistic jargon. Assuming the same medical perspective, I can't help but wonder: how was "Psychosis Due To A General Medical Condition" ruled out?

Reliability of the studied construct is a prerequisite in any trial or experiment, so any evidence-oriented research on schizophrenia is bound to be flawed. Alternatives would be (a) focusing on a reliable (not necessarily categorical) construct if the research needs to be quantitative, or (b) starting with qualitative research if a heterogeneous construct has to be the research focus.


Cem - yes, the construct of 'schizophrenia' has many critics. Interestingly, almost all studies examining whether CBT impacts symptoms use the concept and diagnosis of schizophrenia, i.e. they don't tend to refer to mixed groups with psychosis, delusional disorder, hallucinations or whatever. Perhaps more interestingly, in the UK the NICE committee advocating CBT doesn't mention any issues with the concept of schizophrenia (and indeed uses this term throughout, rather than psychosis). By comparison, the NICE committee who deal with 'depression' explicitly state that "the most significant limitation is with the concept of depression itself." Why different NICE committees would think the concept of 'schizophrenia' is fine, but 'depression' is problematic... is a little odd! I wouldn't be surprised if one response to 'poor results' among CBT advocates is to argue explicitly that the concept of schizophrenia is too heterogeneous etc., and to turn to focusing on specific symptoms (hoping to garner evidence for specific symptom-related effects) - we will see.

My comment is only vaguely related to the above, but I am often struck by the difficulty of devising an appropriate control for trials of talking/behavioural treatments, and by the question of to what extent it is reasonable to see the results of Hawthorne/placebo effects as a legitimate part of a treatment's efficacy, with a 'real' positive effect on patients' health. Any thoughts from others on this would be appreciated.

Questionnaire scores can be influenced by patients feeling cared for, feeling that they are being given an effective treatment, and so on, but no one is impressed by homoeopathy trials which lead to subjectively reported improvements in this way. If there are not also improvements on more external outcome measures (levels of employment?), should these improvements be seen as 'real' improvements in health, as changes in the way patients view or talk about their symptoms, or maybe just as the result of patients trying to be polite or look on the bright side of things for the sake of a therapist they believe has tried to help them?

It might be useful to have a wider range of outcome measures, and perhaps some divide between those devising and carrying out a treatment, and those devising and carrying out assessments of efficacy (hopefully reducing the problems associated with being unable to run double-blind trials). I can't see a great and easy solution though.

Thanks Crumb. Your point about Hawthorne effects is important - far too many CBT for psychosis studies still lack any control group. Those that do have controls often lack an active control (to address the general Hawthorne-related effects of receiving attention etc.), which Treatment as Usual control conditions are assumed not to provide. Also, on the issue of blinding: again, I can only ask why so many CBT for psychosis studies are not blinded at outcome. 'Everybody', including CBT advocates such as Wykes et al (2008), knows it makes a big difference, i.e. assessors who are not blind to treatment condition massively inflate the size of the effects for CBT!

Thanks for the reply, but I think I might not have been quite clear enough previously, so pardon the repeated examples here.

One problem I have is in distinguishing helpful and worthless aspects of these sorts of non-specific effects.

Worthless eg: Patients feel cared for by a therapist they like who they believe is trying to help them get better, so at the end of treatment, they report positive improvements as a sign of gratitude/loyalty/encouragement. Or patients are told by an authority figure that thinking in a more positive manner about their control over symptoms is an important part of treatment, so after treatment, patients describe their symptoms more positively. Or patients are encouraged to believe that some action of their own will improve symptoms, so patients carrying out this action/ritual become invested in reporting improvements in symptoms to indicate their own success.

Valuable: Patients feel cared for by a therapist they like who they believe is trying to help them get better, so this reduces anxiety and sense of isolation, allows people to focus more upon the aspects of their lives they find fulfilling, and leads to some improvement in symptoms. Or patients are told by an authority figure that thinking in a more positive manner about their control over symptoms is an important part of treatment, and these positive cognitions serve to help break patterns of behaviour and thought which have left patients feeling trapped, and leads to some improvement in symptoms. Or patients are encouraged to believe that some action of their own will improve symptoms, so patients carrying out this action/ritual come to feel a greater sense of self-control and mastery over their own lives (and the action may itself be helpful) so symptoms improve.

With something like homoeopathy, these sorts of interactions would all be seen as subverting studies of efficacy, so they are (or should be) controlled for; but for cognitive and behavioural treatments they are often seen as a key component, and it seems to me that a lot of studies will struggle to measure 'real' efficacy when it is so difficult to distinguish between the worthless and valuable effects that they have. Certainly, having no control at all is ridiculous, but there also seems to be a real difficulty in developing meaningful control groups, especially when the effects of treatment are only small.