Previous studies have found that length of treatment isn’t correlated with improvement in therapy in the way we might expect. If we think about therapy as having a ‘dose-response’ effect, we might expect that the higher the dose (the more sessions of therapy) the better the effect.

Alternatively, however, therapy could operate under a ‘responsive regulation’ model, in which individual patients choose the right number of sessions for them. Some people get better sooner than others, so the number of sessions doesn’t necessarily correlate with improvement.

The authors of a recent study (Stiles et al., 2015) point out that this finding could have important implications for policy decisions about prescribing a set number of therapy sessions. They also argue that although this ‘responsive regulation’ has been observed already, it needs to be ‘reproduced’, i.e. the same effect needs to be observed across multiple settings, before we can be sure of it. For this reason, they looked at data from a large national database in the UK that included not just primary care but also secondary care, university and workplace counselling centres, and the voluntary sector.

The ‘responsive regulation’ model of therapy is where individual patients choose the right number of sessions for them.

Methods

The database they used is called the CORE National Research Database. It includes patients treated by 1,450 therapists across 50 services in the UK over a 12-year period.

The CORE-OM assessment form is completed at the start and end of treatment. It’s a 34-item self-report questionnaire assessing subjective well-being, symptoms, functioning and risk. Scores range from 0–40, with 10 being the established cut-off indicating a clinical problem (though the CORE-OM does not diagnose specific problems).

All patients aged 16–95 who had a CORE-OM score of 10 or more at the start of treatment, who completed pre- and post-treatment CORE-OM forms, and who were described by the therapist as having had a planned ending were included. A ‘planned ending’ refers to the patient and therapist agreeing to end therapy, or therapy ending once a pre-agreed endpoint had been reached.

49,618 patients were excluded because they had no post-treatment CORE-OM form. After other exclusions were applied, the final included sample was 26,430 patients.

The authors looked at recovery in relation to the number of sessions patients had. They defined ‘recovery’ as being a decrease in the CORE-OM of at least 4.5 points AND the total score being below 10 after therapy.
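To make that two-part definition concrete, here’s a minimal sketch in Python (the function name and structure are my own illustration, not anything from the paper):

```python
def recovered(pre_score: float, post_score: float) -> bool:
    """Apply the study's two recovery criteria to pre/post CORE-OM scores.

    A patient counts as 'recovered' only if BOTH hold:
      1. their score dropped by at least 4.5 points, and
      2. their post-treatment score is below the clinical cut-off of 10.
    """
    reliable_improvement = (pre_score - post_score) >= 4.5
    below_clinical_cutoff = post_score < 10
    return reliable_improvement and below_clinical_cutoff

# Examples:
print(recovered(20, 8))   # True: dropped 12 points and finished below 10
print(recovered(12, 9))   # False: finished below 10 but only dropped 3 points
print(recovered(25, 14))  # False: large drop but still above the cut-off
```

The AND matters: as the Results section notes, many patients met the improvement criterion without also finishing below the clinical cut-off, so they did not count as recovered.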

Results

60% of patients met both criteria for recovery (nearly 80% achieved a reduction of 4.5 points or more, but not all of these also finished therapy with a total score below 10).

There was a trend for patients who attended fewer sessions to be more likely to have recovered.

Patients with a higher starting CORE-OM score tended to have more sessions than those with a lower score. This suggests the number of sessions reflected the level of need at the start of treatment.

There was a trend for patients who attended fewer sessions to be more likely to have recovered.

Conclusions

The authors concluded:

Patients averaged similar gains regardless of treatment duration… Finding that patients seen for many sessions average no greater improvement than patients seen for few sessions may seem surprising if treatment duration is considered as a planned intervention …but it may seem more plausible if patients and therapists are considered as monitoring improvement and adjusting treatment duration to fit emerging requirements, responsively ending treatment when improvement reaches a satisfactory level, given available resources and constraints. We call this ‘responsive regulation’.

I’m going to guess that some people will have a ‘Well, duh.’ response to this. People who decide to finish therapy are those who have gotten better! People who feel worse at the start tend to have more sessions whereas less ill people get better more quickly! Holmes, you astound me.

But actually, I think this is a neat finding. It demonstrates quite convincingly (using a large naturalistic sample from multiple sectors) that flexibility in therapy is effective. Rather than requiring a prescribed number of sessions, patients and therapists can work responsively, with the ‘right’ number of sessions being right for the individual rather than something we can quantify across the board. The authors further suggest that “Allowing greater scope for responsive regulation might yield still greater efficiencies”, for example allowing patients to schedule appointments in a way that suits them, which might not be weekly. This all points to a system that responds to patient need, rather than expecting patients to fit into a pre-ordained template.

Although I find the overall finding interesting and useful, it’s hard to ignore, however, that the largest group in the study were those who didn’t complete a form post-treatment, so we don’t know what happened to them. The authors do discuss this briefly, noting that the findings do not apply to ‘non-completers’. They suggest (rather optimistically) that these patients may have left early because they found help elsewhere or felt they’d achieved their goals, but acknowledge that “patients who did not return to complete post-session measures seem likely to have made smaller gains than had patients who did return.”

I think it’s important here though not to veer into criticising the study for what it didn’t set out to answer. The authors wanted to see whether the ‘responsive regulation’ observation held across patients who completed therapy in different services, using a naturalistic sample, and it did.

I think it’s essential that we don’t ignore those people who didn’t officially complete their therapy and work out why they left and whether they are still in need. But that wasn’t what this paper or method aimed to do. There is a lot unanswered here, but I think the authors were clear about what they intended to do and on the whole did it well.

Patients who did not complete a post-treatment form were excluded from the results, which may have skewed the findings.

Limitations

The Big One is the missing 49,618 patients with no post-treatment measure, but as I said above, in this case the authors aren’t making any claims to know what happened to those patients using this approach. Clearly though there is more work to be done to find out what happened to these patients.

Therapist characteristics aren’t recorded on the CORE, and I’d be curious to see a sensitivity analysis that took therapist features into account (such as professional background, years of experience, or number of patients being seen) to see whether the effect holds.

87.4% of patients were white: a reminder of the well-recognised problem that minority groups are less likely to access therapy despite similar or greater levels of need.

‘Recovery’ in mental health is a contentious term, and I’m sure some people would take issue with the paper for using a standardised quantitative measure to judge it. Again though, I’d argue that for this paper it’s a sound choice, enabling comparisons across a large dataset using a well-validated measure that is commonly employed in practice. It’s also worth noting that the authors use the finding to highlight that therapists and patients need to work out together what constitutes a ‘good enough’ gain for them, again shifting the focus from prescriptive guidelines toward negotiation between therapist and patient about what is best for them.

Finally, not a limitation but an observation: As a trials geek, I find it interesting to look at naturalistic studies like this and compare them to what we see in controlled trials. Two things stick out:

Firstly, the variety of problems that patients present with (anxiety, depression, low self-esteem, bereavement, trauma, addiction and more, noting that most patients presented with multiple problems).

Secondly, the variety of therapeutic models employed, with ‘integrative’ (suggesting ‘bits of whatever worked’) being the most common.

Compare this to controlled trials, which will usually look at a very specific patient group and test a specific type of therapy. Are trials measuring effectiveness in situations that simply don’t translate to real-world care?

Real people in the real world are complex and hard to compartmentalise.

Sarah is a Research Fellow with the NIHR Collaboration for Leadership in Applied Health Research and Care (CLAHRC) Greater Manchester at the University of Manchester. She is a health researcher with a particular focus on evaluating mental health treatments and services. She works on a variety of randomised controlled trials, systematic reviews and qualitative studies. Her main research interests are implementation research, e-health and mental health technologies, co-morbidity of mental and physical health problems, moderators of treatment effects and patient and public involvement in research.