Here in the woodland we have a keen interest in the next generation of elves and what can be done to educate them about their own and their peers’ mental health. Classrooms and schools are natural environments for educational interventions of this kind, but the evidence for them is still thin on the ground.

The authors of the paper (Perry et al, 2014) state that young people diagnosed with a mental illness are very reluctant to seek professional help, citing the fact that only one in four Australians aged 16-24 accessed mental health services in the year immediately before the study. Lack of knowledge about mental health was found to be a key obstacle preventing young people from seeking professional help; hence the need for research on educational interventions that improve mental health literacy.

This recent RCT investigated the impact of the HeadStrong mental health literacy intervention on the mental health literacy, stigma, help-seeking and mental health of young people in Australia.

Methods

The study was a cluster randomised trial, conducted in Australia, of a mental health literacy intervention programme called HeadStrong. The sample comprised 5 Catholic and 5 independent schools, and the participants were secondary school students in Year 9 or 10, aged 13-16 years (mean age 14.75). At the start of the trial, 5 schools were randomised to the control condition and 5 to the intervention condition. Pre-intervention assessments were completed by 207 participants in the intervention arm and 173 in the control arm, giving a total of 380 participants at baseline.

The intervention was delivered in classrooms to the intervention group. The control group received regular Personal Development, Health and Physical Education (PDHPE) classes. The intervention was delivered by the adolescents’ PDHPE teachers after they had been trained in delivering HeadStrong.

The intervention consists of five modules covering:

Mental health and wellbeing

Mood disorders

Helping others who are experiencing mental health problems

Helping yourself

Making a difference in the wider community

The intervention is designed to be delivered over 5-8 weeks, taking around 10 hours of total class time. In this trial, it was delivered over the course of Term 1 of the school year.

The primary outcome was mental health literacy. The researchers also measured help-seeking behaviour and stigma, along with the adolescents’ level of psychological distress and suicidal ideation as indicators of the state of their mental health.

The primary outcome, mental health literacy, was measured using a version of the Depression Literacy Scale modified for adolescents. The other outcomes were also measured with questionnaires: the personal stigma subscale of the Depression Stigma Scale measured the adolescents’ personal stigma (perceived stigma was not measured), and the Inventory of Attitudes towards Seeking Mental Health Services measured their attitudes to help-seeking. In addition, the Depression Anxiety and Stress Scales and an adapted version of the Moods and Feelings Questionnaire were used to measure psychological distress and suicidal ideation. All questionnaires were administered at baseline, immediately post-intervention, and six months after the intervention had finished.

Sensibly, while the intervention was delivered at class level, randomisation occurred at school level to minimise the chance of cross-contamination. This raises some questions about the comparability of the groups at baseline, which is not very well reported in the paper. It also needs to be pointed out that this was the first stage of a three-stage trial, so the sample size is only a third of the full trial size. Findings should be interpreted with that in mind.

All participants were properly accounted for at the conclusion of the trial, but a substantial amount of data is missing: 380 participants completed questionnaires at baseline, but only 322 did so immediately post-intervention and just 208 at six months. This effectively makes the six-month follow-up data meaningless, as it is only available for 55% of the original trial participants. We are told that this was because some schools simply did not return the students’ questionnaires to the researchers; perhaps more of an effort could have been made to obtain them.
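As a sanity check, the retention figures quoted above can be reproduced with a few lines of arithmetic (a minimal sketch; the participant counts are the ones reported in this review):

```python
# Participant counts at each assessment point, as reported in the trial.
baseline = 380
post_intervention = 322
six_month = 208

# Retention is simply the proportion of the baseline sample still
# providing data at each later time point.
for label, n in [("post-intervention", post_intervention),
                 ("6-month follow-up", six_month)]:
    print(f"{label}: {n}/{baseline} = {n / baseline:.1%} retained")
# post-intervention: 322/380 = 84.7% retained
# 6-month follow-up: 208/380 = 54.7% retained
```

The 54.7% figure at six months is the "55%" quoted above; anything much below about 80% follow-up is conventionally treated as a serious threat to the validity of a trial's results.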

Using PDHPE classes as a control intervention is a real strength of the study; when evaluating a behavioural or educational intervention, using an attention control is good practice. For obvious reasons, it was not possible to blind teachers or students to the intervention.

Data from just over half (55%) of the trial participants were included at 6-month follow-up, which makes those findings considerably less reliable than they could have been.

Results

Mean score on the Depression Literacy Scale

                      Intervention group   Control group
Baseline                   11.34              10.87
Post-intervention          14.76              12.07
6-month follow-up          14.27              13.09

Even disregarding the six-month follow-up data, this represents a moderate to large improvement in mental health literacy in the intervention group relative to control.

Mean score on the Depression Stigma Scale (personal stigma subscale)

                      Intervention group   Control group
Baseline                   10.9               11.95
Post-intervention           9.8               11.79

This result shows a small positive effect on stigma in the intervention group, compared with no significant change in the stigma scores in the control group.

The adolescents’ attitudes to help-seeking were not significantly affected. The researchers suggest that one reason for this may be that the ‘dose’ of the programme was insufficient.

The intervention also had no effect on psychological distress or suicidal ideation, which the researchers attribute to these problems being largely absent from the sample at baseline: all baseline scores for psychological distress were in the normal range.

As this was the first stage of a three-stage trial, the sample is as yet too small to yield truly generalisable results. The authors also expressly point out that the sample, consisting of pupils at Catholic and independent schools, was fairly homogeneous, and that replication in a more diverse sample is needed.

The trial found improvements in mental health literacy and stigma from the HeadStrong intervention, compared with control.

Discussion

Insofar as they are trustworthy, the results seem encouraging, as the main outcome was positively influenced. I find the authors’ speculation on why help-seeking behaviour was not affected a little puzzling, as they also state that the sample had a low baseline level of psychological distress. Looking through the description of the intervention, I noticed that it includes a self-help component, which sounds like the coping-skills element of a CBT intervention. For adolescents with mild mental health problems, or at low risk of developing them, this, together with the increased understanding of mental health issues that the intervention also teaches, may be exactly the kind of help they need, so they would have no reason to seek help elsewhere.

Mental health education in the classroom is still a fairly new area and we do not yet know which interventions work best. It would be good to see the next stage of the trial apply HeadStrong in a more diverse population, particularly one with a higher baseline risk or incidence of mental health problems. Educating people of any age about their mental or physical health is nearly always going to be a good thing, but in people who are already quite healthy the effect is likely to be smaller than in those with more problems.

These results need to be replicated in a more diverse population with higher baseline levels of mental health problems before they can be applied more widely.

Lisa is a Clinical Librarian at North East London NHS Foundation Trust. She's responsible for information skills training in the context of evidence-based practice as well as providing evidence and current awareness to several communities of practice and public health teams to enable them to make evidence-based decisions. She’s interested in making research accessible and understandable for a wider audience.
The views expressed in her blogs are her own, not her employer's.