Results of the largest-ever evaluation of a school-based positive psychology program, the UK Resilience Programme, are now available in the Journal of Consulting and Clinical Psychology. The results are, uh, not impressive.

The intervention, the 16-hour UK Resilience Programme (UKRP), was carefully based on the Penn Resiliency Program (PRP) for Children and Adolescents. Jane E. Gillham, the corresponding author for the UK study, was also one of the developers of the Penn program.

The study enrolled almost 3,000 students, with 1,000 students in the intervention group. The UK study is thus larger than the 17 previous studies combined. The largest past study had a total of only 697 students.

The authors reported that students receiving the intervention reported lower levels of depressive symptoms than students assigned to the control group, but the effect was small and did not persist to 1-year or 2-year follow-ups. There was no significant effect of the intervention on symptoms of anxiety or behavior at any point.

The authors concluded that the UKRP produced small, short-term effects on depressive symptoms and that

These findings suggest that interventions may produce reduced impacts when rolled out and taught by regular school staff.

In this blog post, I am going to argue the following:

What the authors represent as weak findings may be even weaker than they portrayed.

There is nothing particularly new, or particularly "positive psychology," about the intervention package. It is a rehash of a conventional (dare we say, bad old negative psychology?) treatment of depression applied to a student population in which levels of depressive symptoms were low.

Under these circumstances, the intervention could not be expected to have an effect.

If we are truly committed to improving the well-being of students, we need to rethink the nature and focus of such interventions, and whether students should be required or coaxed to attend. As this intervention stands, it wastes staff and student time that could better be used for other ways of improving student well-being.

But first, some more details of the study:

Sample.

The 2,844 students were ages 11–12; 49% were female, 67% were white, and they were drawn from 16 schools.

Students were not randomly assigned; rather, entire classes of students were arbitrarily enrolled in the intervention (UKRP) or control (usual school) condition based on class timetables.

There were some baseline differences between the intervention and control groups and between schools. Some schools assigned students of above-average academic achievement to the intervention groups, whereas other schools assigned students to the intervention group because of concern about their emotional well-being or behavior.

Outcome measures.

Three standardized, normed self-report measures were used to evaluate the intervention:

Assessments were administered at baseline, immediately after the intervention, and at 1-year and 2-year follow-up.

The intervention package.

The article provides a web link for obtaining more information about the intervention. When I went to the site, it requested extensive information of the sort associated with actually using the manual in a study. However, a description of the curriculum is available here.

The curriculum teaches cognitive-behavioral and social problem-solving skills and is based in part on cognitive-behavioral theories of depression by Aaron Beck, Albert Ellis, and Martin Seligman (Abramson, Seligman, & Teasdale, 1978; Beck, 1967, 1976; Ellis, 1962). Central to PRP is Ellis' Adversity-Beliefs-Consequences (ABC) model, the notion that our beliefs about events mediate their impact on our emotions and behavior. Through this model, students learn to detect inaccurate thoughts, to evaluate the accuracy of those thoughts, and to challenge negative beliefs by considering alternative interpretations. PRP also teaches a variety of strategies that can be used for solving problems and coping with difficult situations and emotions. Students learn techniques for assertiveness, negotiation, decision-making, social problem-solving, and relaxation. The skills taught in the program can be applied to many contexts of life, including relationships with peers and family members as well as achievement in academics or other activities.

The control group.

The instruction received by the control group varied across the schools but generally consisted of Personal, Social and Health Education (PSHE) classes. In some schools, the control condition was regular academic lessons.

Were effects of the intervention even weaker than presented?

Confirmation bias is common in the presentation of results of tests of interventions, especially when one of the developers of the intervention is among the authors or serves as a consultant. To reduce the risk of bias, investigators are commonly required to preregister their design, including their plans for analysis of the data. This commits investigators to a particular choice of outcomes and assessment points for evaluating the intervention. The alternative is that investigators can undertake a full range of analyses and report those that make the intervention look strongest. This trial was apparently not preregistered.

Another check on risk of bias in reporting the results of a study is including all participants who were assigned to the intervention or control group in the primary analyses. Failing to do what is called an intent-to-treat analysis risks bias, because selective retention or dropout of participants may affect results. In this particular study, results were quite weak, and the appearance of significance could be influenced by even a small loss of participants from the analysis. If there is such a loss, a variety of techniques are available for adjusting.

Contrary to what the investigators say in the article, the analyses were not true intent-to-treat. Participants were excluded if they did not complete follow-up assessments. Analyses indicate that students who came from special education classes or had high initial scores on depressive symptoms were less likely to complete subsequent assessments. This effect was bigger than the difference between the intervention and control groups. No effort to compensate for the loss of participants from follow-up was reported. They were simply dropped.
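Why does dropping non-completers matter? A toy simulation (with made-up numbers, not the UKRP data) shows the mechanism: when dropout is more likely among students with high baseline symptoms, a completers-only analysis makes the remaining sample look healthier than the group that was actually enrolled.

```python
import random
import statistics

random.seed(1)

# Hypothetical symptom scores for 1,000 students on a scale floored at 0
# (parameters are illustrative, not taken from the study).
baseline = [max(0, random.gauss(8, 5)) for _ in range(1000)]

# Assume dropout probability rises with baseline symptoms, capped at 60%.
# Students who "complete" follow-up are those who do not drop out.
completers = [s for s in baseline if random.random() > min(0.6, s / 25)]

print(f"enrolled sample mean:  {statistics.mean(baseline):.2f}")
print(f"completers-only mean:  {statistics.mean(completers):.2f}")  # noticeably lower
```

The completers-only mean understates the enrolled group's symptom level, and if dropout rates also differ between arms, the bias can easily rival a small treatment effect. This is the scenario that imputation or other missing-data adjustments are meant to address.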

For practical reasons, the study was not a true randomized trial, and the means of selecting participants resulted in differences in baseline characteristics. The investigators attempted to compensate for these differences with statistical control. If the groups differed in ways the measured covariates did not capture, this could prove inadequate. Ideally, in such situations, investigators provide results both without such corrections and with them. If the two sets of results agree, it is more reassuring that apparent effects were not simply due to baseline differences between the intervention and control groups.

The article does not present simple differences in depressive symptoms, anxiety, and behavior problems at the end of the intervention. It is possible that the already small differences between the intervention and control groups would disappear in such a simple analysis.
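A toy simulation (hypothetical numbers, not the study's data) illustrates why reporting both unadjusted and baseline-adjusted results matters. Here one group starts with higher symptoms, there is no true treatment effect, and a raw post-test comparison and a simple change-score comparison tell different stories:

```python
import random
import statistics

random.seed(4)

# Hypothetical: intervention classes start with higher symptoms than controls.
intervention_pre = [random.gauss(10, 3) for _ in range(500)]
control_pre = [random.gauss(8, 3) for _ in range(500)]

# No true treatment effect in either group: post-test scores simply track
# baseline (with some regression toward the mean and measurement noise).
intervention_post = [0.8 * s + random.gauss(0, 2) for s in intervention_pre]
control_post = [0.8 * s + random.gauss(0, 2) for s in control_pre]

raw_diff = statistics.mean(intervention_post) - statistics.mean(control_post)
change_diff = (statistics.mean(intervention_post) - statistics.mean(intervention_pre)) - (
    statistics.mean(control_post) - statistics.mean(control_pre)
)

print(f"raw post-test difference:  {raw_diff:.2f}")   # substantial, yet spurious
print(f"change-score difference:   {change_diff:.2f}")
```

When the two analyses disagree this much, any apparent "effect" may simply reflect who was assigned to which condition, which is why seeing only the adjusted results is not reassuring.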

For their primary analysis, the investigators compared the intervention and control groups on overall level of depressive symptoms. There were no significant differences. That would usually rule out continuing on to subgroup analyses examining the different time points. However, the investigators went on to look at depressive symptoms at each of the three post-assessment time points and found a small difference at the first assessment that did not persist. This provided the basis for their bragging rights for having found a small, rather than no, effect, which is emphasized in their abstract and discussion.
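The statistical problem with testing each time point separately is multiplicity. A toy simulation (illustrative numbers only) shows that, with no true effect at all, checking three time points roughly triples the chance of at least one nominally "significant" finding compared with a single pre-specified test:

```python
import random

random.seed(2)

def z_stat(n=100):
    """Absolute z statistic comparing two group means under the null
    (no true difference; unit variance assumed known, hence a z-test)."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = sum(a) / n - sum(b) / n
    se = (2 / n) ** 0.5
    return abs(diff / se)

trials = 2000
one_test = sum(z_stat() > 1.96 for _ in range(trials)) / trials
any_of_3 = sum(any(z_stat() > 1.96 for _ in range(3)) for _ in range(trials)) / trials

print(f"false-positive rate, one pre-specified test: {one_test:.3f}")
print(f"false-positive rate, any of 3 time points:   {any_of_3:.3f}")
```

This sketch treats the three tests as independent; in a real trial, repeated measures on the same students are correlated, so the inflation is somewhat smaller, but the direction of the problem is the same: unprotected follow-up tests after a null omnibus result manufacture apparent effects.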

Thus, by conventional standards, it could be concluded that UKRP produced no significant effects, not merely small effects.

How is this intervention a positive psychology intervention?

In a Great Debate article, Howard Tennen and I complained about proponents of positive psychology often drawing a false distinction between what is special about positive psychology versus the rest of conventional, “negative psychology” (Seligman, 2002).

Positive psychology articulates a role for hope, wisdom, courage, spirituality, responsibility, and perseverance in human adaptation, in sharp contrast, proponents claim, to the negative biases of a conventional psychology that is too focused on distress and psychopathology to the exclusion of positive experiences.

Elsewhere, in debates and on listservs and Facebook, I have argued that much of what is effective about so-called positive psychology interventions is not new, and what is new about them is not effective.

This intervention is a warmed-over set of “negative psychology” interventions developed decades ago.

The UKRP intervention was carefully modeled after the Penn Resiliency Program, and a key developer of the Penn program provided training and consultation and was the corresponding author for this article. Along with the Comprehensive Soldier Fitness Program, the Penn Resiliency intervention represents a premier positive psychology intervention package. But how does this intervention represent the distinctive ideas of positive psychology?

In fact, key elements of the intervention come directly from Aaron T. Beck's cognitive theory of depression and Albert Ellis' Rational-Emotive Therapy (RET), with its Adversity-Beliefs-Consequences (ABC) model. Both are conventional models of depression and its treatment that predate positive psychology by decades.

The primary outcome was a reduction in depressive symptoms, not improvement in a characteristic positive psychology outcome, such as positive well-being or flourishing. As far as I can see, the only thing new about this intervention is that it was taken out of its usual context as a treatment for clinical depression and put into the schools, where it was provided to all students, who happened, as a group, to be low in depressive symptoms. If any students actually showed high risk of clinical depression, they were evaluated and potentially referred to conventional depression treatment.

So, does this important test of positive psychology in the schools merely examine whether conventional treatments for depression will produce lower levels of depressive symptoms subsequent to students receiving the intervention?

Why the intervention could not be expected to have an effect.

There was, on average, so little elevation in depressive symptoms that the intervention could not be expected to have much of an effect. The investigators state:

At baseline, 60% of students in our sample scored 8 or below (average or below-average levels of symptoms), and 12% scored 0 or 1.

Because of this, we encounter a strong floor effect: students without many symptoms and at low risk of depression do not have much room for improvement.
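The floor effect can be made concrete with a toy calculation (hypothetical numbers, not the study's data): even if an intervention genuinely reduced symptoms by a fixed amount, a sample scoring near the bottom of the scale cannot show that full reduction, because scores cannot go below zero.

```python
import random
import statistics

random.seed(3)

def post_score(baseline, benefit=3):
    """Assume the intervention truly lowers symptoms by `benefit` points,
    but the symptom scale is floored at 0."""
    return max(0, baseline - benefit)

# Hypothetical samples: one mostly near the scale floor, one elevated.
low_symptom = [random.choice(range(0, 9)) for _ in range(1000)]
elevated = [random.choice(range(15, 31)) for _ in range(1000)]

drop_low = statistics.mean(low_symptom) - statistics.mean(post_score(s) for s in low_symptom)
drop_elev = statistics.mean(elevated) - statistics.mean(post_score(s) for s in elevated)

print(f"mean observed improvement, low-symptom sample: {drop_low:.2f}")
print(f"mean observed improvement, elevated sample:    {drop_elev:.2f}")
```

The elevated sample shows the full assumed benefit, while the low-symptom sample, like the UKRP students, shows a truncated average improvement no matter how well the intervention works.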

For the time span covered by the intervention and the follow-up periods, depressive symptoms are relatively stable. Even students assigned to the control group are unlikely to face situations in which whatever is provided by the intervention would be of much use to them in terms of avoiding an increase in depressive symptoms.

Most of the students who were present could not be expected to benefit from the intervention.

There is the possibility that post hoc (unplanned, after-the-fact) subgroup analyses would suggest that some subgroup benefited. But given normative data suggesting that the intervention would be ineffective, why subject a large group of students to such an intervention?

With only weak or probably no effects, the UK Resilience Programme cannot be presumed to be cost-effective. And in calculating the costs, we need to consider lost opportunities for the students enrolled in the program.

Arguably, students at risk for depressive symptoms would include those with academic deficits, which are readily identifiable. Why not devote the week and a half of class time to remedying those deficits?

Is it ethical to require that students submit to a program that is unlikely to demonstrate benefits in the primary outcomes by which the program is evaluated?

The rollout continues…

As evidence of the practicality and sustainability of the intervention, there are now 85 schools teaching it in the United Kingdom, with over 800 teachers trained at 10 training courses. At least 250 of these teachers will have had their places funded entirely by the schools they work for, with the remainder being funded by some combination of school and LA [local authority] funding. This demonstrates that schools and LAs are able and willing to provide the financial backing for the program.

About James Coyne PhD

James C. Coyne, PhD is Professor of Health Psychology at University Medical Center, Groningen, the Netherlands where he teaches scientific writing and critical thinking. He is also Visiting Professor, Institute for Health, Health Care Policy & Aging Research, Rutgers, the State University of New Jersey.
Dr. Coyne is Emeritus Professor of Psychology in Psychiatry, where he was also Director of Behavioral Oncology at the Abramson Cancer Center and Senior Fellow at the Leonard Davis Institute of Health Economics. He has served as External Scientific Advisor to a decade of European Commission-funded, community-based programs to improve care for depression in the community. He has written over 350 articles and chapters, including systematic reviews of screening for distress and depression in medical settings and classic articles about stress and coping, couples research, and interpersonal aspects of depression. He has been designated by ISI Web of Science as one of the most impactful psychologists and psychiatrists in the world. His books include Screening for Depression in Clinical Settings: An Evidence-Based Review, edited with Alex Mitchell (Oxford University Press, 2009). He also blogs and is a regular contributor to the blog Science Based Medicine and to the PLOS blog Mind the Brain. He is known for giving lively, controversial lectures using scientific evidence to challenge assumptions about the optimal way of providing psychosocial care and care for depression to medical patients.

