Your study suggests that even modest programs and interventions might lead to meaningful impacts on youth. Would you explain?

There is a sense among many people that because the underlying causes of crime are hugely complex, reducing crime must require very costly and intensive intervention. Our program, like CBT more generally, is of short duration. In our case, participants in the program averaged only 13 sessions. Yet we see an enormous—and cost-effective—drop in violent-crime arrests, and enduring improvements in school engagement that might translate into more young people with high school diplomas. While the program is obviously not a panacea, given that even treatment youth still drop out and commit crime, it does suggest that adolescent behavior might be much more malleable than previously thought.

What challenges did you face in conducting this study?

This was a large-scale field experiment in the real world, which carries all the challenges you might expect. We selected our study sample using administrative lists of those youth the district expected to enroll in the study schools the following school year. But of course youth are mobile, so not all the youth assigned to treatment even attended a school where the program was available. And though we designed the study to sort out the effects of different parts of the intervention (in school versus after school), there ended up being too much cross-participation (for example, those assigned only to in-school participating in after-school activities as well) for us to be able to statistically separate the effects. Fortunately, none of this undermines the integrity of random assignment; it just changes the interpretation of the intent-to-treat estimates—which is part of the reason we focus on the instrumental variable estimates of the effect of actually participating.
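The logic behind focusing on participation effects rather than assignment effects can be illustrated with a small simulation. This is not the study's actual analysis; the take-up rates, arrest rates, and effect size below are invented for illustration. With imperfect take-up, the intent-to-treat (ITT) contrast dilutes the effect of actually participating, and the Wald/IV estimator recovers the participation effect by rescaling ITT by the difference in take-up between the assigned and control groups.

```python
# Illustrative sketch: ITT vs. IV (Wald) estimates under imperfect take-up.
# All parameters are hypothetical, not taken from the study.
import random
from statistics import mean

random.seed(0)
n = 10_000

BASE_ARREST_RATE = 0.20      # hypothetical arrest risk without the program
PARTICIPATION_EFFECT = -0.08 # hypothetical effect of actually participating
TAKEUP_IF_ASSIGNED = 0.60    # not everyone assigned actually participates
TAKEUP_IF_CONTROL = 0.05     # some controls find their way in anyway

rows = []  # (assigned, participated, arrested)
for _ in range(n):
    assigned = random.random() < 0.5
    participated = random.random() < (
        TAKEUP_IF_ASSIGNED if assigned else TAKEUP_IF_CONTROL
    )
    arrest_p = BASE_ARREST_RATE + (PARTICIPATION_EFFECT if participated else 0.0)
    rows.append((assigned, participated, random.random() < arrest_p))

# ITT: difference in outcomes by *assignment*, regardless of take-up.
itt = (mean(y for z, d, y in rows if z)
       - mean(y for z, d, y in rows if not z))

# First stage: how much assignment actually moved participation.
takeup_diff = (mean(d for z, d, y in rows if z)
               - mean(d for z, d, y in rows if not z))

# Wald/IV estimate: ITT scaled up by the take-up difference.
iv = itt / takeup_diff

print(f"ITT effect of assignment:   {itt:+.3f}")
print(f"Take-up difference:         {takeup_diff:+.3f}")
print(f"IV effect of participation: {iv:+.3f}")
```

Because only a bit over half of the gap in participation is induced by assignment, the ITT effect is smaller in magnitude than the participation effect; the IV estimate recovers something close to the simulated `PARTICIPATION_EFFECT`. Random assignment remains intact throughout, which is exactly why the rescaling is valid.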

We also faced the challenge of working only with administrative data. This has a lot of benefits; surveying youth would have been entirely cost-prohibitive, and we don’t face survey attrition, non-response bias, or social desirability bias (such as treatment youth being less willing to admit to committing crimes for fear of letting down the program providers). But administrative data does make it difficult to nail down the mechanisms at work. We do what we can to eliminate potential explanations like incapacitation (we found the crime effect wasn’t limited to days when programming was offered), but that’s not the same as having detailed measures of psychological and social processes. A new follow-up study on BAM is going to collect more of that kind of information.

How did you get involved in this work?

I was very fortunate to be a graduate student in the right place at the right time. Early in my graduate school career, I joined the University of Chicago Crime Lab, an organization focused on generating rigorous evidence about what works (and what doesn't) in reducing crime. This was one of the very first Crime Lab studies, so I was privileged to be part of building the relationships and the data access needed for studies like this.

What resources would you recommend to people who want to do more with CBA?