Evaluating the evaluating

Last week saw the Justice Data Lab publish reports evaluating the performance of a further two charities working with ex-offenders. A total of 59 reports can be found on the Ministry of Justice section of GOV.UK.

The Lab has been running for a year. Charities working with ex-offenders provide it with a list of the clients they have worked with. The Ministry of Justice statisticians who operate the Lab then estimate what proportion of each charity’s clients re-offend within a year, using the administrative data the Ministry of Justice holds as people pass through the criminal justice system. For each report, they also calculate what the re-offending rate would be for a suitably matched comparison sample.

At a recent event, the MOJ team outlined what they had seen over the year of operating the Lab. With 59 reports now published, they can provide a useful summary of the results so far. As well as producing very interesting findings for those interested in the criminal justice system, the Justice Data Lab is also providing evidence about evaluation itself.

Firstly, providing a lab to tabulate results considerably lowers the barriers that small delivery bodies face in undertaking evaluation. This works at a number of levels. The analysis undertaken in this data lab is quite low cost, because much of the evaluation process can be automated. The estimates are high quality: they are ‘official statistics’, produced within the quality assurance processes of the regulated Government Statistical Service. The delivery body and its analysts do not access the disclosive underlying data held by MOJ; the Lab just returns results. I like to call this a tabulation unit, since the Justice Data Lab is preparing standardised tables of results. It avoids much of the complexity associated with giving evaluators secure access to sensitive data in a research data lab.

Secondly, having standardised tabulations allows a degree of comparability across the results that is just not possible in more bespoke evaluations. Each of the 59 reports uses the same approach to matching clients to the administrative data; each uses the same calculation to estimate re-offending; and each uses the same propensity score matching approach. Each of these stages may not be perfect for a particular delivery partner, but applying the same approach across so many different interventions is providing a unique evidence base, allowing synthesis across all the studies so that both delivery bodies and policy makers can learn lessons.
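To make the matched-comparison idea concrete, here is a minimal sketch of how such a calculation might look. Everything here is hypothetical: the data are invented, and the `score` function is a fixed linear index standing in for a fitted propensity model. The Lab’s actual methodology is considerably more sophisticated, and none of the names below come from its reports.

```python
# Hypothetical records: (age, prior_offences, reoffended_within_a_year)
treated = [(25, 4, 0), (31, 2, 0), (19, 6, 1)]           # charity clients
pool = [(24, 4, 1), (30, 2, 0), (20, 6, 1),
        (45, 1, 0), (33, 3, 0)]                          # comparison pool

def score(age, priors):
    # Toy stand-in for a propensity score: in practice this would be the
    # fitted probability of receiving the intervention given covariates.
    return -0.05 * age + 0.3 * priors

def match(treated, pool):
    # Nearest-neighbour matching on the score, with replacement:
    # each client is paired with the pool member whose score is closest.
    matched = []
    for age, priors, _ in treated:
        s = score(age, priors)
        best = min(pool, key=lambda c: abs(score(c[0], c[1]) - s))
        matched.append(best)
    return matched

matched = match(treated, pool)
rate_treated = sum(r for _, _, r in treated) / len(treated)
rate_matched = sum(r for _, _, r in matched) / len(matched)
print(f"clients' re-offending rate:   {rate_treated:.2f}")
print(f"matched comparison rate:      {rate_matched:.2f}")
```

The point of the standardisation argument is that this same matching and rate calculation is run identically for every delivery body, so the resulting tables are comparable across reports.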

There are other aspects of evaluation that the Lab is shedding light on, but these two in particular drive the demand and supply of evaluation in the direction needed by smaller delivery bodies, increasing the chance that evaluation can be for all.