Title

Against Prediction: Sentencing, Policing, and Punishing in an Actuarial Age

Authors

Bernard E. Harcourt

Document Type

Working Paper

Publication Date

2005

Center/Program

Columbia Center for Contemporary Critical Thought

Abstract

Actuarial methods – i.e., the use of statistical rather than clinical methods on large datasets of criminal offending rates to determine different levels of offending associated with one or more group traits, in order to (1) predict past, present or future criminal behavior and (2) administer a criminal justice outcome – now permeate the criminal law and its enforcement. With the single exception of racial profiling against African-Americans and Hispanics, most people view the turn to the actuarial as efficient, rational, and wealth-maximizing. The fact is, law enforcement agencies can detect more crime with the same resources if they investigate citizens who are at greater risk of criminal offending; and sentencing bodies can reduce crime if they incapacitate citizens who are more likely to recidivate in the future. Most people believe that the use of reliable actuarial methods in criminal justice represents progress. No one, naturally, is in favor of incorrect stereotypes and erroneous predictions; but, to most people, it makes sense to decide whom to search based on reliable predictions of criminal behavior, or to impose punishment based on reliable estimates of reoffending.

This article challenges our common sense. It sets forth three compelling reasons why we should be skeptical about – rather than embrace – the new actuarial paradigm. First, the reliance on predictions of future offending may be counterproductive to the primary goal of law enforcement, namely fighting crime. Though this may seem counterintuitive, it is, surprisingly, correct: the use of actuarial methods may increase the overall amount of the targeted crime depending on the relative responsiveness of the targets (in comparison to the responsiveness of non-targeted citizens) to the changed level of law enforcement. The overall impact on crime depends on how the members of the different groups react to changes in the level of enforcement: if the profiled persons are less responsive, then the overall amount of profiled crime in society will likely increase.
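The elasticity argument above can be illustrated with a toy numeric sketch. All figures here – group sizes, baseline offending rates, and responsiveness slopes – are hypothetical, chosen only to show the mechanism: when the profiled group is less responsive to policing, shifting enforcement toward it can raise the total amount of crime.

```python
def total_crime(alloc, pops, base, slope):
    """Total offenses when each group's offending rate falls linearly
    with the share of enforcement directed at it (a crude stand-in
    for deterrence elasticity)."""
    return sum(n * max(b - s * a, 0.0)
               for n, b, s, a in zip(pops, base, slope, alloc))

pops  = [1000, 1000]   # two equal-sized groups (hypothetical)
base  = [0.30, 0.20]   # group A offends more at equal enforcement...
slope = [0.10, 0.30]   # ...but is LESS responsive to added policing

even     = total_crime([0.5, 0.5], pops, base, slope)  # equal enforcement
profiled = total_crime([0.9, 0.1], pops, base, slope)  # profile group A

print(even, profiled)  # profiling here produces MORE total crime
```

Because searches shifted away from the more deterrable group cost more in forgone deterrence than they gain against the less deterrable one, the profiled allocation yields a higher crime total in this sketch.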

Second, the reliance on probabilistic methods produces a distortion of the carceral population. It creates a dissymmetry between the distribution of actual offenders and of persons who have contact with the criminal justice system through arrest, conviction, incarceration, or other forms of supervision and punishment. It produces a disproportionate rate of correctional contacts among members of the profiled group in relation to their representation in the offending population. This, in turn, compounds the difficulty that many members of targeted groups face in obtaining employment, pursuing educational opportunities, or leading normal family lives. It represents a significant social cost that is often overlooked in the crime and punishment calculus.
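The distortion can be made concrete with a hedged arithmetic sketch (the population sizes, offender counts, and search allocation below are hypothetical): if arrests scale with the number of searches a group receives times its true offending rate, a profiled group can supply a far larger share of arrestees than its share of actual offenders.

```python
pops      = [1_000, 9_000]  # profiled and non-profiled groups (hypothetical)
offenders = [  200,   900]  # profiled group commits ~18% of all offenses...
searches  = [  600,   400]  # ...but draws 60% of police searches

# assume arrests are proportional to searches times the group's
# true offending rate (offenders per capita)
arrests = [s * o / p for s, o, p in zip(searches, offenders, pops)]

offender_share = offenders[0] / sum(offenders)  # ~0.18 of offenders
arrest_share   = arrests[0] / sum(arrests)      # 0.75 of arrestees

print(offender_share, arrest_share)
```

The gap between the two shares is the dissymmetry the abstract describes: the carceral population over-represents the profiled group relative to its place in the offending population.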

Third, the proliferation of actuarial methods has begun to bias our conception of just punishment. The perceived success of predictive instruments renders more appealing theories of punishment that function with prediction. It renders more natural theories of selective incapacitation and sentencing enhancements for citizens who are at greater risk of future dangerousness. In sum, it reshapes the way we think about just punishment. Yet these actuarial devices are fortuitous advances in technical knowledge from disciplines such as sociology, psychology, and police studies that have no normative stake in the direction of our criminal laws and punishments. These technological advances represent, in this sense, exogenous shocks to our legal system. And this raises very troubling questions about what theory of just punishment we would independently embrace and how it is, exactly, that we have allowed technical knowledge, somewhat arbitrarily, to dictate the path of justice.

Instead of embracing the actuarial turn in criminal law, we should rather celebrate the virtues of the random: randomization, it turns out, is the only way to achieve a carceral population that reflects the offending population. As a form of random sampling, randomization in policing has significant positive value: it reinforces the central moral intuition in the criminal law that similarly situated individuals should have the same likelihood of being apprehended if they offend – regardless of race, ethnicity, gender, or class. It is also the only way to alleviate the counter-effect on overall crime rates that may result from the different responsiveness of different groups to policing. Randomness in the policing context is simple: law enforcement could use a lottery system for IRS audits, random selection for airport screening, or numerical sequencing for consensual car searches on the highway. In the sentencing area, randomness means something quite different, but no less straightforward: it means imposing a sentence based on a proper metric and then avoiding the effect of prediction by eliminating parole or other devices that are prediction-based. Randomness does not mean drawing names out of a hat in deciding whom to parole or how long to sentence. It means, instead, eliminating the effect of prediction.
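The random-sampling claim – that lottery-style checks make the caught population mirror the offending population – can be sketched with a toy simulation (all population figures are hypothetical; the simulation only illustrates the sampling property):

```python
import random

random.seed(0)

# hypothetical population of 10,000: each person has a group label
# and an offender flag; group A supplies 200 of the 1,100 offenders
people = ([("A", True)] * 200 + [("A", False)] * 800 +
          [("B", True)] * 900 + [("B", False)] * 8_100)

# lottery-style enforcement: every person equally likely to be stopped
stopped = random.sample(people, 2_000)
caught  = [group for group, is_offender in stopped if is_offender]

share_A_caught    = caught.count("A") / len(caught)
share_A_offenders = 200 / 1_100  # ~0.18

print(share_A_offenders, share_A_caught)
```

Up to sampling noise, group A's share of those caught tracks its share of actual offenders – the property the abstract attributes to randomization, and the one that profiling, by construction, destroys.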

In criminal law and enforcement, the presumption should be against prediction. Actuarial methods should only be employed when it can be demonstrated to our satisfaction that they will promote the primary interest of law enforcement without imposing undue burden or distorting our conceptions of just punishment. Barring that, criminal law enforcement and correctional institutions should be blind to prediction.

Comments

This paper was presented at the Criminal Justice Roundtable, Harvard Law School, May 13, 2005.

Recommended Citation

Bernard E. Harcourt,
Against Prediction: Sentencing, Policing, and Punishing in an Actuarial Age,
Chicago Public Law and Legal Theory Working Paper No. 94
(2005).
Available at:
https://scholarship.law.columbia.edu/faculty_scholarship/1373