
Biased policing is made worse by errors in pre-crime algorithms

4 October 2017, updated 27 April 2018

Why are we here again? (Image: New York Times/Redux/Eyevine)

By Matt Reynolds

PREDICTIVE policing, which aims to work out when and where a crime will take place, promises a future of data-driven law enforcement. But a flaw found in the design of one widely used system suggests that, instead of fixing biases in policing, predictive algorithms may be creating a whole new set of problems.

Pre-crime tech is catching on in the US. PredPol – a market-leading system – is already used by police departments in states including California, Florida and Maryland. The hope is that such systems will bring down crime rates while simultaneously reducing human bias in policing.

But when researchers in the US examined how PredPol predicts crime, they found something disturbing. Their study suggests that the software merely sparks a “feedback loop” that leads to officers being repeatedly sent to certain neighbourhoods – typically ones with a high number of racial minorities – regardless of the true crime rate in that area (arxiv.org/abs/1706.09847).


The problem stems from the logic PredPol uses to decide where officers should be sent. If an officer is sent to a neighbourhood and then makes an arrest, the software treats that arrest as evidence that more crime is likely in the area in future.

What this means, says Matt Kusner at the Alan Turing Institute in London, is that the PredPol system seems to be learning from reports recorded by the police – which may be higher in areas where there are more police – rather than from underlying crime rates.

“That’s how dangerous feedback loops are,” says Joshua Loftus at New York University, who wasn’t involved in the study. Although these loops are only part of how PredPol makes its predictions, he says they may explain why predictive policing algorithms have sometimes seemed to recreate exactly the kind of racial biases their creators say they overcome.

“A ‘feedback loop’ in software leads to officers being repeatedly sent to certain neighbourhoods”

To better understand how the system comes to its conclusions, the study team created a simplified mathematical model of the PredPol software. The algorithm chooses how to distribute a certain number of officers between two locations. If more are sent to one location, they tend to make more arrests there. The team found that this feeds back into the system and leads it to send even more officers to that same place.

That means the software ends up overestimating the crime rate in one neighbourhood, without taking into account the possibility that more crime is observed there simply because more officers have been sent there – like a computerised version of confirmation bias.
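The runaway dynamic can be sketched in a few lines of Python. This is a hedged illustration of the study's simplified two-region model, not the researchers' actual code: the function name, officer numbers and rates are our own. Both regions have the same true crime rate, but one starts with a single extra recorded incident.

```python
# Toy two-region feedback loop, loosely modelling the study's simplified
# PredPol model (illustrative only; names and numbers are our own).

def simulate_feedback(days=100, officers=10, true_rates=(0.5, 0.5)):
    counts = [2.0, 1.0]  # recorded incidents; region 0 seeded with one extra
    for _ in range(days):
        total = counts[0] + counts[1]
        # Allocate officers in proportion to each region's recorded share
        sent = [officers * c / total for c in counts]
        # Recorded crime scales with patrol presence, not just the true rate
        for i in (0, 1):
            counts[i] += sent[i] * true_rates[i]
    return [c / sum(counts) for c in counts]

# Region 0 keeps two-thirds of the patrols indefinitely, even though
# the true crime rates in the two regions are identical.
share = simulate_feedback()
```

In this sketch the initial 2:1 skew never washes out: because recorded incidents grow in proportion to officers sent, the allocation locks in whatever imbalance it starts with.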

There might be a way to stop the feedback loop. The authors also modelled a different system, in which the algorithm only sent more officers to a neighbourhood if the area’s crime rate was higher than expected. This led it to distribute officers in a way that much more closely matched the true crime rate.
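The alternative update rule the authors describe can be sketched in the same toy model. Again this is our illustrative approximation, not their implementation: each region's score is updated with the per-officer arrest rate rather than the raw arrest count, so sending extra officers can no longer inflate a region's apparent crime rate.

```python
# Illustrative sketch of the fix described above (names and numbers are
# our own): normalise observed crime by patrol presence before updating.

def simulate_corrected(days=1000, officers=10, true_rates=(0.5, 0.5)):
    counts = [2.0, 1.0]  # same skewed starting point as before
    for _ in range(days):
        total = counts[0] + counts[1]
        sent = [officers * c / total for c in counts]
        for i in (0, 1):
            observed = sent[i] * true_rates[i]
            # Per-officer rate, not raw count, drives the update
            counts[i] += observed / sent[i]
    return [c / sum(counts) for c in counts]

# The allocation now drifts back towards 50:50, matching the equal
# true crime rates.
share = simulate_corrected()
```

With this normalisation, the early skew is gradually diluted and the patrol split converges on the true crime rates.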

Loftus says that many more problems need to be solved before policing algorithms can be truly called fair. “Human decisions affect every aspect of the design of the system,” he says. The algorithm could be thrown off if officers are more likely to arrest racial minorities, for example.

This article appeared in print under the headline “A flaw in the pre-crime system”