How Government algorithms are judging you

Technology columnist Richard MacManus delves into the algorithms used by WINZ, Corrections and the police to try to predict behaviour. He finds at-risk communities have little say in how they operate.

Statistics NZ has released a report about how algorithms are used in Government services and what, if anything, needs to be improved. The report’s key finding was that human oversight is critical, despite the ever-increasing reliance on algorithms in decision making.

Although the report found that our Government services do have an appropriate level of human oversight, it didn’t address some of the finer points of relying on algorithms. In particular, who is really in control and what rights do citizens have over the use of algorithms?

The report focused on “operational algorithms that result in or materially inform decisions which impact significantly on individuals or groups.” It analysed algorithms in fourteen agencies over June and July of this year. The agencies included Inland Revenue, ACC, Department of Internal Affairs, and NZ Police.

One of the use cases given in the report illustrates the potential ethical minefield of using algorithms for decision-making. Work and Income’s Youth Service, NEET (Not in Education, Employment or Training), uses an algorithm “to help identify those school leavers who may be at greater risk of long-term unemployment, and proactively offers them support in terms of qualifications and training opportunities.”

Some of the data collected and processed by the NEET algorithm is not only personal, but is outside the control of the young people identified. Such data includes “whether a young person’s parents were on a benefit” and “whether a young person has ever been the subject of a notification to Oranga Tamariki [Ministry for Children, which supports children at risk of harm].”

The algorithm uses these and other data points to produce “risk indicator ratings” for school leavers. It then automatically “refers the high, medium and low risk (40 percent) school leavers to NEET providers who make contact and offer assistance.” The remaining 60 percent are identified as “very low” risk and are not contacted.
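The mechanics of this kind of referral system — score each school leaver, band the scores, and contact everyone except the lowest band — can be sketched in a few lines. To be clear, the scores, thresholds and band boundaries below are invented for illustration; the actual NEET model and its cut-offs are not public in this form.

```python
# Illustrative sketch only: invented scores and thresholds, not the real NEET model.

def risk_band(score):
    """Map a risk score (0.0-1.0) to a band; only the lowest band is excluded from referral."""
    if score >= 0.7:
        return "high"
    elif score >= 0.4:
        return "medium"
    elif score >= 0.2:
        return "low"
    else:
        return "very low"

def refer(school_leavers):
    """Return (name, band) pairs for everyone who would be referred to a provider."""
    return [(name, risk_band(score)) for name, score in school_leavers
            if risk_band(score) != "very low"]

leavers = [("A", 0.85), ("B", 0.5), ("C", 0.25), ("D", 0.05)]
print(refer(leavers))  # A, B and C are referred; D ("very low") is never contacted
```

Note that in a scheme like this, everything hinges on where the thresholds sit: moving one boundary silently changes who is deemed worth contacting.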

Of the more than 60,000 young people who have accepted assistance from NEET since 2012, one-third were offered the service through the automated referral system.

There’s no question this service is well-meaning and is likely helping many at-risk young people get the assistance they need. But Dr Emily Keddell, a Senior Lecturer in Otago University’s Social Work programme, has some concerns about the rules used to decide if someone is deserving of help.

In Keddell’s view, an algorithm is “essentially a very complex classification tool that sorts people into categories, based on a set of programming rules.” Given this automated categorisation of individuals by a computer, she wonders about “the assumptions built into those rules that inevitably contain implicit ideas about who is deserving and undeserving, or those who are deemed easier as opposed to harder to help.”

Furthermore, Keddell thinks the question of who is deserving of assistance, in the context of restricted resources in our society, is an “inherently moral decision.”

Can computers be moral?

The problem is, computers are incapable of making moral judgments. All an algorithm can do is build a statistical profile of people; in other words, put them into groups. Also, while having an algorithm identify school leavers at risk of unemployment is useful on a macro level, it’s a potentially stigmatising process for young people on an individual level.

I don’t know about you, but I’d be rather insulted to receive a letter telling me I’m at high risk of becoming unemployed. My response would be: you don’t know anything about me. And remember, these letters go to teenagers – young people who are already nervous about their full-time employment prospects, because it’s always hard to land that first job. So it’s hardly a boost to a teen’s self-confidence to receive a letter telling them they’re even less likely to get a job than other young people.

Messing with a young person’s self-confidence is one thing, but algorithms may already be causing serious harm in other sectors. Keddell thinks “the horse has already bolted” for algorithms in the justice system.

“In the most intrusive and potentially rights-affecting uses of algorithms, in the criminal justice system, populations who are most socially marginalised are already heavily affected,” she noted.

Predicting future offending

Another case study in the Statistics NZ report explains how the police use two algorithms to “assess the risk of future offending” in family violence. One of the algorithms “calculates the probability that a family violence perpetrator will commit a crime against a family member within the next two years, based on data held in police systems such as gender, past incidents of family harm, or criminal history.”
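The kind of calculation the report describes – combining data points such as past incidents and criminal history into a single probability – is typically done with something like a logistic model. The sketch below is a generic illustration of that technique; the weights and feature names are invented, and the actual police tools are not public.

```python
import math

# Invented weights and features, for illustration only; not the actual police model.
WEIGHTS = {
    "intercept": -2.0,
    "past_family_harm_incidents": 0.6,
    "prior_convictions": 0.4,
}

def offence_probability(past_family_harm_incidents, prior_convictions):
    """Logistic model: squash a weighted sum of features into a 0-1 probability."""
    z = (WEIGHTS["intercept"]
         + WEIGHTS["past_family_harm_incidents"] * past_family_harm_incidents
         + WEIGHTS["prior_convictions"] * prior_convictions)
    return 1 / (1 + math.exp(-z))
```

The point Keddell makes applies directly here: the weights are fitted across a whole population, so the probability attached to any one person reflects how similar they look, on paper, to others in their statistical group.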

The report clarifies that both algorithms are “used only in the context of a family harm investigation” and “to support human judgment.” So no automated action is taken.

On the face of it, it makes total sense to consider a history of family violence or criminal offending when weighing up whether to monitor an individual in case they offend again. Yet Keddell reminds us that an algorithm “sorts people into categories based on probabilities derived from a whole population.”

She thinks it could potentially be “a breach of human rights to make legal judgments based on a person’s statistical similarity to others in their group, rather than them as an individual”.

With these rights at stake, have we as a society – and in particular the poorer, socially marginalised communities in our society – been consulted enough about the use of algorithmic tools by Government agencies?

“It’s easy to avoid consulting with people caught up in the criminal justice system about the use of the Department of Corrections’ ROC ROI [Risk of Reconviction / Risk of Re-imprisonment] and the Family Violence prediction tools,” said Keddell. “Their views are easy to dismiss because they spring from the most marginalised and impoverished groups in society. Whether they are even informed that the tool has been used is unclear.”

Machines making judgments

I agree that we shouldn’t underestimate the effect on people who are highlighted by government algorithms – especially if income, sex or race are key data points. As two Oxford University research fellows in data ethics wrote recently, predictions about future behaviour made by computers can “impact on our private lives, identity, reputation, and self-determination.”

Overall, the Statistics NZ report about algorithm use in this country is helpful in understanding the influence computer technology has in decision-making. But it also raises questions that cannot be easily checked off in a list of policy recommendations. It glosses over some of the more subtle implications of algorithms categorising us. Such as: how do we know when we’re being targeted by algorithms, and what are the implications for us as individuals?

You may not be a teenager with a self-esteem issue or someone with a criminal history, but we should all be concerned about machines making judgments about our fellow citizens based on mere statistical probabilities. We’re all individuals, after all.
