A body of organizations and activists are already fighting for social, environmental, and health justice. We’re long overdue for an AI watchdog.

It is a grim truism of modern life that everything from civil rights violations and health crises to environmental degradation and educational barriers is disproportionately suffered by the people least financially and socially equipped to deal with it. Black Americans are incarcerated at five times the rate of white Americans. Of the 28 million nonelderly Americans lacking health insurance, over half are people of color. Rich Americans have far greater access to healthy food and productive schools than poor Americans.


The same is true with computational systems — and on this front, a bitter fight is emerging. At issue is whether a society now indivisibly dependent on computer technology and its underlying programming can ensure that its vast benefits and inevitable burdens will be distributed equally across social and economic classes. If some version of this egalitarian principle, which I call “computational justice,” does not soon become commonplace, we run the risk of hard-coding all manner of injustice for generations to come.

In 2016, mathematician and former investment banker Cathy O’Neil explored the idea that algorithms in the economic, financial, and educational realms contribute to the structural effects that maintain divisions in wealth. She successfully argued that algorithms’ effect of keeping the rich rich and the poor poor is no accident; it’s evidence of a set of biases algorithms inherit from their creators.

O’Neil’s “weapons of math destruction,” disembodied models that swallow people and spit out biased recommendations, are an illuminating first step, but only one example of a computational injustice. Class affects how an algorithm applies to different people, but it also affects which algorithm applies to different people. For example, being wealthy means algorithms will find you vacation homes on Airbnb; being homeless means robots will move you if you sleep too close to buildings. Algorithms find work for the well-educated while taking it away from those without education. And AI is decreasingly disembodied, often embedded in robots that exist in and interact with the physical world. Without oversight, the opportunities for injustice are abundant.

Understood like this, computational justice is the newest version of an old principle. Starting in the 19th century, especially through the fights for labor laws and civil rights, social justice became commonplace. In the 1980s, people began to talk about environmental justice, understanding that segregation often meant more than Jim Crow — in particular, that “pollution is segregated, too.” It’s now well documented that low-income areas fare worse than wealthy ones when natural disasters strike, as when Hurricane Harvey hit Texas, Katrina hit New Orleans, or Maria hit Puerto Rico. Most recently, “health justice” has entered our vernacular, with advocates opposing the unequal quality of care received by the wealthy, white, and able on one hand and the poor, brown, and disabled on the other. Like the lenses of social, environmental, and health justice, the lens of computational justice shows us that discriminatory structural effects do, in fact, exist, that they affect outcomes, and that they result from conscious choices.

AI’s unique talent for finding patterns has only perpetuated our legal system’s history of discrimination.

This fight hasn’t fully broken out yet, though it is picking up. The ACLU recently helped pass algorithmic discrimination legislation in New York City, and the Electronic Frontier Foundation has testified on behalf of algorithmic justice, but neither organization is primarily focused on this issue, and both maintain the narrower frame of “algorithmic justice.” Groups that do spend the bulk of their time on computational justice, like the AI Now Institute at New York University, are research institutes less directly involved in activism.

Collaboration between these groups is promising, but at this late stage of technological integration, we need to recognize computational justice as a virtue and create standalone structures dedicated to actively defending it.

Nick Thieme is a research fellow at the University of California Hastings Institute for Innovation Law and freelance writer with work appearing in Slate Magazine, BuzzFeed News, and Significance Magazine. His research focuses on AI regulation, cybersecurity, and pharmaceutical patent trolling.

See What Others Are Saying

Kradek

To say that a credit system that takes income and payment history into account is discriminatory is disingenuous, since that was its purpose. Any predictive system discriminates the instant its variables are chosen. Predictive policing is another example of the author restating the obvious as a conspiracy, when all it does is put assets where crime has occurred in the past.

I don’t know what you could do regarding credit, since the willingness to lend is a function of the expected recovery rate. As to policing, the problem is not the distribution of those assets but their training. The “poor” are those most negatively affected by crime. They welcome protection. It’s the harassment that results from poor personnel selection and training of police that they object to.

If you object to the use of predictive algorithms, you’re foolish. Don’t you ever watch the weather report? I can sum up why this article is wrong with the example of home ownership. Those owning in the most dangerous (to value) locations are bimodal. The rich build on the seashore, sloping ground, and other dangerous locations. Those choices are subsidized by the government through flood insurance and disaster relief. The poor live in dangerous locations as well, but the primary danger to the poor is environmental costs. These could be mitigated, but society is in the process of ending environmental regulation since it reduces ROI by an estimated .005. It’s not the math that discriminates; it’s people.