Amazon Built an AI to Evaluate Job Applicants. One Problem: It Seemed Sexist.

Gender Gap

In 2014, Amazon built an AI to evaluate job applicants’ résumés. By 2015, it realized the system had a major flaw: It didn’t like women.

According to five sources who spoke to Reuters, Amazon spent years developing an algorithm that used machine learning to sift through applicants' résumés and identify the best candidates.

But, in a decision that reads like a metaphor for the diversity-challenged tech sector, the company abandoned the effort in 2017 after it realized it couldn’t guarantee the AI wouldn’t discriminate against female applicants.

An Amazon spokesperson provided this statement to Futurism: “This was never used by Amazon recruiters to evaluate candidates.”

Bad Data

The problem, according to the unnamed Amazon sources, was that the company's developers trained the AI on résumés submitted to the company over a 10-year period. Though the Reuters report didn't spell it out, the researchers were likely trying to train it to flag new résumés that resembled those of applicants the company had hired in the past.
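
To see why that approach backfires, here is a deliberately tiny sketch of the idea, using entirely made-up data (this is not Amazon's system, and the résumés, words, and scoring scheme are all hypothetical): score each word by how much more often it appeared in hired résumés than in rejected ones, then rate new résumés by summing their words' scores.

```python
from collections import Counter

# Hypothetical past résumés, labeled 1 if the applicant was hired,
# 0 if rejected. Because most past hires were men, words from men's
# résumés dominate the "hired" class even when they say nothing
# about job skill.
past_resumes = [
    ("python java distributed systems rugby captain", 1),
    ("python c++ kernel development chess club", 1),
    ("java distributed systems hackathon winner", 1),
    ("python java womens chess club captain", 0),
    ("c++ distributed systems womens coding society", 0),
]

def word_scores(labeled_resumes):
    """Score each word by (count in hired) - (count in rejected),
    a crude stand-in for what a learned model would pick up."""
    hired, rejected = Counter(), Counter()
    for text, label in labeled_resumes:
        (hired if label else rejected).update(text.split())
    return {w: hired[w] - rejected[w] for w in set(hired) | set(rejected)}

def rate(resume, scores):
    """Rate a new résumé by summing its words' learned scores."""
    return sum(scores.get(w, 0) for w in resume.split())

scores = word_scores(past_resumes)

# Two candidates with identical technical terms; one mentions "womens".
a = rate("python java distributed systems chess club", scores)
b = rate("python java distributed systems womens chess club", scores)
# b < a: the second résumé is penalized purely because "womens"
# appeared only in past rejections.
```

The model never sees a "gender" field; it simply learns that words correlated with the mostly male pool of past hires predict hiring, which is exactly the failure mode the sources describe.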

But because most Amazon employees are male — as of late last year, men held 17 of the company's 18 top executive positions — the AI seemingly concluded that men were preferable.

Biased World

Training AIs with biased data — and thereby producing biased AIs — is a major problem in machine learning.
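
Part of what makes the problem so stubborn is that removing explicitly gendered words doesn't remove the bias: correlated "proxy" words carry the same signal. Another toy sketch, with hypothetical data and a hypothetical blocklist, makes the point:

```python
from collections import Counter

# Hypothetical labeled résumés (not real data). "netball" names no
# job skill, but in this toy history it co-occurs only with
# rejections, so it becomes a proxy for gender.
past_resumes = [
    ("python distributed systems rugby", 1),
    ("java kernel development rugby", 1),
    ("python distributed systems netball", 0),
    ("java kernel development netball", 0),
]

BLOCKED = {"womens", "mens"}  # naive fix: drop explicitly gendered words

def word_scores(labeled_resumes):
    """Same crude (hired count - rejected count) scoring as before,
    but with the blocklist applied first."""
    hired, rejected = Counter(), Counter()
    for text, label in labeled_resumes:
        words = [w for w in text.split() if w not in BLOCKED]
        (hired if label else rejected).update(words)
    return {w: hired[w] - rejected[w] for w in set(hired) | set(rejected)}

scores = word_scores(past_resumes)
# "netball" still scores negative and "rugby" positive, even though
# no explicitly gendered word survives filtering: the bias simply
# moved into proxy features.
```

This is why, per the report, Amazon couldn't simply patch the model and move on: guaranteeing non-discrimination means accounting for every proxy the model might have latched onto.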

A ProPublica investigation found that an algorithm used to predict whether criminal defendants would reoffend was biased against black defendants. And that's to say nothing of Microsoft's Tay, an artificially intelligent Twitter chatbot that online pranksters quickly taught to spew racist vitriol.

The tech industry now faces a huge challenge: It needs to figure out a way to create unbiased AIs when all the available training data comes from a biased world.