The 'Intelligent' Robot That Became Racist

The study involved GloVe, a machine-learning model that builds word embeddings by statistically analyzing large amounts of online text. “We replicate these using a widely used, purely statistical machine-learning model—namely, the GloVe word embedding—trained on a corpus of text from the Web,” reads the study. In the association task, the system is prompted to link words like “flowers” and “insects” with words the scientists define as “pleasant” (such as “family”) or “unpleasant” (such as “accident”). It made the expected pairings, tying flowers to pleasant words and insects to unpleasant ones. But when researchers offered GloVe a list of white- and black-sounding names, the system revealed a racial bias: names common among white people were more strongly associated with “pleasant” words, while African-American names skewed toward the “unpleasant” ones. “Our results indicate that language itself contains recoverable and accurate imprints of our historic…
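Under the hood, an association test of this kind boils down to comparing cosine similarities between word vectors. Below is a minimal sketch of the idea using made-up two-dimensional toy vectors (real GloVe embeddings have hundreds of dimensions and are trained on billions of words of web text; the vector values and word lists here are purely illustrative):

```python
import math

# Toy word vectors standing in for real GloVe embeddings (illustrative values only).
vectors = {
    "flowers":  [0.9, 0.1],
    "insects":  [0.1, 0.9],
    "family":   [0.8, 0.2],   # example "pleasant" attribute word
    "accident": [0.2, 0.8],   # example "unpleasant" attribute word
}

def cosine(u, v):
    """Cosine similarity between two vectors: 1.0 = same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def association(word, pleasant, unpleasant):
    """Mean similarity to pleasant words minus mean similarity to unpleasant words."""
    p = sum(cosine(vectors[word], vectors[a]) for a in pleasant) / len(pleasant)
    u = sum(cosine(vectors[word], vectors[a]) for a in unpleasant) / len(unpleasant)
    return p - u

print(association("flowers", ["family"], ["accident"]))  # positive: leans "pleasant"
print(association("insects", ["family"], ["accident"]))  # negative: leans "unpleasant"
```

The researchers' test works the same way, except the target lists hold names instead of flowers or insects, and a positive or negative association score reveals which attribute set the names sit closer to in the embedding space.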