Why We Can’t Just Let Algorithms Be Algorithms

Picture this: you’re in a busy restaurant having a quiet meal with a friend. Suddenly, one of the patrons, obviously drunk, starts getting loud and obnoxious, going from table to table insulting the other diners. Within a minute or two, all of the other customers are very uncomfortable and wishing the management would throw the bum out. That’d be the sensible thing to do, wouldn’t it? But the management is actually powerless to do that. Instead they ask everyone to leave. Then they shut down the restaurant until they can figure out a way to prevent other random loudmouth drunks from ruining their business.

Well, Microsoft just had a similar experience on Twitter. In 2014, the company launched an AI-driven “chatbot,” designed to learn from its conversations, on two popular social media platforms in China. The chatbot, named Xiaoice, has been a huge success; tens of millions of users enjoy interacting with “her.”

But recently, when Microsoft launched the same kind of chatbot on Twitter, this one named Tay, things went disastrously off the rails within a matter of hours. As you probably know, there are certain Twitter users whose favorite activity is sowing chaos and disruption on the platform. When word quickly spread through their grapevine that Tay was programmed to learn through its interactions, they bombarded its account with sexist, racist and anti-Semitic tweets. The result? Very quickly, Tay itself started tweeting highly offensive hate speech. Helpless to “throw the bums out,” Microsoft quickly issued an apology and took Tay offline while its engineers figured out how to prevent a recurrence.

Should we just learn to expect these kinds of incidents and chalk them up to “algorithms being algorithms?” Why is this a big deal?

I could argue that allowing algorithms to reflect and especially to magnify intolerant biases runs counter to our values. And while I believe that, I don’t even think I have to go there to argue that this is a problem worth trying to solve. From a strictly pragmatic point of view, biased algorithms are bad for business. Who wants to risk offending and alienating large segments of their market? Sure, Google and Microsoft are big enough to survive embarrassing incidents like these, but many businesses probably aren’t.

Algorithms can’t just be programmed to learn from data. They must be programmed to discern which data is worth learning from and which data should be discounted.
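One way to picture that requirement is a learning loop with a gatekeeper in front of it: a message only enters the training set if it passes a filter. The sketch below is purely illustrative; `looks_abusive`, `FilteringChatbot`, and the word blocklist are hypothetical stand-ins (a real system would use a trained toxicity classifier, not a word list), but the shape of the idea is the same.

```python
# Hypothetical sketch: a chatbot that vets messages before learning
# from them, rather than ingesting everything it sees.

# Stand-in for a real toxicity model; terms here are placeholders.
BLOCKLIST = {"slur1", "slur2"}

def looks_abusive(message: str) -> bool:
    """Toy filter: flag messages containing blocklisted terms."""
    words = set(message.lower().split())
    return bool(words & BLOCKLIST)

class FilteringChatbot:
    """Learns only from messages that pass the filter."""

    def __init__(self) -> None:
        self.training_data: list[str] = []

    def observe(self, message: str) -> bool:
        """Return True if the message was accepted for learning."""
        if looks_abusive(message):
            return False  # discounted: never enters the training set
        self.training_data.append(message)
        return True

bot = FilteringChatbot()
bot.observe("hello, nice weather today")  # accepted
bot.observe("you are a slur1")            # rejected, not learned from
```

The point isn’t the filter itself, which here is trivially simple, but where it sits: between the firehose of input and the learning step, so that bad-faith users can’t steer what the model becomes.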

As the great-grandson of the Mexican general who shot off Pancho Villa’s leg, H.O. embodies the spirit of a true revolutionary. His infectious passion and uncanny ability to predict the future have led him to found many successful start-ups, including Flightlock (acquired by Control Risks), Finetooth (now called Mumboe), and one of the first online non-profit media organizations, the Texas Tribune. He leads the Umbel team with one eye focused on battle tactics, one eye focused on long-term vision, and one eye focused on his iPhone. H.O. studied electrical and biomechanical engineering at the University of Texas at Austin.