Artificial intelligence is quickly becoming as biased as we are

When you perform a Google search for everyday queries, you don’t typically expect systemic racism to rear its ugly head. Yet, if you’re a woman searching for a hairstyle, that’s exactly what you might find.

A simple Google image search for ‘women’s professional hairstyles’ returns the following:

Here, you’ll find hairstyles generally done by stylists in a professional setting.

It’s the nature of Google. It returns what it thinks you’re looking for based on contextual clues, citations and link data. In general, and without further context, you could probably pat Google on the back and say ‘job well done.’

That is, until you try searching for ‘unprofessional women’s hairstyles’ and find this:

In these results, you’ll find a hodge-podge of hairstyles sported by black women, all of which seem, well, rather normal. On a personal note, I can’t see anything unprofessional about any of these, yet the fact that they surfaced when I typed in that query proves not everyone sees it that way.

Again, this is the nature of the beast. These images appear because of the context in which they’re talked about. In this case, you’ll see a barrage of tweets from women complaining that bosses, colleagues or others told them their hair was unacceptable for the workplace.

What’s concerning, though, is just how much of our lives we’re on the verge of handing over to artificial intelligence. With today’s deep learning algorithms, the ‘training’ of this AI is often as much a product of our collective hive mind as it is of explicit programming. Artificial intelligence, in fact, is using our collective thoughts to train the next generation of automation technologies. All the while, it’s picking up our biases and making them more visible than ever.
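To see how that happens mechanically, here’s a minimal sketch of the idea: a search system that ranks results purely by how often words co-occur with them in its training data. The captions, image names, and scoring scheme below are all invented for illustration; real systems are vastly more complex, but the dynamic is the same — the model never decides to be biased, it simply echoes the associations in its corpus.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus: captions paired with the images they describe.
# If the text people write pairs "unprofessional" with certain hairstyles,
# the model learns exactly that pairing -- nothing more, nothing less.
corpus = [
    ("unprofessional hair at work", "image_of_natural_hairstyle"),
    ("told my hair was unprofessional", "image_of_natural_hairstyle"),
    ("professional salon hairstyle", "image_of_salon_hairstyle"),
    ("sleek professional updo", "image_of_salon_hairstyle"),
]

# Count how often each word co-occurs with each image.
cooccurrence = defaultdict(Counter)
for caption, image in corpus:
    for word in caption.split():
        cooccurrence[word][image] += 1

def search(query):
    """Rank images by how often they co-occur with the query's words."""
    scores = Counter()
    for word in query.split():
        scores.update(cooccurrence.get(word, Counter()))
    return [image for image, _ in scores.most_common()]

print(search("unprofessional hairstyles"))
```

The top result for “unprofessional” is the very hairstyle people were complaining about being *called* unprofessional: the complaint itself becomes the training signal.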

As Donald Trump spouts off in racist, sexist, xenophobic rants about how to make the country great again, that language is being used to train Twitter bots that share his views.

This is just the beginning, and while offensive, the AI mentioned above is mostly harmless. If you want the scary stuff, consider algorithmic policing, which is expanding and relies on many of the same principles that trained the examples above. In the future, our neighborhoods will see an increase or decrease in police presence based on data that we already know is biased.

Wrap your head around that for a second.

We’re still very much in the infancy of what artificial intelligence is capable of. In five years, 10 years, 25 years, you can imagine how much of our lives will be dictated by algorithms.

What’s becoming clear though, is that not everything we’re teaching them is worth passing on.