Artificial intelligence (AI) is becoming an increasingly important influence on our daily lives. However, it also carries a huge potential to cause problems.

Algorithms in our lives

Algorithms are omnipresent in our lives. A handful of companies and their algorithms govern our search results and the videos that appear in our newsfeeds. Software systems determine the routes we drive, the songs we listen to, and the information we see about our friends. These systems try to guess what will capture our attention and what we are most likely to select.

The effects of these complicated, opaquely constructed algorithms are becoming increasingly obvious. Personalization has become paramount because it helps us navigate the mountain of data available online, and artificial intelligence has become the standard way to deliver the personalization everyone now expects. Unsurprisingly, everyone is also quick to claim membership in the AI club.

At IBC 2017, the feature media companies boasted about most was artificial intelligence. Not all of those claims hold up, however. IBM’s Watson, which has successfully assembled a compelling movie trailer automatically, certainly fits the bill. Other companies’ claims, unfortunately, are better described as simple keyword matching.
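To make the contrast concrete, here is a toy sketch (not any vendor’s actual system) of what keyword matching looks like. A product built on logic like this can still be pitched as “AI-powered recommendations”:

```python
def keyword_recommend(query, catalog):
    """Return catalog titles that share at least one word with the query."""
    query_words = set(query.lower().split())
    return [title for title in catalog
            if query_words & set(title.lower().split())]

catalog = [
    "Best action movie trailers",
    "Cooking pasta at home",
    "Action camera review",
]

# Matches on literal word overlap only -- no learning involved.
print(keyword_recommend("action movie", catalog))
# → ['Best action movie trailers', 'Action camera review']
```

A genuine machine-learning system would instead rank items by inferred user preference, learned from behavior, rather than by literal word overlap.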

Earlier this year, Amplify Partners’ David Beyer captured the marketing war of words best:

“Too many businesses now are pitching AI almost as though it’s batteries included. I think that’s dangerous because it’s going to potentially lead to over-investment in things that overpromise. Then when they under-deliver, it has a deflationary effect on people’s attitudes toward the space.”

It is not just overpromising that poses problems.

AI gone awry

For years, Silicon Valley’s mantra has been “move fast and break things.” The credo has served the Valley well, as advances in technology are frequently built on a mountain of mistakes. These days, however, the mistakes have much bigger consequences.

Last year Microsoft released an AI chatbot called Tay. The program was designed to appeal to 18–24-year-olds, a prime social media demographic. However, Microsoft removed it from Twitter within 24 hours after it had morphed into an internet troll. The Redmond software company was horrified when the program began using foul language and quoting Nazi literature such as Mein Kampf.

YouTube experienced its own setback when its AI algorithms began removing videos from users who identified themselves as LGBT. The intent was to curb videos deemed inappropriate, but the program eliminated everything remotely associated with the flagged keywords. YouTube’s approach had the exact opposite effect of its intention.
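The failure mode is easy to reproduce. This is an assumed, minimal sketch of a naive blocklist filter, not YouTube’s actual system: any video whose metadata mentions a flagged term is dropped, including benign videos from creators who simply self-identify with it.

```python
FLAGGED = {"lgbt"}  # hypothetical blocklist for illustration

def naive_filter(videos):
    """Keep only videos whose titles contain no flagged keyword."""
    return [title for title in videos
            if not (FLAGGED & set(title.lower().split()))]

videos = [
    "My coming out story | LGBT vlog",
    "Weekly gaming highlights",
]

# The harmless self-identifying vlog is removed along with everything else.
print(naive_filter(videos))
# → ['Weekly gaming highlights']
```

A keyword blocklist has no notion of context or intent, which is exactly why it over-blocks: the word alone decides the outcome.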

YouTube has also had problems with fake news surfacing at the top of its search results. For example, during the Las Vegas shooting, the fifth result for searches on that phrase was a video titled ‘Proof the Las Vegas shooting was a FALSE FLAG attack.’

In the wake of failures like these, Emily Dreyfuss, a senior writer for WIRED, argued that the “move fast and break things” mantra has “outlived its appropriateness.” Companies are taking note.

The future of AI

Since the fake news fiasco, YouTube has ‘accelerated’ its rollout of planned changes to promote authoritative sources. However, changes to these algorithms may not come fast enough to keep up with the bad guys: Gartner predicts that by 2020, AI-driven creation of fake content will outpace the ability to detect it. If that prediction holds, we can expect more things to “move” and “break” in the world of digital media.

Technology companies are working hard to fight AI’s unintended consequences, and consumers are becoming more aware. Whether those efforts will be enough remains unclear. In the meantime, the battle between good and bad AI rages on.