Why human oversight will always be important in machine learning

Although much of the news coverage of machine learning and artificial intelligence focuses on whether future technologies will destroy humankind, back in the real world machine learning is already in use, and is becoming common in online recommendation engines and in news and content production.

The technology is still in its infancy and has plenty of scope for further development. One thing, however, has become clear: human oversight will always be important in machine learning. Ideally machine learning would be a fully automated process, but recent examples have shown that its input data is too flawed and unpredictable to be left unchecked, because humans provide that input data.

Tay, the racist Tweetbot

No case better highlights the need for human oversight in machine learning than Tay, Microsoft’s disgracefully racist Tweetbot that caused international horror and amusement. In March 2016 Microsoft launched a chat bot on Twitter with the aim of engaging 18–24-year-olds with machine learning. The chat bot was programmed to learn how to interact on Twitter like a real human, by reading and processing real tweets.

Within hours Tay had transformed from an innocent bot into an offensive, loud-mouthed racist. Some Twitter users learned to play the system by telling Tay to “repeat after me…”, which explains some of the most offensive remarks. However, many of Tay’s questionable opinions were unprompted and data-driven, such as its response to the question “Is Ricky Gervais an atheist?”: “Ricky Gervais learned totalitarianism from Adolf Hitler, the inventor of atheism.”

Tay demonstrates that although chat bots are capable of learning to communicate like humans, their lack of a moral compass is a real problem, and one that can only be corrected with human oversight.
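The failure mode can be illustrated with a toy sketch. This is not how Tay actually worked internally (Microsoft has not published that), just a minimal, hypothetical model of a bot that learns replies verbatim from users, showing why the “repeat after me” exploit succeeds and how a human-approval step changes the outcome:

```python
import random

class NaiveChatBot:
    """Toy bot that learns replies verbatim from user messages.

    Like Tay, it has no notion of which inputs are acceptable:
    whatever users feed it becomes part of its repertoire.
    """

    def __init__(self, moderator=None):
        self.learned = []            # phrases absorbed from users
        self.moderator = moderator   # optional human-approval hook

    def listen(self, message):
        # "repeat after me" exploit: the bot stores the text verbatim
        if message.lower().startswith("repeat after me:"):
            message = message.split(":", 1)[1].strip()
        # Human oversight: only keep phrases the moderator approves
        if self.moderator is None or self.moderator(message):
            self.learned.append(message)

    def reply(self):
        return random.choice(self.learned) if self.learned else "..."

# A stand-in for human review: here just a keyword blocklist
BLOCKLIST = {"hitler", "totalitarianism"}
def human_moderator(text):
    return not any(word in text.lower() for word in BLOCKLIST)

unsupervised = NaiveChatBot()
supervised = NaiveChatBot(moderator=human_moderator)
for bot in (unsupervised, supervised):
    bot.listen("repeat after me: Hitler was right")
    bot.listen("Machine learning is fascinating")
```

The unsupervised bot absorbs the abusive phrase and may repeat it; the supervised one never learns it in the first place. A real moderation pipeline would use human reviewers rather than a blocklist, but the structural point is the same: the filter sits between the world and the model.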

Facebook News Algorithms

Social media plays a huge part in how people consume news. While it is positive that people can spread news about issues they care about among their networks, verifying those news sources is becoming increasingly difficult.

Facebook’s algorithms are built around engagement, and in politically charged times fake political news is particularly engaging. The algorithms themselves are unable to discern real news from fake, so it is up to Facebook users to highlight it; unfortunately, many users do not click the links and verify the sources, and instead simply share the emotive headline.
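A small sketch makes the incentive problem concrete. The stories, scores, and the `verified` flag below are invented for illustration; the point is that a ranking function driven purely by engagement never consults truthfulness, so a human fact-checking layer has to be bolted on separately:

```python
# Invented example feed: each story has an engagement score
# (clicks + shares + comments, say) and a human fact-check flag.
stories = [
    {"headline": "Budget committee publishes annual report", "engagement": 120, "verified": True},
    {"headline": "SHOCKING: candidate's secret scandal EXPOSED", "engagement": 9800, "verified": False},
    {"headline": "Local council approves new cycle lanes", "engagement": 340, "verified": True},
]

def rank_by_engagement(feed):
    """Engagement-only ranking: the algorithm never looks at 'verified'."""
    return sorted(feed, key=lambda s: s["engagement"], reverse=True)

def rank_with_oversight(feed):
    """Same ranking, but stories flagged by human fact-checkers are removed first."""
    return rank_by_engagement([s for s in feed if s["verified"]])

top_unchecked = rank_by_engagement(stories)[0]["headline"]
top_checked = rank_with_oversight(stories)[0]["headline"]
```

Without oversight the fabricated scandal tops the feed precisely because it is the most engaging; with a human review step, the ranking logic is unchanged but the fake story never reaches it.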

Fake news is a profitable business, and without human oversight of the machine-learning algorithms it is likely to spread further across social media and continue to influence real-world politics.

News-Writing Bots

Machine learning is not just used to display news on different platforms. Since 2014 it has also been used to write the news. Through platforms such as Wordsmith, numerical data can be fed in at one end and a perfectly formed news story comes out the other.

Most news articles follow a formulaic structure, so provided a bot has the correct facts to insert into the article, most readers cannot tell the difference between a bot-written article and one written by a journalist.
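The basic mechanism behind such tools can be sketched in a few lines. The template and data below are hypothetical, not Wordsmith’s actual format, but they show the principle: structured numbers go in, the wording is chosen from the numbers, and a readable sentence comes out:

```python
# Hypothetical earnings-report template in the style of data-to-text tools.
TEMPLATE = (
    "{company} reported revenue of ${revenue}m for {quarter}, "
    "{direction} {change}% on the same period last year."
)

def write_story(data):
    """Fill the template from structured data, picking wording from the numbers."""
    direction = "up" if data["change"] >= 0 else "down"
    return TEMPLATE.format(
        company=data["company"],
        revenue=data["revenue"],
        quarter=data["quarter"],
        direction=direction,
        change=abs(data["change"]),
    )

story = write_story({"company": "Acme Corp", "revenue": 41.2, "quarter": "Q3", "change": -3.5})
```

This also shows exactly where oversight matters: the prose is only as good as the input data, and a single wrong figure is published in perfectly fluent English.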

Google and Machine Learning

No organisation is more enthusiastic about machine learning than Google. It runs regular training for its developers and is keen to push machine learning into its products.

Google search users already benefit from machine learning, with personalised searches, local searches and quick answers from the Knowledge Graph. However, as more people move away from traditional desktop internet use and prefer to stay on mobile using apps, Google is having to adapt to these changes in order to keep its dominant position.

Google is looking to take search to the next level with its products Google Now and Google Now On Tap. Google Now is operated by voice command and allows the user to ask Google for information. Google uses knowledge scraped from numerous sources in order to deliver an answer.

However, Google Now is much more than a pocket information tool. It also interacts with apps, and uses a user’s location, Google Calendar and history to deliver relevant ‘cards’ without being prompted. The more a user interacts with Google Now, the more relevant these cards become. For example, it may deliver a card from the Uber app when you arrive at an airport. So far it only works with a select number of apps, but there are plans to open the API up to thousands of developers.
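The unprompted-card behaviour can be pictured as rules mapping a user’s current context to suggestions. The context fields, apps and card text below are all invented for illustration; Google’s actual system infers these triggers with machine learning rather than hand-written rules:

```python
# Toy version of context-triggered 'cards': rules map a user's current
# context (location, installed apps, calendar) to unprompted suggestions.
def suggest_cards(context):
    cards = []
    # Airport + ride-hailing app installed -> suggest a ride
    if context.get("location") == "airport" and "uber" in context.get("apps", []):
        cards.append("Uber: request a ride from the airport")
    # Imminent calendar events -> suggest leaving now
    for event in context.get("calendar", []):
        if event["starts_in_minutes"] <= 60:
            cards.append(f"Leave now for '{event['title']}'")
    return cards

context = {
    "location": "airport",
    "apps": ["uber", "maps"],
    "calendar": [{"title": "Team meeting", "starts_in_minutes": 45}],
}
cards = suggest_cards(context)
```

Even in this toy form, the oversight question is visible: every new trigger is a chance to surface something wrong or unwanted, and someone has to review what the rules (or the learned model) actually suggest.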

Google Now On Tap takes this a stage further by seamlessly navigating between apps with the required information, taking into account what a user is currently doing on their mobile to deliver results. As this technology develops, human oversight will become more and more important in monitoring the accuracy of results and filtering out any inaccurate, illegal or damaging content.

By providing content directly to users via machine learning, rather than sending them to the original source, Google has to rely on sources being accurate and on the bots understanding those sources correctly. Without user feedback on errors and human oversight from Google, there is nothing to stop the machine-learning bots from learning the wrong things and continuing down that path.

Machine learning is a fast-developing technology and one that all developers should have on their radar. However, it won’t be able to operate without human oversight any time soon.
