Machines that Speak and Write Will Make Misinformation Worse

Executive Summary

The world’s dominant technology players are rushing to create software — intelligent or not — to both converse and write as effectively as humans. The problem is that the coming era of proficient talking and writing machines holds many challenges that few seem to be seriously addressing. For example, how do we ensure improved conversational technologies don’t result in a new generation of pernicious and convincing fake news bots? How do companies protect themselves from rogue bots speaking on their behalf? How can we avoid scenarios in which machines try to please us by telling us only the things we want to hear? What methods can we employ to prevent verbose machines from dragging all human discourse down to the lowest common denominator? Those creating these powerful technologies should focus attention on how to leverage them for the true benefit of the consumer. If we can create bots that spread fake news, can we create technology that exposes such falsehoods? An AI-powered bot designed to help consumers spot bias in reporting could provide a counterweight, or possibly even a deterrent, to talking and writing bots aimed at swaying opinion or increasing social and political division. As we race headlong into the era of conversational machines, it’s time to start thinking about their cons, and designing tools to combat them.


Technologists have long dreamed of building machines that converse as nimbly as humans. Early practitioners famously underestimated the magnitude of the challenge. Yet in 2019, we appear to be mere steps from the goal. The world’s dominant technology players are rushing to create software — whether intelligent or not — that can both converse and write as effectively as humans.

So, what’s the problem? We are not adequately prepared to address the hazards that could come with our successful launch of conversational machines. To anticipate the challenges ahead, it helps to take a quick look at the underlying technologies.

Amazon is at the forefront of the voice assistant industry, and executives on the Amazon Alexa team are candid about their aim to make Alexa more conversational. The Seattle company launched the Alexa Prize competition three years ago to motivate the best and brightest academic teams. The goal is to make Alexa more effective in "open domain conversations," meaning discussions that roam freely across topics.


A peek under the covers of these conversational bots shows a mix of low-tech pre-scripted responses and cutting-edge machine learning algorithms. Predictive neural networks enable the best bots to generate convincing responses on the fly. To construct statements, software scrapes websites and open content databases. Common sources include the Cornell Movie-Dialogs Corpus, Reddit, and IMDb. Bots are trained to quickly combine bits of movie dialogue, Reddit comments, and other content into remarks and responses that plausibly fit into an ongoing conversation.
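The retrieval step described above can be sketched in a few lines. This is a toy illustration, not any production system: the corpus, the word-overlap scoring, and the function names are all stand-ins for what real bots do with scraped dialogue data and learned neural encoders.

```python
# Toy retrieval-based responder: picks the corpus reply whose recorded
# context best matches the user's utterance by simple word overlap.
# Production bots replace this scoring with neural similarity models.
TOY_CORPUS = [  # (context, reply) pairs, stand-ins for scraped dialogue data
    ("how are you", "I'm doing great, thanks for asking."),
    ("seen any good movies", "I loved the last heist film I watched."),
    ("what do you think of the news", "It's been a busy week for headlines."),
]

def score(a: str, b: str) -> int:
    # Count words the two strings share, ignoring case.
    return len(set(a.lower().split()) & set(b.lower().split()))

def respond(utterance: str) -> str:
    # Return the reply attached to the best-matching context.
    context, reply = max(TOY_CORPUS, key=lambda pair: score(utterance, pair[0]))
    return reply

print(respond("Have you seen any good movies lately?"))
```

Even this crude overlap heuristic produces a reply that "plausibly fits" the conversation, which is precisely why scaled-up versions of the technique can be so convincing.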

Other content-generating bots, sometimes called robo-writers, use financial data and sports scores to automatically write reports and articles virtually indistinguishable from those authored by humans.
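The core mechanism of a robo-writer is filling narrative templates from structured data. The sketch below is a minimal illustration with invented team names and a hand-written template; commercial services use far richer templates, live data feeds, and natural-language generation models.

```python
# Toy robo-writer: turns a structured box score into a one-sentence recap.
# Team names, thresholds, and phrasing are illustrative assumptions.
def game_recap(home: str, away: str, home_pts: int, away_pts: int) -> str:
    winner, loser = (home, away) if home_pts > away_pts else (away, home)
    margin = abs(home_pts - away_pts)
    verb = "edged" if margin <= 3 else "defeated"  # vary wording by margin
    return f"{winner} {verb} {loser} {max(home_pts, away_pts)}-{min(home_pts, away_pts)}."

print(game_recap("Seattle", "Portland", 101, 98))
```

Because the output is grammatical and varies with the data, readers have little surface signal that no human wrote it.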

A glimpse into the potential dangers of our new world emerged from recent research by OpenAI. The team at the non-profit developed software that could auto-generate a cogent, convincing argument on a hot-button topic. In fact, the program was so proficient that OpenAI worried it might be misused to generate fake news, and opted not to release the full model to the public.

The coming era of proficient talking and writing machines holds many challenges that few seem to be seriously addressing. Here is a small list to ponder:

How do we ensure improved conversational technologies don’t result in a new generation of pernicious and convincing fake-news bots?

How do companies protect themselves from rogue bots speaking on their behalf? The most often cited example is Microsoft’s Tay, which notoriously stumbled by mimicking extremely inappropriate statements picked up from malicious interlocutors.

How can we avoid scenarios in which machines try to please us by telling us only the things we want to hear? DeepMind released a study on recommendation engines, pointing out the role these systems play in creating echo chambers that wall a person off from different viewpoints.

What methods can we employ to prevent verbose machines from dragging all human discourse down to the lowest common denominator? Recall that software used to generate automated responses relies heavily on datasets of the most mundane human-generated content and conversations.
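The echo-chamber dynamic raised above can be made concrete with a toy simulation. This is an illustration under strong assumptions, not a model of any real recommender: a system that serves topics in proportion to past clicks exhibits a rich-get-richer loop, so one topic tends to crowd out the rest.

```python
import random

# Toy feedback loop: recommend in proportion to past clicks, and assume
# the user engages with whatever is served. Topic names and the number
# of rounds are illustrative; the seed makes the run reproducible.
random.seed(0)
TOPICS = ["politics", "sports", "science", "arts"]
clicks = {t: 1 for t in TOPICS}  # start with uniform interest

for _ in range(200):
    # Sampling weighted by click history creates the reinforcing loop.
    topic = random.choices(TOPICS, weights=[clicks[t] for t in TOPICS])[0]
    clicks[topic] += 1  # engagement feeds back into future recommendations

top_share = max(clicks.values()) / sum(clicks.values())
print(f"Share of feed taken by the dominant topic: {top_share:.0%}")
```

With no notion of diversity in the objective, the simulated feed drifts toward whichever topic got an early lead, a miniature version of the walling-off effect.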

Those creating these powerful technologies should focus attention on how to leverage them for the true benefit of the consumer. If we can create bots that spread fake news, can we create technology that exposes such falsehoods? Could we go a step further and highlight one-sided points of view? Such a service could help keep consumers from falling into the trap of switching off their critical thinking.

Imagine a smart bot that whispered in your ear to be on the lookout for possible slanted reporting in articles with the following headlines:

Mayor Calls for Sensible Reduction of Killer Dogs

Mayor Harasses Owners of Falsely Maligned Dog Breed

Your “objectivity” bot might point you instead to a third piece of content, with the headline:

Mayor Takes on Controversial Issue of Pitbull Restrictions

An AI-powered bot designed to help consumers spot bias in reporting could provide a counterweight, or possibly even a deterrent, to talking and writing bots aimed at swaying opinion or increasing social and political division.
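A first, deliberately naive cut at such a slant check could flag emotionally loaded words in a headline. The word list below is a hand-picked illustration keyed to the example headlines above; a real "objectivity" bot would need trained classifiers and far more context, not keyword matching.

```python
# Toy slant check: flags headlines that contain emotionally loaded words.
# LOADED_WORDS is an illustrative, hand-picked list, not a vetted lexicon.
LOADED_WORDS = {"killer", "harasses", "maligned", "sensible", "falsely"}

def loaded_terms(headline: str) -> set[str]:
    # Normalize case and strip trailing punctuation before matching.
    return {w.strip(".,").lower() for w in headline.split()} & LOADED_WORDS

for h in [
    "Mayor Calls for Sensible Reduction of Killer Dogs",
    "Mayor Harasses Owners of Falsely Maligned Dog Breed",
    "Mayor Takes on Controversial Issue of Pitbull Restrictions",
]:
    flags = loaded_terms(h)
    print(h, "->", "possible slant:" if flags else "no flags", sorted(flags))
```

Notably, the neutral third headline produces no flags while both slanted versions do, which is the kind of signal an objectivity bot could surface to readers.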

It should no longer come as a surprise that every technology created by humans comes with pros and cons. As we race headlong into the era of conversational machines, it’s time to start thinking about their cons, and designing tools to combat them.

Amy Stapleton is founder and CEO of Tellables, a publisher of conversational stories for talking devices. Prior to founding Tellables, Stapleton was an Information Technology Manager at NASA.