Stay on target

Earlier this week Google showed off the new Duplex feature of the Google Assistant. It was a stunning demo, and possibly one of the first pieces of software to clear the Turing Test. On stage, the assistant called a restaurant to make a reservation, seemingly without the restaurant workers detecting that they were talking to a machine. It was truly impressive, but TechCrunch notes that it's disconcerting evidence that Google isn't keeping the serious ethical implications of AI in mind when designing these systems.

“Google’s experiments do appear to have been designed to deceive,” said Dr. Thomas King, a researcher at the Oxford Internet Institute’s Digital Ethics Lab. “Because their main hypothesis was ‘can you distinguish this from a real person?’. In this case, it’s unclear why their hypothesis was about deception and not the user experience… You don’t necessarily need to deceive someone to give them a better user experience by sounding natural. And if they had instead tested the hypothesis ‘is this technology better than preceding versions or just as good as a human caller’ they would not have had to deceive people in the experiment.”

That may sound like a small distinction, but there are serious ethical concerns around experiments, especially those conducted on or involving humans.

“Even if they don’t intend it to deceive, you can say they’ve been negligent in not making sure it doesn’t deceive,” Dr. King added. “I can’t say it’s definitely deceptive, but there should be some kind of mechanism there to let people know what it is they are speaking to… I’m at a university and if you’re going to do something which involves deception you have to really demonstrate there’s scientific value in doing this.”

At issue is the fact that knowing who and what we’re interacting with changes how we react. We make a myriad of small decisions based on those cues. Tone, for instance, can allow us to shift and better empathize with another human being. You may soften if you realize the person you’re speaking with is stressed or has had a rough day.

“And if you start blurring the lines,” Dr. King said, “then this can sow mistrust into all kinds of interactions — where we would become more suspicious as well as needlessly replacing people with meaningless agents.”

Google CEO Sundar Pichai said that it was the pinnacle of all of Google’s recent work.

“It brings together all our investments over the years in natural language understanding, deep learning, text to speech,” Pichai said. The calls, he said, were real, meaning that it’s unlikely that Google employees called ahead to give a heads-up to the callees.

King also notes that humans come with biases about other humans, and robots could unintentionally reinforce some of them.

“If you were to use a robotic voice there would also be less of a risk that all of your voices that you’re synthesizing only represent a small minority of the population speaking in ‘BBC English’ and so, perhaps in a sense, using a robotic voice would even be less biased as well,” Dr. King said. “If it’s not obvious that it’s a robot voice there’s a risk that people come to expect that most of these phone calls are not genuine. Now experiments have shown that many people do interact with AI software that is conversational just as they would another person but at the same time there is also evidence showing that some people do the exact opposite — and they become a lot ruder. Sometimes even abusive towards conversational software. So if you’re constantly interacting with these bots you’re not going to be as polite, maybe, as you normally would, and that could potentially have effects for when you get a genuine caller that you do not know is real or not. Or even if you know they’re real perhaps the way you interact with people has changed a bit.”

These situations are delicate, but it’s important that even at these very early stages, we take AI ethics very, very seriously. As countless minds have said before, there are serious risks endemic to the creation of even pseudo-AI. And if we want to avoid the robot apocalypse, we’d best take those risks seriously.