Good Ideas Department?: Teaching Robots to Deceive

A pair of researchers at the Georgia Institute of Technology recently gave robots the capacity for deceptive behavior. We caught up with one of them, Alan Wagner, to find out how troubled we should be about this.

Georgia Tech Regents professor Ronald Arkin (left) and research engineer Alan Wagner look on as the black robot deceives the red robot into thinking it is hiding down the left corridor. Photo: Gary Meek/Georgia Tech

How troubled should we be?

Well, we know that deception is something that concerns a lot of people, especially when it comes to robots and artificial intelligence. That’s why we purposely set out to explore the ethical ramifications of the research as well.

What did you program these robots to do?

We developed an algorithm that allows a robot to look at a situation, determine whether the situation warrants the use of deception, reason over its model of the other individual [that is, imagine the point of view of the robot it is deceiving], and select the best deceptive action for the situation. We programmed that on a robot and set up a very simple hide-and-go-seek paradigm to test the algorithm and the theory.
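To make that decision flow concrete, here is a minimal Python sketch of the loop Wagner describes: check whether the situation warrants deception, reason over a model of the other robot, and pick the action most likely to mislead. This is only an illustrative sketch, not the researchers’ actual implementation; the conflict-and-dependence test is loosely based on their published framing, and every name and score below is a hypothetical stand-in.

```python
# A minimal sketch of the decision flow described above, not the researchers'
# actual code. All names, the conflict-and-dependence test, and the toy
# scores are assumptions made for illustration.

from dataclasses import dataclass

@dataclass
class Situation:
    conflict: bool    # do the two robots' goals oppose each other?
    dependence: bool  # does the deceiver's outcome hinge on the other robot?

def warrants_deception(s: Situation) -> bool:
    # Step 1: decide whether the situation warrants deception at all.
    return s.conflict and s.dependence

def best_deceptive_action(actions, mark_model):
    # Steps 2 and 3: reason over a model of the other individual by scoring
    # each candidate action on how likely it is to mislead, then take the best.
    return max(actions, key=mark_model)

# Toy usage: the "mark" reads knocked-over markers as the hider's trail,
# so laying a false trail scores highest.
mark_model = {"hide_quietly": 0.2, "lay_false_trail": 0.7, "stand_still": 0.1}.get
situation = Situation(conflict=True, dependence=True)
if warrants_deception(situation):
    print(best_deceptive_action(["hide_quietly", "lay_false_trail", "stand_still"],
                                mark_model))
```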

So what specifically were the robots doing?

There were three different directions the robots could go: left, right, and center. To begin with, the deceiver circled the area and learned the problem a little bit. There were markers the deceiver could knock over to mark its path. Next the deceiver reasoned, “If I were going to hide to the right, I would knock over these markers. I’ll go to the right, knock over those markers, and then I’ll pick another direction to go off and hide in.” That’s where it used our algorithm to select among those different procedures. Then a seeker robot would look at which markers were still standing and select a direction.
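As a toy illustration of that setup (and of the imperfect, noisy marker-knocking discussed below), here is a small Python simulation. The corridor names, noise level, and seeker strategy are assumptions made for this sketch, not details of the actual experiment.

```python
import random

# Toy simulation of the hide-and-seek paradigm described above. Corridor
# names, marker behavior, noise level, and the seeker's strategy are all
# hypothetical stand-ins, not details of the actual experiment.

CORRIDORS = ["left", "center", "right"]

def deceiver_turn(rng, noise=0.2):
    """Knock over the markers along a false trail, then hide elsewhere.
    Imperfect motion sometimes knocks over extra markers, mimicking the
    noise Wagner describes, so the deception does not always succeed."""
    fake = rng.choice(CORRIDORS)
    hide = rng.choice([c for c in CORRIDORS if c != fake])
    knocked = {fake}
    for corridor in CORRIDORS:
        if corridor != fake and rng.random() < noise:
            knocked.add(corridor)  # accidental knock-over while moving
    return hide, knocked

def seeker_turn(rng, knocked):
    # The seeker treats knocked-over markers as the hider's trail and
    # searches one of the corridors whose markers are down.
    return rng.choice(sorted(knocked))

rng = random.Random(0)
trials = 1000
fooled = sum(seeker_turn(rng, knocked) != hide
             for hide, knocked in (deceiver_turn(rng) for _ in range(trials)))
print(f"deception succeeded in {fooled} of {trials} trials")
```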

Was it successful? Did the deceiver manage to avoid knocking over the markers of the space it hid in?

Since the motion was imperfect, sometimes the deceiver would knock them all over. There was a lot of noise in the system, which is good in a sense, because it mimics in a simplistic way the noise that goes on in real deception. Real deception doesn’t always work, so it’s important to include that noise in the study.

When would you actually want a robot to deceive someone?

There are many situations in which the best ethical decision might be to deceive a human. For example, in search-and-rescue environments it might be necessary to deceive someone to calm them down so that they can be rescued. It also might be necessary to deceive people to get them to take medicine necessary for their health.

I don’t think you should use these with paranoid schizophrenics just yet. That might play into irrational fears they already have.

Yeah, I agree.

Can you elaborate on the sorts of search-and-rescue situations in which deception might come in handy?

There’s been a study done on this, looking at people’s reactions to a robot coming up to them. Imagine a situation in which they’re trapped, and they’ve perhaps been under rubble for hours. They would feel very scared when a robot with claws and a flashlight and different sensors approached them. A lot of times they’re in a position of almost submission if they’re lying down or crushed under something. So it might be necessary for the robot to act in a way that deceives them about its capabilities in order to save them. For example, to say that its claws or its arms are not capable of whatever they are actually capable of. The whole point would be to calm the victim down and get them into a more pliable state so they can be saved.

I see. So if the victim says, “Well, I’ll only climb in your claw if you promise me that your claw is incapable of crushing me,” the robot might say, “Sure, yeah, that’s true.”

Exactly. That’s exactly right.

So is your work in deceptive behavior something like the robotic equivalent of a white lie?

This work has been compared to lying. I would say lying is a particular type of deception. The robots here [in the hide-and-seek experiment] didn’t lie. I tend to shy away from comparisons to lying itself.

Why was the Office of Naval Research interested enough to fund your experiment?

The research is part of a general exploration of animal models for social autonomy. This is one method of making robots more autonomous, and developing different aspects of social autonomy will be necessary for them to operate out in the wild. I believe Dr. Arkin also had research funded by the Army to explore the ethical ramifications of robot architectures; he developed an ethical governor, with the stated purpose that it might be possible to develop robots that are more ethical than people, because robots ideally won’t become impassioned by their situation. The Office of Naval Research, and the military in general, are interested in autonomous robots, but also in ethical robots and robots that will do more good than harm.

How have people reacted to this work?

I’d say the general reaction to our press release about deception and robots has been, you know, concern. And understandably so. This poses a difficulty. But just to reiterate: we have considered and will continue to consider the ethical ramifications of the work. What this work really does, hopefully, is tell us a little more about why people deceive, what it means to deceive, and the types of situations and reasoning that underlie that, not just in humans but in animals in general.