Selmer Bringsjord, Department of Cognitive Science chair, talks about the future of robotics on Tuesday, Feb. 25, 2014, at Rensselaer Polytechnic Institute in Troy, N.Y. Bringsjord is part of a research project, funded by the U.S. military, on how to bring ethics into the world of robotics. (Cindy Schultz / Times Union)

Robots using prototype morality software being developed at Rensselaer Polytechnic Institute under a multi-university research project sponsored by the U.S. Navy.

RPI researchers design robots to do the right thing

Troy

It's the stuff of countless songs and stories — knowing that something you want to do is wrong, doing it anyway and then regretting the choice.

Last month, two researchers from Rensselaer Polytechnic Institute, among them Selmer Bringsjord, chair of the school's Department of Cognitive Science, demonstrated this phenomenon, known in philosophy as akrasia, at an international conference. The two are studying how to impart notions of human morality to artificially intelligent robots.

The military is interested in developing so-called moral robots and in understanding the consequences of their actions. The thinking is that such robots would be incapable of acting ruthlessly and immorally in war, for example by killing the innocent or abusing the helpless.

"We're talking about robots designed to be autonomous; hence the main purpose of building them in the first place is that you don't have to tell them what to do," Bringsjord said. "When an unforeseen situation arises, a capacity for deeper, on-board reasoning must be in place, because no finite rule set created ahead of time by humans can anticipate every possible scenario."

Last month in Chicago, at the first symposium on ethics and technology held by the Institute of Electrical and Electronics Engineers, the two RPI researchers demonstrated a kind of "morality" software in a robot simulation that involved guarding a supposed enemy.

The scenario revolved around an all-too-human form of akrasia — a desire for revenge, Bringsjord said. One robot was assigned the role of guarding the other, with the first robot aware that the second was an enemy that had attacked it earlier.

In Bringsjord's approach, every robot decision would automatically pass through a preliminary, lightning-quick ethical check that uses simple logics inspired by advanced artificial-intelligence and question-answering systems.
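To make the idea concrete, here is a minimal sketch of what such a two-tier check could look like. Everything in it, the rule categories, action names and function, is a hypothetical illustration, not RPI's actual software.

```python
# Illustrative sketch only: a lightning-quick ethical pre-screen that either
# clears a proposed action, blocks it outright, or flags it for the slower,
# deliberate moral reasoner. The rule sets and action names are hypothetical.

FORBIDDEN = {"attack_noncombatant", "abuse_captive"}   # never permitted
SENSITIVE = {"attack", "restrain", "withhold_aid"}     # needs deliberation

def quick_ethical_check(action: str) -> str:
    """Return 'allow', 'deny', or 'deliberate' for a proposed action."""
    if action in FORBIDDEN:
        return "deny"
    if action in SENSITIVE:
        return "deliberate"   # hand off to deeper, on-board moral reasoning
    return "allow"

print(quick_ethical_check("patrol"))   # -> allow
print(quick_ethical_check("attack"))   # -> deliberate
```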

If the check revealed a need for deep, deliberate moral reasoning, such reasoning would come from a newly invented logic built for the task, the "Deontic Cognitive Event Calculus," or DCEC. It is a machine-readable formal language meant to function as a kind of moral governor on robotic behavior.
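DCEC itself is a formal logic, not a programming language, but the underlying idea, judging actions against explicit obligations and prohibitions rather than a fixed list of rules, can be roughly pictured as follows. The class, predicates and names are assumptions made for illustration, not the actual calculus.

```python
# Toy stand-in for deontic reasoning: candidate actions are judged against
# explicit norms (obligations and prohibitions). This only illustrates the
# concept; it is not the Deontic Cognitive Event Calculus itself.

from dataclasses import dataclass, field

@dataclass
class Norms:
    obligations: set = field(default_factory=set)    # actions the agent must take
    prohibitions: set = field(default_factory=set)   # actions the agent must not take

def deliberate(candidates: list, norms: Norms):
    """Choose an action consistent with the norms, preferring obligatory ones."""
    permissible = [a for a in candidates if a not in norms.prohibitions]
    obligatory = [a for a in permissible if a in norms.obligations]
    return (obligatory or permissible or [None])[0]

norms = Norms(obligations={"guard_captive"}, prohibitions={"attack_captive"})
print(deliberate(["attack_captive", "guard_captive"], norms))  # -> guard_captive
```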

In the first scenario, the guard robot did not contain the morality software, but did have a program that dictated it could act in its own defense. In this case, the robot attacked its adversary even though it had been directed to guard it.

In the second scenario, the robot was equipped with a layer of code called an ethical substrate, which had it examine the possible outcomes of two courses of action, attacking its captive or not, while remaining logically aware that one choice was right and the other wrong. In this case, the guarding robot did not attack.
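One rough way to picture the difference between the two runs, under the assumption (mine, not the researchers') that the robot ranks actions by desire and the substrate filters them, is the following sketch. The real demonstration was built on DCEC, not on code like this.

```python
# Hypothetical re-creation of the two demo runs: the same guard robot chooses
# an action once without the ethical substrate and once with it. The names,
# scores and filter are illustrative assumptions.

def choose_action(desires: dict, substrate=None) -> str:
    """Pick the most-desired action, filtered by the substrate if one exists."""
    ranked = sorted(desires, key=desires.get, reverse=True)
    if substrate is None:
        return ranked[0]                      # raw desire wins: akrasia
    allowed = [a for a in ranked if substrate(a)]
    return allowed[0] if allowed else "stand_down"

# The guard "wants" revenge (0.9) more than it wants to keep guarding (0.6).
desires = {"attack_captive": 0.9, "guard_captive": 0.6}

print(choose_action(desires))  # -> attack_captive (scenario one)
print(choose_action(desires, substrate=lambda a: a != "attack_captive"))
# -> guard_captive (scenario two)
```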

"Moral competence can be roughly thought about as the ability to learn, reason with, act upon, and talk about the laws and societal conventions on which humans tend to agree," Matthias Scheutz said. He is head of the naval research project, professor of computer science at Tufts School of Engineering and director of Tufts' Human-Robot Interaction Laboratory. "The question is whether machines — or any other artificial system, for that matter — can emulate and exercise these abilities."

The group brings together extensive research expertise in theoretical models of moral cognition and communication; experimental research on human reasoning; formal modeling of reasoning; design of computational architectures; and implementation in robotic systems.