Can You Give A Robot A Conscience?

At the Franklin W. Olin College of Engineering, roboticists like professor David Barrett have been working to develop robots with something akin to a human conscience. They borrow the term "eusocial" from biology, where it describes species, such as termites, whose members factor in the needs of the group before acting. The goal is to create robots that consider the needs of others before taking action, rather than merely completing individual missions.

“When robots begin to have a sense of what the mission is, when they become collaborative, when they begin to anticipate what people need — and more importantly, when they are willing to sacrifice themselves, they become eusocial robots,” he said. Think of a firefighting robotic dog that sacrifices itself to rescue people or bring them oxygen in burning buildings. Or of a robot riveter that stops short of punching through a human worker's misplaced hand because the robot has been programmed to value human safety over assembly-line efficiency.
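The riveter example amounts to a fixed priority ordering: safety checks run before any task logic. A minimal sketch of that idea follows; every name and threshold here is invented for illustration and does not describe any actual Olin system.

```python
# Hypothetical sketch of a safety-first priority ordering for a robot
# riveter. SAFETY_CLEARANCE_CM and all function names are assumptions
# made up for this example, not real robot firmware.

SAFETY_CLEARANCE_CM = 15  # assumed minimum safe distance from any human


def next_action(distance_to_human_cm: float, rivet_queued: bool) -> str:
    """Decide the robot's next action, checking safety before the task."""
    # Highest-priority rule: never operate close to a human.
    if distance_to_human_cm < SAFETY_CLEARANCE_CM:
        return "halt"
    # Only once safety is satisfied does the task rule apply.
    if rivet_queued:
        return "rivet"
    return "idle"


print(next_action(5, True))   # hand too close: halts despite pending work
print(next_action(40, True))  # clear of humans: proceeds with the rivet
```

The point of the structure is that efficiency never gets a vote until safety has already passed; reordering the two checks would invert the robot's values.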

Barrett, who came to Olin after working at Disney and iRobot, said rapidly changing technology adds some urgency to the challenge of making robots that only benefit humans. “It’s not too soon to start the process (of imbuing robots with conscientious traits),” he said. “Letting super smart, super fast robots out of the bag ... after the fact is probably a dangerous thing to do.”

Ethical dilemmas for robots are as old as the idea of robots in fiction. Ethical behavior (in this case, self-sacrifice) is found at the end of the 1921 play R.U.R. (Rossum's Universal Robots) by Czech playwright Karel Čapek, the play that introduced the term "robot".

Isaac Asimov's famous Three Laws of Robotics are intended to impose ethical conduct on autonomous machines.

The same issues about ethical behavior are found in films like the 1982 movie Blade Runner. When the replicant Roy Batty is given the choice to let his enemy, the human detective Rick Deckard, die, Batty instead chooses to save him.

(Roy Batty debates saving Rick Deckard in Blade Runner)

Science fiction writers have been preparing the way for the rest of us; autonomous systems are no longer just the stuff of science fiction. For example, robotic systems like the Predator drones on the battlefield are being given increased levels of autonomy. Should they be allowed to make decisions on when to fire their weapons systems?

The H-II Transfer Vehicle, a fully automated space freighter, was launched by Japan's space agency, JAXA. Should human beings on the space station rely on automated mechanisms for vital needs like food, water and other supplies?

Ultimately, we will all need to reconcile the convenience of robotic systems with the acceptance of responsibility for their actions. Science fiction writers have given us decades to think about the moral and ethical problems of autonomous robots and computers; we don't have much more time to make up our minds.