British Roboticists to Test Whether Robots Can Be Ethical

With big names in science and tech warning of the dangers of a hostile robot takeover, researchers from the University of Liverpool are beginning a new project to determine whether robots are capable of consistently ethical behavior.

To make this determination, the researchers will attempt to devise "verification tools" that allow the humans who design a robot to mathematically prove how it will behave in any given situation. That way, even if the robot acts autonomously, in the sense that it is not directly controlled by a human, we can still guarantee its general patterns of behavior, and therefore prevent any Age of Ultron-type situation in which a robot decides to wipe out humanity.
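The article doesn't describe the researchers' actual tools, but the underlying idea, proving a property over every situation rather than testing a few, can be sketched in miniature. The toy grid world, the `policy` function, and the safety property below are all hypothetical stand-ins: the point is only that when the state space is small enough to enumerate, "the robot never does X" becomes a checkable claim instead of a hope.

```python
# Illustrative sketch only: exhaustively verify a safety property of a toy
# robot policy. None of this reflects the Liverpool project's real methods.
from itertools import product

GRID = 4  # a 4x4 world; positions are (x, y) pairs

def policy(robot, human):
    """Toy policy: step one tile toward the human, but refuse any move
    that would land on the human's tile (the 'safety rule')."""
    rx, ry = robot
    hx, hy = human
    dx = (hx > rx) - (hx < rx)
    dy = (hy > ry) - (hy < ry)
    nxt = (rx + dx, ry) if dx else (rx, ry + dy)
    return robot if nxt == human else nxt  # stay put rather than collide

def verify():
    """Check the property 'the robot never occupies the human's tile'
    over every legal state, not just a sampled few."""
    positions = list(product(range(GRID), repeat=2))
    for robot in positions:
        for human in positions:
            if robot == human:
                continue  # not a legal starting state
            assert policy(robot, human) != human, (robot, human)
    return True
```

Because the loop covers all 240 legal states, a passing `verify()` is a proof for this toy model, which is the kind of guarantee, scaled up, that formal verification of robot behavior aims at.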

Professor Alan Winfield said, "If robots are to be trusted, especially when interacting with humans, they will need to be more than just safe. We've already shown that a simple laboratory robot can be minimally ethical, in a way that is surprisingly close to Asimov's famous laws of robotics. We now need to prove that such a robot will always act ethically, while also understanding how useful ethical robots would be in the real world."

Asimov's laws, which he introduced in his 1942 short story "Runaround," read as follows:

1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2) A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
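The "except where such orders would conflict" clauses give the three laws a strict priority ordering, and that structure can be made concrete. The sketch below is purely illustrative, not anything from the study: the boolean predicates (`harms_human` and the rest) are hypothetical oracles that no real robot actually has, which is part of why operationalizing the laws is hard.

```python
# Illustrative sketch: Asimov's three laws as a lexicographic ranking.
# The predicate keys are hypothetical; real perception supplies no such oracles.

def law_violations(action):
    """Map a candidate action to a tuple of per-law violation flags, in
    priority order. Tuples compare lexicographically, so any First Law
    violation outweighs all lower-law considerations, and so on."""
    return (
        int(action["harms_human"] or action["inaction_allows_harm"]),  # Law 1
        int(action["disobeys_human_order"]),                           # Law 2
        int(action["endangers_self"]),                                 # Law 3
    )

def choose(actions):
    """Pick the least-violating action: an action that merely disobeys an
    order still beats one that harms a human, matching the laws' exceptions."""
    return min(actions, key=law_violations)
```

Note that the exception clauses fall out of the ordering for free: disobeying an order (a Law 2 violation) is chosen whenever obeying would harm a human, because the comparison checks Law 1 first.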

From this initial information about the study, the use of the term "ethical" seems problematic. The laws, and the stated goals of the study, don't ask whether the robots are capable of engaging in moral reasoning, but only whether they can consistently follow directions that a human deems to be ethical. Yet even humans can't always agree on the most "ethical" action to take in a given situation. The laws merely ensure that a robot places the well-being of humans above all else, which would actually be perverse if we ever succeeded in creating an AI with any form of consciousness.