Ethical, Autonomous Robots of the Near Future

Morally competent robots might be perfectly programmed to be "better moral creatures than we are" -- or at least better decision makers with fewer inconsistencies.

The engineering of autonomous, morally competent robots might include a perfectly crafted conscience capable of distinguishing right from wrong and acting on it. In the near future, artificial intelligence entities might be "better moral creatures than we are" -- or at least better decision makers when facing certain dilemmas.

Roboethics vs. machine ethics
In 2002, roboticist Gianmarco Veruggio coined the term roboethics -- the human ethics of robots' designers, manufacturers, and users -- to outline where research should be focused. At that time, the ethics of artificial intelligence was divided into two subfields.

Machine ethics: This branch deals with the behavior of artificial moral agents.

Roboethics: This branch responds to questions about the behavior of humans -- how they design, construct, use, and treat robots and other artificially intelligent beings. Roboethics ponders the possibility of programming robots with a code of ethics that would let them respond appropriately according to social norms that differentiate between right and wrong.

Naturally, to be able to create such morally autonomous robots, researchers have to agree on some fundamental pillars: what moral competence is and what humans would expect from robots working side by side with them, sharing decision making in areas like healthcare and warfare. At the same time, another question arises: What responsibility do humans bear in creating artificial intelligence with moral autonomy? And the leading research question: What would we expect of morally competent robots?

It's called "roboethics" because it refers to the human ethics that robots' designers, manufacturers, and users need to have when designing, manufacturing, or interacting with a robot. It's not about the ethics of the robot but the ethics of the humans, which need to be in place.

"I don't see how having these actions carried out by robots rather than human operators really changes anything."

In a war scenario, some actions carried out by robots instead of humans may be beneficial. If you send an autonomous vehicle with the capacity to make decisions into a war zone to deliver supplies, for instance, instead of a regular vehicle driven by a human, the driver can be assigned a different task where a human presence is more needed.

Ahhh, you are anticipating one of my next articles, i.e., building emotion into AI. Both building emotion and building ethics, which we discuss now, are challenging, I believe, as human emotions and ethics are so often conflicting and inconsistent, far from perfect.

Yes, I see your point about I, Robot. Also, have you seen Spielberg's A.I. Artificial Intelligence? That's another good reference when discussing this type of research.

"an 'NS-4 model' robot saved him instead of his 12 years old daughter as NS-4 analyzed 45% chance of his survival vs 11% chance of his daughter's survival."

That's a great example that you bring here. :)

The NS-4's decision was based on logic according to its analysis rather than on emotions. Will Smith's character was driven by a negative emotion, hate, as a consequence of that experience. For the NS-4, saving one human instead of letting two die was the best moral decision.
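The NS-4's reasoning amounts to a simple utilitarian rule: rescue whoever has the higher estimated survival probability. The sketch below is a hypothetical illustration of that rule using the numbers from the movie scene, not any real robot's code.

```python
def choose_rescue(candidates):
    """Pick the person with the highest estimated survival probability.

    `candidates` maps a person's name to the robot's estimate of the
    probability that a rescue attempt would succeed.
    """
    return max(candidates, key=candidates.get)

# The movie scenario: the NS-4 estimates 45% for Spooner, 11% for Sarah.
estimates = {"Del Spooner": 0.45, "Sarah": 0.11}
print(choose_rescue(estimates))  # -> Del Spooner
```

Note that nothing in this rule weighs age, relationships, or a victim's own wishes, which is exactly why the decision feels wrong to Spooner while being "correct" to the robot.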

Designing an ethical code is easy (example: the Ten Commandments). The difficulty lies in designing the exceptions to that code. Zeeglen, your list has merit (although it also offers a pretty tilted set of options). Asimov's Three Laws recognize conflicting situations rather than simple absolutes, but I am sure that in lawyer mode there could be a lot of room for interpretation.
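One way to picture "a code plus its exceptions" is as an ordered rule list in which earlier rules override later ones, much as Asimov's First Law overrides the Second and Third. This is a toy sketch; the rule conditions, verdict labels, and scenario format are all invented for illustration.

```python
# Toy Asimov-style prioritized rules: each rule is a (condition, verdict)
# pair, and the first matching rule wins, so a higher-priority rule acts
# as an exception to every rule below it.
RULES = [
    (lambda s: s.get("harms_human"), "forbidden"),        # First Law
    (lambda s: s.get("ordered_by_human"), "obligatory"),  # Second Law
    (lambda s: s.get("risks_self"), "discouraged"),       # Third Law
]

def evaluate(scenario):
    """Return the verdict of the first rule whose condition matches."""
    for condition, verdict in RULES:
        if condition(scenario):
            return verdict
    return "permitted"

# An order that would harm a human: the First Law overrides the Second.
print(evaluate({"ordered_by_human": True, "harms_human": True}))  # -> forbidden
```

Even in this tiny model, the "lawyer mode" problem is visible: everything hinges on who decides whether a messy real-world situation sets `harms_human` to true in the first place.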

Surely the robots will make these ethical decisions based on policies and procedures that humans have devised? I don't see how having these actions carried out by robots rather than human operators really changes anything. What's important is transparency: the policies implemented by the robots need to be publicly accessible and subject to legal challenge, not just hidden away in the computer code.

A very interesting series I watched on AI morality is Ghost in the Shell: SAC, an idea way ahead of its time. It offers a different outlook from most 'Western' (no offence meant) concepts of robot morality.

It involved multiple autonomous semi-tanks called 'Tachikoma' which shared a single consciousness that synced among all of them every night. Definitely worth a watch, but it can be a bit of an investment in time :)

I always feel fiction is a great place to pick up hints on topics like these, especially in a lot of Asimov's works.

"Designing autonomous, morally competent robots may be inspiring and fascinating, but it certainly will not be easy."...

I agree completely. This is a fascinating research topic, but it seems like an impossible task; I'm not sure how this could be achieved in the near future. What seems impossible to me is building "emotion" into the AI. If you have watched the movie I, Robot, it is easier for me to explain. The very reason Del Spooner (Will Smith) hated robots: when he met with an accident along with his 12-year-old daughter Sarah in their car, an "NS-4 model" robot saved him instead of her, as the NS-4 analyzed a 45% chance of his survival vs. an 11% chance of his daughter's survival.