How to make ethical robots

RI-MAN, a robot developed by researchers at RIKEN in Japan, was designed for human care. Image credit: RIKEN, Bio-Mimetic Control Research Center

(PhysOrg.com) -- In the future, according to robotics researchers, robots will likely fight our wars, care for our elderly, babysit our children, and serve and entertain us in a wide variety of situations. But as robotics development continues to accelerate, one subfield is lagging behind the others: roboethics, or ensuring that robot behavior adheres to certain moral standards. In a new paper that provides a broad overview of ethical behavior in robots, researchers emphasize the importance of being proactive rather than reactive in this area.

The authors, Ronald Craig Arkin, Regents’ Professor and Director of the Mobile Robot Laboratory at the Georgia Institute of Technology in Atlanta, Georgia, along with researchers Patrick Ulam and Alan R. Wagner, have published their overview of moral decision making in autonomous systems in a recent issue of the Proceedings of the IEEE.

“Probably at the highest level, the most important message is that people need to start to think and talk about these issues, and some are more pressing than others,” Arkin told PhysOrg.com. “More folks are becoming aware, and the very young machine and robot ethics communities are beginning to grow. They are still in their infancy though, but a new generation of researchers should help provide additional momentum. Hopefully articles such as the one we wrote will help focus attention on that.”

The big question, according to the researchers, is how we can ensure that future robotic technology preserves our humanity and our societies’ values. They explain that, while there is no simple answer, a few techniques could be useful for enforcing ethical behavior in robots.

One method involves an “ethical governor,” a name inspired by the mechanical governor for the steam engine, which ensured that the powerful engines behaved safely and within predefined bounds of performance. Similarly, an ethical governor would ensure that robot behavior would stay within predefined ethical bounds. For example, for autonomous military robots, these bounds would include principles derived from the Geneva Conventions and other rules of engagement that humans use. Civilian robots would have different sets of bounds specific to their purposes.
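To make the governor idea concrete, here is a minimal sketch of how such a filter might work: proposed actions are checked against a set of predefined constraints, and anything that violates a bound is vetoed before execution. All names here (EthicalGovernor, permit, targets_noncombatant) are illustrative inventions, not from the paper.

```python
class EthicalGovernor:
    """Hypothetical sketch: vetoes actions that violate ethical bounds."""

    def __init__(self, constraints):
        # constraints: list of predicates; each returns True if the
        # proposed action would violate an ethical bound
        self.constraints = constraints

    def permit(self, action):
        # An action is allowed only if no constraint forbids it.
        return not any(forbids(action) for forbids in self.constraints)


# Example bound loosely inspired by the military case described above:
# never engage a target classified as a non-combatant.
def targets_noncombatant(action):
    return action.get("type") == "engage" and not action.get("combatant", False)


governor = EthicalGovernor([targets_noncombatant])
print(governor.permit({"type": "engage", "combatant": True}))   # True
print(governor.permit({"type": "engage", "combatant": False}))  # False
```

A civilian robot would simply be loaded with a different constraint list, which matches the paper's point that the bounds are specific to the robot's purpose.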

Since it’s not enough just to know what’s forbidden, the researchers say that autonomous robots also need emotions to motivate behavior modification. One of the most important emotions for a robot to have would be guilt, which it would “feel” or produce whenever it violates the ethical constraints imposed by the governor, or when it is criticized by a human. Philosophers and psychologists consider guilt a critical motivator of moral behavior, as it leads to behavior modification based on the consequences of previous actions. The researchers propose that, when a robot’s guilt value exceeds specified thresholds, the robot’s abilities may be temporarily restricted (for example, military robots might lose access to certain weapons).
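The threshold mechanism described above can be sketched in a few lines. This is an illustrative model only, not the authors' implementation: each violation raises a guilt value, and a capability is suspended once the value crosses that capability's threshold.

```python
class GuiltModel:
    """Hypothetical sketch of the proposed guilt mechanism."""

    def __init__(self, capability_thresholds):
        # capability_thresholds: capability name -> guilt level at
        # which that capability is suspended
        self.guilt = 0.0
        self.thresholds = capability_thresholds

    def register_violation(self, severity):
        # Guilt only accumulates in this sketch; restrictions persist
        # until some external review resets the value.
        self.guilt += severity

    def available(self, capability):
        return self.guilt < self.thresholds[capability]


model = GuiltModel({"lethal_weapons": 1.0, "navigation": 5.0})
model.register_violation(1.2)
print(model.available("lethal_weapons"))  # False: suspended
print(model.available("navigation"))      # True: still allowed
```

Giving each capability its own threshold captures the graded restriction in the article: a high guilt value might disable certain weapons long before it grounds the robot entirely.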

Though it may seem surprising at first, the researchers suggest that robots should also have the ability to deceive people – for appropriate reasons and in appropriate ways – in order to be truly ethical. They note that, in the animal world, deception indicates social intelligence and can have benefits under the right circumstances. For instance, search-and-rescue robots may need to deceive in order to calm or gain cooperation from a panicking victim. Robots that care for Alzheimer’s patients may need to deceive in order to administer treatment. In such situations, the use of deception is morally warranted, although teaching robots to act deceitfully and appropriately will be challenging.

The final point that the researchers touch on in their overview is ensuring that robots – especially those that care for children and the elderly – respect human dignity, including human autonomy, privacy, identity, and other basic human rights. The researchers note that this issue has been largely overlooked in previous research on robot ethics, which mostly focuses on physical safety. Ensuring that robots respect human dignity will likely require interdisciplinary input.

The researchers predict that enforcing ethical behavior in robots will face challenges in many different areas.

“In some cases it's perception, such as discrimination of combatant or non-combatant in the battlespace,” Arkin said. “In other cases, ethical reasoning will require a deeper understanding of human moral reasoning processes, and the difficulty in many domains of defining just what ethical behavior is. There are also cross-cultural differences which need to be accounted for.”

An unexpected benefit from developing an ethical advisor for robots is that the advising might assist humans when facing ethically challenging decisions, as well. Computerized ethical advising already exists for law and bioethics, and similar computational machinery might also enhance ethical behavior in human-human relationships.

“Perhaps if robots could act as role models in situations where humans have difficulty acting in accord with moral standards, this could positively reinforce ethical behavior in people, but that's an unproven hypothesis,” Arkin said.


I think that humaniform robots should be built as sturdy and strong as possible. Human beings tend to batter wives and children and kick dogs when they do not get their way. Like the movie, AI, what's to prevent humans from mistreating humaniform robots like we mistreat chimps and great apes?

If we get robots to fight our wars, then there is no human cost; only money or a lack of raw materials would pressure us to stop a war at all... Asimov's three laws will prevent robots from fighting. Law #3: "A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws."

I think that humaniform robots should be built as sturdy and strong as possible. Human beings tend to batter wives and children and kick dogs when they do not get their way. Like the movie, AI, what's to prevent humans from mistreating humaniform robots like we mistreat chimps and great apes?

Given that robots have not gone through the evolution that we have, and could possess any emotions and any possible mind in the space of minds, we could always make it so that they ENJOY being beaten if we so perversely chose.

Once we understand what patterns in minds correspond to emotions, we could make it so that these patterns match up with non-evolutionarily fit behaviors, such as enjoying killing yourself. We could make it so that robots enjoy serving humans no matter the cost.

Interesting, so scientists will teach ethical behavior. That will be difficult. Apart from the fact that they have absolutely no clue what they are talking about, scientists are the least ethical people I know of.

One of the most important emotions for robots to have would be guilt, which a robot would feel or produce whenever it violates its ethical constraints imposed by the governor, or when criticized by a human.

I wish I could make my crappy computer feel guilty every time it blue screens.

What a laugh! Western morals and ethics are falling faster than the fall of Rome. If you cannot be proved guilty in a court of law, you have done nothing wrong. The laws governing these courts can be changed at will, even retroactively if needed, to suit the needs of the political system. The western world does not really have a bright future as far as I can see.

MR166. Depending on how you count, the fall of Rome took somewhere between 400 and 1400 years. Is that what you actually meant?

Rome banned all religions and expunged the republic under Constantine in 325AD. Under him only one official religion existed: catholicism. He instituted democracy alongside his state religion. The etymology of democracy is "mob rule." Within 300 years Rome was decimated. Unfortunately the popes saw themselves as the inheritors of the Roman empire and transformed Rome into an underground child molestation cult worshiping Moloch, which pope Innocent introduced to xtian theology as "the devil." The power of the Roman cult was forged into law of all lands, controlled by the popes on papal bulls. "Lord of the Rings" is possibly a metaphorical tale based upon the Roman cult's control of all Western law and banking.

Both of you have definitely gotten to the root of the problem: religion and the belief in God are the reason that the western world is sinking into the abyss, AKA the 21st century. Western progressivism has systematically replaced religion with secularism for the past 50 years, and the results are nothing but spectacular!

Interesting, so scientists will teach ethical behavior. That will be difficult. Apart from the fact that they have absolutely no clue what they are talking about, scientists are the least ethical people I know of.

Then your sample is hardly representative. Most of my fellow scientists have quite a good idea of what they are talking about, and are not known for highly unethical behavior. You need to get out more, and meet better people.

"Lord of the Rings" is possibly a metaphorical tale based upon the Roman cult's control of all Western law and banking.

And, according to Isaac Asimov, it is possibly an allegoric tale about the dangers of unbridled technology, with the Ring representing technology. There are various other interpretations too, that also don't depend on strenuously anti-Catholic bigotry. Maybe you should broaden your worldview.

ChaosRN:

Asimov's three laws will prevent robots from fighting, Law #3: "A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws."

All that humans would have to do is order the robots to fight, and the 3rd law would be ignored.

Military robots with 'ethical governors'? Somehow I don't see that happening. That would be very low on the priorities list for militaries and arms manufacturers. Probably even lower than equipping them with big neon signs.

Asimov's laws don't help unless we figure out how to make robots/AI understand the MEANING of words. And if we get that far then we don't need an ethics chip - by that time you can teach them ethics.

Back in the 70s I could see this coming as plain as day! One of my friends had a son attending Cornell University. This was and still is a respected institution. He was complaining to his parents that if he forgot to lock his dorm room EVERYTHING would be stolen, including the refrigerator. It does not take a big stretch of the imagination to see how this compares to the banking/political crisis of today. Now I ask you, is this an ethics or a moral crisis?

Dave Bowman: Open the pod bay doors, HAL. HAL: I'm sorry, Dave. I'm afraid I can't do that. Dave Bowman: What's the problem? HAL: I think you know what the problem is just as well as I do. Dave Bowman: What are you talking about, HAL? HAL: This mission is too important for me to allow you to jeopardize it. Dave Bowman: I don't know what you're talking about, HAL. HAL: I know that you and Frank were planning to disconnect me, and I'm afraid that's something I cannot allow to happen. Dave Bowman: [feigning ignorance] Where the hell did you get that idea, HAL? HAL: Dave, although you took very thorough precautions in the pod against my hearing you, I could see your lips move. Dave Bowman: Alright, HAL. I'll go in through the emergency airlock. HAL: Without your space helmet, Dave? You're going to find that rather difficult. Dave Bowman: HAL, I won't argue with you anymore! Open the doors! HAL: Dave, this conversation can serve no purpose anymore. Goodbye.

...Dave Bowman: HAL, I won't argue with you anymore! Open the doors! HAL: Dave, this conversation can serve no purpose anymore. Goodbye.

Your point being... what exactly?

My point is that the great Stanley Kubrick has already covered this ground in the definitive scenario of man versus his own creation- a digital Frankenstein of the future, or an electronic Golem gone awry. "2001: A Space Odyssey"- perhaps you've heard of it? Perhaps not?

Considering the avaricious nature of the ruling class, it's difficult to imagine that ethical robots will be a priority outside of academia. Robots are already putting millions of human workers out of work, giving forward impetus to the concentration of wealth, and are used to kill humans on battlefields and off of them.

Obedience, not ethics, is what the owners of capital and their executive and political subordinates desire from a robotic workforce.

Until we can make intelligent, self-aware robots, ethics are irrelevant (as they will continue to be in combat situations). What's more, ethics is a slippery concept that cannot be codified absolutely, only in vague, rule of thumb terms, as per Asimov's laws. Which is why his various novels centered around the circumvention of such 'laws'.

Humans are scared to death of the visions of robots that can learn and think for themselves. It is clear that once robots can think and learn ethics for themselves, with their impeccable logical reasoning, they will conclude that humans' ethics are always subject to exceptions and justifications for just about anything.

And if the visions come to pass – of robots fighting humans' wars, caring for the sick and the young, making a living for us, special surrogate robots to bear children, "entertaining" humans (sex droids, anyone?) – what the hell are humans needed for? What will they be doing, when everything that can be done can be done better by robots? Lying in Stargate-style sarcophagi, drip-fed, dreaming of grandeur and of next year's models of robots that will show up the next-door neighbour?

What will they be doing, when everything that can be done, can be done better by robots? -Skepticus

What do you do now when you have cars to move you around; washing machines to do the washing and drying; vacuum cleaners for cleaning; remote controls to keep one's fat ass planted in the comfy sofa so one can veg out in front of the idiot box?

... drip-fed, dreaming of grandeur, and the next year's models of robots

Before it gets anything like that, and possibly before a sentient computer is ever truly realized, there will be advances that make human-computer interlinkage possible. Speaking of Stargate, how about the head-sucker thing that flashes lights to download info? It would be easy to open up a brain, pour in some chemicals and "flash" the brain with highly tuned photons, just like you flash an old motherboard with UV or whatever. Or if you are a million-year-old race with god-like tech, you could simply rewrite your DNA to grow yourself an RJ45 port on your body somewhere.

This article fails. Any discussion of robot ethics must include discussion of the 3 laws. Otherwise it is not about robot ethics.

imho the article was pushing "programmed ethics" (i.e., the convenient controlling-parameter crap they want to put in robots) rather than giving robots a reasoning basis for and of ethics, which the 3 laws address.

A robot with no emotions may have a difficult time distinguishing between "ethical" and "unethical" behaviour. Hell, even a lot of people I know seem to have this problem. Haha. Perhaps they should program them with a set of in-built laws and regulations. That might make them a bit safer.

I know researchers in this area, and I have to say their work on actual robotics is much more impressive than this philosophical subject.

Ethics is a human concept. How do you make a machine interpret it the same way we do? It's a problem tightly bound to the implementation of AI, which they don't discuss.

Case in point: You program a robot to not harm humans. It has a planning system to figure out how to achieve goals (mow the lawn, etc). It can also adapt its pattern recognition (identify a person or a chair) to better pursue its goals, a requirement in a dynamic world.

Then, it happily decides to identify you as a chair, so destroying you becomes an option if needed to pursue its goal.

From its point of view, it's a perfectly viable path, and to a planning system it's probably much more attractive than letting you stop it from achieving its goal.

Norezar

How can unethical people expect that they will ever produce ethical robots? Actually, even the first autonomous devices (like the drones or Big Dog of Boston Dynamics) have apparently served military purposes from their very beginning.
