Engineering Ethics Into A Robot

Researchers are exploring how to engineer ethics into artificially intelligent beings that would interact with and help humans as socially assistive robots.

"As robot designers, we are responsible for developing the control architecture with all its algorithms that allows for moral reasoning," he says, "but we do not make any commitments to the ethical principles with which the robot will be endowed. This is the job of those deploying the robot, i.e. to decide what ethical principles, rules, norms, etc. to include. As a result, the question of responsibility of robot behavior will be a critical one for legal experts to determine ahead of time, especially in the light of instructible robots that can also acquire new knowledge during task performance."

How to engineer ethics into a robot
A decision-making process that mimics what humans tend to do in morally challenging situations may be the answer to engineering ethics into a robot. The robot first recognizes a morally charged situation, then deploys reasoning strategies that draw on moral principles, norms, and values. This corresponds to the third of Prof. James H. Moor's kinds of ethical agent, the "explicit ethical agent":

Explicit ethical agents can identify and process ethical information about a variety of situations and make sensitive determinations about what should be done. In particular, they are able to reach "reasonable decisions" in moral dilemma-like situations in which various ethical principles are in conflict.
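The two-step process described above — recognize a morally charged situation, then resolve conflicting principles — can be sketched as a simple weighted evaluation. This is only an illustrative toy, not Moor's or Scheutz's actual architecture; the principle names, weights, and violation scores are all assumptions made up for the example.

```python
# Minimal sketch of an "explicit ethical agent" decision step: candidate
# actions are scored against several (possibly conflicting) ethical
# principles, and the agent picks the least-objectionable one. All
# principles and weights below are illustrative assumptions.

def choose_action(actions, principles):
    """Return the action with the lowest weighted violation score.

    actions:    list of action names (strings)
    principles: list of (weight, violation_fn) pairs, where
                violation_fn(action) -> float in [0, 1]
    """
    def total_violation(action):
        return sum(w * violates(action) for w, violates in principles)
    return min(actions, key=total_violation)

# Toy dilemma: giving pain medication relieves suffering but carries a
# (smaller, here hypothetical) overdose risk.
principles = [
    (0.7, lambda a: 1.0 if a == "withhold_medication" else 0.0),  # relieve suffering
    (0.3, lambda a: 0.4 if a == "give_medication" else 0.0),      # avoid overdose risk
]
action = choose_action(["give_medication", "withhold_medication"], principles)
print(action)  # -> give_medication (violation 0.12 vs 0.7)
```

The point of the sketch is that the conflict between principles is resolved explicitly and inspectably, rather than being buried in opaque task code.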

Scheutz has argued that current technological advances in robotics and artificial intelligence have enabled the deployment of autonomous robots that can make decisions on their own. However, most currently deployed robots are fairly simple and their autonomy is limited; because their decision-making algorithms take no moral considerations into account, they carry the potential to become harmful machines. EETimes asked Scheutz what exactly his fear is.

"If we do not endow robots with the ability to detect and properly handle morally charged situations in ways that are acceptable to humans, we will increase the potential for harm and human suffering unnecessarily," he says, "for autonomous robots will then inevitably make decisions that we deem 'morally wrong,' e.g. failure to provide a patient with pain medication when it was warranted."

@Crusty..."humans made the computer so they are responsible for the computer's inability to process correctly." Yeah, that other old rule: garbage in, garbage out.

"It should be a computer literate shopper who writes and tests the programme."

He'd need to be literate to write it, but I would say you should use a computer ILLITERATE shopper to test it.

I think one of the problems I had was that I pulled something out of the bag, then put it back in, packing it better so I'd only need one bag. That is the sort of thing that these @#$%^& auto-checkouts need to be able to cope with.
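The repacking problem this comment describes can be handled by tolerating transient weight mismatches in the bagging area and only alarming when a discrepancy persists. The sketch below is purely illustrative; the tolerance and grace-period values are assumptions, not how any real checkout works.

```python
# Sketch of a more forgiving bagging-area check: instead of alarming the
# instant the bag weight deviates, tolerate transient mismatches (a
# shopper lifting an item out to repack) and flag only a persistent
# discrepancy. Thresholds are illustrative assumptions.

TOLERANCE_G = 15      # allowed scale noise, in grams
GRACE_READINGS = 3    # consecutive bad readings before alarming

def check_bag(expected_weight, readings):
    """Return 'ok' if the weight settles back within tolerance, else 'alert'."""
    bad_streak = 0
    for measured in readings:
        if abs(measured - expected_weight) > TOLERANCE_G:
            bad_streak += 1
            if bad_streak >= GRACE_READINGS:
                return "alert"  # persistent mismatch: possible unscanned item
        else:
            bad_streak = 0      # weight settled: shopper was just repacking
    return "ok"

# Shopper lifts a 400 g item out and puts it back: transient dip, no alarm.
print(check_bag(1200, [1200, 800, 805, 1201]))  # -> ok
```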

Re hand-scanners...I don't have a problem with the scanners, they work quite well, but the algorithms that check, only after scanning, whether I have put everything in the bags need a bit of tweaking. When the auto checkouts approach the level of friendliness and intelligence of even the dumbest checkout person, then I'll use them, not before.

Ah well, if we are chopping logic, then on a circular argument humans made the computer, so they are responsible for the computer's inability to process correctly.

Trouble with point-of-sale programmes is that they most often do not get the right people to write the algorithms. It should be a computer-literate shopper who writes and tests the programme.

I remember when the first attempts at biological cell recognition were starting, that biologists did not understand computers and the computer coders did not understand biology. That meant a few years of paid bidirectional translation between the biologists and the hardware/software guys.

Personally, I will only shop in outlets that use hand scanners; these work well but do require an element of honesty from the shopper.

Hi David, Susan. From what I have seen so far in this article and others to date, we are well on our way to allowing logical entities to start lying.

A medical autonomous robot would not do the patient much good if it said, "You are 99.9% certain to die from your wounds." Ethically, the surgeon or nurse will easily bend logic to increase the patient's will to live, but we all know they are lying for the best of reasons.

David, I think the point-of-sale checkout computers get so bored with the speed of humans that they have fun with us at the till. Should we leave humour out of the autonomous robots' reactions?

Hi Susan...sorry, did not realise this article was a continuation of the first one.

I tell you what DOES need some ethics - those self-serve supermarket checkouts. I tried them a couple of times but they were forever accusing me of not putting things in the bag, or taking things out of the bag, or putting extra things in the bag without checking them. I was once about to punch the screen of the stupid thing. They will have to get a LOT better before I use them again. There's something for your ethical embedded programmers to start on!!

...will they then try embedding ethical behavior into humans? Now THAT would really be worthwhile. It's a goal that our entire education, upbringing, and social culture "experts" seem to have largely abandoned in the past 40 years or so.