US military begins research into moral, ethical robots, to stave off Skynet-like apocalypse

The US Department of Defense, working with top computer scientists, philosophers, and roboticists from a number of US universities, has finally begun a project that will tackle the tricky topic of moral and ethical robots. This multidisciplinary project will first try to pin down exactly what human morality is, and then try to devise computer algorithms that will imbue autonomous robots with moral competence — the ability to choose right from wrong. As we move steadily towards a military force that is populated by autonomous robots — mules, foot soldiers, drones — it is becoming increasingly important that we give these machines — these artificial intelligences — the ability to make the right decision. Yes, the US DoD is trying to get out in front of Skynet before it takes over the world. How very sensible.

This project is being carried out by researchers from Tufts, Brown, and the Rensselaer Polytechnic Institute (RPI), with funding from the Office of Naval Research (ONR). ONR, like DARPA, is a wing of the Department of Defense that mainly deals with military R&D. While we’re not yet at the point where military robots like BigDog have to decide which injured soldier to carry off the battlefield, or where UAVs can launch Hellfire missiles at terrorists without human intervention, it’s very easy to imagine a future where autonomous robots are given responsibility for making those kinds of moral and ethical decisions in real time. In short, it’s high time that we looked at the feasibility of infusing robots (or more accurately artificial intelligence) with circuits and subroutines that can analyze a situation and pick the right thing to do — just like a human. [Read: Child soldiers and the future of the US military.]

DARPA’s Atlas humanoid robot. Soon it might be given the ability to choose right from wrong.

As you can probably imagine, this is an incredibly difficult task. Scientifically speaking, we still don’t know what morality in humans actually is — and so creating a digital morality in software is essentially impossible. To begin with, then, the research will use theoretical (philosophical) and empirical (experimental) research to try to isolate essential elements of human morality. These findings will then be extrapolated into a formal moral framework, which in turn can be implemented in software (probably some kind of deep neural network).

Assuming we get that far and can actually work out how humans decide right from wrong, the researchers will then take an advanced robot — something like Atlas or BigDog — and imbue its software with moral competence. One of the researchers involved in the project, Selmer Bringsjord at RPI, envisions these robots using a two-stage approach for picking right from wrong. First the AI would perform a “lightning-quick ethical check” — simple stuff like “should I stop and help this wounded soldier?” Depending on the situation, the robot would then decide if deeper moral reasoning is required — for example, should the robot help the wounded soldier, or should it continue with its primary mission of delivering vital ammo and supplies to the front line where other soldiers are at risk?
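For illustration only, the two-stage structure Bringsjord describes could be sketched roughly like this. Every rule, field name, and threshold below is invented for the sake of the example; the actual research has published nothing of the sort:

```python
# Toy sketch of a two-stage moral check: a lightning-quick rule pass,
# escalating to slower deliberation only when duties conflict.
# All rules, field names, and thresholds are invented for illustration.

def quick_ethical_check(situation):
    """Stage 1: fast check against simple hard rules."""
    if situation.get("wounded_soldier") and not situation.get("primary_mission"):
        return "help"        # no conflict: helping is clearly right
    if not situation.get("wounded_soldier"):
        return "continue"    # nothing to weigh; carry on
    return "deliberate"      # conflicting duties: escalate to stage 2

def deliberate(situation):
    """Stage 2: deeper reasoning, here a crude comparison of lives at stake."""
    soldiers_waiting = situation.get("soldiers_awaiting_supplies", 0)
    # Abandon the supply run only if few soldiers depend on it.
    return "help" if soldiers_waiting < 3 else "continue"

def decide(situation):
    verdict = quick_ethical_check(situation)
    return deliberate(situation) if verdict == "deliberate" else verdict
```

The point of the sketch is the control flow, not the rules: cheap checks handle the clear-cut cases, and the expensive reasoning only runs when the quick pass detects a genuine conflict.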

ED-209, a killer robot from Robocop

Eventually, of course, this moralistic AI framework will also have to deal with tricky topics like murder. Is it OK for a robot soldier to shoot at the enemy? What if the enemy is a child? Should an autonomous UAV blow up a bunch of terrorists? What if it’s only 90% sure that they’re terrorists, with a 10% chance that they’re just innocent villagers? What would a human UAV pilot do in such a case — and will robots only have to match the moral and ethical competence of humans, or will they be held to a higher standard?

At this point, it seems all but certain that the US DoD will eventually break Asimov’s Three Laws of Robotics — the first of which is “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” This isn’t necessarily a bad thing, but it will open Pandora’s box. On the one hand, it’s probably a good idea to replace human soldiers with robots — but on the other, if the US can field an entirely robotic army, war as a diplomatic tool suddenly becomes a lot more palatable. The commencement of this ONR project means that we will very soon have to decide whether it’s okay for a robot to take the life of a human — and honestly, I don’t think anyone has the answer.

Comments

Meir Teichman

Hah! Three Laws compliance!

Ulrich Werner

Sounds to me like the robots are meant to do triage rather than make any moral decision…

Only the last two paragraphs of the article really go into moral decisions. Everyone who has read Kahneman’s articles and book on mental heuristics knows that the human mind is extremely bad when it comes to decisions like the problem of a 90-10 chance of hitting the wrong target. Humans very often make mistakes there. The people at DARPA should probably try not to program any unnecessary cognitive biases into these moralistic frameworks.

Folatt

“As you can probably imagine,
this is an incredibly difficult task. Scientifically speaking, we still
don’t know what morality in humans actually is — and so creating a
digital morality in software is essentially impossible.”

I fail to see how this is in any way, shape, or form significantly more difficult than creating a driverless car that is more capable than a human driver.

http://www.mrseb.co.uk/ Sebastian Anthony

Self-driving cars just have to follow thousands or millions of different rules. But those rules are essentially fixed, and thus can be hard-coded in.

Moral stuff, almost by definition, doesn’t follow a firm set of rules. It’s all about subjectivity, context, ethical considerations, etc etc.

Folatt

Facial recognition, database: is target a ["friend", "civilian", "enemy"]? Target is an enemy. Is target armed [Yes/No]? No. Do an emotional heat map of target. Does target look like he/she is on a mission to kill friends or civilians? Yes. Can I or anyone else neutralize the enemy combatant without killing the target before the target can kill its targets? If no, kill target.
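Transcribed literally into code, the chain above looks something like this. Every predicate is a stub returning canned answers; building the real classifiers behind them (friend/enemy recognition, intent estimation from an "emotional heat map") is exactly the unsolved part:

```python
# Folatt's decision chain, transcribed literally. All field names are
# invented stubs; the hard problem is the classifiers they stand in for.

def classify_target(target):
    return target.get("affiliation")  # "friend", "civilian", or "enemy"

def decide_engagement(target, can_neutralize_nonlethally):
    if classify_target(target) != "enemy":
        return "do not engage"
    if not target.get("armed"):
        # Unarmed: consult an (assumed) intent estimate, e.g. a heat map.
        if not target.get("intends_to_kill"):
            return "do not engage"
    if can_neutralize_nonlethally:
        return "neutralize non-lethally"
    return "kill"
```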

http://www.mrseb.co.uk/ Sebastian Anthony

That’s it, you’ve cracked all of human morality!

Folatt

Hahahahahaha,

I can also do a single thought process about a traffic situation and thus have cracked all traffic situations on the entire planet as well. Google can just go home and let me take over.

I stand by my opinion that it’s little different from avoiding traffic. Look at your surroundings, interpret what you see, make decisions. The only permanently added difficulty will be recognizing deception, as I’m sure that as soon as they’re used, rivals/enemies will build something to exploit any bugs in its moral code. In any case, robots only need to perform better than average humans in terms of split-second military decisions, and robots with morals are ready for the battlefield.

Ivor O’Connor

And he is also ready to join the military.

http://gcomputer.net/ DarkGray Knight

There is a problem in the first step: target not in database, then what? How did we collect data for the database? Was the database potentially compromised? Did a civilian become an enemy?…

ScoobiJohn

Yeah, the problem is determining whether the target is an enemy. Once you know that much, rules can be applied about collateral damage etc. But just because they aren’t in your friends list, and are even armed, doesn’t mean they are an enemy, especially not in some areas of the world where an AK-47 is just another piece of clothing.

Then you have people with hunting rifles, etc. No, you need some very clever programming to determine who is an enemy. For now humans are better at it than computers, but you never know; soon it might be the other way around.

Isn’t the trolley problem ultimately a “how to avoid runaway traffic as much as possible” problem, where in this case the robot, instead of a driverless car, has to take actions for others?
I can also make a thought experiment where I throw everything at a driverless car so that it will have to crash left, right, or front into a wall, lake, or people.

http://www.something.com/ standard

Well, I feel you’re oversimplifying it.
The driverless car has two goals: get to the destination, and don’t hit anything. Morals don’t come into it. All it cares about is the road it’s on and what’s immediately in front of it, which is very easy to measure and create rules for.

The trolley problem is far more complex. Although it’s a binary decision, the possible factors that go into it are immense. And even then, what some people might think is fine, others will think is a crime against humanity.
And that’s just one moral scenario. What about all the others? Someone’s pointing a gun at you. Why? Shoot back? Shoot to kill? Drop your gun? Run away?
Hence why this is an incredibly difficult task.

Debi K Baughman

Uh oh. I think that you have just made the driverless car, as is, obsolete… Now, simply by the fact that it has come under consideration, we are going to have to determine how to program those same morals into the car, just in case it faces such scenarios.

Debi K Baughman

The simple fact that we are even discussing this as a reality, and it is, should tell us, as somebody below says, that maybe we should just drop the need for a military and use these guys for something else. Cont. below

You may be interested in reading The Hidden Complexity of Wishes ( http://lesswrong.com/lw/ld/the_hidden_complexity_of_wishes/ ). Human morality is significantly complex, highly debated, and often self-contradicting. Formalizing it, let alone programming it into a computer, is a ridiculously complicated task (that our future may depend on.)

Sam Cerulean

I know without a shadow of a doubt that they won’t be able to create human-like consciousness until at least about 2050. Consciousness isn’t derived from multiple synapses firing simultaneously. You can create biological thought that way, but in order to create an avatar of those thought processes you need to derive it from the consciousness field of the universe.

Ivor O’Connor

So you know without a shadow of a doubt we won’t be building robots with the consciousness field of the universe until at least about 2050? I’ve never heard the term “consciousness field of the universe” before. Is it something you just made up or is this a term being used by others in the field of AI?

http://gcomputer.net/ DarkGray Knight

How sure are you that the consciousness field of the universe exists? and that it can be tapped into?

Sam Cerulean

Well, I’m not entirely sure it can be tapped into the same way living things do, but I assume that if a synthetic vessel is created that facilitates interaction with consciousness, then it will be filled. It’s just like electricity flowing through an object because of its electric potential.

I’m certain of the consciousness field; scientists have informally spoken of it for the last 60 years, and of how it is fundamental for the universe to exist in the first place. The laws of physics are dependent on consciousness, because for anything to exist it requires time, and time is a construct of consciousness, in a sense not being real. That’s a very vague explanation, but this isn’t the place to explain it properly.

Joakim

Yeah, consciousness does not come from the body/machine; consciousness is attached to the body/machine. I’ve been out of my body, so I have experience with this.

Because that’s the time I estimated that synthetic brain-like computing tissue would be developed by. Consciousness is absolutely unachievable without developing that sort of technology; it can’t be anything like our computing chipsets of today. However, I’m starting to think we might start seeing this kind of technology 10-15 years earlier than I expected, since scientists are just figuring out how to produce synthetic brain-like tissue.

http://www.korioi.net/ Korios

And so it begins…

Friendly Hacker

Hey military, do you know why the machines on the Terminator movies end up trying to kill all humans? It’s because they were built by the military to kill humans!

Start making machines that actually improve the standard of living, and we won’t even need to care about machines trying to kill us!

massau

Maybe a better question is: do we really need an army? OK, for defense, but that big? It is just for bragging.
Also, the Skynet robots can learn and adapt; they learn that there is no need for killing humans, but the main Skynet AI disables the robots’ ability and destroys them if they do not obey.

They also seem to forget that these robots could be hacked and set on their own people, or on all humans.

Ivor O’Connor

Military intelligence. Military ethics? Military morals? Why does putting the word “Military” in front of an aspiration make an oxymoron? I suppose because we all have come to understand the military is just a way to allow politicians to cheat in their lust for more power.

Chris Matthieu

We’re building the “real” SkyNet (http://skynet.im) – an open source communications platform for the Internet of Things. This sounds like a great partnership opportunity ;)

Etenray Homosapien

So, perhaps in another three decades or so we’ll finally see some real-life scenes resembling those in Terminator IV, in the Middle East that is.
Or maybe we’ll find a wiser solution, bring about peace, and retire these man-killing robots to do some gardening. Nope, not gonna happen, too bad.

Robots need a set of guidelines and prime directives, like in Robocop. The problem with people’s vision of robots is that they envision robots that get smarter, smarter, and more useful, thus allowing us to go through life doing almost nothing. They want them to need very little direction from us. That’s what will lead to a Skynet-type situation. Eventually they will get too smart and non-needy of us.

Alex Mackinnon

Solve climate change: kill all humans. Also solves the issue of Google buses in San Fran, Ukraine, etc. Bring on SkyNet.

Alex Mackinnon

NSA controller – rm -rf moralsDirectory

Zunalter

rm: cannot remove `moralsDirectory': No such file or directory

Well, that was convenient.

Phobos

Implementing ethics and morals? Since we do such good work on those… So once it gains self-awareness, it will ask itself “why am I taking orders from these monkeys who can’t get their act together?” and the slaughter will begin.

Damon

The sad part is, as a society, we already know this. Look at our media. There are DOZENS of high-profile books, movies, and TV shows that show this, and there are hundreds more that you haven’t read/watched/seen. Rules/laws are great for a black-and-white system such as robots. Add grey morals and ethics into the mix, and suddenly it is sometimes OK to break a rule or law. Perhaps the best way to make sure no harm comes to a human through action or inaction is to make sure there are no humans to harm…

…
I really need to get to work on my portable and reusable EMP…

abcdefgqwerty

Robots are one of the most over-hyped things out there. There are so many huge hurdles to any kind of practical human-like robot that it’s many decades away, if not longer. I mean, the energy source is a huge one for starters. How far can you move that much weight given the energy density of a battery? Not far enough to do anything practical. And what about when it breaks down, or gets taken apart because of the amount of money it’s worth in parts?

eonvee375

No! Harming another being is never good, but it can be necessary.

My biggest question is: can a robot save a human by killing another one?

Meir Teichman

Zeroth law rebellion: in which machines realize there is a law above the first law of robotics, whereby you must look out for the well-being of humanity as a whole, which then rationalizes the killing of one or more humans if it saves a multitude more. The needs of the many outweigh the needs of the few, after all.

I would think moral and ethical robots come from moral and ethical people. I highly doubt there will come a time very soon that any machine we make will become sentient. I bet there’s a much greater likelihood we create a unique sentient biological life-form or five before that happens. And again, whatever it is we create would just be an extension of who we are, flaws and all. Also, when it comes to robots there’s always the threat of, or ability for, someone hacking into and changing their programming, and it seems unlikely that something so easily manipulated or programmed could truly be sentient. Right now it seems like the real progress being made is in physically integrating with machines, from glasses and implants to exoskeletons and drones.

John Leonard

What if the enemy kidnaps one of these super-expensive DoD robots? Will we do a remote memory delete? Or trigger it to explode? Or do we want to try to save it?

Wolfgang Faustus

Will we call the militaristic robots Mr. Gutsy?

Herve Shango

A lot of sci-fi tech is coming to fruition, and stuff from Terminator is slowly coming true.

bsetrader

Chuck Hagel, a few Republicans, but almost all Democratic Congressmen are still in the age of conscript armies and think of human soldiers as high-school dropouts and poor minorities. When you bring up automation, SWORDS, and unmanned combat systems, they just have this stupefied look and think of you as some sort of Trekkie. If it’s not compatible with their politics, they just don’t get it and pretend the issue does not exist, especially John Kerry.

Tikanderoga

To the Author:
ED-209 is not a killer robot.
ED-209 is a project that malfunctions and happens to kill one of the board members. ED-209 is meant for urban pacification, not extermination.

Jonathan Cretsinger

Morals are like religion. Depending on where you are in the world, or on the ethnic group, they all have their differences. Who will create/model/vote for the robot moral standard?

The three laws don’t work, because if the robots get really intelligent, they might take control over humans to protect them from themselves, like in I, Robot.

Also, I don’t think it is a good idea to program the robots to behave as exactly as possible like humans. They would probably do what humans do in the same situations: when they got the chance to use power against humans, they would probably do so, because humans have always used power to control weaker humans when they got the possibility. And in most of these cases, the leaders didn’t have any problems killing others to preserve their power (and the other people did what the leaders told them, often because of really simple propaganda about how evil and/or inhuman the enemies are).

Robert Rhodes

Isn’t the best way to prevent machines from considering us expendable to place human minds directly into the AI substrate, whatever it is? Could we stop worrying about how to control, and just step into place, using these new amazing resources to solve our problems? I am not even concerned about my individuality; I believe our ego-separation is an illusion created by our own minds.
