A new Navy-funded report warns against hasty deployment of war robots and urges programmers to include ethics subroutines -- a warrior code of sorts. The alternative, they say, is the possibility of a robotic atrocity akin to the Terminator or other sci-fi movies. (Source: Warner Brothers)

Robots must learn to obey a warrior code, but increasing intelligence may make keeping the robots from turning on their masters increasingly difficult

Robots gone rogue killing their human masters is rich science fiction fodder, but could it become reality? Some researchers are beginning to ask that question as artificial intelligence advances continue, and the world's high-tech nations begin to deploy war-robots to the battlefront. Currently, the U.S. armed forces use many robots, but they all ultimately have a human behind the trigger. However, there are many plans to develop and deploy fully independent solutions as the technology improves.

Some mistakenly believe that such robots would only be able to operate within a defined set of behaviors. As Patrick Lin, the chief compiler of a new U.S. Navy-funded report, explains, "There is a common misconception that robots will do only what we have programmed them to do. Unfortunately, such a belief is sorely outdated, harking back to a time when . . . programs could be written and understood by a single person."

The new report points out that the size of artificial intelligence projects will likely make their code impossible to fully analyze and dissect for possible dangers. With hundreds of programmers working on millions of lines of code for a single war robot, says Dr. Lin, no one has a clear understanding of what is going on, at a small scale, across the entire code base.

He says the key to avoiding robotic rebellion is to include "learning" logic which teaches the robot the rights and wrongs of ethical warfare. This logic would be mixed with traditional rules-based programming.
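The mix the report proposes can be pictured as a thin layer of hard rules-based constraints wrapped around a learned component. Below is a minimal sketch of that architecture in Python; every name, feature, and threshold is invented for illustration and does not come from the report:

```python
# Hypothetical sketch, not from the report: hard rules-of-engagement checks
# veto a learned threat score. All names, features, and thresholds invented.

HARD_RULES = [
    lambda target: target.get("is_medic", False),         # never engage medics
    lambda target: target.get("has_surrendered", False),  # or surrendered targets
]

def learned_threat_score(target):
    # Stand-in for a trained model; here just a weighted feature sum.
    return 0.7 * target.get("armed", 0) + 0.3 * target.get("aiming", 0)

def may_engage(target, threshold=0.6):
    # The rules-based layer overrides the learned layer unconditionally.
    if any(rule(target) for rule in HARD_RULES):
        return False
    return learned_threat_score(target) >= threshold

print(may_engage({"armed": 1, "aiming": 1}))                    # True
print(may_engage({"armed": 1, "aiming": 1, "is_medic": True}))  # False
```

The point of the design is that the learned part can be wrong without the hard constraints ever being bypassed.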

The new report looks at many issues surrounding the field of killer robots. In addition to code malfunction, another potential threat would be a terrorist attack which reprogrammed the robots, turning them on their owners. And one tricky issue discussed is the question of who would take the blame for a robotic atrocity -- the robot, the programmers, the military, or the U.S. President.

The Ethics and Emerging Technology department of California State Polytechnic University prepared the report for the U.S. Navy's Office of Naval Research. It warns the Navy about the dangers of premature deployment and of complacency about potential issues. Congress has mandated that a "deep strike" unmanned aircraft be operational by 2010, and that one third of ground combat vehicles be unmanned by 2015.

The report warns, "A rush to market increases the risk for inadequate design or programming. Worse, without a sustained and significant effort to build in ethical controls in autonomous systems . . . there is little hope that the early generations of such systems and robots will be adequate, making mistakes that may cost human lives."

Simple laws of ethics, such as Isaac Asimov's three laws of robotics, the first of which forbids robots from harming humans, will not be sufficient, say the report's authors. War robots will have to kill, but they will have to understand the difference between enemies and noncombatants. Dr. Lin describes this challenge stating, "We are going to need a code. These things are military, and they can’t be pacifists, so we have to think in terms of battlefield ethics. We are going to need a warrior code."

The U.S. Army had a scare earlier this year when a software malfunction caused war robots deployed in the field to aim at friendly targets. While the humans still had control of the trigger, the incident highlighted the challenges a fully autonomous system would face. The offending robots were serviced and are still deployed in Iraq.

Comments


Oh for God's sake. A.I. CANNOT MAKE ITS OWN ALGORITHMS WITHOUT US EITHER PUTTING THEM THERE OR TELLING IT TO MAKE THEM ITSELF. How can an A.I. that was originally designed to receive orders, shoot, and move magically get insane processing power out of thin air and decide, "I'm gonna doughnut across the desert!", then just randomly write its own line of code saying so? It doesn't make any goddamn sense! If it does end up doing a doughnut across the desert, 99.9999999% of the time it's just buggy and the programmer screwed up somewhere.

My hunch, albeit just a hunch... tells me the team researching this and writing this kind of code are a notch above the average run-of-the-mill programmer who just got their degree. Now I don't know anyone personally on these forums - so perhaps some of you are akin to a programming God, maybe you have multiple PhDs, perhaps you already have the foundations down for designing a time machine, curing cancer and solving world hunger..... BUT I think you also might just be downplaying the skills of these folks and their knowledge a tad.

Do you seriously think that I am actually suggesting that there is a company named Cyberdyne which reverse engineered a chip taken from the arm of a destroyed T-800 sent back in time to kill Sarah Connor?

Or giving you the benefit of the doubt, maybe you just had an idiot moment and meant to reply to someone else's comment.

quote: Do you seriously think that I am actually suggesting that there is a company named Cyberdyne which reverse engineered a chip taken from the arm of a destroyed T-800 sent back in time to kill Sarah Connor?

That’s what I thought. You have no idea how relieved I am to know you weren’t serious.

quote: Do you seriously think that I am actually suggesting that there is a company named Cyberdyne which reverse engineered a chip taken from the arm of a destroyed T-800 sent back in time to kill Sarah Connor?

quote: That’s what I thought. You have no idea how relieved I am to know you weren’t serious.

Yea, I was worried about that too. In real life the company is called Cybertech Autonomics LLC. They had to change the name for the movie, which did not want to pay the royalty rates to use the company's real name. :)

Actually, some advanced programming languages allow code to be replaced automatically at run time depending on certain variables, so it's quite possible for code to be "written" without any human involvement.
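As a toy illustration of that idea, here is a minimal Python sketch in which a program swaps out one of its own functions from a source string while running. This is purely illustrative, not how a fielded system would be built:

```python
# Toy illustration of run-time code replacement: the program replaces one of
# its own functions from a string, with no human in the loop at that moment.

behavior_source = "def act():\n    return 'patrol'"

namespace = {}
exec(behavior_source, namespace)
act = namespace["act"]
print(act())  # patrol

# Later, some run-time condition triggers replacement of the behavior.
new_source = "def act():\n    return 'take_cover'"
exec(new_source, namespace)
act = namespace["act"]
print(act())  # take_cover
```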

the problem isn't in telling the bots what to do. it's telling them what to do, having a malfunction, and them not stopping.

we're already telling them to kill other humans, although to us those humans are "the enemy".

this discussion is about giving robots ethics, in other words, allowing them to separate friend from foe themselves.

which, in my opinion, is the first step to the terminator universe. hell, we try to teach a robot ethics, then a human orders it to go kill another human, but not *those* humans.

where in the process does the bot learn right from wrong? and what would the bot perceive as wrong? would a bot operating for nazi germany perceive it as ethically wrong to kill jews, and rise up against its masters (which would be a terminator situation)?

this debate is far more philosophical than just debating whether the code is *capable* of doing it.

any player of world of warcraft will tell you that eventually, code will start to get a mind of its own. i swear that game just wants me dead at times (out of the blue, for no reason, pulling an entire FIELD of npc's).

Great post, exactly my thoughts. Oh, and on the WoW thing I know exactly what you speak of. Sometimes I think there is a blizz employee sitting at a screen screwing around and doing it on purpose, lol.

it's good to know my ideas and observations can be nullified because i happen to enjoy a particular silly game (which i quit 2 weeks back, mind you).

here's why the comparison is valid: the NPCs, or Non-Player Characters, are completely AI driven. a human told them to patrol that area and attack anybody within 10 yards, at least above a certain threat level, but that's no different than the level of threat a bot might assess in real life. otherwise, they are completely devoid of any human interaction.

there's a field of them, each watching their own 10-yard space. so imagine my surprise when AI up to 200 yards away starts charging me out of the blue to kill me.

suppose you have several AI bots patrolling your base (in real life). out of the blue they all attack everybody within sight, which they aren't supposed to do.

this is the greatest fear with armed bots, and in WoW i've already seen it happen. the game consists of millions of lines of code, like real-life bots; the NPCs have no human controlling them, like real-life bots (the ones we're discussing here, at least); they have a built-in response to threats, like real-life bots; and they will engage if i pose a threat to them, like real-life bots.

you might laugh now, because i mention world of warcraft. if this situation actually happens in 10-20 years, i'll laugh my ass off. people might hate me for it, but i'll laugh even harder at it. poetic justice i suppose.

and get my lvl 80 mage to kill the rogue bots, but that's a different discussion.

I think the concern is that the AI is going to have a learning ability (necessary to handle new experiences out in the field). This ability to learn and write its own code based on experience, coupled with millions of lines of existing human code and its inevitable bugs, gives a potential for unpredictable outcomes. I'd say it's a very real threat. We're not just talking about a hardwired machine here; it's a dynamic, evolving system with millions of permutations. And there will be "surprises".

Very, very true. Our military robots aren't just going to 'wake up' one day and decide to start killing everyone.

I could imagine, though, writing a learning algorithm to help the robot identify threats. If the robots could communicate their improvements to the algorithm (based on threats detected and destroyed, for instance), it would only take one robot learning the wrong identifiers to bring down the whole network of robots.

Of course, even so I would think direct commands would still work. As long as there is a low level command to shut down the system (a command that doesn't go through the threat detection system) there shouldn't be a problem.
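A sketch of what that low-level shutdown path might look like, assuming a hypothetical packet dispatcher where the kill command is checked before any of the complex (and possibly buggy) threat-detection logic ever runs. All names here are invented:

```python
# Hypothetical dispatcher: the SHUTDOWN command is handled on a low-level
# path that never routes through the learned threat-detection subsystem.

class Robot:
    def __init__(self):
        self.running = True

    def threat_detection(self, packet):
        # Stand-in for the complex, possibly buggy learned subsystem.
        return "engage" if packet.get("hostile") else "hold"

    def handle(self, packet):
        # Checked first, before any smart logic can misinterpret it.
        if packet.get("cmd") == "SHUTDOWN":
            self.running = False
            return "shutdown"
        return self.threat_detection(packet)

bot = Robot()
print(bot.handle({"hostile": True}))    # engage
print(bot.handle({"cmd": "SHUTDOWN"}))  # shutdown
print(bot.running)                      # False
```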

Since this is so close to SF: If MS can build a firewall, then the robot could do that too, at some point in time ;-)

If all depends on code, how do you prevent the robot or your enemy from writing/adapting code to prevent a remote shut down. Could your enemy hack into the robot's system? So you want it to be open to you, but sealed tight to your enemies and the robot itself. To an intelligent being, that would feel like a mind prison: "We will tell you what to do, don't make up your own mind."

I didn't mean for my post to come off as sci-fi, so let me explain my thoughts more thoroughly. I can imagine writing software that allows a swarm of robots to communicate with each other such that each robot can send information about what is happening around it when it is destroyed.

This information could be used to build a set of rules about what is and isn't a dangerous situation. If you allow the robots a finite list of behaviors (flee, attack, take cover, etc.), they could try new things depending on the situation and record and/or broadcast the results of the engagement. Things that work get used more often, things that don't work get used less often.

Now all it takes is one bug in the program for a robot to identify civilians as enemies. Since every time the robot attacks an unarmed civilian it will probably win, this behavior could quickly become dominant, spreading like a virus to the other robots in the group.
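The runaway-reinforcement failure described above can be sketched as a toy simulation. The behavior names, the reward rule, and all the numbers below are invented purely to show the mechanism: a buggy payoff that counts attacks on unarmed targets as "wins" makes that behavior dominate:

```python
import random

# Toy model: behaviors that "win" get reinforced. A buggy target filter makes
# attacking unarmed targets an easy win, so that behavior quickly dominates.

weights = {"flee": 1.0, "take_cover": 1.0, "attack": 1.0}

def choose(rng):
    # Weighted random choice over the current behavior weights.
    total = sum(weights.values())
    r = rng.random() * total
    for behavior, w in weights.items():
        r -= w
        if r <= 0:
            return behavior
    return "flee"

def engagement_won(behavior, target_armed):
    # The bug's payoff: attacking an unarmed target "succeeds" every time.
    return behavior == "attack" and not target_armed

rng = random.Random(0)
for _ in range(200):
    behavior = choose(rng)
    if engagement_won(behavior, target_armed=False):
        weights[behavior] *= 1.1  # reinforce whatever "worked"

print(max(weights, key=weights.get))  # attack
```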

What won't happen, though, is the robot changing its own code or suddenly learning subjects that it doesn't have algorithms to learn. The robot won't rewrite its basic command system, because the command system isn't designed to learn.

Basically, the robot is closed out of the command system because there isn't an algorithm that allows that behavior to be edited. The enemy is closed out because the commands would be sent via encrypted signals (no way, short of treason, will those be broken).

No way short of treason... because when you're seconds away from being tortured to death by the enemy, or THEIR robots, it would be unheard of to give up any info to save your own life? When your main concern isn't that the other captives will give up the info anyway, it's that if you survive you might be found guilty of treason later (if the robots don't kill everyone anyway)?

What is ultimately the most fair and ethical thing for the war robot to do? Kill all soldiers, anyone who poses any kind of potential threat to others by supporting, carrying, or being in any way involved with weapons.

The only /logical/ thing a robot could do is exterminate all who seek to engage in war, then keep warlike movements from gaining sufficient strength in the future.

If there is a low-level command to shut down the system, aren't we opening up a huge security hole for the enemy to capture and exploit? Use very deep encryption, perhaps? Unique identifiers and authentication keys add to the complexity of the system, so even fewer of those deploying, using, and designing them know what to do when things go wrong.
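One standard way to protect such a shutdown command is to authenticate it with a keyed MAC, so only a holder of the shared key can issue it. A minimal Python sketch using the standard library (the hard parts, like field key management and replay protection, are deliberately omitted):

```python
import hashlib
import hmac

# Sketch: authenticate a shutdown command with an HMAC tag. The key and
# command format are hypothetical; key distribution is the real problem.

SECRET_KEY = b"field-issued-secret"  # hypothetical per-unit shared key

def sign(command: bytes) -> bytes:
    return hmac.new(SECRET_KEY, command, hashlib.sha256).digest()

def accept(command: bytes, tag: bytes) -> bool:
    # Constant-time comparison resists timing attacks.
    return hmac.compare_digest(sign(command), tag)

cmd = b"SHUTDOWN unit-7"
tag = sign(cmd)
print(accept(cmd, tag))                 # True
print(accept(b"SHUTDOWN unit-8", tag))  # False: forged command rejected
```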

After all, if there's one thing we always have plenty of in a war zone, it's robotic engineers that can take out haywire killer robots.

quote: Scientists have been pursuing Strong AI for decades *unsuccessfully* and now people fear it will happen accidentally because too many people are working on too many lines of code.

Yeah. Unless the military has a super-secret self-learning CPU somewhere, this is completely moot. The only way we are going to get to a 'Skynet situation' is if technology advances to the point where silicon and software are both able to change on their own with zero input from us squishies. Also, why does AI automatically mean 'death to humans'? I realize the article is talking about military AI/robotics, but I'd highly recommend reading any of the books in the 'Bolo' series by Keith Laumer. They're great reads with a prime example of AI that doesn't view humanity as something to be exterminated (and Bolos themselves are weapons of war).

It doesn't. The fact that they would be autonomously killing humans is the trouble. We can't guarantee that its software wouldn't go haywire for any reason and start killing everyone. Whether or not it thinks of itself in a sentient manner is not the issue. We're talking much more basic, as in how can we make it discern friend from foe (which we already can't, apparently), and if it has a bug, we can't stop it from firing on friendlies or civilians because it is autonomous.

I used to have an interest in programming a smart AI too but have since given up. It takes patience and a lot of time. I agree with your comments, though. There are plenty of talented people in the AI field and plenty of work going on around it, but unfortunately a self-learning or even very smart AI isn't anywhere near where most would like to think we are. Same for robotics, although there have been some pretty innovative robot designs (not the AI itself) in recent years.

Having said that, I'm not entirely sure such consideration went into this article before publishing it. Purely throwing out articles based on one person's comments, and limited comments at that, as news isn't wise. For one, it just stirs the crazy to be even more crazy! :)

100% agree, that's why I think the article heading is terribly misguided (good for clicks though :D). No-one is actually talking about these robots creating a Strong AI out of the blue, rather about screwing up and causing friendly fire or doing something else unexpected causing human life loss.

With semi-autonomous robots actually pulling the trigger the risk of something like that happening is considerably higher than with previous complex military systems.

I'd fear ethical-law programming more than AI. The storyline depicted in "i, Robot" is about ethical-law programming, more so than about robots turning against their humanoid creators as in the Terminator series.

In "i, Robot" the ethical programming of the machines reached the point where humans themselves, simply by living, were deemed a danger to their own lives: robots must control our lives for our own sake, because humans are susceptible to free will and have the possibility of doing wrong -- inevitably, hindering war and other human-on-human violence. How could ethical programming, in the sense of instilling morality in an AI (which is what they're trying to accomplish), ever logically process the good in HELPING us in war, rather than backing away from war and defending itself?

I'm all for AI, and I could care less about these claims. I just thought I'd chime in because this is one of the first times I've heard of any military leadership mention "ethical programming" before.

I think MS has probably helped instill this fear of too many people working on a large codebase that could never be bug-free. <joke>In my not-so-humble opinion, as long as MS isn't the creator of the AI "operating system", we should be fine.</joke> :)

Hmmm, and just because we have had decades of failures to create a real, honest-to-God AI means that it will never happen? Look at computing power. It's come a hell of a lot closer to the capability of the human brain in the past 10 years than in the 20 before that. Look at our ability to program. How the hell do we know where the tipping point is between a program designed to be intelligent in a narrow confine and one that has the ability to program itself? Something that starts doing things on its own could easily be seen as simply a programming error on the part of a human and overlooked. This is where "the code is getting too big" comes into play. There is a certain amount of arrogance based on past failures in your statement, which is exactly why an article like this should be "considered". No, I'm not talking OMG! RUN FOR THE HILLS! THE ROBOTS ARE GONNA GET US! DESTROY ALL TECH NOW BEFORE ITS TOO LATE!!!!111oneoneone. But start taking this seriously as we move forward.

AI is possible, but I think the program of self-learning and run-time self-programming new behavior is a different story. I mean, you can create a set of choices and then use sensory information to effect a decision by the system, but providing ways to adapt new approaches is a different thing. Of course, one could argue that we could just provide an infinitely large set of atomic subroutines and then let the system arrange the order of execution and the choice of subroutines as it's running. In essence, this really isn't *too* different from humans.
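The "atomic subroutines arranged at run time" idea might look like this minimal sketch, where a fixed library of primitive actions is sequenced from sensor data rather than hard-coded. The primitives and sensor fields are invented for illustration:

```python
# Toy version of run-time arrangement: primitives are fixed, but their order
# and selection are chosen from sensor input while the system runs.

def advance(state): state["position"] += 1
def retreat(state): state["position"] -= 1
def scan(state):    state["scanned"] = True

PRIMITIVES = {"advance": advance, "retreat": retreat, "scan": scan}

def plan(sensors):
    # The "arrangement" step: build a sequence of primitives from data.
    if sensors.get("threat_level", 0) > 5:
        return ["scan", "retreat"]
    return ["scan", "advance"]

state = {"position": 0}
for name in plan({"threat_level": 9}):
    PRIMITIVES[name](state)
print(state)  # {'position': -1, 'scanned': True}
```

The system never writes new code; it only recombines vetted pieces, which is why the commenter sees it as not *too* different from how people act.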

Well, there's what the program is intended to do and there's what it actually does. And then, there are just plain old bugs... That leaves a lot of wiggle room for unexpected behavior, which is a little disturbing when the said computer is wielding a gun.

There are TONS of programs that do OTHER than what the programmer designed, wrote or intended the program to do. It's called having bugs in the code.

My counter question is: name me ONE piece of software (reasonably speaking here... don't tell me your 9th-grade VB code for "Hello World" is bug-free) of significant popularity in the workplace (government or private, take your pick) that doesn't have at least one unexpected bug.

Let's put the bug in the target selection code, place the fully armed bot in downtown Wash. DC and set it loose.

A bug can sometimes be an unanticipated feature :P

Remember, the point of the article is that when an autonomous weapons system is placed in the field, there is a danger of the weapon system engaging targets it is supposed to protect. All it takes is one or two mistyped characters in the code. You can add safeguards by requiring multiple sources of target verification before allowing damage to the target.
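That multiple-source safeguard could be as simple as a k-of-n vote across independent sensors. A hypothetical sketch (the sensor names and threshold are invented):

```python
# Hypothetical k-of-n safeguard: engage only if at least k independent
# sensors agree the target is hostile, so one faulty sensor can't fire alone.

def verified_hostile(readings, k=2):
    # readings: mapping of sensor name -> True if that sensor flags hostile
    return sum(1 for flagged in readings.values() if flagged) >= k

print(verified_hostile({"radar": True, "ir": False, "visual": False}))  # False
print(verified_hostile({"radar": True, "ir": True, "visual": False}))   # True
```

The safeguard only helps against independent sensor faults, of course; a mistyped character in the voting code itself would defeat it.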

Regardless of safeguards and cutouts, the system will be REQUIRED to damage targets designated unfriendly. What makes a target unfriendly? Weapon pointed at the robot? Supporting troops really need to watch the way they hold their rifles. Wrong uniform? So the enemy changes into civvies. Carrying a weapon and not in a friendly uniform? Put on an outfit that looks friendly, walk up to the robot, apply a shaped charge with a delay fuse, and walk away.

These and many other real-life situations will be faced by a real combat robot. Now, will you guarantee all the code involved in decision making to be error-free? Look at medical equipment today. Due to the danger to patients, the code in medical devices is heavily bug-checked and tested. In spite of this, there have been devices, including a robotic irradiation machine, deployed with deadly bugs.

Yes, it is science fiction. NOT science fantasy. Real hard science fiction builds a story around reasonable extrapolations of what can be done if selected advances and/or discoveries become reality. Geosynchronous satellite? Arthur C. Clarke. Water bed? Robert A. Heinlein. Many devices in use today, including cell phones, personal computers, pocket music players, the internet, the video phone, etc., were "invented" by science fiction authors who then wrote stories in which the fanciful device was an everyday item.

Granted, you could argue that the program was written to play chess but I would argue that the program plays chess an order of magnitude better than the programmer.

The programmer didn't program every specific situation into the program nor program every specific strategy. There are loads of programs with emergent behavior, behavior that wasn't coded for but is an unexpected result of the code. The situation is very common with learning algorithms and can often produce very unusual, unexpected behaviors.

Chess AI will "play" better than its programmer, but it "thinks" differently. The basics of the algorithm are not that complicated. The AI plays ahead through as many different moves as it can. Weights are assigned to the outcomes (the heart of the algorithm), and the outcome with the most weight is chosen. Of course, weaknesses are built in so we are not obliterated by good algorithms. Only Grand Masters stand a chance against them when they are not restricted. Perhaps the unexpected behavior you are thinking of is expected as a function of the restrictions.

Regardless, the Chess AI has a very limited scope and a single programmer can/should understand it in its entirety. AI for Warbots is another story.
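The lookahead-and-weigh scheme described above is essentially minimax search. Here is a bare-bones sketch on a toy game (not chess) where a "position" is just a number, moves add 1, 2, or 3, and the weight of an outcome is its value; real engines add pruning and far richer evaluation, but the skeleton really is this small:

```python
# Bare-bones minimax: search the game tree to a fixed depth and pick the move
# whose worst-case outcome scores highest. Toy game, not chess.

def minimax(node, depth, maximizing, children, score):
    kids = children(node)
    if depth == 0 or not kids:
        return score(node)  # the "weight" assigned to this outcome
    results = [minimax(k, depth - 1, not maximizing, children, score)
               for k in kids]
    return max(results) if maximizing else min(results)

# Toy game: a node is an int; moves add 1, 2, or 3; leaves appear past 5.
children = lambda n: [n + 1, n + 2, n + 3] if n < 6 else []
score = lambda n: n

best = max(children(0), key=lambda move:
           minimax(move, 2, False, children, score))
print(best)  # 2
```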

The Chess AI is only better because it's mathematically faster... Just as a computer would be better at multiplying 125 * 438. The programmer could do it, it would just take longer.

And unless programmed to do otherwise the AI will always output the same moves based on the situations it encounters.

Of course computers can do many things that humans can't, but they can't do anything that we can't envision or that we haven't programmed them to do. The Chess "AI" is more superficial than artificial. It doesn't "think" and make choices that deviate from its programming path.

You're kidding, right? You know race conditions exist, right? You know that in highly multi-threaded applications, the outcome is almost impossible to determine if there are enough concurrent threads running asynchronously on the same shared pool of data. Of course, programmers ALWAYS work to avoid this type of thing, so it's uncommon. But you could implement race conditions in an AI, for instance. Create a circuit. Under different circumstances, different paths drive the signal at different times.
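A race condition of the kind described is easy to demonstrate. In this toy Python sketch, two threads do an unlocked read-modify-write on a shared counter, so increments can be lost depending on thread scheduling, and the final value is not determined by the code alone:

```python
import threading

# Classic lost-update race: read-modify-write on shared state with no lock.
counter = 0

def worker(n):
    global counter
    for _ in range(n):
        tmp = counter  # read
        tmp += 1       # modify
        counter = tmp  # write -- another thread may have written in between

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # usually less than 200000: some increments were lost
```

The fix is a `threading.Lock` around the read-modify-write, which is exactly the discipline the commenter says programmers work to maintain.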

But about the whole chess thing, that's just silly. The programmer who developed the algorithm for the chess game inherently understands what moves to make so that the computer CAN'T produce the right decision against them. It's just a set of analyses based on the situation, simulating the outcomes and finding the optimal ones -- an implementation of combinatorics and optimization. Why do you think Deep Blue was good at it? Because it could go through an enormous number of cases and deduce the best outcome. But it's not learning. It's not developing new strategies and fundamentally changing its algorithms using run-time reflection.

Asimov's robots could and did kill. The Zeroth Law allowed the killing of humans when it was necessary to prevent a greater harm to humanity. This fourth law was added when R. Daneel Olivaw showed up in the later novels.

In the earlier books with only Laws 1,2 & 3 operating, hardware and software errors made it possible for a robot to harm or kill human beings.

When you're looking at perfectly functioning code installed in a warbot, you need to consider combat damage. The enemy will not check the User's Manual to see what damage they are allowed to inflict :P

Then, if he has all the answers, we just have to find a way to clone enough copies of him to do all the work, educate them all, etc. But with a different education it suddenly isn't Isaac anymore, except in basic DNA.

Friendly fire isn't the only concern here. There's quite an emphasis on leaving civilians unharmed as well. And if you try to hand these chips out to civilians, the enemy's military will end up with them, making the whole chip system meaningless.

Presumably the chip is planted under the skin, right? That way, when the enemy captures some soldiers, they don't just take them captive; the first thing they do is cut out the chip for their own use, or of course just create fake duplicate chips. Such tech might work well for a while against third-world countries, but against those we have less need for robots this advanced.

An actively replying IFF chip is a targeting device. Just set up an automated weapon with a directional chip detector. Turn it on and then keep an eye out for enemy sappers trying to kill your weapon. The enemy's IFF chips tend to ensure that targets are found and served :)

What's wrong with having humans still ultimately decide who lives and dies, but let the bots be in the line of fire? We don't have to give these bots the responsibility of deciding whether or not to pull the trigger.

Shoot, we could even develop a system where us humans basically tag 20 people out of a group of 1000 to kill and the bots, on command, kill all 20 nearly simultaneously. The bots wouldn't have to decide to pull the trigger on any of those people, but they'd still be able to get the job done in an extremely efficient manner. Why do they have to have judgement built in?

quote: What's wrong with having humans still ultimately decide who lives and dies, but let the bots be in the line of fire?

Because in that situation you have to be able to communicate effectively with the robots. This communication can be jammed rendering your robots worthless, or worse, it can be hacked, and your robots can turn against you.

Against the advanced forces of the Taliban, it isn't a big deal. In a fight with someone a little more sophisticated, it can pose a problem.

I agree, the main concern would be preventing the enemy from taking control of the robots. But I still think humans should be pulling the trigger here. Not only that, but the rewards of using these bots would likely far outweigh the risk of their being taken over. Once one is compromised, shut them all down until the proper modifications can be made. Then use them again until one gets compromised, shut them down, etc., etc. You could use these with humans controlling them without being stupid and reckless about it.

Obviously you would DESTROY the handful that you had in combat at the moment and STOP DEPLOYING them until you got the issue resolved. It's not that complicated. Collect them? Again, don't be stupid and reckless by deploying all at once, only a handful at a time. Sheesh...

If a war isn't worth dying for, it's not worth fighting. It's one thing to use them for reconnaissance or as supplements alongside combat troops, but the idea of robots performing actual combat is disturbing. Without a human cost to war, there is much less restraint on what leaders, good and bad, might choose.

Welcome to 1984: constant war with no casualties, so fewer objections -- only in reality, and without the entirely falsified media, which is impossible in today's world.

"He says the key to avoiding robotic rebellion is to include "learning" logic which teaches the robot the rights and wrongs of ethical warfare. This logic would be mixed with traditional rules-based programming."

Right. "Ethical Warfare"? That's the best oxymoron I've ever heard. Just a few questions:

1) Who gets to define the "ethics" included in the programming?
2) If an AI decides it is a conscientious objector, is it taken out of service?
3) And of course, the eternal question in one of its many forms... If by killing a dozen random people you could be 100% sure of killing a terrorist who is prepared to kill thousands, which would an ethical AI choose?

2. Not what the article is concerned with, but still...Hopefully the designers would have the sense to teach it only sensible, practical ethics. Robots that can fantasize about utopia would be useless.

3. Really? People still bring this kind of thing up? What world do you live in? As long as bad people do bad things and we don't all get along, someone is going to have to make difficult decisions about who lives, who dies, and what is an acceptable loss. The kind of BS you posted really minimizes the trauma that THOSE people go through to keep the impact of military operations to a minimum. Do you want to make those decisions? No? Don't have the stomach for it? Put yourself in their shoes before you get self-righteous. Because someone HAS to do it. Imagine the world if they didn't.

If you aren't willing to kill, you'd better be willing to see everything you care about die.

1. You have no idea about how I would define ethics for an autonomous robot capable of lethal force, so your answer demonstrates my point. You assume that you and I would disagree on ethics because, more often than not you would be right. The beauty of having humans at the controls is that every single one is an individual with their own background to help them determine right from wrong. While you may have occasional atavistic individuals making socially unacceptable choices, they are just individuals. Fielding a force of robots all programmed by one person or group of persons changes the "balance of power" in ethics by slanting decisions one way.

2. Practical ethics? Interesting. I'm fairly sure that ethics are based on absolutes. It is the human-based decisions of when we must act for the "greater good", contravening our ethical training, which are important. In short, humans have the capacity to decide when they must do things they know are "wrong" in order to accomplish a greater good. I'm assuming by your reactions that you are in some way connected to the military. If so, how often have you sat through training sessions where you discuss the difficult decision of when to disobey orders? I've had to sit through QUITE a few of those. The reason we do it is that it is important that troops understand that at some point, the cost in broken ethics is worse than the damage that may be caused by refusing to obey orders.

3. People bring it up quite often. How many times did you hear it during the course of the elections? I heard some version of this A LOT. Your tirade is interesting, mostly because while you seem to think we disagree, I think we mostly agree. PEOPLE have to make the decisions of when to contravene their ethics. People who are capable of understanding that their choices have consequences, and who have the moral strength to make those decisions knowing that they will suffer from the guilt for years afterward even though they know they did the right thing.

Warfare is unethical in most of the major societal groups around the world. And yet, we recognize that at times it is necessary to go to war in order to defend some greater ideal. That doesn't change the fact that war is an evil which should be avoided at all costs.

I know, I know. Philosophy on a tech site. And long-winded philosophy at that. Sorry about that...

Good luck figuring out the ethics thing. Business ethics aside (which may be a better model for warfare ethics), philosophical ethics are definitely not absolute, though I would love to see the Categorical Imperative in C++. Humans have been trying to wrap their minds around ethics and morality since the beginning of time; there is no single simple answer, and there is always an exception to every rule (or is there?!).

Frankly, if war is sadistic and horrible now, imagine if we take the last element of humanity out of it. It might be better in some cases, but in most cases I think it would get ugly.

And if there is one robot in the world for every human, all they need to do is build a couple more robots, have them all stand next to people, and go boom... instant genocide, with a few bots left to rebuild.

And that's why we don't have fellas like you developing war machines =) Because if a computer system is complicated enough to code its own subroutines and inject them while it's running, then I'm sure it's smart enough to find a way to get the bomb off its back ;p

quote: If humans often cannot discriminate between friend and foe, combatant and civilian...

Very true. In Desert Storm there were 44 dead and 57 wounded from friendly fire. Hopefully by the time humans are capable of such technology, we will be advanced and civilized enough that war will be a thing of the past. Not likely, though.

While the sensationalist blog title was enough to make me read this, I must say there was very little new content.

While the robot's programming team may include hundreds of programmers, there will be a system architect, and I'm sure there will be serious amounts of unit and system testing. Yes, they will know what each part of the programming does.

We need to look at the benefits of this, which include having our young soldiers not coming back with limbs missing.

quote: We need to look at the benefits of this which include having our young soldiers not coming back with limbs missing

LOL OK, now, what if your young soldier comes back safely and is smiling at you, and suddenly your war robot starts shooting and kills you all (including the young soldier)? Which do you prefer: one man dead, or everyone dead (including you)? ==;

Setting aside the question of whether the robots will start killing us all, I am curious how they will program the robots to distinguish between targets and non-targets. By non-targets I really mean civilians; I am sure they can make the friendly soldiers wear some kind of radio transmitter or whatever to ID them.

Will the AI know the difference between an enemy soldier and a kid with a stick in his hand? How about if the kid is holding a water pistol? How about the difference between a guy carrying a log on his shoulder and a soldier carrying an RPG?

The examples are obviously endless, and it is difficult enough for human soldiers to make these decisions in combat situations. My vote is for not allowing any AI-controlled fighting machines into areas with civilians until they pass a Turing test.

More likely, as mentioned in the piece, the perceived exigency has created (and will create) oversight errors. Code injected by less-than-nice miscreants will hide and run. The results will depend on the capacity stored for projecting damage. Quick, clean kills, and your fingernails stay clean.

This is actually an interesting discussion. AI is a lot farther along than most people think. Just because technologies like quantum computing, laser hard drives, nanotechnology, etc. are not commonplace in the public sector doesn't mean they aren't out there. Stuff like that is out there and most people don't know it (example of tech I bet no one else knew existed till I posted this: http://www.atomchip.com/_wsn/page4.html )

I personally don’t want robots doing the fighting for us; if it’s worth fighting for (diplomacy failed), then it’s worth dying for. Now, I also don’t think that many things are worth dying for, and I would rather people get to a place where they learn an “ethical code of war” than teach it to a robot. The real task at hand should be teaching better people, not building “smarter” robots. Reason and logic in people should be improved and encouraged, not just in robots.

Either way, in the end it all comes down to EMP. If we don’t need to communicate with the robots or give orders, and they are completely self-contained, then it isn’t an issue (an unlikely scenario: when was the last time in war that “everything went smoothly” and there was no need to change tactics or come up with a different plan quickly?). BUT, since we will have to send signals and transmissions (Patch Tuesday, anyone?) to the robots, that leaves them susceptible to electromagnetic pulse and/or communication jamming. Anything that isn’t fully, and I do mean FULLY, shielded (outer shell and inner wiring/circuitry) would be susceptible.

So in the end, unless we can make robots sooooooo smart that we don’t EVER need to give them orders, none of this matters anyway. Though the good news is that, like NASA, these research projects from the government give us a lot of great stuff that trickles down to consumers. http://www.cnn.com/2007/LIVING/worklife/10/04/nasa... and this is by no means a complete list; there are thousands!

So continue your research and theories, but I hope never to see the kind of AI they want implemented in my lifetime.

As un-PC as this sounds, having robots that can be dropped behind lines and unleash indiscriminate hell upon the enemy sounds like an excellent deterrent to other nations thinking about entering armed conflict with the US and its allies.

If you are interested in this general topic of military robotics, Peter Singer's "Wired for War" is worth a read; I'm about halfway through it. (http://www.amazon.com/Wired-War-Robotics-Revolutio... ) In the short run, the issue is how much autonomy we give our automated systems: we tend to trust their intelligence more than our own judgment when seconds count, and there is always the possibility of the machine being wrong, with dire consequences. For example, http://en.wikipedia.org/wiki/Iran_Air_Flight_655 .

I think that it is more of a slippery slope set of issues than a black and white picture of humanity versus the machines.

Do some googling on the basics. Nasty Windows worms rewrite their code all the time to avoid detection by virus scanners, but we still have to write the basic code that does the code modifying. We already have chips that can reconfigure themselves, meaning they can form alternate circuits. And writing code that can evolve (by making a copy, changing it, and after a warm reset booting from the new code) is also within our limits. Current AI is further along than you think; the trick is in the hardware-software combination. But still, writing code that modifies itself for a robot that can shoot is insane. Even with humans, soldiers are drilled for a reason. Now, an explorer robot, that is another question.
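The copy-change-reboot loop described above can be illustrated with a toy sketch (names and structure are mine, purely for illustration; real polymorphic or evolving code is far more involved): a program holds its own source as text, rewrites one constant, and "boots" the modified variant.

```python
# Toy illustration of code that rewrites and re-loads itself:
# the program keeps a template of its own source, patches it,
# and executes the patched copy into a fresh namespace.

SOURCE = "def step(x):\n    return x + {delta}\n"

def evolve(delta: int):
    """Produce a new variant of step() with a modified constant."""
    namespace = {}
    exec(SOURCE.format(delta=delta), namespace)  # "boot" the new variant
    return namespace["step"]

step_v1 = evolve(1)
step_v2 = evolve(5)
print(step_v1(10))  # 11
print(step_v2(10))  # 15
```

Each call to `evolve` is the "warm reset": the old variant is simply abandoned and the new source becomes the running code.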

What is needed is independent monitoring agents that follow the ethics rules and enforce them on the other program modules when the job tasks stray from their rule objectives. Look at it as your independent auditor: it cannot be involved in the real process. The agents only monitor and align the tasks with the program's ethics routine.
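The auditor pattern described above might look roughly like this (a minimal sketch; all the rule names, fields, and functions here are hypothetical, not from any real system): a monitor that sits between the task modules and the actuators and vetoes any action that fails a rule.

```python
# Minimal sketch of an independent "ethics auditor": the monitor is
# not involved in planning the action, it only approves or vetoes it.

RULES = [
    lambda action: action["target_type"] != "civilian",
    lambda action: action["force"] <= action["authorized_force"],
]

def monitor(action: dict) -> bool:
    """Return True only if every ethics rule approves the action."""
    return all(rule(action) for rule in RULES)

def execute(action: dict) -> str:
    """The actuator path: actions reach it only through the monitor."""
    if monitor(action):
        return "executed"
    return "vetoed"

print(execute({"target_type": "combatant", "force": 1, "authorized_force": 2}))
print(execute({"target_type": "civilian", "force": 1, "authorized_force": 2}))
```

The design point is the separation: the task modules cannot bypass `monitor`, so even a buggy or compromised planner is limited by the rule set.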

The way the robot could identify friend versus foe is through a quick scan of some type. Friendlies in the area would have a chip embedded on their person and be identified, whereas the enemy would not and would be killed. See ya.
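A chip-based friend-or-foe check like the one described could be sketched as follows (my own illustration; the key, IDs, and functions are all made up): each friendly chip transmits a signed token, and anyone without a valid token fails the scan.

```python
# Hypothetical friend-or-foe scan: friendly chips carry an HMAC token
# derived from a shared key; a forged or missing token fails the check.
import hashlib
import hmac

SHARED_KEY = b"field-key"  # hypothetical key distributed to friendly units

def tag(unit_id: str) -> str:
    """Token a friendly unit's chip would transmit."""
    return hmac.new(SHARED_KEY, unit_id.encode(), hashlib.sha256).hexdigest()

def is_friendly(unit_id: str, token: str) -> bool:
    # compare_digest avoids timing leaks when checking the token
    return hmac.compare_digest(tag(unit_id), token)

print(is_friendly("alpha-1", tag("alpha-1")))  # True
print(is_friendly("alpha-1", "forged-token"))  # False
```

Note the obvious weakness, which ties back to the report's terrorist-reprogramming worry: capture one chip (or the shared key) and the enemy scans as friendly.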

This is a software engineering problem. There is plenty of critical code that runs avionics, cars, power plants, and so on. This critical code needs to meet a standard way beyond Windows or Linux or Word (we're talking about a world of difference). If it doesn't, then don't make excuses, and don't deploy it.

Given the current performance of high-end SMP workstations it may already be possible. You simply have to let all the processing be done by the CPUs; the video card receives a 2D image to display. Then the video card would only need to be fast enough in memory and RAMDAC for the 2D resolution, perhaps a GeForce2-era card? The problem is, the CPU power to do it, and the programming, will cost more than the video card, which has already evolved to meet the goal.

If it enters into this world then it comes in as a free entity, regardless of how much money you spent.

I think we should respect current models that gave us good service, like the F-14, instead of scrapping them or selling them to other countries. I think we should offer freedom when, or if, a self-aware Terminator comes.

Robotics/synthetics/artificials, whatever, might not be so eager to fight wars they deem not their own; forcing a fight might actually get you one.

I have already chosen my side for a few years now.

"We can't expect users to use common sense. That would eliminate the need for all sorts of legislation, committees, oversight and lawyers." -- Christopher Jennings