Military “Killer” Robots Closer to Reality? Pentagon Says Yes

The idea of robots on the battlefield is nothing new, but their wider use is becoming more of a reality each day. Robots have assisted with counter-IED efforts for years; the South Korean military alone has 7,000 reconnaissance and bomb-detection robots. However, they are not capable of making life-and-death decisions.

The Pentagon has announced an initiative to research and develop “killer robots” capable of killing without a human controlling them. How will they discern between friend and foe? The Pentagon has provided no details on how that will be accomplished, but it assures us it is making a concerted effort in that regard. Whatever that means.

A practical concern is this: what percentage of these robots is intended to replace humans, and what percentage is intended to support them? Robots have already proven effective at supporting soldiers in combat by reducing human exposure to threats, but will robots ever be capable of making the quick, life-or-death decisions that humans have to make?

As military personnel counts are reduced, is it possible that planners are preparing for an increase in the use of artificial intelligence? While it may not be likely, that decision will ultimately be made by people who will be thousands of miles away from the hot zone if things go terribly wrong.

If North Korea’s dictator Kim Jong Un ever orders troops into the demilitarized zone, an army of South Korean robots could be waiting.

A Samsung subsidiary plans to deploy sentry robots to the tense South Korean border. The machines will be equipped with machine guns and with cameras, thermal imaging and laser range finders capable of detecting intruders up to 2.5 miles away.

Samsung Techwin says the decision to fire must be made by a human in a remote bunker. Experts have suggested, however, that an operator could hack into the robot to enable it to make its own lethal decisions.

“If there has to be a decision, somebody has to turn on a trigger or put a key in for the lethal part,” said Alex Pazos, Samsung Techwin’s director of application engineering in Latin America, where it uses unarmed versions of the surveillance robots.

The robots represent the cutting edge of cyber technologies that increasingly give machines control over life-or-death decisions. For now, the robots are adept at making stark choices in places such as the Korean demilitarized zone, where no people are allowed.

Though unmanned drones in the sky have drawn a lot of attention, a Tribune-Review investigation finds that ground-based droids — the real-world descendants of Hollywood sci-fi movies — are becoming smarter and deadlier, pushing the line at which ethical questions must be resolved. The Army has more than 7,000 less-sophisticated ground robotics systems for missions such as reconnaissance and bomb detection and removal.

“There are moral, ethical reasons to not delegate the authority to kill people to machines,” said Peter Asaro, co-founder of the International Committee for Robot Arms Control, an international nonprofit opposed to military robots. “Just because you can mathematically distinguish civilians and combatants, that doesn’t tell you it’s appropriate in the situation to use lethal force, even against a lawful combatant.”

Navigating mapped areas, such as a factory floor, robots can make rudimentary sense of changes and alert humans to an unauthorized visitor. They do not fare as well on uncertain terrain or at distinguishing foes from friends.

It’s a huge leap then to setting them free in the wildly chaotic human world, said Jim Gunderson, the founding chief technology officer of Vigilant Robots, a Denver startup that makes unarmed sentry robots.

“I know how smart these things are, which means I also know how dumb they are,” he said. “The whole ‘Terminator’ thing turns a lot of people off: We put weapons in the hands of the robots, and then the robots decide they don’t need us anymore.”

Pentagon directive

The Pentagon has tried to stay in step with killer robots.

A Department of Defense directive issued in November requires special approval for robot systems that kill without human supervision. It calls for manufacturers to minimize chances that robots could engage in unintended attacks — or fall under the control of hackers.

“The intent of this directive is to get out ahead of technology,” said Lt. Col. Jim Gregory, Defense Department spokesman. “It is not motivated by any particular event or specific capability, but rather an appreciation that technology is advancing such that the ability to employ more autonomous systems will only increase over time.”

Although the military does not have machines that can go out and kill on their own, it has autonomous robots that can initiate nonlethal attacks or defend ships and troops.

The Miniature Air Launched Decoy-Jammer, for example, flies a pre-programmed mission with an electronic weapon designed to disrupt enemy radar.

Raytheon’s Phalanx system — nicknamed R2-D2 for the “Star Wars” character it resembles — is on 145 Navy ships and can continuously scan the skies and water for incoming objects. Once turned on, the system can detect an object at 10 miles, identify it as an incoming enemy attack at five miles and destroy it at two miles. On its own, Phalanx can fire as many as 4,500 rounds of 20mm tungsten bullets per minute.
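The layered engagement envelope described above can be sketched as a simple range check. The mile thresholds come from the article; the function name, return labels, and structure are hypothetical simplifications for illustration only:

```python
# Illustrative sketch of a tiered engagement envelope like the one
# described for Phalanx: detect at 10 miles, identify at 5, destroy
# at 2. Only the range thresholds come from the article; everything
# else is an invented simplification.

DETECT_MILES = 10.0
IDENTIFY_MILES = 5.0
ENGAGE_MILES = 2.0

def engagement_state(range_miles: float, is_hostile: bool) -> str:
    """Return the system's action for a contact at the given range."""
    if range_miles > DETECT_MILES:
        return "no contact"
    if range_miles > IDENTIFY_MILES:
        return "track"        # detected, but too far to classify
    if range_miles > ENGAGE_MILES:
        return "classify"     # close enough to identify the contact
    # Inside engagement range: fire only on a classified hostile.
    return "engage" if is_hostile else "hold fire"
```

The point of the sketch is how little judgment is involved: in a cleared sea or sky lane, anything inbound inside the last threshold is presumed hostile, which is exactly the assumption that fails on a crowded battlefield.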

The naval version has never fired at an enemy, but a land-based Phalanx system in Iraq stopped 177 incoming attacks, with no reported mishaps.

“Who knows how many lives it saved?” said John Eagles, Raytheon spokesman.


Comments

Here's a sample of the comments on this post.

WayTooMuchGear

In the case of the Phalanx, it's easier to relegate the task of determining hostile intent to the machine. On the ground, robots like this would have to be integrated into a network that keeps track of blue forces and determines no-fire zones in real time. That's not technically impossible, but it's a challenge that will certainly find more than one test case that breaks it.
I'm hoping to make my way to AUVSI this year to see what is new and which systems are most appealing to potential customers. In all of the talks I've attended, I've yet to hear anybody be super eager to make ground or air unmanned platforms prosecute targets autonomously.
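The blue-force / no-fire-zone check the comment above describes could, in the simplest case, be a geometric pre-fire gate. Everything here (names, circular zones, the 50 m standoff) is invented purely to illustrate the idea:

```python
# Hypothetical sketch of a real-time pre-fire check: before a ground
# robot may engage, its fire-control logic consults a live feed of
# no-fire zones and tracked friendly ("blue force") positions.
# All names, data structures, and values are invented for illustration.

from dataclasses import dataclass

@dataclass
class Zone:
    x: float       # zone center, meters (local grid)
    y: float
    radius_m: float  # circular no-fire zone, for simplicity

def clear_to_engage(target_xy, no_fire_zones, blue_positions,
                    standoff_m=50.0):
    """Deny engagement if the target sits inside any no-fire zone
    or within standoff_m of any tracked friendly position."""
    tx, ty = target_xy
    for z in no_fire_zones:
        if (tx - z.x) ** 2 + (ty - z.y) ** 2 <= z.radius_m ** 2:
            return False
    for bx, by in blue_positions:
        if (tx - bx) ** 2 + (ty - by) ** 2 <= standoff_m ** 2:
            return False
    return True
```

The hard part, as the commenter notes, isn't this geometry; it's keeping the zone and blue-force feeds accurate in real time, since a stale track turns the gate into a false "clear."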

Jonathan O

Second Variety, by Philip K. Dick

Yankee Papa

...Dumb machines (mines) kill anybody and everybody. But so long as the minefield is marked... no ethical issues. But letting a mobile platform make the call is going down the wrong road. (Of course that won't deter some of our enemies...)
-Yankee Papa-

john p

I always thought of the show BattleBots. Contract those guys to send all different types of crazy-looking robots into a Taliban-held compound. Some explode or just hack away with knives and axes or hammers. Some are cameras watching. Some drones overhead watching from above. And in the midst of all the commotion send in a Delta team or something and let them finish 'em off lmao
