US cyber-weapons exempt from “human judgment” requirement

"Just what do you think you're doing, Dave?"

As custom government malware becomes an increasingly common international weapon with real-world effects—breaking a centrifuge, shutting down a power grid, scrambling control systems—do we need legal limits on the automated decision-making of worms and rootkits? Do we, that is, need to keep a human in charge of their spread, or of when they attack? According to the US government, no we do not.

A recently issued Department of Defense directive, signed by Deputy Secretary of Defense Ashton Carter, sets military policy for the design and use of autonomous weapons systems in combat. The directive is intended to minimize "unintended engagements"—weapons systems attacking targets other than enemy forces, or causing collateral damage. But the directive specifically exempts autonomous cyber weapons.

Most weapon systems, the policy states, "shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force," regardless of whether the system is using lethal "kinetic" weapons or some form of non-lethal force. If bullets, rockets, or missiles are to be fired, tear gas is to be launched, or systems are to be jammed, a human needs to make the final decision on when they are used and at whom they are aimed.

But the policy explicitly exempts "autonomous or semi-autonomous cyberspace systems for cyberspace operations." And development efforts for those sorts of systems are now being pursued much more openly. For instance, on the same day the new directive was issued, the Defense Advanced Research Projects Agency (DARPA) solicited bids for "Plan X," an effort to create a "foundational cyberwarfare" capability that would allow the DOD to better monitor, exploit, and attack an enemy's network and computer systems. (A synopsis of DARPA's Plan X project is available as a PDF document.)

As part of the effort, DARPA is examining commercial tools used by security experts for penetration testing and hardening, including Metasploit and Immunity's Canvas, as well as a "mission runtime environment" that can run automated sets of attacks. And the focus is decidedly on "automation"—keeping "manual" human oversight and judgment in the equation is simply too slow when it comes to speeds measured in microseconds. It also doesn't scale. Here's how the Plan X overview puts it:

The current manual approach has defined the way cyber operations are conceived and would be conducted—as asynchronous actions. Manual processes provide no capacity for real-time assessment and adjustment to adapt to changing battlespace conditions. The current paradigm is a simple progression of plan, execute, plan, execute, plan, execute... however, if the process can be technologically optimized and the time-intensive requirements minimized, commanders will be able to leverage cyber capabilities in a more flexible manner, consistent with kinetic capabilities, to achieve real-time, synchronous effects in the cyber battlespace.

With systems like Plan X, planners will instead deploy attack libraries from a "play book," "similar to a football play book that contains specific plays developed for specific scenarios," according to the proposal. Those plays may contain "checkpoints" during mission execution where the code pauses for human input or direction, and the code will be built with checks on what sorts of things it can do without human direction. But most of the time, the code will make its own decisions while running in the wild. Though that's likely to cause spillover and unintended problems, the government is willing to live with them.
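The checkpoint idea in the proposal can be pictured as a gate in an otherwise automated execution loop: most plays run without intervention, but a few are flagged to pause for operator approval. Here is a minimal sketch of that pattern; all names (`Play`, `run_mission`, the example plays) are hypothetical illustrations, not anything from the Plan X documents:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Play:
    """One scripted action in a hypothetical play book."""
    name: str
    action: Callable[[], str]   # the automated step itself
    needs_human: bool = False   # checkpoint: pause for operator approval

def run_mission(plays: List[Play], approve: Callable[[str], bool]) -> List[str]:
    """Run plays in order; hold any checkpointed play the operator rejects."""
    log = []
    for play in plays:
        if play.needs_human and not approve(play.name):
            log.append(f"{play.name}: held at checkpoint")
            continue
        log.append(f"{play.name}: {play.action()}")
    return log

# Example: one fully automated step, one gated on human approval.
plays = [
    Play("scan", lambda: "mapped 12 hosts"),
    Play("exploit", lambda: "payload delivered", needs_human=True),
]
# Operator declines the "exploit" step, so only the scan runs.
print(run_mission(plays, approve=lambda name: name != "exploit"))
```

The design choice the directive leaves open is how many plays get `needs_human=True`: for a microsecond-scale exchange, the answer in practice may be close to none.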

Network and software attacks can potentially have the same sorts of effects as non-lethal, or even lethal, weapons, and they can spread well beyond the intended target. But the DOD is handling them under separate rules of engagement. Remember how Stuxnet managed to spread beyond Iranian nuclear research facilities? Such scenarios will likely become more common—soon.

It's not like you can expect your malware to phone home for permission before taking an action. Sure, there are lots of non-military bot farms out there that do wait for updated instructions, but if you are trying to infiltrate secured systems that aren't directly connected to the Internet, the time for such decisions is before you release the software into the wild. After it is released, you can't reasonably expect to have control over it any more. That does mean you'd better pay careful attention to testing and be wary of unintended consequences, but if we were to hobble our cyber warfare efforts with requirements that the malware keep phoning home for orders, we might as well not bother.

Author, you ask a silly question... sorry. A government like ours will do anything and everything it can to protect our country from attack. You can't limit that. Would you want to? Slow down our response? I hope not.

It's called "Governing Lethal Behavior in Autonomous Robots" and it's got a chapter on this sort of thing - indirect activities that could be lethal, like shutting down power grids. The whole book itself is a fascinating look at the ethics and technology behind unmanned weapons. It's a bit dense but if you can handle the writing style I highly recommend it.

(this is the Gov't green-lighting the Singularity) -- which is fine - hell - maybe the Machines can make better decisions than the Gov't can, because they just seem to be making stupid choices left and right.

Stuxnet did spread beyond Iran, but did it cause any real damage outside of the centrifuges? My understanding was that the payload was meticulously specific to Iran's uranium enrichment SCADA systems.

Naivety at its finest.

If anyone gets their hands on the code - and has half a brain's worth of a skillset to make modifications - how long do you really think it will take for that code to be turned against those who wielded it first?

(the US sure does have a lot of nuclear control systems - power grids - military installs that were ALL put "online" for cheaper management purposes. Anything online can be accessed from anywhere on the planet w/ a connection <and some 'elbow grease'>)

Author, you ask a silly question... sorry. A government like ours will do anything and everything it can to protect our country from attack. You can't limit that. Would you want to? Slow down our response? I hope not.

Not to be rude to you personally, but this is exactly the type of logic that, when taken to extremes like humans tend to do, leads to things like the holocaust. I am sure Hitler believed that he had to exterminate the Jews and others to protect his homeland. I am sure Stalin justified the death of 20 million people for exactly the same reason, "we have to save the homeland". There is a reason why they say that the path to hell is paved with good intentions.

On the subject of human judgement in war, I would like to say that removing humans from the equation will lead to mass destruction. The one thing I feel that prevents most wars is empathy. It takes a lot of hate to stand toe to toe with someone and kill them. (at least for most people) When we take that away, we lose our concept of the consequences of our actions. How much empathy is there in pushing a button? We as a species need to be making it harder to wage war, not easier. Maybe wars should be decided by gladiators or small squads instead of mass destruction.

I dunno. By the nature of cyberwarfare some attacks will have to be autonomous.

Take Stuxnet, as is mentioned often in preceding posts. It's necessarily a fire-and-forget technology. Similarly, even with externally accessible systems, it would be far more secure to get a program into the target network and let it work than to hold a link to the network open.

My read is that the presentation is somewhat sensationalist. The required autonomy is more akin to a GPS-guided bomb than a T-800.

Author, you ask a silly question... sorry. A government like ours will do anything and everything it can to protect our country from attack. You can't limit that. Would you want to? Slow down our response? I hope not.

If anyone gets their hands on the code - and has half a brain's worth of a skillset to make modifications - how long do you really think it will take for that code to be turned against those who wielded it first?

Other than the zero-day exploits, which are now known, there was nothing particularly groundbreaking about the code. It was designed to attack very specific hardware in very specific circumstances, and it's not going to be even trivially useful for anything else. The thing that made it useful was the detailed knowledge of the systems and their configuration. If an attacker has that information about another system, they can easily write their own attack code; they don't need Stuxnet. As with all software, it's not the idea or the algorithms that are the hard part; it's getting a complete working package together.
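That "very specific circumstances" targeting amounts to a fingerprint check baked into the payload before release: the destructive routine fires only on a host matching a precise target profile and stays dormant everywhere else, which is also how an autonomous weapon makes its decisions without phoning home. A minimal sketch of the idea, with entirely hypothetical profile fields and values (not Stuxnet's actual checks):

```python
import hashlib

# Hypothetical target profile: the payload activates only when every
# environmental check matches, so stray infections stay dormant.
TARGET_PROFILE = {
    "plc_model": "S7-315",
    "drive_frequency_hz": 1064,
    "drive_count": 33,
}

def fingerprint(env: dict) -> str:
    """Hash the fields we key on, in a fixed order."""
    blob = "|".join(str(env.get(k)) for k in sorted(TARGET_PROFILE))
    return hashlib.sha256(blob.encode()).hexdigest()

EXPECTED = fingerprint(TARGET_PROFILE)

def should_activate(env: dict) -> bool:
    """Inert unless the host exactly matches the target profile."""
    return fingerprint(env) == EXPECTED

# A random infected machine doesn't match, so the payload stays dormant.
print(should_activate({"plc_model": "S7-417",
                       "drive_frequency_hz": 50,
                       "drive_count": 2}))   # False
print(should_activate(TARGET_PROFILE))       # True
```

Comparing a hash rather than the raw values also means someone reading the binary learns what the code checks, but not what configuration it is looking for.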

What if one of these viruses or exploits rewrites the code in a weapons system, so that the weapons system no longer requires human input? (I.e., what if the US wrote a virus to make a country's defense systems attack themselves, then the virus gets into our own weapons systems in the process?)

What if one of these viruses or exploits rewrites the code in a weapons system, so that the weapons system no longer requires human input? (I.e., what if the US wrote a virus to make a country's defense systems attack themselves, then the virus gets into our own weapons systems in the process?)

That would be impossible, programs just don't work that way. Unless we are using the same software in our weapons as the target systems, there's zero chance of something like that happening accidentally. Computers don't do anything you don't tell them to do, and you have to explain, explicitly, exactly what you want them to do.

Eh, this is all probably not much of an issue from a policy standpoint. There are probably lots of circumstances where it is infeasible for "white hat" malware to be in "Mother May I?" contact with our military after launch, even if the time frame permitted it. And as has been said, the likely loss of human life from this (compared to regular, explosive-oriented warfare) seems very low.

While the whole "Skynet will kill us all" line of reasoning is a bit overblown, the big problem that I _do_ see here is the complete lack of accountability that will result when something (inevitably) goes wrong.

Even now with generals and politicians giving strict orders through a command chain, we have terrible accountability (Benghazi is only the most recent example in the long, sad history of screwups for which there are effectively no consequences for those who make decisions that cost lives). When you completely remove the human element, it's even worse: zero accountability. The line will be "the program was flawed; we fired the coder we think is responsible," and the politicians and other leaders will go about their business, secure in the knowledge that, whatever they do, they can just blame the computers or the "devs." The whole thing will be so opaque that there will be hardly anything for the media to report, either.

I'd be a bit concerned if I was a hospital IT admin or maybe even the provisioning / procurement agent for medical equipment. I'd start to worry about overlapping software and components that could fall prey to whatever someone tossed out the cyber-warfare window.

We already gave the stock market to the bots and they haven't screwed up once. Not once.


They did it 3 times

Exactly, these sorts of "optimised" autonomous systems have a track record of working really well until they hit an edge case and go catastrophically wrong. Having said that, a full-on global cyber-arms race is probably inevitable at this point.

Will it result in some sort of MAD-like standoff, with the two big players on the brink of cyber-war, ratcheting up the stakes until the one with the smaller economy and manufacturing base goes bankrupt?

My issue with this is that they don't take into account the effect on physical systems. OK, so automatons in "cyberspace" are exempt from requiring human input, but physical machines with machine logic are not. Where's the line? What if a piece of code in cyberspace is intended to hack into a foreign missile system and launch a missile? Since WE didn't make the physical system being launched, does that mean our system is exempt, even if the missile kills hundreds of civilians? The line needs to be drawn based on those effects, not on where the CODE exists.

It's not like you can expect your malware to phone home for permission before taking an action. Sure, there are lots of non-military bot farms out there that do wait for updated instructions, but if you are trying to infiltrate secured systems that aren't directly connected to the Internet, the time for such decisions is before you release the software into the wild. After it is released, you can't reasonably expect to have control over it any more. That does mean you'd better pay careful attention to testing and be wary of unintended consequences, but if we were to hobble our cyber warfare efforts with requirements that the malware keep phoning home for orders, we might as well not bother.

Did you seriously say we can't reasonably expect our Governments to have control over their weapons systems?

Yes, that thinking will make sense until the US sends out malware to target a Chinese rocket, but it spreads and blows up a NASA rocket.

Then we will reasonably expect our Government to have control over their weapons systems...

Ya know, people are going to keep joking about Skynet... until it's no longer a joke in some form or another. Humans have this wonderful ability to ignore future problems until they actually bite them in the ass and become current problems.

It's not like you can expect your malware to phone home for permission before taking an action. Sure, there are lots of non-military bot farms out there that do wait for updated instructions, but if you are trying to infiltrate secured systems that aren't directly connected to the Internet, the time for such decisions is before you release the software into the wild. After it is released, you can't reasonably expect to have control over it any more. That does mean you'd better pay careful attention to testing and be wary of unintended consequences, but if we were to hobble our cyber warfare efforts with requirements that the malware keep phoning home for orders, we might as well not bother.

I agree with you entirely... we shouldn't even bother... the federal government has proven that it has no qualms defining American citizens as a threat to the state, and as such, any such technology will likely be used to violate American citizens' right to privacy...