Frank Rieger Interviews Daniel Suarez: Swarming Killing Machines

A Predator drone over Kandahar air base in Afghanistan, January 29, 2010. Photo: dapd

He wants to warn of the effects that autonomous drones might have on democratic institutions. Frank Rieger talks to the author and programmer Daniel Suarez about his book "Kill Decision".


Permalink: https://www.faz.net/-gsi-733dk

Your new book, Kill Decision, dramatically opens a view into a future world where killer drone technology has proliferated so widely that it becomes practically impossible to determine who is behind an attack and what the attacker's real goals are. Could the result be described as the opening of Pandora's box, much as the Stuxnet attack has been described as a "fire at will" signal in the field of cyberwar?

I view drone warfare as the kinetic cousin of cyber warfare, in that they are both radically new, low-cost, low-risk methods of waging conflict. Notably, once the technological landscape was ready, cyber war was swiftly adopted - and not just by governments, but by transnational criminal organizations as well.

Autonomous combat drones will likely follow the same pattern, and quite soon since the technological groundwork has been laid for their arrival. Drones of insect-level intelligence are readily available and researchers are now working on combat robots of rat-like intelligence. With fifty nations actively pursuing drone technologies, these too will spread swiftly. As with all technologies whose time has come, I expect combat drones are here to stay.

Incidentally, cyber espionage is helping to facilitate drone proliferation by spreading top-secret designs around the world. A lot of technology is leaking out of Western networks to parts unknown, and the know-how and equipment required to manufacture such designs has also spread to low-cost labor centers worldwide through globalization. Don’t be surprised to see a lot of similar-looking drone designs loitering over battlefields. So yes, Pandora’s Box has indeed been opened. When it comes to high-tech, borders are increasingly meaningless - and yet when it comes to geopolitics, borders still matter a great deal. The tension between those two forces will cause serious problems in the 21st century.

What are the military and strategic implications of drones becoming the universal weapon of choice for hard-to-attribute attacks, available to nearly every nation state and subsequently also to mercenaries and "security contractors"?

Setting aside the social and political implications (which will be significant) the arrival of low-cost, difficult-to-trace, no-mercy robotic weapons could change not just the laws of war - but the very nature of human conflict. The age of ’anonymous war’ is upon us - it will be nearly impossible to determine who’s attacking you, even if you capture an attacker’s drone intact. The components were assembled in China? So what? Everything’s made in China, and most if not all of these components are dual-use - that is, they’re commonly used in consumer electronics and process control equipment - making arms control more difficult. Anonymity could nullify an adversary’s superior firepower, since they won’t know whom to target in retaliation. And that might make attacks more likely as a foreign policy option for even relatively weak combatants.

You’re going to see a ’Cambrian explosion’ in drone variations in coming years - ranging widely in size and sophistication, from high-altitude supersonic platforms all the way down to disposable swarming killing machines. Funding for manned military aircraft will be drastically reduced in coming years as nations move toward unmanned (and in many cases autonomous or semi-autonomous) aircraft that will be able to out-fly, out-perform, and outlast their manned counterparts. No drone is going to ’black out’ in a 20-g turn, and neither is it going to suffer fatigue or lose alertness.

But what are the consequences for the societies that deploy these systems? What do autonomous fleets of combat machines do to the concentration of political power in a society? These are open questions. I think they are likely to strengthen the hand of authoritarian governments, which will be able to use these weapons in combination with ubiquitous surveillance to retain control even in the face of widespread popular dissent. Certainly robotic weapons will never refuse an order to fire upon civilians. That's why it's important that checks and balances, as well as transparency, are built into drone use from the outset; otherwise unseen and unaccountable concentrations of power will form. At present, none of those checks and balances are being implemented, and the targeting process is a black box - no transparency at all.

Let us speak about the social and political implications. I can think of several scenarios, some of which you also explore in Kill Decision. Assassinations become easy, draconian security measures, counter-drone-drones, futile attempts at technology restriction, surveillance systems that can increase coverage and intensity within minutes. What else do you think will happen inside societies with massive drone usage?

Your premise covers a lot of the physical manifestations of drone proliferation, but consider the *effect* of such an environment on society: it would be corrosive to democratic institutions. Why would one-person-one-vote continue if the powerful (whether government, corporate, criminal, or whomever) could cost-effectively and reliably use force against political opponents with little or no risk? Is there any time in all of human history when a people remained free or when democratic forms persisted if citizens could not credibly assert their rights? Power isn’t shared unless it must be, and unchecked proliferation of autonomous combat drones throughout society could seriously shift the balance of political power.

It would not require Terminator-like weapon systems to tilt this balance. Cheap, error-prone - but numerous - autonomous weapons could achieve the same result.

Currently, the U.S. military claims that every drone strike has a human "in the loop" who makes the final kill decision (and this is supported by recent leaks about Obama's direct involvement). However, the intelligence process needed to designate targets is clearly on a path towards more and more automation and reliance on algorithms. Some systems, like so-called loitering munitions, already react automatically to target signatures, such as radar emissions or pre-programmed visual profiles of vehicles like tanks. How great is the pressure to speed up the "kill loop" by minimizing human influence?

’Lethal autonomy’ is the term used to describe drones capable of making a kill decision without a human in the loop. I think it’s likely that, for political reasons, humans will remain in the ’kill decision’ loop in NATO militaries in the near-term (at least officially). However, there are powerful incentives pressing for lethal autonomy in drones. First the sheer volume of sensor data that needs to be analyzed is choking decision-makers. Drone surveillance video coursing through modern military networks has already outstripped the capacity of humans to view it all. The U.S. drone fleet flew 71 hours in 2004. That climbed to 25,000 hours by 2009, and the Pentagon estimates their drones flew 300,000 hours in 2011. Likewise, drones are about to acquire many more eyes. The Gorgon Stare and ARGUS projects could give each drone up to 65 independently-operated cameras, enabling surveillance of vast swaths of terrain - and creating yet more imagery for analysis. Thus, it will be drones that tell humans what to look at, not the other way around.

But more importantly, remotely operated combat drones have a critical weakness: against any reasonably sophisticated opponent they are vulnerable to electromagnetic signal jamming. An enemy can simply flood the radio spectrum to sever the control link. That's a powerful incentive to develop autonomous drones that pursue their missions in radio silence, without the need (or vulnerability) of following external instructions.

Add to these pressures the deployment of autonomous drones by geopolitical rivals or non-state actors who might not be so concerned about ethical issues. Lethally autonomous drones on the opposing side will argue for rapid deployment in response.

One last factor will encourage drone autonomy: deniability. If no signals are going to or coming from the drone, and both its hardware and firmware are commonly available, even a captured drone will reveal little or nothing to an enemy, providing the essential ingredient for anonymous war.

There are scattered reports about the use of mobile or satellite phone device IDs, which are transmitted to the network, as target designators in drone-based assassinations. Currently the targeting process seems to involve manually intercepting the phone call, locating the phone, and then firing rockets - airplane- or drone-based - at that location. The logical next step would be to program target phone IDs directly into a drone. How far away are we from this?

Technologically we're most likely there already - although I think the system will be more sophisticated than simply locating a cell phone's International Mobile Equipment Identity (IMEI). Like cyberwar munitions, several methods will be used to acquire targets. For example, an IMEI will only bring a drone to within fifty meters or so of a target phone. From there it will most likely home in on other identifiers (facial recognition, thermal signature, license plates, etc.). Successful implementation of such systems will be another matter entirely - but again, it depends on the sensibilities of those implementing the technology. A nation concerned about human rights and the rule of law would be very careful indeed in deploying automated systems for killing (as well it should be). However, rogue nations, criminal organizations, extremist groups, short-sighted leaders, and many others might not be overly concerned if autonomous drones make mistakes - especially if those systems can't be easily traced back to their owners. In fact, sowing chaos or undermining a government by highlighting its inability to protect the populace might be the very point of the attack.

Technology is no longer the obstacle to autonomous killing machines. It's the choice not to make mistakes or kill innocents that has prevented their use so far. But again, as the price point plummets and numerous groups get their hands on these weapons, they'll be less likely to exercise restraint. For example, narco-traffickers have already started using ultralight drone aircraft to smuggle narcotics across borders. From there, the technology might also be deployed offensively in drug wars to eliminate rivals, police, journalists, and politicians. And how many $10,000 drones might be sent to eliminate an obstacle to a billion-dollar business?

In many fields of robotics, self-learning systems like neural networks are eschewed because they are non-deterministic and might react unpredictably. Do you think this will be different in a military context, where "collateral damage" is accepted as a necessary evil and the definition of an enemy has already become "was in the targeted kill zone when the drone-fired rocket exploded"?

War tilts social norms on their head (one reason for its damaging psychological effects). However, the emotional distance autonomous drones provide might make indiscriminate killing easier. Once deployed, it will be very difficult for the public (or even commanders) to discern mistakes from intentional targeting. The soldiers who sent the drone might not be well versed in its internal design, merely in its operation or maintenance, and they might never see video from its gun cameras. Moreover, a drone's firmware is unlikely to be called to The Hague to face charges for war crimes. Who would be to blame if things go wrong? The drone manufacturer? The purchaser? The system operator? Such diffusion of blame has historically been a source of trouble. Would a firmware update be sufficient assurance that the 'problem' was solved?

Most national militaries seek to avoid civilian casualties wherever possible (due to political blowback if nothing else), but unless autonomous drones start killing friendly forces or civilians in significant numbers, I think errors will be tolerated, especially in secret conflicts. This is why checks and balances and reasonable transparency must be included in any drone defense program.

The current public reaction to drone warfare (especially in the US) seems to be largely positive. No friendly soldiers are put at risk, and abstract concepts like the sovereignty of foreign countries seem to be irrelevant. Do you expect this to change?

The American public's approval of drones is likely to change the day drones attack the U.S. Then you'll see one of two things: either the public demands international cooperation to root out illicit drone manufacturers and establish a legal framework for drone use in defense-only applications (unlikely), or (much more likely) an escalation - a free-for-all arms race of robotic weaponry. And unlike nuclear, chemical, or biological weapons, these weapons will almost undoubtedly be used - especially with the technology spreading on black markets. If cooler heads do not prevail and government leaders overreact, the loser will be civil society.

The technology needed to build drones seems to be unstoppable. Consumer toy drones already contain enough processing power to do image recognition. Stable autonomous flight can be achieved with the processors and sensors contained in today's smartphones. What do you propose to avoid a deterioration into a world like the one you describe in Kill Decision?

Civilian drones will have many beneficial uses (search & rescue, environmental monitoring, exploration, and much more). Drones and other robotics are indeed here to stay, and they will become cheaper, more ubiquitous, and more capable in pace with the general improvements in processing power, memory, and sensors.

What will curtail their misuse - whether by individuals or by the authorities? Transparency would be a good start - requiring not just individual citizens but also local and state governments and corporations to follow the same common-sense rules. Autonomous drones in particular will need to be regulated so that the misbehavior of a drone can be traced back to its owner. That matches the trajectory of every useful, potentially dangerous technology that's come along so far (the gun, the car, the airplane, etc.). One can picture a world where civilian drones are color-coded by their purpose (e.g., green for city-owned, blue for federally owned, red for privately owned) and must display tail numbers - as well as broadcasting a unique ID on some standard frequency. That ID could be keyed into a web registry to determine the drone's owner - and possibly cross-referenced through a Google-Maps-like interface to confirm its current location and historical geolocation data. If geolocation is possible for every cell phone, it should be possible for autonomous drones. In such a system, unmarked or incorrectly marked drones flitting about would be immediately suspicious. There might even be city-owned drones charged with spotting unregistered drones.
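The registry scheme described above can be sketched in a few lines of code. This is purely illustrative - no such registry exists, and every name here (DroneRecord, REGISTRY, lookup_drone) is a hypothetical assumption, not a real system or API:

```python
# Illustrative sketch of the hypothetical civilian drone registry
# described in the interview. All names are made up for this example.
from dataclasses import dataclass

# Color codes by ownership class, per the scheme suggested in the text.
COLOR_BY_OWNER = {"city": "green", "federal": "blue", "private": "red"}

@dataclass
class DroneRecord:
    broadcast_id: str   # unique ID broadcast on a standard frequency
    tail_number: str    # visible marking, like an aircraft tail number
    owner: str
    owner_class: str    # "city", "federal", or "private"

# A toy registry; a real one would be a public web-accessible database.
REGISTRY = {
    "DR-0001": DroneRecord("DR-0001", "N123AB", "City of Springfield", "city"),
    "DR-0002": DroneRecord("DR-0002", "N456CD", "Acme Delivery LLC", "private"),
}

def lookup_drone(broadcast_id: str) -> dict:
    """Resolve a broadcast ID to its owner; flag unknown IDs as suspicious."""
    record = REGISTRY.get(broadcast_id)
    if record is None:
        return {"broadcast_id": broadcast_id, "status": "unregistered: suspicious"}
    return {
        "broadcast_id": broadcast_id,
        "status": "registered",
        "owner": record.owner,
        "tail_number": record.tail_number,
        "expected_color": COLOR_BY_OWNER[record.owner_class],
    }
```

The point of the sketch is the accountability property: any observer who receives a drone's broadcast ID can resolve it to an owner and an expected color code, so a drone whose markings don't match the registry entry - or whose ID isn't registered at all - is immediately conspicuous.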

In a military context, autonomous drones are more problematic. I think the only thing that’s likely to prevent a hellish outcome will be international treaties and international cooperation. It sounds boring, but it’s worked before. Humans are, after all, social creatures. The proof? Despite world-destroying weapons, we’re still here.

I grew up during the Cold War, and at times it seemed inevitable that East and West would annihilate each other along with the entire human race. But we didn't. We pulled back from the brink, creating treaties against nuclear, biological, chemical, and blinding-laser weapons to which almost all of the world's nations have become party. And they've largely worked. Yes, certain nations still seek nuclear weapons, but the result has been ostracism by the entire global community. Very much the same thing needs to be done with lethally autonomous robotic weapons. While they can and will be built, there will need to be a realization that the automation of war is toxic for representative government. It centralizes power and decentralizes accountability. Furthermore, the most likely targets are the business and political leaders of countries themselves. That alone should encourage the nations of the world to cooperate with one another in codifying legal (defensive) uses of combat drones and regulating their manufacture and distribution. Since drones aren't going away, and lethally autonomous drones will appear sooner rather than later, it's important that law-abiding nations not only have drones themselves, but that they have the very best drones (if only to defend against other drones). Yet along with those drones, they must also develop the best legal framework to govern their just use.

Many people are surprised to hear that there's currently no international legal framework for the use of drones in warfare - much less for the use of fully autonomous robotic weaponry. If you think there should be, please support the initiatives of the International Committee for Robot Arms Control.

The semblance of stability during the Cold War was only possible because all sides were acutely aware of the intolerable consequences of any failure of nuclear deterrence. Strategists now seem to be learning that under the new realities of "cyberwar", where attribution is only possible if the attacking party volunteers it (as in the case of Stuxnet and Flame), the Cold War logic of deterrence based on mutual assured destruction does not really work. With drone warfare developing into a physical extension of cyberwar, it seems rather unlikely that a treaty framework can succeed before things become genuinely unpleasant. What strategic logic could make cooperation in the form of treaties attractive to the major players?

It's true that the mutual assured destruction of the Cold War does not apply in the case of autonomous drones. However, the chilling effect of robotic weapons on civil society should not be underestimated. A constant state of low-intensity, surgically precise war - where individuals perish in anonymous attacks for unknown reasons, and the best defense is to keep a low profile - is an outcome we must try to avoid. Effective robotic arms control will be a challenge, but it must be attempted. Repeatedly if necessary.

Given that world business and political leaders are likely targets for autonomous robotic weapons, I think an international treaty on robotic weapons will get signed. However, it's vital that this agreement specifically limit the use of robotic weapons on *individual citizens* - not just heads of state. This might seem obvious, but remember that these weapons will prove very tempting to certain leaders as a tool for solving domestic or international 'problems.' They will want to retain their ability to use them in secret. Combined with high-tech surveillance, robotic weapons could be highly effective in disrupting political protests or populist movements. And unlike world leaders, individual citizens will not be able to afford elaborate anti-drone countermeasures. Thus, the treaty must not amount merely to an agreement by leaders to refrain from using these weapons on each other while staying silent on whether they can be used against the general public. That would almost be worse than no treaty, since it would give the impression that the robotic weapons issue had been solved, when in reality it would establish insufficient boundaries.

Finally, we'll need to accept that these treaties - especially in their earlier incarnations - will be far from perfect. While autonomous drones don't threaten the stark, irradiated hellscape that thermonuclear war would have brought, the intolerable effects they can have on democratic institutions need to be publicized, so that if and when these symptoms start occurring, the public will have some idea of what's around the corner. It's at that point that I think we'll have the best chance for effective political action to properly contain the menace of robotic weapons.