“Kill Decision is a fantastic techno-thriller,” wrote Alexander Rose, executive director of The Long Now Foundation. “As someone who has designed combat robots myself, I found the technology depicted both accurate and chilling.” Former Wired magazine editor-in-chief (and drone enthusiast) Chris Anderson added, “Suarez’s fiction is closer to reality than most people think.”

Here, an exclusive excerpt from Kill Decision:

McKinney gestured to the covered object in the center of the table. “What’s in the sack?”

“Something you should see. I didn’t want to alarm you until you were better.”

“I’ve been alarmed ever since I met you.”

“Okay, then…” He unfolded the burlap to reveal one of the black weaver-drone quadcopters that had attacked them in Colorado.

A slightly irrational fear gripped her, even though the thing was clearly dead — damaged and missing half its rotors. As a scientist, she found irrational fears infuriating, so she tamped this one down and leaned forward to look at the drone.

The core of it looked mostly intact, although none of the rotors at its four corners was still whole. The spike-like feet protruded menacingly, sharpened like metal thorns.

“We managed to reconstruct this one by cannibalizing parts from the two that got into the plane cabin.” He picked up the lightweight device. “It wasn’t difficult. I get the feeling these were meant to be assembled by semiskilled workers. They’re modular, cheap. Mostly dual-use off-the-shelf parts. Circuit boards. Memory chips. Batteries. Optical sensors.”

She extended her hand, and he passed the dead drone to her. McKinney’s curiosity had already bested her anxiety, and she peered into its recesses, rotating it around. The broken propellers flopped around at the ends of wires. Her nose caught the peppery scent she remembered from the Colorado swarm. “There’s that smell again. Like cayenne pepper. I’d like to know the chemical composition.”

Odin nodded. “Mouse knows a few local chemists. Ex-cartel people. I’ll see if he can get it analyzed.”

She kept sniffing and traced it to nozzles next to a row of silvery capsules in the frame. They looked like the nitrous oxide cartridges used for whipped cream or the CO2 propellant in paintball guns. “Four capsules. Like the chemical glands of a weaver. Mixing them in varying proportions to communicate different messages. That would match ant behavior. It’s how they lay down a pheromone matrix.”

“So they were leaving a trail.”

“It’s probably how they incite each other to attack. Each new arrival at a scene reinforces the attack message by spraying more pheromone. But that also means they’d need some way to read each other’s chemical pheromones.”

“Like an electronic nose.”

“Right.” McKinney ran her finger along one of four forward-facing wire antennas that were studded with tiny microchips.

Odin leaned in beside her, peering closely.

“Weaver ants — ants in general, actually — have dozens of sensilla on their antennas. They detect all sorts of things: chemical traces, heat, humidity. If these devices are running my weaver model, then they’d respond to numeric pheromonal input values. It’s virtual in my simulations, but here it could be a concentration measurement received from a hardware sensor. Weavers also transmit information to each other by touch, vibration.” She ran her fingers along each antenna, noting half a dozen small nodules.

“They transmit data to each other physically as well?”

McKinney nodded. “It allows them to move information through the swarm independent of the pheromones.”

“And we wouldn’t be able to jam that communication with radio countermeasures either.”

“I suppose that’s true of both the chemical and touch communication. But also, I was wondering how they found us — how they detected we were in the house, and where.”

“I was wondering about that too. These stupid little bots outperformed any system I’ve ever seen. We were wearing cool suits to hide our thermal signature and AD armor to conceal our human shape and faces. That fooled the sniper stations in the hills, but not these bastards. I was thinking maybe they reacted to noise or movement.”

McKinney shook her head. “If they’re using the weaver model I created, they’d focus on organic compound sensors. Ants have receptors in their antennas that help them identify food. Maybe —”

“You’re saying they smelled us?”

“Or tasted us.” She sighed. “I know it sounds silly, but that’s part of what weavers do when they swarm. They detect food sources by trace chemicals — in much the same way as they read each other’s pheromone messages.”

“There’s a technology — well known in counterterrorism work, used by customs and Homeland Security. It’s called C-Scout MAS. We used it while hunting for high-value insurgent targets.”

She examined one of the dead drone’s antennas. “I don’t know it.”

“It’s an electronic nose that sniffs the air to detect human presence. Apparently there are fifteen chemicals in the breath we exhale that give us away — things like acetone, pentane, hexane, isoprene, benzene, heptane, alpha-pinene. You get the idea. They appear in a specific ratio wherever people are breathing — the more concentrated it is, the closer people are or the more people there are.”

“You’re saying this technology is currently in use?”

“The detectors were on a microchip.”

She manipulated the articulated antennas on the dead drone. “Then maybe these drones find people in their vicinity by the gases we exhale — just like weavers would detect food. That would actually work well with my model. They could be coded to identify whatever they’re hunting by chemical signature — moving toward greater concentrations of the target scent and away from decreasing concentrations. That relatively simple algorithm is how my model works, and it manifests itself as complex hunting behavior when scaled up to a swarm of stigmergic agents.”
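(A note for the technically curious, stepping outside the excerpt for a moment: the simple rule McKinney describes, moving toward rising concentrations of a target scent while reinforcing the trail as you go, can be sketched in a few lines of Python. The scent field, the reinforcement constant and the `hunt` function below are invented for this illustration; none of it comes from the novel.)

```python
def hunt(scent, start, steps=50):
    """One stigmergic agent on a 1-D scent field: at every step it
    moves to whichever adjacent cell smells strongest, then deposits
    a little scent of its own, reinforcing the trail for the swarm."""
    pos = start
    for _ in range(steps):
        # Candidate moves: stay put, step left, or step right.
        candidates = [p for p in (pos - 1, pos, pos + 1) if 0 <= p < len(scent)]
        # Gradient-following: greedily climb toward higher concentration.
        pos = max(candidates, key=lambda p: scent[p])
        scent[pos] += 0.1  # stigmergy: each visit strengthens the trail
    return pos

# A field whose concentration peaks at index 7 (the "target" exhaling).
field = [abs(7 - i) * -1.0 for i in range(10)]
print(hunt(field, start=0))  # the agent climbs the gradient and settles at 7
```

Scaled up to many agents sharing one scent field, this same greedy rule yields the trail-reinforcing, converging hunt behavior McKinney attributes to her weaver model.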

“We were breathing fast. Keyed up. We must have seemed like glowing neon signs to these things.”

McKinney was already looking more closely at the drone’s innards.

The drone had an aluminum tube frame, in the center of which was a wire box acting as ribs protecting the core. There was a stack of computer boards there, vision sensors all around, thin antennas — both leading and trailing — and wiring. Then along both sides were what looked to be steel cylinders — four in all.

Odin tapped them. “Zip guns. These were thirty-eights. They slide in on tracks, so it looks like they can have various weapon loads. The other one had .410 shotgun shells.”

She examined what looked to be ports in the back. Charging sockets? There were also LED lights, all dead, but curious nonetheless. “If these run on my model, an appropriate number of workers would be ‘feeding’ the others. With weavers they pass along honeydew — liquid food. Here, they probably pass along electricity, battery power. There seem to be electromechanical analogs for all the inputs and outputs of weaver swarm intelligence manifested in these things.”

She tossed it back onto the table in disgust. “But it looks like a toy. An evil toy designed by some sick, twisted —”

“Those ‘toys’ nearly killed all of us, and if we hadn’t escaped, they would have. Lalenia pulled ten bullets out of our team, and that’s with body armor on.” He rewrapped the drone in its burlap shroud. “These things could be churned out of just about any contract factory in the industrialized world. Shipped anywhere by the thousands — just like toys.”

She looked up at him. “Oh, it’s worse than that. Those inputs and outputs — the stimuli and the response — they can take just about any morphology. These zip guns could just as easily be missiles. Those tiny rotors just as easily jet turbines.”

He narrowed his eyes at her.

“Ants are what’s called a polymorphic species — they have various caste groups that can differ widely in size. For example, Pheidologeton diversus — the marauder ant — has supermajor warriors that are five hundred times the mass of one of their minor workers. And yet they are the same species and operate with the same brain — and belong to the same colony.”

“You’re saying these things could be easily scaled up using the same software brain.”

She gestured to the dead drone on the table. “I’m saying this might just have been a low-cost test version. A prototype. They could easily be made bigger.”

He contemplated this news. “Which means they will be. We’ll need to take action before that happens…”

Daniel Suarez worries what will happen if we continue to use drones in life-or-death decisions. Photo: James Duncan Davidson

Science-fiction author Daniel Suarez spoke about drones this summer at TEDGlobal 2013. In his talk “The kill decision shouldn’t belong to a robot,” he discussed the rise of drones, automated weapons and AI-powered intelligence-gathering tools. Here, he goes further, describing no less than a coming “automation revolution.”

Drones are in the news these days. More than any other technology, they capture the zeitgeist of the early 21st century. The controversy over drones in combat and the proposed use of drones by law enforcement might make headlines, but there’s also growing concern as remotely piloted drones transition to semi- and fully autonomous roles — operating without direct human supervision.

Of course, not all drones are destined for war or surveillance. Advocates see a positive future for mobile robotics in precision agriculture, search-and-rescue, environmental monitoring, logistics, hazardous-material handling and mapping — uses limited only by human imagination. Then there are the folks who just want to play with robots, the drone enthusiasts. These entrepreneurs and hobbyists point to a future that’s not so frightening — one where humans figure out how to incorporate autonomous machines into our society without much drama.

But take the temperature of the general public on the subject of autonomous vehicles, and you will find unease. That’s one reason why the robotics industry avoids using the “D-word” and encourages newer, less emotionally freighted terms like “Unmanned Aircraft Systems.”

Yet, I would argue that autonomous drones and cars aren’t really where most of the automation is occurring. They’re just the physical manifestation of a much larger trend — the tip of a technological iceberg passing beyond humanity’s bow, and one that we’re rightly uneasy about.

What we’re concerned about is an automation revolution — every bit as transformational as the industrial revolution before it. The agent of this change is narrow AI software [1], a tool that can be leveraged to raise individual human whim to economies of scale. Technology is, after all, merely the physical manifestation of the human will, and when it comes to AI agents, that human can be digitally magnified a billionfold. Whether you’re a high-frequency Wall Street trader, a malware author, a medical researcher, a marketer, an astronomer, a dictator or a drone builder, narrow AI is the workhorse of the automation age. It is narrow AI software that imbues silicon with agency. And such narrow AI agents are increasingly everywhere in our society — a situation that risks tilting centuries-old human social arrangements on their head.

And still, it’s largely drones that get the press. Drones are a lightning rod for the automation revolution because they’re its most visible manifestation. They can be pointed to overhead or spotted on the highways, and their growing use in various settings is easy to notice. Meanwhile, their virtual kin — innumerable and relatively invisible software agents — already manage broad swaths of human society as stand-ins for human actors. That transition has gone almost unnoticed. Software algorithms now handle our stock market trading, logistics, electrical power grid management, banking, communications, medical diagnosis, mapping analytics and much more. The human logic behind such decision-making has been codified into algorithms that greatly increase speed and efficiency.

That’s why there’s no turning back.

_______________

We are at the dawn of an Automation Age, and narrow (or weak) AI software agents are its hallmark. Narrow AI is distinctly different from the sort of human-level (or strong) AI we know from science fiction. Your narrow AI-powered GPS unit doesn’t contemplate the why of your proposed trip to Reseda. It simply creates the most efficient route to get there.

That efficiency is marvelous when it instantly delivers search results or alerts you to fraudulent use of your credit card, but not so marvelous when it makes widespread surveillance not only practical but downright cost-effective. Or when it creates a million-computer-strong botnet controlled by one or two people. So while the Singularity [2] will no doubt be an issue of concern to coming generations, for now we’ve got more pressing concerns — namely, how we’re going to grapple with highly capable, cost-effective, scalable, and yet relatively dumb narrow-AI automation proliferating all around us.

What will be the outcome for human society as it competes with sub-Singularity AI? That’s what the philosophers, technologists, sociologists, engineers, artists, politicians, activists, economists and many more must ponder and debate in coming years.

Automation — both in its cyber and robotics incarnations — is not going away. It is now a permanent fixture of our civilization. And even if modern industrial society were to break down, the survivors would be frantically working to get their computer networks up and running again as soon as possible. Their robots, too. Our machines are simply that useful. No — our narrow AI friends are here to stay.

Instead, just as our ancestors domesticated vicious dogs and dangerous large herbivores, so too will we humans need to safely domesticate narrow AI organisms. Doing so will involve building social structures to resist the power-centralizing effect of such easily multiplied, mindlessly obedient and inscrutable digital constructs. Failure to adjust to the rapid spread of narrow AI throughout society will come with stark consequences.

And those societies that attempt to resist progress, rejecting narrow AI automation, will be at a competitive disadvantage against those who successfully use it. On the other hand, those societies that implement automation without careful consideration of the consequences will be building an ecosystem for technological domination by the very few — because the same dynamic by which AIs increase productivity through centralized control can be used to undermine the checks and balances of democratic social systems.

But that is the charge of our times: To compete in the future we must learn to ingest increasing levels of software automation into the corpus of democratic life without fundamentally distorting the body politic. Previous generations had their challenges, and this appears to be ours.

And while we humans might not process facts with the swift precision of an optimized algorithm, one thing at which we excel is adaptation. And the sooner we understand the challenge presented to us by the automation age, the sooner we can start adapting to it.

_______________

[1] Narrow (or weak) AI can be defined as an information system designed to solve specific, reasonably well-defined problems. Whereas generalized (or strong) AI is an information system designed to autonomously learn new tasks and adapt to changing environments, narrow AI is unlikely ever to lead to human-level artificial intelligence because the tasks to which it is put are greatly simplified models of the real world.

[2] For the Amish among us, the Singularity is “… a theoretical moment in time when artificial intelligence will have progressed to the point of a greater-than-human intelligence that will ‘radically change human civilization, and perhaps even human nature.’” From Eden, Amnon; Moor, James; Søraker, Johnny; Steinhart, Eric, eds. (2013). Singularity Hypotheses: A Scientific and Philosophical Assessment. Springer. p. 1.

Drones: will they save us or destroy us?

This week, we’ll be taking a deep dive into a provocative topic: drones. For all the rhetoric, you might think that this is a zero-sum game: Drones will either destroy the world, or they’ll save it. The truth, of course, is that, well, they’re set to do both. The military is making extraordinary advances, while the same technology is being harnessed and applied for life- and planet-saving reasons, too. The key is for us to be careful and thoughtful about it all, and not blindly stumble into dystopia despite ourselves.

Starting tomorrow, we’ll be publishing three new drone-related TED Talks. First up is Lian Pin Koh, who uses drones for nature conservation. We also have a talk from TEDxMidAtlantic showing how Chad Jenkins and Henry Evans collaborated to help the latter navigate life after a brain-stem stroke by means of a robot he can operate solely through thought. Finally, a talk by Andreas Raptopoulos, who wants to use drones to serve the 1 billion people on Earth who have no access to all-season roads.

Those are all pretty life-affirming uses of drones, to be sure. Representing the dark side, or as they might put it, “the real world,” we’ll have essays from some of our previous speakers to present an alternate point of view. Science-fiction writer Daniel Suarez memorably details one potential outcome of autonomous drones in his book, Kill Decision, and he’s written an incredible, thought-provoking essay about our entry into what he describes as the “automation age.” P. W. Singer, director of the Center for 21st Century Security and Intelligence at the Brookings Institution, revisits the topic of drones in a Q&A with Matt Power (author himself of an excellent GQ feature, Confessions of a Drone Warrior). And… well… much more. To whet your appetite, we created the trailer above, featuring just some of the speakers who’ve touched on the topic of drones from the TED stage.

The TEDGlobal translator contingent — part of a group of more than 9,000 volunteer translators in 101 languages. Photo: Ryan Lash

“I write sci-fi novels, because if I wrote a white paper nobody would read it,” Daniel Suarez told a panel of translators at TEDGlobal 2013. The science-fiction author and drone activist was taking questions during his Skype Open Translation session, in which he described a dystopic — and all too believable — future dominated by autonomous lethal drones. His audience sat on the edge of their seats (or beanbags), bursting with questions. For instance, German translator Philipp Boing asked why he used science fiction as a medium to warn about such a dire vision of the future. Avoiding obscurity was a pretty solid answer.

During breaks between talk sessions throughout the week, a curated panel of TEDGlobal speakers and TED Talk translators — appearing in person and via videoconference — discussed topics like drone warfare, cultural identity, humor and guerrilla urban development. While the translators and speakers did not always directly address the topic of translation itself, the theme remained a powerful undercurrent. And many of the translators’ questions for the speakers shared a theme, too. One favorite: “How will you (or I) carry your ideas off of the TED stage?”

Just as there is no such thing as a definitive translation, none of the speakers claimed to have a singular answer. For Suarez, fiction was one relatable way to convey his message. But he also works with advocacy organizations, hypothesizes legal frameworks, and models an open-source “immune system” designed to allow citizens to monitor rogue drones.

We know that ideas, predictions and solutions for the future come from a lucky intersection of what we know and observe with our informed imagination. When Teddy Cruz observed how Tijuana residents retrofitted their generic, developer-built bungalows, he saw a density of social and economic interactions that’s missing in certain sprawling, oil- and water-guzzling American cities. Swedish translator Matti Jaaro asked him how it might be possible to reinvigorate a sense of ownership among urban dwellers, rich or poor. Cruz answered that not only did he see in these developing-world neighborhoods a model to rein in sprawl, but also an opportunity to “establish a social platform through a kind of urban pedagogy.” In other words: reinvent the negative connotation of “slums” and invest in parks and social spaces in neighborhoods that are already vibrant, but lack a voice.

Translators are key to this process of reinventing assumptions and giving a voice to the unheard. They have first-hand familiarity with the power of language to generate ideas — for good or bad. Translator Katia Demirtzoglou, talking with artist Hetain Patel about language’s effect on cultural identity, described the unique brand of humor her multilingual family had developed as a result of rapidly switching among German, English and Turkish. “Others often don’t understand the jokes between us,” she said. Yet.

Technology-thriller author Daniel Suarez, most recently of the book Kill Decision, has been thinking about drones for some time. But, standing onstage at TEDGlobal 2013, he says he’s not here to talk fiction. “I’m here to talk about very real autonomous combat drones,” he says, adding that he doesn’t mean remotely piloted drones such as the Predator or Reaper, where a human being still makes combat decisions. Instead, he’s talking about “fully autonomous robotic weapons which make lethal decisions about people on their own.” The technical term for this, he explains, is “lethal autonomy,” and the fact that there’s a technical term for it should tell us this is something worth minding.

Photo: James Duncan Davidson

“As we migrate lethal decision-making from humans to software, we risk not only taking the humanity out of war but also changing our social landscape entirely,” says Suarez. “The way humans resolve conflict shapes our social landscape.” He spins through a quick history lesson of combat innovations, from the armor worn by a knight on horseback, to gunpowder and the cannon, to the nation-state, by which time leaders were forced both to rely on and share power with the people. Autonomous robotic weapons, he says, are just such a step forward. “But by requiring very few people to go to war, they risk re-centralizing power into very few hands, possibly reversing a five-century trend towards democracy.” Knowing this, we must take decisive steps to preserve our democratic institutions — and soon. Suarez details three powerful factors driving the ever-faster development of drones:

1. Visual Overload

In 2004, the U.S. drone fleet produced 71 hours of video surveillance for analysis. By 2011, that figure was 300,000 hours annually, and that number is only going to increase. Cameras such as the Gorgon Stare produce so much footage no human could possibly review it all. “That means we’ll have to program visual intelligence software to review it; and that means very soon drones will tell humans what to look at, not the other way around,” says Suarez. You can feel the weight of this thought sink in around the room.

2. Electronic Warfare

In 2011, the GPS signal on a U.S. RQ-170 Sentinel drone was confused by a “spoofing attack,” and the aircraft was captured. Moving forward, drones will be programmed so that such interference cannot happen. “They will know their objective and they will react to circumstances without human guidance,” says Suarez. Drones won’t rely on outside influence, in other words, but will make their own decisions.

3. Plausible Deniability

In a world where cyber-espionage is not confined to science fiction, who knows who does what where, or who knocked off which company, when. “Sifting through the wreckage of a suicide drone attack, it’ll be very difficult to say who sent that weapon,” says Suarez soberly, to glum murmurs from the audience. “This raises the very real possibility of anonymous war. This could tilt the geopolitical balance on its head and make it very difficult for a nation to turn its firepower against an attacker. That could shift the balance away from defense and toward offense, making military action a viable option for not only small nations but for private enterprise, powerful individuals. It could create a landscape of rival warlords, undermining the rule of law and civil society.” Gulp.

Suarez isn’t done with the bad news for those living in the developed world. Not only do they have no advantage over those in developing nations, they might actually be at a disadvantage. Big data, he says, makes those of us living in the west vulnerable, perfect targets for autonomous weapons. Where a marketer might use data to send you personalized product samples or services, a repressive government could use it to eliminate political opposition. “Popular movements agitating for change could be detected early and their leaders eliminated before their ideas achieve critical mass,” says Suarez. You can hear that proverbial pin drop; the audience is both spellbound and open-mouthed.

Photo: James Duncan Davidson

Now the call to action. “We need an international treaty on robotic weapons. In particular, a ban on the development and deployment of robotic weapons,” says Suarez, pointing out that treaties already exist for biological and nuclear weapons, and robotic weapons might well be just as dangerous as those. (Do also see the Campaign to Stop Killer Robots, sponsored by former TED speaker Jody Williams.) “We need an international legal framework for robotic weapons, and we need it before there is a devastating attack or incident which causes nations to rush to adopt weapons before thinking through the consequences.”

The danger is that drones “concentrate too much power in too few hands. They would imperil democracy itself,” he says. “No robot should have an expectation of privacy in a public place.” Instead, they should be stamped with an identifying marker in the factory. And an international treaty would help us to take advantage of any benefits from autonomous vehicles, while preserving civil society. “Let’s not succumb to the temptation of automated war,” he concludes. “Let’s make sure killer robots remain fiction.”

A close up look at a quadcopter, from speaker Raffaello D’Andrea’s Flying Machines demonstration lab. Photo: James Duncan Davidson

When it comes to drones, a lot of ideas float in the ether. Technologists see potential for flying machines to help us in all sorts of unexpected ways, governments take measures that seem hard and fixed, and meanwhile the media oversimplifies and dramatizes the issue. The conversation on drones is ever changing — especially with President Obama’s recent announcement that Washington would be stepping back from its use of drones, and with U.N. expert Christof Heyns calling for a collective rethinking of them. Here at TEDGlobal, the speakers in session 2, “Those Flying Things,” face the issue of what can be done — the good, the bad, the hopeful — with these flying robots.

Here are the speakers who took the stage in this second session of TEDGlobal 2013. Click on their name to read a recap of their talk:

Raffaello D’Andrea explores the possibilities of autonomous technology by collaborating with artists, architects and engineers.

Blaise Agüera y Arcas is a Distinguished Engineer at Microsoft. His team works on augmented reality, mapping, wearable computing and natural user interfaces. He was the co-creator of Photosynth, software that assembles photos into 3D environments.

Author Daniel Suarez says his book about drones is a “cautionary tale.” He’ll explain what that means at TEDGlobal 2013.

Daniel Suarez is a former systems consultant turned novelist who will speak in session two at TEDGlobal 2013. Officially subtitled “Those Flying Things,” this is unofficially known as the drone session, and it will feature various speakers tackling the topic of unmanned aerial vehicles. For his part, Suarez will expound on some of the themes he examines in his most recent book, Kill Decision, a fictional romp through a world in which swarms of autonomous drones sink ships and systematically hunt and kill people. Here’s the most horrifying thing of all: it all sounds entirely plausible.

Wanting to find out just how scared we need to be, we gave Suarez a call. An edited version of our conversation follows.

So first things first. What’s up with Kill Decision? It’s terrifying!

It’s a cautionary tale! I want to get people to pay attention. I don’t want to scare them. Honestly. I look at what I do as looking over the horizon to spot icebergs. It’s not that I don’t like technology — after all, I spent 20 years as a systems analyst. But it’s all in the way we implement it. This was to suggest something of a course correction.

It all sounds terribly realistic, including detailed descriptions of drones and systems which do really exist in the world already. What’s your research process for a book like this?

Typically I’ll start with a subject that interests me, so with Kill Decision that was lethal autonomy. I’ll read about the state of the science in the field, so I read two dozen books on various things including swarm intelligence, robotics, military policy, quadrennial reviews of the Pentagon, and so on. I want to know: what’s the collective thinking, what’s the reality, and then what’s just over the horizon. I wanted to focus on robotic weapons that were possible, and after reading a lot about swarm intelligence I became fascinated with robotics and different forms of intelligence in the biological world. We might have the largest brain, but there are other creatures and strategies that are more effective in certain circumstances. If you put the brain of a weaver ant into a robot with weapons, and you make many of them, simulating their society algorithmically seems very possible. And because it’s possible, I want people to be aware of it. To be clear, I emphatically don’t want us to do it! But that’s a great rule for fiction. I get to spin scenarios and ask “what if,” and hopefully people learn new things and get to share my excitement in the research.

Many of the characters in your book are academics, deeply immersed in the world of research. Yet their work is co-opted by people who want to harness that insight for deeply selfish and entirely nefarious means. Should we worry more about the connection between academia and business or policy?

It really is up to society to decide how that goes. A scientist pursues science. Society has to ingest it and figure out what to do with it. In my view an open exchange of views is better than “you should never look into this.” I’m not into that. If you keep things hidden, that doesn’t stop things from happening. Open science helps to protect us more than secrecy ever would.

But you can’t put the autonomous killing machine genie back in the bottle, can you? How can you ensure that the bad guys don’t win?

There is a long list of technologies that should have rightly annihilated us, but we’ve created a society that’s more advanced than ever. That’s what gives me optimism. We’ve created international treaties for things like biological and nuclear weapons. It’s not perfect, but those are world-destroying weapons and the reason the world has not been destroyed is that humans in the main don’t want to destroy the world. That’s why you distribute power. Democracy is such a great thing, as ugly as it can be at times. So we have to come to a consensus about what to do with this new technology and drag it into the public square for a big public debate. Yes, drones are not going away. Biological weapons and nukes are still here too — and so are we. I’m hopeful that as long as we take the time to understand the issues and have honest debate we will be able to incorporate unarmed autonomous vehicles into our daily lives without having robotic weapons flying around the place.

Are you going to scare everyone silly at TEDGlobal?

I don’t want to be all doom and gloom, I promise! I’m excited to be there in general and I’m eager to hear everything everyone else has to say on this topic.