UC Berkeley Professor Calls for Speeding Up the Drone Arms Race

A new book co-authored by Cal law professor John Yoo argues that America should be first in developing the deadliest, smartest, most autonomous new killing machines.

Darwin BondGraham · Sep 26, 2017, 5:00 p.m.

Photo by Ed Ritger/Commonwealth Club/CC

Yoo is to speak Oct. 2 at the Commonwealth Club.

Leading up to this week, UC Berkeley was on edge due to a B-list of extremist pundits who were supposed to appear for a so-called "Free Speech Week" to grandstand far-right political ideas, like banning Muslim immigration to the United States. The alt-right speakers also were expected to once again bash UC Berkeley for allegedly quashing speech that isn't liberal.

But Cal is hardly an ivory tower of progressive thought being besieged by far-right thinkers from the outside. In fact, the school is host to more than a few professors working on extreme political and technological ideas. Take, for example, the legal scholar John Yoo.

Yoo is a faculty member at UC Berkeley's law school, where he holds the prestigious Emanuel S. Heller Professor of Law endowed chair. Back in the early 2000s, Yoo took temporary leave from Berkeley to join the George W. Bush administration. There, he wrote several legal memos arguing that people captured during the U.S. invasion of Afghanistan (now America's longest-running war) are not protected by the Geneva Conventions' ban on torture.

The U.S. military and CIA used Yoo's memos to justify so-called "enhanced interrogation methods," including waterboarding, stress positions, and sleep deprivation, against prisoners captured in Afghanistan, Iraq, and elsewhere. And when the Abu Ghraib scandal broke, it became clear that U.S. troops were raping and brutalizing captives in despicable and clearly illegal ways. Yoo's memos helped lead to this. (See "The Torture Professor," 5/14/2008.)

Despite being harshly criticized for his role in the torture scandal, Yoo was allowed to return to UC Berkeley where he's been diligently working ever since.

Now, Yoo has a new book, Striking Power: How Cyber, Robots, and Space Weapons Change the Rules for War, co-authored with Jeremy A. Rabkin, professor of law at the Antonin Scalia Law School at George Mason University, and he's been making the rounds of talk shows and forums to promote it. (On Oct. 2, Yoo is scheduled to speak on the topic at the Commonwealth Club in San Francisco.)

In Striking Power, Yoo and Rabkin argue the following: The United States shouldn't work with other nations to try to restrain the development and use of new cyber, robotic, artificially intelligent, and drone weaponry in warfare. Rather, America should be first in developing the deadliest, smartest, most autonomous new killing machines to hunt down its enemies. Any legal thinking about how to limit the spread or use of these weapons should come much later, after we can see what the actual impact of these weapons is.

"International law could not stop the spread of technological progress to the machines of war," Yoo and Rabkin write, about everything from crossbows to AK-47s. "This has been the lesson of history."

The book buttresses this argument by claiming these new weapons will be more precise, possibly limiting civilian casualties, though critics point out that U.S.-piloted drones have killed numerous civilians.

A sub-theme that runs through the book, and through much of Yoo's political thought, is that America too often fights with one hand tied behind its back, when its leaders should be breaking normative constraints to defend the West. Yoo approvingly quotes Winston Churchill: "I do not see why we should have all the disadvantages of being the gentleman while they have all the advantages of being the cad."

Just as Yoo tried to help America embrace its inner bully with regard to the use of torture, Striking Power sets out to help the United States further distance itself from efforts to limit the development and spread of new kinds of automated weapons, like artificially intelligent drones that could pick out and kill targets on their own, without a human pilot.

But Yoo's worry that the United States might fall behind in this new arms race doesn't hold up against the actual amount of military-supported research and development aimed at producing these new weapons. In fact, UC Berkeley is a hot spot for military-sponsored research into drones, artificial intelligence, and cyber weapons. For example, the dean of Cal's College of Engineering and several other prominent professors co-lead a Navy-funded project called "HUNT" that is creating "small expeditionary forces with light combat ships (LCS), high altitude long endurance (HALE) vehicles, tactical UAVs, and unmanned underwater vehicles (UUVs)" to collaborate with "war fighters" in 21st-century combat, according to the research group's website. This is precisely the kind of smart weaponry Yoo is calling on the United States to develop and use.

Of course, many of the researchers doing the fundamental science that could be diverted toward building new weapons aren't happy with Yoo's message. Stuart Russell, a renowned artificial intelligence researcher at UC Berkeley, said Yoo's ideas are contradictory and run against the mainstream of thinking in the scientific community that is most familiar with these new technologies.

"The authors are unequivocal in identifying proliferation of WMDs as a major threat to international peace and security," wrote Russell, referring to nuclear, chemical, and biological weapons, in an email to the Express. Russell thinks artificial intelligence and other new technologies have the same capacity to indiscriminately destroy as other WMDs. "Lethal autonomous weapons constitute a new and easily proliferated form of WMD, because autonomy means that a handful of humans can launch attacks with tens of millions of weapons, potentially wiping out entire nations or entire ethnic groups within a nation," he said. "Thus, the book's encouragement of an arms race in autonomous weapons seems self-contradictory at best."

Yoo did not respond to an email request for an interview.

Russell and most of his colleagues hope to see AI weaponry banned, and they've been circulating a letter advocating for an international treaty prohibiting "offensive autonomous weapons beyond meaningful human control."

Yoo, on the other hand, believes Russell and his colleagues are wishing for a politically impossible treaty. He seems to think that nations will rush to create these new killing machines, because they provide a new edge in war, and human nature is too dark to prevent this.

If Yoo's ideas prevail, we can look forward to a future where robots decide when, how, and whom to kill, and UC Berkeley's research, both political and scientific, will have made a dangerous contribution.