Earlier this month, it appeared that international Go champion Lee Sedol had become the latest uber-intellect crushed by the onslaught of artificial intelligence. Three games into their five-game matchup, Google’s A.I. program AlphaGo was undefeated. This story of machine victorious over man evoked another scene from 20 years earlier, when a wounded and confused Garry Kasparov lost an epochal chess showdown against IBM’s Deep Blue, signifying the end of human supremacy on the chessboard.



But many argued that Go was different. It is a dramatically more complicated game—a more human game—whose labyrinth could not be exhaustively mapped by a computer. Yet in the first three games, AlphaGo bludgeoned Lee with a calculating efficiency that mystified the 33-year-old Korean. Then in Game 4, Lee responded to the challenge of artificial intelligence with new tactics. He attacked AlphaGo, aggressively sought to hem it in, and in a transcendent moment of genius laid down a lone white stone that one Go champion dramatically called the “hand of god.” He won.

Policymakers in Washington could learn something from Lee’s agile response to the evolving challenges posed by the artificial-intelligence revolution. Indeed, they’ll have to. Artificial intelligence is on its way to ubiquity, and we’re not ready for it. Already it has entered the landscape of the physical world in delightful and dangerous new ways, with Google leading the charge in many different industries. Yet policymakers seem trapped in the regulatory frameworks of the 20th century. In two of the most prominent A.I.-linked industries, autonomous vehicles and drones, current legal regimes are already insufficient. Both also pose serious ethical quandaries, as well as social and economic challenges, that can only be met by Washington.


A good example of this tension came last week as leaders in the autonomous-vehicles industry testified at a Senate hearing. The testimony showcased both the diversity of the concerns associated with autonomous vehicles and Washington’s tenuous grasp on solutions.

Chris Urmson, head of Google’s self-driving car project, fielded questions from Sens. John Thune, Cory Booker, and others on the future of autonomous cars. Practical issues of driver’s licenses, safety, and liability were a large part of the discussion. But so were regulatory desiderata, such as the question of whether autonomous vehicles need a button for high-beam headlights. Sen. Steve Daines of Montana explained the safety concerns of his constituents succinctly: “It’s not just deer, it’s moose.”


Duke University’s Missy Cummings voiced concerns about issues of data and privacy. Although she is a roboticist, a pioneer in the field, and a strong believer in the technology’s ultimate potential, she told the assembled politicians that autonomous vehicles were “absolutely not ready for widespread deployment, and certainly not ready for humans to be completely taken out of the driver's seat.” Industry witnesses disagreed.

The senators questioned the panel about how cars would decide whom to strike in an inevitable lose-lose situation. Urmson gave a troublingly blunt answer to this ethical quandary, stating simply that there was no good answer. The best we could hope for would be transparency in how these decisions were made. But at hearings held by the California Department of Motor Vehicles earlier this year, another Google programmer told state officials that the DMV could not possibly understand the intricacies of Google’s control systems and should not try to.

Transparency, of course, isn’t necessarily in Google’s economic interest. A.I. algorithms—an area in which Google leads the world—may ultimately be the true moneymakers of the industry, just as Bill Gates’ Windows operating system was for the personal computer market. If that is the case, transparency of algorithms may threaten the profitability of Google’s business.

The biggest issue at stake at last week’s hearing was whether the federal government would issue rules that superseded the “patchwork” of state regulations (led by California) that are growing up around the industry. The emerging A.I. industry’s clear preference is for Washington to hold the reins rather than state officials in Sacramento, California, or Lansing, Michigan, or Albany, New York. But the threat is that sophisticated lobbyists in Washington will win legislation that tramples over personal privacy or public safety.

The conversation has progressed a bit further with respect to unmanned aerial vehicles, or drones—another sector that will be inextricably linked with A.I. Part of the reason is that in many applications the challenge is less acute for drones. Drones will often operate far away from human controllers—and they don’t need roads. This means that they pose fewer ethical quandaries in terms of potential collisions and offer clearer opportunities to protect human lives (such as searching burning buildings for victims or entering dangerously radioactive areas). Their ability to make A.I.-based decisions will be critical to performing these roles.

Exercising caution, the Federal Aviation Administration has so far required that a human operator keep a drone within her line of sight at all times. But it has begun exploring the possibility of remote operations through its Pathfinders initiative, with partners that include CNN (which wants to operate news camera drones in urban areas) and BNSF Railway (which wants to inspect rail infrastructure over thousands of miles).

Can A.I. solve these problems? Maybe: The drone manufacturer DJI has just released the Phantom 4 with integrated A.I. systems that can keep it from crashing into people, cars, and buildings. Regulators will need to develop new regimes for testing these capabilities to ensure reliability.

But the prospect of roving drones with cameras and artificial intelligence also raises important privacy concerns that haven’t been resolved. It’s not hard to imagine a real-life embodiment of Philip K. Dick’s The Simulacra, wherein autonomous news drones invade homes to interview people at their breakfast tables. (Indeed, the impressive ability of the Phantom 4 to autonomously follow someone as they run or bike feels eerily similar.) Can we trust that drone A.I. capabilities will respect our privacy and personal spaces? Certainly technology will not secure these legal and ethical boundaries on its own. Policy will be a vital factor.

Part of the challenge is that the possibilities are overwhelming. We are just at the beginning of a fast-paced revolution. Computers will learn how to analyze medical imagery, legal documents, and educational records; drones will monitor our cities for crime and deliver packages. Many, most, or all industries will be transformed by A.I. over the coming decades. In turn, these new industries will transform human society. The raw power of emerging data processors and machine-learning algorithms, together with the increasingly physical nature of A.I. applications, cries out for the government to establish new bounds for acceptable use.

Grappling with the promise and threats posed by the A.I. revolution will be one of the major policy challenges of 21st-century government.


Policymakers are outgunned—just like Lee Sedol. So they should look to the Go champion for inspiration on how to grapple with A.I. Without a doubt, Lee’s victory was Pyrrhic—he lost the fifth and final game. And even before that, Google’s decidedly corporate-looking pair of program managers made clear that his play was worse than futile: It would be integrated into AlphaGo’s machine-learning algorithms and, in the long run, only serve to strengthen Google’s artificial intelligence.

But Lee was undaunted. Asked whether he thought he could bring home another win in Game 5, he pointed out that AlphaGo “finds it more difficult” when it plays as black. Then, with a fearless, impish smile, he asked a favor of the Google execs, hoping to guarantee himself a challenge. “Since I won with white, would it be possible for me to take black the next time?” The dumbfounded Google reps looked at each other, whispered in conference, and then replied.

“Yeah, I guess it’s fine.”

If America is going to fully realize its potential—indeed our human potential—we are going to have to demonstrate some of that same tenacity, curiosity, innovative spunk, and fearlessness. The rise of A.I. cannot be rolled back. But rather than simply trying to contain it through the command-and-control regulations of years past, Washington should embrace change and seek to construct new regulatory approaches that can channel these powerful tools toward positive ends. That would be revolutionary, indeed.

Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, follow us on Twitter and sign up for our weekly newsletter.

Colin McCormick is a former adviser to the U.S. Department of Energy and chief technologist at Valence Strategic.