Can We Trust Robots?

In popular culture, robots tend to be either faultlessly loyal Victorian butlers or duplicitous psychopathological killers. Consider C-3PO in Star Wars and Ava in Ex Machina. Or Robby in Forbidden Planet and HAL in 2001: A Space Odyssey.

Those depictions are of course just reflections of our own hopes and fears. But those fears, at least, have started leaking out into the real world. In recent years, dozens of tech and science luminaries have voiced their apprehension about AI run amok—about superintelligent robots establishing a new world in which humans are at best irrelevant and at worst extinct. Their fearful scenarios aren't much different from the ones that sci-fi writers conjured decades ago. But their concerns have resonated widely because of the lofty status of some of those voicing them, who include Stephen Hawking, Bill Gates, Bill Joy, and Elon Musk.

This is no longer idle speculation. We and our machines are on the cusp of a new relationship. In the not-so-distant future, we will begin entrusting such vital tasks as driving a car, performing surgery, and choosing when to apply lethal force in a war zone to robotic systems that are highly or completely autonomous. For the first time, machines programmed, but not directly controlled, by us will be making life-or-death decisions in complicated, fluid, and unstructured environments. Undoubtedly, mistakes will be made and people will die. But likely in smaller numbers than die now.

Getting from here to there won't be straightforward. As we describe in this report, the challenges will span technical, regulatory, and even philosophical realms. Beyond the coding and policy problems, the new autonomous systems will force us to confront deep moral quandaries. They might even alter our sense of who we are. But in the end, the world will probably be a better place.