Opinion

Killer robots? Superintelligence? Let’s not get ahead of ourselves.

By Dileep George

November 4, 2015

“Jeopardy!” contestant Ken Jennings, who won a record 74 consecutive games, cracks his knuckles before starting a practice match against another “Jeopardy!” champion: an IBM computer called “Watson.” (Seth Wenig/Associated Press)

Each week, In Theory takes on a big idea in the news and explores it from a range of perspectives. This week we’re talking about robot intelligence. Need a primer? Catch up here.

Dileep George is an artificial intelligence and neuroscience researcher. In 2010, George and D. Scott Phoenix co-founded Vicarious, an AI research company focused on developing software that can think and learn like the human brain.

As an artificial intelligence researcher, there are two questions I am often asked. Will human-level artificial intelligence — also called artificial general intelligence — be reached soon? And will it be dangerous? If you look at recent headlines, you might think the answers are “Yes!” and “Yes! … Run for your lives!”

I think the truth is actually more mundane: AGI will be created gradually, over the course of many years. The scenarios presented by movies and the media are an exaggerated picture of the actual risks. The headlines might lead you to believe that AGI is imminent, but scientists actually working on the problems will tell you otherwise. A lot more research is needed before we can build AGI.

Like our reptile ancestors, the current generation of AI is able to perform many complex behaviors. Outward signs of progress, like Siri, Watson and self-driving cars, may seem like impressive steps toward human-like artificial intelligence. But what separates smart-looking behavior from general intelligence is our human ability to dynamically imagine, reason and adapt — to understand why we’re behaving one way, imagine new possibilities, reason about their consequences and alter our behavior as the environment changes. The AIs of today can act smart, but many more years of fundamental discoveries are needed to build systems that can actually learn, imagine and reason like a human.

AGI won’t be created overnight. Building machines of human-like intelligence is a very difficult, long-term project, not unlike putting the first humans on Mars. To build AGI (or to colonize Mars) will require a large, interdisciplinary team focused on the problem for many years. There is a broad array of intermediate milestones, like being able to recognize the contents of photographs or putting a satellite in orbit, that are achieved far in advance of the final goal. Neither AGI nor Mars colonization will happen overnight, or in a lone mad scientist’s garage without many measurable, understandable, intermediate achievements. Since so many of these milestones are commercially valuable and scientifically fascinating, it also seems likely that the big steps along the way will be celebrated and shared with the public — imagine the fanfare when robots start cleaning up nuclear waste at Fukushima or treating patients with Ebola.

Many of the “scary” scenarios are less than realistic. For example, one concern is that a superintelligent AGI will misunderstand our intentions and follow simple instructions — like “I need some paperclips” — to an extreme outcome — like turning the entire planet into paperclips. Such a feat would confound all of the greatest human minds alive today, and it seems contradictory to argue that an AGI smart enough to outwit all of humanity is simultaneously not smart enough to figure out what we mean when we ask for paperclips.

Imbuing our computers with common sense and teaching them to behave as we expect are precisely the kinds of research challenges that need to be overcome to create the first human-level AGI. In Hollywood’s imagination, these problems occur after we achieve AGI. In reality, human-level AGI is achieved only when these problems are well understood and solved.

Real concerns are often overlooked because headlines about Skynet get more clicks. Like every technology humanity has created, an artificial general intelligence could bring real benefits and real risks. One example of a far more practical problem associated with AGI is the economic effect of robotic automation. A broad transition toward automated manufacturing and transportation could potentially disrupt a large number of jobs, and I agree with the economists who have advocated for work programs and other government-sponsored initiatives to ease structural transitions.

Every new technology comes with its own potential risks and benefits, and the research community is committed to addressing these challenges together. Vicarious and other AI labs recognize the power of technology to transform society and the many benefits and risks that any important invention can create. This spring, Vicarious contributed to the creation of an open letter and research document on the focus areas that can be helpful along the path toward human-level AGI, from legal frameworks for autonomous vehicles to verification algorithms.

Human-level artificial intelligence has the potential to help humanity thrive more than any invention that has come before it, and it’s important to not let Hollywood fantasies or overzealous reporting make us lose sight of how amazing a world with AGI could be. Many of the biggest problems facing humanity today, like curing diseases or addressing climate change, would be vastly easier with the help of superintelligent AI. We are all lucky to share a future where that will one day be possible.