Smart machines: What’s the worst that could happen?

By MacGregor Campbell

An invasion led by artificially intelligent machines. Conscious computers. A smartphone virus so smart that it can start mimicking you. You might think that such scenarios are laughably futuristic, but some of the world’s leading artificial intelligence (AI) researchers are concerned enough about the potential impact of advances in AI that they have been discussing the risks over the past year. Now they have revealed their conclusions.

Until now, research in artificial intelligence has been occupied mainly with myriad basic challenges that have turned out to be surprisingly complex, such as teaching machines to distinguish between everyday objects. Human-level artificial intelligence and self-evolving machines were seen as long-term, abstract goals not yet ready for serious consideration.

Now, for the first time, a panel of 25 AI scientists, roboticists, and ethical and legal scholars has been convened to address these issues, under the auspices of the Association for the Advancement of Artificial Intelligence (AAAI) in Menlo Park, California. It looked at the feasibility and ramifications of seemingly far-fetched ideas, such as the possibility of the internet becoming self-aware.

The panel drew inspiration from the 1975 Asilomar Conference on Recombinant DNA in California, in which over 140 biologists, physicians, and lawyers considered the possibilities and dangers of the then emerging technology for creating DNA sequences that did not exist in nature. Delegates at that conference foresaw that genetic engineering would become widespread, even though practical applications – such as growing genetically modified crops – had not yet been developed.

Unlike recombinant DNA in 1975, however, AI is already out in the world. Robots like the Roomba and the Scooba help with the mundane chores of vacuuming and mopping, while decision-making devices are assisting in complex, sometimes life-and-death situations. For example, Poseidon Technologies sells AI systems that help lifeguards identify when a person is drowning in a swimming pool, and Microsoft’s Clearflow system helps drivers pick the best route by analysing traffic behaviour.

At the moment such systems only advise or assist humans, but the AAAI panel warns that the day is not far off when machines could have far greater ability to make and execute decisions on their own, albeit within a narrow range of expertise. As such AI systems become more commonplace, what breakthroughs can we reasonably expect, and what effects will they have on society? What’s more, what precautions should we be taking?

These are among the many questions that the panel tackled, under the chairmanship of Eric Horvitz, president of the AAAI and senior researcher with Microsoft Research. The group began meeting by phone and teleconference in mid-2008, then in February this year its members gathered at Asilomar, a quiet retreat on the California coast, for a weekend of debate and consensus-seeking. They presented their initial findings at the International Joint Conference on Artificial Intelligence (IJCAI) in Pasadena, California, on 15 July.

Panel members told IJCAI that they unanimously agreed that creating human-level artificial intelligence – a system capable of expertise across a range of domains – is possible in principle, but disagreed as to when such a breakthrough might occur, with estimates varying wildly between 20 and 1000 years.

Panel member Tom Dietterich of Oregon State University in Corvallis pointed out that much of today’s AI research is not aimed at building a general human-level AI system, but rather focuses on “idiot savant” systems that are good at tasks in a very narrow range of application, such as mathematics.

The panel discussed at length the idea of an AI “singularity” – a runaway chain reaction of machines capable of building ever-better machines. While admitting that it was theoretically possible, most members were sceptical that such an exponential AI explosion would occur in the foreseeable future, given the lack of projects today that could lead to systems capable of improving upon themselves. “Perhaps the singularity is not the biggest of our worries,” said Dietterich.

A more realistic short-term concern is the possibility of malware that can mimic the digital behaviour of humans. According to the panel, identity thieves might feasibly plant a virus on a person’s smartphone that would silently monitor their text messages, email, voice, diary and bank details. The virus could then use these to impersonate that individual with little or no external guidance from the thieves. Most researchers on the panel think that they themselves could develop such a virus. “If we could do it, they could,” said Tom Mitchell of Carnegie Mellon University in Pittsburgh, Pennsylvania, referring to organised crime syndicates.

Peter Szolovits, an AI researcher at the Massachusetts Institute of Technology, who was not on the panel, agrees that common everyday computer systems such as smartphones have layers of complexity that could lead to unintended consequences or allow malicious exploitation. “There are a few thousand lines of code running on my cell phone and I sure as hell haven’t verified all of them,” he says.

“These are potentially powerful technologies that could be used in good ways and not so good ways,” says Horvitz. Besides the threat posed by malware, he cautions, we are close to creating systems so complex and opaque that we don’t understand them.

Given such possibilities, “what’s the responsibility of an AI researcher?” asks Bart Selman of Cornell University, co-chair of the panel. “We’re starting to think about it.”

At least for now we can rest easy on one score. The panel concluded that the internet is not about to become self-aware.