The Brain Builder

AI researcher Hugo de Garis on evolvable hardware, asteroid-sized "artilects," and the issues of massive intelligence and species dominance that will roil global politics in the 21st century.

Let's face it. Hugo de Garis dwells on the fringe. After abandoning an early career in theoretical physics, de Garis spent more than a decade roaming the thinly populated fields of artificial life and artificial intelligence – frontiers where even the most respected get little respect. Here, fantastic ideas – maybe even demented ideas – are a plus. Which is a good thing for de Garis, who has a headful of them. Free to pursue any idea, no matter how strange, de Garis has set up shop in Kyoto, Japan, where he is head of the Brain Builder Group at the Advanced Telecommunications Research Lab. Yes, he is trying to make a hyperintelligent electronic brain. His approach: evolve the hardware/software of intelligence by applying the power of Darwinian natural selection to electronic neural nets. And that's not his strangest idea.


Wired: The lion's share of neural net research still focuses on single net modules – not more than tens of neurons linked together to perform one aggregate function. Why do you think you'll suddenly be able to build a multifaceted brain that makes use of a million neurons?

de Garis: I've already built working 100-module systems. No one has ever tried connecting a million neurons – or 10,000 modules – so I'm prepared for a steep learning curve. But I have a lot of experience in the evolution of neural net dynamics, the manner in which the brains can be molded and given characteristics. It works. It's a powerful technique. I'm confident that within two years we'll have proof of concept, and by that I mean a functioning 10,000-module robot kitten – or robokoneko, as they say in Japanese – running around doing all sorts of nasty things.

What ramifications will evolutionary engineering have on traditional computing?

The great strength of evolutionary engineering is the ability to develop systems whose complexity levels are beyond human understanding, systems that function better than any product of the human-designed top-down approach. Evolutionary engineering will play an increasingly important role and eventually dominate traditional programming, as artificial brains get smart enough to write their own programs. But that is many decades away.
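The bottom-up approach de Garis describes can be sketched in miniature: rather than designing or training a network's weights by hand, a population of candidate networks is subjected to selection and mutation until a fit one emerges. The tiny 2-2-1 network, the XOR task, and all parameter choices below are illustrative assumptions for the sketch, not details of de Garis's actual system:

```python
import math
import random

random.seed(0)

# Toy task: XOR, which a single-layer net cannot compute.
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(genome, x):
    """Genome encodes a 2-2-1 net: two hidden neurons (2 weights + bias
    each) and one output neuron (2 weights + bias) -> 9 genes total."""
    w = genome
    h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return 1 / (1 + math.exp(-(w[6] * h1 + w[7] * h2 + w[8])))

def fitness(genome):
    # Negative squared error over the task: higher is better.
    return -sum((forward(genome, x) - y) ** 2 for x, y in DATA)

def evolve(pop_size=60, generations=300, sigma=0.4):
    # Start from random genomes, then repeat: select, mutate, replace.
    pop = [[random.gauss(0, 1) for _ in range(9)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 4]      # selection: keep the fittest quarter
        children = []
        while len(elite) + len(children) < pop_size:
            parent = random.choice(elite)
            # Mutation: perturb every weight with Gaussian noise.
            children.append([g + random.gauss(0, sigma) for g in parent])
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
print("evolved fitness:", round(fitness(best), 3))
```

No one specifies what the evolved weights should be; the designer only supplies a fitness measure, which is why the resulting systems can exceed what their builders understand.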

Where is all of this intelligence ultimately headed?

My dream is to create "artilects," or artificial intellects, with intellectual capacities many orders of magnitude above human levels. The scale of these systems will eventually shrink to the molecular level, with one bit per atom. But high-function systems will still require many bits or atoms. And with that large size comes an enormous heat-dissipation problem.

If you can develop a system that dissipates zero heat – using reversible computing, for example – it'll be possible to build asteroid-sized, or even moon-sized, artilects, if you want to get into science fiction. By the late 21st century, you're talking huge computing capacities with 10^30 or 10^40 components. Compare that to our brains, which have around 10^10 – that is, tens of billions – neurons.

What will the creation of artilects mean for society?

The issues of massive intelligence will dominate global politics in the next century. The most plausible scenario is that humanity will split ideologically – one faction religiously saying the artilects should be built, and the other saying that they would be too dangerous. I call the first group Cosmists, because their horizons are more cosmic and big picture; the latter group I call Terras, since their horizons are more terrestrial. I can imagine the Cosmists getting fed up with the pressure of the Terras and, enabled by 21st-century technology, leaving Earth. But the Terras are not stupid. They'll probably argue that the artilects could escape and return to Earth. And when the stakes are the survival of the human species, what risk are you willing to take – 0.00000000001 percent? So why ever would you allow the Cosmists to leave? This may sound crazy, but you would be prepared to nuke them. And that's what keeps me awake at night – the worry that the long-term consequence of my work is, potentially, gigadeath.

This isn't the first time you've framed evolutionary engineers, like yourself, in the context of the nuclear physicists of the 1930s. If these scenarios are so horrifying, why continue your research?

This historical analogy scares me like crazy. And the potential threat will be even larger than the hydrogen bomb. Sure – I realize there's a certain hypocrisy in what I'm saying and what I'm doing. But there are tremendous economic and social forces at work, and sooner or later, certainly within the next century, humanity will have to face the issue of whether or not it will give up its status as the dominant species on the planet. What's the alternative – do we just stagnate? Is that even possible?

To give an inkling of the Cosmist perspective, take an insect. No way could it do calculus, right? It just doesn't have the brainpower. So what is it – that unknown x – that artilects can do that human beings can't even imagine? By definition, we can't even talk about it. It's so godlike, it's so magnificent! Why would we choose to stop that?

Assuming that there is intelligent life on other planets, have those extraterrestrial societies already evolved artilects?

Civilizations throughout the universe have probably already made this transition. Our solar system is a billion years younger than others. And the time it takes to go from human to Cosmist to artilect is no more than a few centuries. The evolution is inevitable. After all, the real potential for intelligence is not biological – that's too primitive.
