The Blog of Scott Aaronson

If you take just one piece of information from this blog: Quantum computers would not solve hard search problems instantaneously by simply trying all the possible solutions at once.

This morning I got an email from Eric Klien of the Lifeboat Foundation, an organization that advocates building a “space ark” as an insurance policy in case out-of-control nanorobots destroy all life on Earth. Klien was inviting me to join the foundation’s scientific advisory board, which includes such notables as Ray Kurzweil. I thought readers of this blog might be interested in my response.

Dear Eric,

I’m honored (and surprised) that you would consider me for your board. But I’m afraid I’m going to decline, for the following reasons:

(1) I’m generally skeptical of predictions about specific future technologies, especially when those predictions are exactly the sort of thing that a science fiction writer would imagine. In particular, I consider the risk of self-replicating nanobots converting our entire planet into gray goo to be a small one.

(2) Once we’re dealing with such unlikely events, I don’t think we can say with confidence what protective measures would be effective. For all we know, any measures we undertake will actually increase the risk of catastrophe. For example, maybe if humanity launches a space ark, that will tip off a hostile alien civilization to our existence. And maybe the Earth will then be besieged by alien warships, which can only be destroyed using gray goo — the development of which was outlawed as a protective measure. I’m not claiming that this scenario is likely, only that I have no idea whether it’s more or less likely than the scenarios you’re considering.

(3) There are several risks to humanity that I consider more pressing than that of nanotechnology run amok. These include climate change, the loss of forests and freshwater supplies, and nuclear proliferation.

Best regards,
Scott Aaronson

This entry was posted on Tuesday, October 25th, 2005 at 1:52 pm and is filed under Rage Against Doofosity.

No, they aren’t joking. I actually think that there is a kernel of truth to the religion of Kurzweil and others, one which was foreseen long ago by von Neumann, Shannon, and others. One day computers may well be smarter than people. When that day comes, computers will inherit the earth. People will have no more control of what happens after that than plants and animals are in control now.

But even though we can speculate about these things, they are not part of the foreseeable or plannable future. It makes no sense to try to plan for them, for exactly Scott’s reason. We don’t know whether “The Singularity” will be a good thing or a bad thing, or when it might happen, or what we can do to make it better or worse. All we can do is live in the present.

(To make an analogy, the advent of sentient people has been good for dogs but very bad for woolly mammoths. But neither dogs nor mammoths were in any position to plan ahead. Indeed, although it may mean nothing to the computers when their time comes, we can at least set a moral precedent by treating plants and animals with respect.)

But just think Scott, you could join the group, and then when they send off their ark, you could be the character in the SciFi novel who is secretly working for the nano-goo which has secretly already taken over the Earth.

Just to play devil’s advocate, presumably the space ark would also address (3).

In fact, let’s suppose that we want to maximize the expected number of humans who ever live. Then it’s reasonable to say that this expectation is dominated by the chance of settling other planets. Specifically, one possibility is that we all die in the next 1000 years; another is that we survive until the sun burns out in 5 billion years; another is that we expand exponentially through the universe until, say, the last star burns out a trillion years from now.
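The comparison above can be made concrete with back-of-envelope arithmetic. Every number below (the scenario probabilities, the flat birth rate) is invented purely for illustration and is not taken from the comment; the point is only that even a tiny probability attached to the trillion-year scenario swamps the expectation:

```python
# Back-of-envelope expected-value comparison of the three scenarios.
# All figures here are illustrative assumptions, not estimates from the post.

BIRTHS_PER_YEAR = 1e8  # assumed flat average of human births per year

scenarios = {
    # name: (assumed probability, assumed duration in years)
    "extinct within 1,000 years":      (0.50, 1e3),
    "survive until the sun burns out": (0.49, 5e9),
    "expand until the last star dies": (0.01, 1e12),
}

expected_total = 0.0
for name, (p, years) in scenarios.items():
    humans = p * years * BIRTHS_PER_YEAR  # expected humans from this branch
    expected_total += humans
    print(f"{name}: {humans:.2e} expected humans")

print(f"total: {expected_total:.2e}")
```

Even with only a 1% probability, the expansion branch contributes about 10^18 expected humans, versus roughly 2.5 × 10^17 for the 49% sun-burns-out branch; and since a real expansion scenario would have the birth rate growing rather than flat, the flat-rate sketch if anything understates the dominance of that branch.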

I wonder at what point it becomes cost-effective to devote resources to this last problem instead of, say, trypanosomiasis.

“Just to play devil’s advocate, presumably the space ark would also address (3).”

Yeah, I thought of that. But I think one needs to consider, not merely the improbability of nanobots gone wild, but also the improbability of actually building a space ark. (Think of the failure of Biosphere 2, or the lack of interest in creating a permanent human settlement in Antarctica, which would be so much easier than in space.)

Which takes us back to my original point: I don’t know how to compare the probability of a working ark in the foreseeable future to (say) the probability of the would-be ark’s antimatter propulsion system malfunctioning and destroying the Earth during launch.

“One day computers may well be smarter than people. When that day comes, computers will inherit the earth. People will have no more control of what happens after that than plants and animals are in control now.”

Biological humans were not built to make long journeys into space. If our spirit lives on, and is passed on to some sort of super-intelligent robots, they could go find answers to the rest of the questions (or THE question, for that matter).

“But just think Scott, you could join the group, and then when they send off their ark, you could be the character in the SciFi novel who is secretly working for the nano-goo which has secretly already taken over the Earth.”

“That gray blob at Earth coordinates? Oh, that’s just a smudge on the telescope.”

A point: If the sentient robots of the far future look at us as we look at chimpanzees, that’s not very reassuring. After all, look at how humanity treats chimpanzees.

Also, while we are in this mode of wild speculation, it is silly to hope to escape the robots by flying away in space ships. If the robots wanted to, they would easily chase you down. Or convince you not to escape in the first place. After all, snakes usually hide from people, but it is futile. Their conception of the world is too limited to make good decisions in the modern world. They have surrendered their independence to humanity whether it is good for them or not.

Although plausible at first glance, I’m not sure it’s true that it makes no sense to plan for the singularity, or that there is nothing that can be done to influence whether it turns out well or badly. You may want to look at some articles by Eli Yudkowsky.

He has actually thought carefully about these issues rather than making off-the-cuff remarks. Considering that the possible consequences of a singularity are incalculable, I think a rational utility-maximizing person is obligated to devote some thought to the matter, even if the probability of a singularity occurring, or of being able to positively influence the outcome, is very small.