The Conversation So Far

After a year of Robin pestering co-blogger Eliezer "Can we talk about the singularity on the blog now, can we?" and Eliezer saying "Not yet," Robin speaks up on the occasion of his IEEE Spectrum singularity article:

Robin: Hey Eliezer, I see you’ve been talking for years about an AI singularity. Have a look; I’ve analyzed the history of previous "singularities" (as Vinge defines the term) and can use that to forecast the timing, speedup, and transition inequalities of the next singularity. I can also find a tech that looks pretty likely to appear within the predicted time-frame, and an economic analysis suggests it could plausibly deliver the forecasted speedup. And this tech is a kind of AI!

Eliezer: I really don’t have time to talk, but you are looking at untrustworthy surface analogies, not reliable deep causes. My deep insight is that optimization processes are more powerful the smaller and better their protected meta-level is, and history is divided into epochs according to the arrival of new long-term optimization processes, and to a lesser extent their meta-level innovations, after each of which ordinary innovation rates speed up. The two optimization processes so far were natural selection and cultured brains, and key meta-innovations were cells, sex, writing, and scientific thinking. I’m talking about a future singularity due to a transistor-based machine with no (and therefore the best) protected meta-level. My deep insight suggests this would have an extremely large speedup and transition inequality.

Robin: This history of when innovation rates sped up by how much just doesn’t seem to support your claim that the strongest speedups are caused by and coincide with new optimization processes, and to a lesser extent protected meta-level innovations. There is some correlation, but it seems weak. And since you don’t argue for a timing for your postulated singularity, why can’t we think yours will happen after the singularity I outline?

Robin: “I can also find a tech that looks pretty likely to appear within the predicted time-frame, and an economic analysis suggests it could plausibly deliver the forecasted speedup. And this tech is a kind of AI!”

Eliezer, I doubt you and I would have had much problem discussing free will a year ago. But perhaps many of our readers would have found it harder to follow our discussion then.

Roko, yes.

Nominull

If the singularity is arriving so soon, how do you have time to spend whole years laying the groundwork for discussing these complicated concepts? I’m not complaining, but…

Tim Tyler

I don’t think there’s going to be much time for future growth spurts to happen “later”. Our current exponential growth seems unlikely to continue for very long, since it seems likely to propel us into the realm of physical limits before too many more doublings.

Tim, if previous trends continued to the next two singularities, the second one would happen within about two years after the first one.

Unknown

So maybe the first singularity could be caused by uploads, the second by AGI. This would be consistent with Eliezer’s claim that AGI wouldn’t have such a limited doubling time.

Tim Tyler

Right. So: that’s two years of doubling every two weeks? I make that 52 doublings, almost as many doublings as there are squares on a chessboard. Growth by a factor of 4,503,599,627,370,496. Do you really think that the laws of physics will stomach that?

Since you are extrapolating from such a small number of data points, figures like those could be way off base – but it seems as though your model would suggest that the next such development might well be the last one of its kind.
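Tim's arithmetic is easy to verify directly; here is a minimal sketch (the two-week doubling interval is the figure quoted in the thread, not my own estimate):

```python
# Two years of doubling every two weeks, as in the comment above.
WEEKS_PER_YEAR = 52
weeks_per_doubling = 2

doublings = (2 * WEEKS_PER_YEAR) // weeks_per_doubling
growth_factor = 2 ** doublings

print(doublings)      # 52 doublings, nearly the 64 squares of a chessboard
print(growth_factor)  # 4503599627370496, i.e. about 4.5 quadrillion
```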

I can certainly see both sides of this issue (regarding the timing of discussion).

Eliezer, perhaps it makes more sense to keep Robin informed ahead of time about what you want to discuss, at least as a rough timeline, so that he can cover similar topics and you will both be ready at the same time.
regards,

4 quadrillion is not a difficult amount of expansion for our Solar System to absorb, but I think it does require nanotech. Not to mention a choice.

Tim Tyler

I can’t criticise your analysis if you don’t present it. Anyhow, physical limits can be made out on the horizon – and it is far from obvious that there’s ever going to be anything in the future with remotely the same impact as the ongoing technological revolution looks like it is going to have – as it overtakes the “natural-technology” of existing evolved lifeforms.

Maybe if we get a signal from another galaxy – with a dramatic influx of knowledge – but you can’t easily predict things like that.

Tim Tyler

As I understand it, the idea of economic doubling is not really to do with resources. Resources are not yet much of a factor – since we have plenty of them. Growth appears to have more to do with our ability to manipulate signals. Transistor densities double annually, while our gold supplies grow only modestly. So the idea is that – if exponential growth is to continue – it should be supported on a small scale – by the “room at the bottom” – otherwise we don’t have a good reason to expect such growth to happen in the first place.

Andy Wood

For myself, I like Eliezer’s drawn-out foundation building. If you don’t cross inferential distances in baby steps, it seems you wind up with a lot more noise in the comments (in the form of people unwittingly criticizing strawmen, arguing semantics, and so on).

I found Robin’s analysis of timing and speedup interesting, if pretty speculative – due to extrapolation from a tiny number of data points. But I thought the analysis of transition inequalities sucked. It wasn’t even clear if Robin was counting AIs as people or not.

Tim Tyler

Re: So maybe the first singularity could be caused by uploads, the second by AGI.

Ah, the hypothetical uploads-before-AI scenario again. Has anyone ever articulated any coherent reasons for taking this idea more seriously than angels dancing on pinheads?

Giedrius

I could not help but paraphrase a little more.
Let’s suppose Robin and Eliezer are talking about explosions sometime in the past.

Robin: I have analyzed most of the known ways to make an explosion. If you heat up a boiler, it explodes. If you put gunpowder into a closed metallic container and heat it up, it explodes even more powerfully. Water is a single substance; gunpowder is a mixture of three substances. I have a theory that if you make a mixture of FIVE substances and use it instead of gunpowder, you might get an even more powerful explosion. I have even proposed a recipe that might work. It involves a new substance, produced using nitric acid and some organic materials…

Eliezer: I don’t have time to talk, but I have a better idea. If you bring together two big pieces of radioactive material, you will get a really big explosion. If you get the details right, that is.

Robin: Historical evidence does not support that. In all the known cases where two pieces of anything were brought together, no explosion occurred. But since you do not specify how hard it is to figure out the details of your explosion, why can’t we think mine is more feasible?

So we grow four quadrillion times in two years, and then things really start to pick up? Unless someone has a picture handy of what a quadrillion looks like, I think we are past the point where your meat brain substitutes “gazillion” when trying to think about what the numbers really mean, like 3^^^3. Whether or not we are uploaded, most value will be in the form of information by that point, or at least in the information incorporated into the matter we are enjoying.

But the real point of uploading has already been mentioned: to speed up Overcoming Bias posts. When we can directly connect brains and transfer information the way our computers do, we can get through the introductory explanations much more quickly. Perhaps non-biological post-humans will put great value on this discourse, and that will be the source of the octillion-dollar economy.

(I suspect this is an audience that would get it if I made an obscure reference here to Oracle’s digital telepathy line in Rock of Ages. Very obscure.)

Everyone, it is Tim, not I, who predicted 52 doublings (or quadrillions)! I would only suggest that the next mode might plausibly see a similar number of doublings to the last few modes, i.e., six to sixteen.

Unknown, cool pics!

Tim Tyler

What I actually said was that 52 more doublings after the next substantial increase in the growth rate seemed unlikely to happen – due to physical limitations.

Even 16 doublings at Robin’s 2.3 weeks per doubling is only 36.8 weeks – less than your average pregnancy.
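The pregnancy comparison checks out; a quick sketch, assuming Robin's 2.3 weeks per doubling and his suggested range of six to sixteen doublings:

```python
# Robin's range of six to sixteen doublings at roughly 2.3 weeks each.
weeks_per_doubling = 2.3

for doublings in (6, 16):
    factor = 2 ** doublings
    total_weeks = doublings * weeks_per_doubling
    print(f"{doublings} doublings: x{factor}, {total_weeks:.1f} weeks")
# 6 doublings: x64, 13.8 weeks
# 16 doublings: x65536, 36.8 weeks
```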

Tim Tyler

Thinking about what chain of events could plausibly produce a two-stage increase in the acceleration rate, my guess would be:

1. Synthetic minds
2. Synthetic bodies

We’ll probably get advanced AI first, which will then help to produce advanced nanotechnology. In both cases, any shift is likely to occur when the engineered technology becomes broadly competitive with the existing natural one it is supplanting.

The new replicators will first build themselves minds, and then use those minds to build themselves bodies.