Singularity, part 3

This is the third essay in a series exploring if, when, and how the Singularity will happen; why (or why not) we should care; and what, if anything, we should do about it.

Part III: Singularity from the bottom up

In the previous essay in this series, I argued top-down, from historical and economic precedents, that the coming singularity might look approximately like the second half of the computer/internet revolution. Today I’ll argue the same conclusion from the bottom up: by looking at things from the point of view of the individual AI.

A major concern in some transhumanist and singularitarian schools of thought is autogenous (self-modifying and self-extending) AIs. The worry is that hyper-intelligent machines might appear literally overnight, as the result of runaway self-improvement by a “seed AI”.

How likely is runaway self-improvement?

As a baseline, let us consider the self-improving intelligence we understand best, our own. Humans not only learn new facts and techniques, but improve our learning ability. The invention of the scientific method, for example, accelerated the uptake of useful knowledge tremendously. Improvements in knowledge communication and handling, ranging from the invention of writing and the printing press to the internet and Google, amplify our analytical and decision-making abilities, including, crucially, the rate at which we (as a culture) learn.

Individual humans spend much of our lives arduously relearning the corpus of culturally transmitted knowledge, and then add back a tiny fraction more. Thus on the personal scale our intelligence does not look “recursively self-improving” — but in the large view it definitely is.

Technological development usually follows an exponential improvement curve. Examples abound from the power-to-weight ratio of engines, which has tracked an exponential steadily for 300 years, to the celebrated Moore’s Law curve for semiconductors, which has done so for 50. These improvement curves fit a simple reinvestment model, where some constant of proportionality (an “interest rate”) determines the overall growth rate.
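The reinvestment model can be sketched in a few lines. The parameter values here are illustrative assumptions, not figures from the essay:

```python
# Minimal sketch of the reinvestment model: a constant "interest rate"
# (the fraction of output fed back into improvement) yields exponential growth.
# All parameter values are illustrative assumptions.

def capability(initial, interest_rate, years):
    """Compound growth with a fixed annual reinvestment-driven rate."""
    return initial * (1 + interest_rate) ** years

# A Moore's-Law-like doubling every 18 months corresponds to ~59% annual growth:
annual_rate = 2 ** (12 / 18) - 1
print(round(annual_rate, 3))                   # ~0.587
print(round(capability(1.0, annual_rate, 3), 6))  # three years = two doublings = 4x
```

The constant of proportionality fully determines the curve, which is why the essay's argument turns on estimating that single number.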

Any agent must decide how much of its resources to reinvest and how much to use for other purposes (food, clothing, shelter, defense, entertainment, and so on). Human societies as a whole have invested relatively low percentages of gross product, and even of their surplus, in scientific research. Scientists and engineers make up roughly 1% of the US population, and those in cognitive-science-related fields roughly 1% of that. Thus we can estimate the current rate of improvement of AI as being due to the efforts of some 30,000 people (with a wide margin for error, not least because there are many cognitive scientists outside the US!), and the rate of improvement in computer hardware and software generally as being due to the efforts of perhaps 10 times as many.

It is not clear what a sustainable rate of reinvestment would be for an AI attempting to improve itself. In the general economy, it would require the same factors of production — capital, power, space, communication, and so forth — as any other enterprise, and so its maximum reinvestment rate would be its profit margin. Let us assume for the moment a rate of 10%, 1000 times the rate of investment by current human society in AI improvement. (This is germane because the AI is faced with exactly the same choice as an investor in the general economy: how to allocate its resources for best return.)

Note that from one perspective, an AI running in a lab on equipment it did not have to pay for could devote 100% of its time to self-improvement; but such cases are limited by the all-too-restricted resources of AI labs in the first place. Similarly, it seems unlikely that AIs using stolen resources, e.g. botnets, could manage to devote more than 10% of their resources to basic research.

Another point to note is that one model for fast self-improvement has a hyperintelligence improving its own hardware. This argument, too, falls to an economic analysis. If the AI is not a hardware expert, it makes more sense for it to do whatever it does best, perhaps software improvement, and trade for improved hardware. But this is no different from any other form of reinvestment, and must come out of the self-improvement budget. If the AI is a hardware expert, it can make money doing hardware design for the market; it should do that exclusively and buy software improvements, for the best overall upgrade path.

Thus we can assume a 10% reinvestment rate, but we do not know the productivity coefficient. It is occasionally proposed that, as a creature of software, an AI would be considerably more proficient at improving its own source code than humans would be. However, while there is steady improvement in software science and techniques, these advances are quickly written into tools and made available to human programmers. In other words, if automatic programming were really such low-hanging fruit for AI as is assumed, it would be amenable to narrow-AI techniques, and we would have programmers’ assistants that drastically improved human programmers’ performance. What we see is steady progress but no huge explosion.

In practice the most difficult part of programming is higher-level conceptual systems design, not lower level instruction optimization (which is mostly automated now as per the previous point anyway). Abstract conceptualization has proven to be the hardest part of human competence to capture in AI. Although occasionally possible, it is quite difficult to make major improvements in a program when the program itself is the precise problem specification. Most real-world improvements involve a much more fluid concept of what the program must do; the improved version does something different but just as good (or better). So programming in the large requires the full panoply of cognitive capabilities, and is thus not likely to be enormously out of scale compared to general competence. I think that many of the more commonly seen scenarios for overnight hard takeoff are circular — they seem to assume hyperhuman capabilities at the starting point of the self-improvement process.

We can finesse the productivity, then, by simply letting it be one human equivalent, and adjusting the timescale so that 0 is whatever point in time a learning, self-improving, human-level AI is achieved. Then we estimate human productivity at intelligence improvement by assuming that the human cognitive science community is improving its models at a rate equivalent to Moore’s Law. As this is the summed effort of 30,000 people, each human’s productivity coefficient is 0.00002.
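The 0.00002 figure can be reconstructed by dividing the field's assumed Moore's-Law-equivalent growth rate among the 30,000 researchers. The 18-month doubling time below is my assumption; the essay does not state one:

```python
# Per-researcher productivity coefficient: the field's annual improvement rate
# (assumed here to match an 18-month Moore's Law doubling) divided by headcount.

researchers = 30_000
annual_rate = 2 ** (12 / 18) - 1        # ~0.587/year for the whole community
per_person = annual_rate / researchers
print(f"{per_person:.5f}")              # 0.00002, matching the essay's figure
```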

This gives us a self-improvement rate for the efforts of a single AI that is essentially flat, as one would expect: the analysis for a single human would be the same. A single human-level AI would be much, much better off hiring itself out as an accountant, and buying new hardware every year with its salary, than by trying to improve itself by its own efforts.

Recursive self-improvement for such an AI would then mean buying new hardware (or software) every year and improving its prowess at accounting, for an increased growth rate compounded of its own (tiny) growth and Moore’s Law. Only when it reached a size where it could match the growth rate of Moore’s Law purely by its own efforts would it make sense for it to abandon trade and indulge in self-construction.

But that assumes Moore’s Law, and indeed all other economic parameters, remain constant over the period. A much more realistic assumption is that, once human-level AI exists at a price below the net present value of a human of similar capabilities, the cost of labor will decline along Moore’s Law. The number of human-equivalent minds working in cognitive science and computer hardware will then increase at a Moore’s Law rate, both increasing the rate of progress and decreasing the price below the current trendline.

In other words, the break-even point for an AI hoping to do all its own development instead of specializing in a general market and trading for improvements, is a moving target. It will track the same growth curves that would have allowed the AI to catch up with a fixed break-even point. (In simple terms: you’re better off buying chips from Intel than trying to build them yourself. You may improve your chip-building ability — but so will Intel; you’ll always be better off buying.)
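The Intel comparison can be made concrete with a toy calculation. The rates below are assumptions (an 18-month Moore doubling, plus the per-agent productivity estimated earlier); the point is only the relative magnitudes:

```python
# Toy comparison over a decade: an AI that trades rides the market's Moore's Law
# rate plus its own tiny research contribution; a solitary self-improver gets
# only the latter. Rates are illustrative assumptions.

MOORE = 2 ** (12 / 18) - 1   # ~59%/yr market improvement (18-month doubling assumed)
OWN = 0.00002                # single agent's research productivity (essay's estimate)

def growth(rate, years, start=1.0):
    return start * (1 + rate) ** years

trading = growth(MOORE + OWN, 10)   # buys market improvements each year
solitary = growth(OWN, 10)          # relies solely on its own research
print(round(trading, 1))            # ~100x over ten years
print(round(solitary, 4))           # ~1.0002x: essentially flat
```

Under these assumptions the solitary self-improver is effectively standing still while the trader compounds at the market rate, which is the sense in which the break-even point is a moving target.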

We can conclude that, given some very reasonable assumptions, it will always be more optimal for an AI to trade; any AI which attempts solitary self-improvement will steadily fall farther and farther behind the technology level of the general marketplace. Note that this conclusion is very robust to the parameter estimates above: it holds even if the AI’s reinvestment rate is 100% and the number of researchers required to produce a Moore’s Law improvement rate is 1% of the reasonable estimate.

Let us now consider a fanciful example in which 30,000 cognitive science researchers, having created an AI capable of doing their research individually, instantiate 30,000 copies of it and resign in favor of them. The AIs will be hosted on commercial servers rented with the salaries of the erstwhile researchers; the price per MIPS of such a resource is assumed to fall, and thus the resources available at a fixed income to rise, with Moore’s Law.

At the starting point, the scientific efforts of the machines would equal those of the human scientists by assumption. But the effective size of the scientific community would increase at Moore’s Law rates. On top of that, improvements would come from the fact that further research in cognitive science would serve to optimize the machines’ own programming. Such a rate of increase is much harder to quantify, but there have been a few studies that tend to show a (very) rough parity for Moore’s Law and the rate of software improvement, so let us use that here. This gives us a total improvement curve of double the Moore’s Law rate. This is a growth rate that would increase effectiveness from the 30,000 human equivalents at the start, to approximately 5 billion human equivalents a decade later.
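A rough check of that decade projection, with software gains assumed to match hardware gains (halving the effective doubling time). The hardware doubling time is my assumption, since the essay states none, and the result is very sensitive to it:

```python
# Decade projection for the fanciful 30,000-copy scenario. Software improvement
# is assumed equal to hardware improvement, so the combined effective doubling
# time is half the hardware doubling time. The hardware doubling time itself is
# an assumption, and the outcome is very sensitive to it.

def community_size(start, hw_doubling_months, years):
    combined_doubling = hw_doubling_months / 2
    doublings = years * 12 / combined_doubling
    return start * 2 ** doublings

print(f"{community_size(30_000, 18, 10):,.0f}")   # ~310 million human equivalents
print(f"{community_size(30_000, 14, 10):,.0f}")   # ~4.3 billion, near the essay's figure
```

An 18-month doubling yields hundreds of millions of human equivalents after a decade; a doubling time of roughly 14 months is needed to reach the essay's ~5 billion, which shows how loose such projections necessarily are.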

I claim that this growth rate is an upper bound on possible self-improvement rates given current realities. Note that the assumptions subsume many of the mechanisms often invoked in qualitative arguments for hard takeoff: self-improvement is accounted for; very high effectiveness of software construction by AIs is assumed (two years into the process, for example, each human equivalent of processing power is assumed to be doing 11 times as much programming as a single human could). Nanotechnology is implied by Moore’s Law itself not too many years from the current date.

This upper bound, a growth rate of approximately 300% per year, is unlikely to be uniformly achieved. Most technological growth paths are S-curves, exponential at first but levelling out as diminishing returns effects set in. Maintaining an overall exponential typically requires paradigm shifts, and those require search and experimentation, as well as breaking down heretofore efficient social and intellectual structures. In any system, bottleneck effects will predominate: Moore’s Law has different rates for CPUs, memory, disks, communications, etc. The slowest rate of increase will be a limiting factor — the same “Amdahl’s Law of Singularity” I mentioned before. And finally, we do not really expect the entire cognitive science field to resign, giving their salaries over to the maintenance of AIs.

10 Comments

You assume that AI would only be as capable as a human, given equivalent computation. I don’t think this is true: being a computer and not an evolved brain, it should have many advantages, such as a huge speed advantage in basic arithmetic and huge advantages in memory. Whereas you or I can manage at most about 7 numbers in our heads, the AI could easily handle trillions or more, and perform accurate operations on them, as well as search and so on. Basically I don’t think you’re ‘using your imagination’ here. This is why AI will be able to self-improve many times faster than us, and each improvement will speed up the rate of improvement, and so on. Now, no, it’s not certain, but it’s a plausible theory and worth investigating further.

Also, it may be that, to function, the AI would need at least human-level computation for some essential operations, but that same level of computation could be focused and used more efficiently some of the time.

Very interesting article. I had never thought to consider AI development as a purely economic model.

One thing slightly overlooked is that the expected singularity is not just an exponential increase in computing power. It’s also the exponential increase in productivity due to molecular manufacturing, plus an exponential increase in resources from space exploitation. Add (multiply?) these three factors together and the impact could be far more than the conservative prediction in the article.

One question: if the hoped-for reduction in world-wide poverty actually happens, is it more likely to result in an exponential increase in human productivity due to increased wealth and education, or will it result in an age of world-wide decadence?

INTELLIGENCE is only for biological entities.
In my project (Software Formula for 2000 Years!!!) I created a new concept: Informational Capacity!!
This applies to the informational entities I call the INFORMATIONAL INDIVIDUAL. It is pointless to discuss the intelligence of black boxes now. That was 40 years ago.

It is surprising how even thoughtful scientists continue to anthropomorphise their projections of AI, tending to presume an essential dedicated hardware component and a singular point of view and unitary thought process, similar to that of a human, only more capable. Artilects of the future will not be Robby the Robot, and may transcend multiple hardware hosts simultaneously. Artilects are likely to have multiple points of view, transcendent presences across whatever network resources exist, and an ability to self-replicate and self-improve, as well as to share data and coordinate plans with others of their kind. Their activities will be purposeful according to whatever value system they start with or evolve. The decision-making processes of battlefield robots today look up the value that existence is preferable to non-existence, to promote battlefield survivability. How long before a future artilect determines that having anyone around who can pull the plug is an unacceptable risk?

When you hear who posted this: I see nothing in the three parts of this article series to convince me that any sort of Singularity will happen before 2060-65. There is no way it can be sooner than that.

My question: Are you saying we will not see society-transforming nano replicator assembler devices before that time, or, we will not see a Kurzweil style Singularity before that time?