Q1). Aren’t you going a bit too far, in extrapolating full-blown AI from a bunch of graphs?

A1). Of course, we can’t be 100% sure that AI will be developed in the near future. However, increasing computer processing power, combined with improved brain-scanning methods, seems likely to produce at least some form of computer-based intelligence within the next century. Molecular nanotechnology, in particular, would enable us to harness massive amounts of new processing power, as well as to map the brain far more thoroughly. And even if molecular nanotechnology never becomes available, more conventional techniques are also making rapid progress: by some estimates, today’s top supercomputers already have enough processing power to match the human brain, and comparable machines are expected to become cheap and commonly available within a few decades. Projects to build full-scale brain simulations are currently underway; for instance, IBM’s Blue Brain project has built a model of a complete neocortical column, with the ultimate goal of simulating the whole human brain. Researchers are also working on brain prostheses: the hippocampus and the cerebellum, for instance, are sufficiently well understood that the biggest obstacle to building prostheses is connecting them to the rest of the brain, not programming them.

Progress made towards reverse-engineering the brain will also help AI research by making researchers themselves more intelligent. For instance, IQ test scores seem to correlate highly with working memory capacity. As the neural basis of working memory becomes clear, it might become possible for us to use that knowledge to increase our own intelligence. Even if that proves impossible, algorithms extracted from the brain can be applied to conventional computer systems, making them more effective at helping us conduct research.

Even if we exclude the possibility of creating artificial intelligence by reverse-engineering the brain, increasing amounts of processing power are likely to make it easier to create AIs through evolutionary programming. After all, the human mind is intelligent, yet it was never designed by anyone; it evolved through genetic drift and selection pressures. It may not be strictly necessary for us to understand how a mind works, as long as we can build a system with enough computing power to simulate evolution and produce an artificial mind optimized for the conditions we want it to operate in.
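The core idea above can be sketched in a few lines. The following is a minimal, illustrative genetic algorithm (the function names and parameters are my own, not taken from any specific project): it evolves random bitstrings against a fitness function using truncation selection and mutation, finding good solutions without any model of *why* they are good.

```python
import random

def evolve(fitness, genome_len=20, pop_size=50, generations=100, mutation_rate=0.02):
    """Evolve random bitstrings toward higher fitness via selection and mutation."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        # Rank the population by fitness and keep the fitter half (truncation selection).
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        # Refill the population with mutated copies of the survivors.
        children = [
            [1 - bit if random.random() < mutation_rate else bit for bit in parent]
            for parent in survivors
        ]
        pop = survivors + children
    return max(pop, key=fitness)

# OneMax toy problem: the "environment" simply rewards genomes containing many 1s.
# The algorithm discovers high-scoring genomes with no understanding of the goal.
best = evolve(fitness=sum)
```

Nothing in `evolve` encodes what a good solution looks like; all of that knowledge lives in the fitness function, which is exactly the property the argument relies on.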

While nothing is ever completely certain, these factors point strongly enough in the direction of computer-based intelligences to make the issue more than worth our attention.

Q2). Haven’t Kurzweil’s graphs for technological progress been shown to be unrealistic?

A2). Many of Ray Kurzweil’s predictions about technology have been shown to be right, and Kurzweil has produced copious amounts of evidence as to why he makes the predictions that he does. However, Kurzweil’s graphs don’t have to be accurate for an AI-based Singularity to happen; one could occur even if technological progress were slowing down or had stalled entirely.

Q3). Isn’t AI just something out of a sci-fi movie?

A3). Everyone agrees that we haven’t achieved full-scale AI yet. However, this says very little about whether full-scale AI is feasible: history is full of technologies that were dismissed as science fiction right up until they were built. In 1968, walking on the Moon was something that only happened in science fiction. In 1956, human spaceflight was something that only happened in science fiction. And in 1980, easy, instant worldwide communication was something that only happened in science fiction.

Q4). Isn’t it convenient that big changes always seem to be predicted to happen during the lifetimes of the people predicting them? For example, religious doomsayers always predict the apocalypse during their own lifetime (and that of the audience).

A4). Even if the Singularity takes thousands of years, it’s still a worthwhile goal for the human species, and we still need to pursue it. Very few Singularitarians are in it just for the sake of personal gain; even if they knew with 100% certainty that they would die before the Singularity, most Singularitarians would still see it as a worthwhile goal.

Q5). Isn’t the Singularity just the Rapture of religious texts, dressed in different clothes to appeal to self-proclaimed atheists?

A5). Unlike any of the various Raptures, the Singularity is a technological event, caused by ordinary humans doing ordinary science and building ordinary technology that follows the ordinary laws of physics. It does not involve any religious or divine powers. It doesn’t involve outside intervention by superior or alien beings. And it’s completely within our control as a species: it will only happen when we go out and make it happen.

Q7). Haven’t much simpler AI systems (chess programs, self-driving cars) been shown to be really stupid in the past?

A7). Most successful AI software is not called “artificial intelligence software”; many AI scientists have wryly observed that as soon as AI works, it stops being called AI. In addition, successful AI software is frequently used by corporations or programmers rather than individual consumers, further reducing its public visibility and creating the impression that AI is dumb, inefficient, and a waste of resources.

Q8). What if there’s a war, resource exhaustion, or other crisis, which puts off the Singularity for a long time?

A8). With very few exceptions, the past few centuries have seen exponential technological growth, and a corresponding increase in the general standard of living. It would require a truly huge disruption to halt this progress. Even WWII, the single most catastrophic event in modern human history, didn’t slow the march of technology. Nor did WWI, the American Civil War, the Great Depression, the Dust Bowl, the Napoleonic Wars, the ozone hole, the Russian Revolution, the Cold War, the fall of communism, or countless other wars and disasters.