For these reasons, once a computer is able to match the subtlety and range of human intelligence, it will necessarily soar past it and then continue its double-exponential ascent.
A key question regarding the Singularity is whether the "chicken" (strong AI) or the "egg" (nanotechnology) will come first. In other words, will strong AI lead to full nanotechnology (molecular-manufacturing assemblers that can turn information into physical products), or will full nanotechnology lead to strong AI? The logic of the first premise is that strong AI would imply superhuman AI for the reasons just cited, and superhuman AI would be in a position to solve any remaining design problems required to implement full nanotechnology.
The second premise is based on the realization that the hardware requirements for strong AI will be met by nanotechnology-based computation. Likewise, the software requirements will be facilitated by nanobots that could create highly detailed scans of human brain functioning and thereby complete the reverse engineering of the human brain.

…

The reality is that progress in both areas will necessarily use our most advanced tools, so advances in each field will simultaneously facilitate the other. However, I do expect that full MNT (molecular nanotechnology) will emerge prior to strong AI, but only by a few years (around 2025 for nanotechnology, around 2029 for strong AI).
As revolutionary as nanotechnology will be, strong AI will have far more profound consequences. Nanotechnology is powerful but not necessarily intelligent. We can devise ways of at least trying to manage the enormous powers of nanotechnology, but superintelligence innately cannot be controlled.
Runaway AI. Once strong AI is achieved, it can readily be advanced and its powers multiplied, as that is the fundamental nature of machine abilities. As one strong AI immediately begets many strong AIs, the latter access their own design, understand and improve it, and thereby very rapidly evolve into a yet more capable, more intelligent AI, with the cycle repeating itself indefinitely.

…

Such robots may make great assistants, but who's to say that we can count on them to remain reliably friendly to mere biological humans?
Strong AI. Strong AI promises to continue the exponential gains of human civilization. (As I discussed earlier, I include the nonbiological intelligence derived from our human civilization as still human.) But the dangers it presents are also profound precisely because of its amplification of intelligence. Intelligence is inherently impossible to control, so the various strategies that have been devised to control nanotechnology (for example, the "broadcast architecture" described below) won't work for strong AI. There have been discussions and proposals to guide AI development toward what Eliezer Yudkowsky calls "friendly AI"30 (see the section "Protection from 'Unfriendly' Strong AI," p. 420). These are useful for discussion, but it is infeasible today to devise strategies that will absolutely ensure that future AI embodies human ethics and values.

While narrow AI is increasingly deployed to solve real-world problems and attracts most of the current commercial interest, the Holy Grail of artificial intelligence is, of course, strong AI—the construction of a truly intelligent machine. The realization of strong AI would mean the existence of a machine that is genuinely competitive with, or perhaps even superior to, a human being in its ability to reason and conceive ideas. The arguments I have made in this book do not depend on strong AI, but it is worth noting that if truly intelligent machines were built and became affordable, the trends I have predicted here would likely be amplified, and the economic impact would certainly be dramatic and might unfold in an accelerating fashion.
Research into strong AI has suffered because of some overly optimistic predictions and expectations back in the 1980s—long before computer hardware was fast enough to make true machine intelligence feasible. When reality fell far short of the projections, focus and financial backing shifted away from research into strong AI. Nonetheless, there is evidence that the vastly superior performance and affordability of today’s processors is helping to revitalize the field.
Research into strong AI can be roughly divided into two main approaches. The direct computational approach attempts to extend traditional, algorithmic computing into the realm of true intelligence. This involves the development of sophisticated software applications that exhibit general reasoning. A second approach begins by attempting to understand and then simulate the human brain. The Blue Brain Project,57 a collaboration between Switzerland’s EPFL (one of Europe’s top technical universities) and IBM, is one such effort to simulate the workings of the brain.

…

Once researchers gain an understanding of the basic operating principles of the brain, it may be possible to build an artificial intelligence based on that framework. This would not be an exact replication of a human brain; instead, it would be something completely new, but based on a similar architecture.
When might strong AI become reality—if ever? I suspect that if you were to survey the top experts working in the field, you would get a fairly wide range of estimates. Optimists might say it will happen within the next 20 to 30 years. A more cautious group would place it 50 or more years in the future, and some might argue that it will never happen.
True machine intelligence is an idea that, in many ways, intrudes into the realm of philosophy, and for some people, perhaps even religion.

A machine takeover is generally imagined as following a path of evolution to revolution. Computers eventually develop to the equivalent of human intelligence (“strong AI”) and then rapidly push past any attempts at human control. Ray Kurzweil explains how this would work. “As one strong AI immediately begets many strong AIs, the latter access their own design, understand and improve it, and thereby very rapidly evolve into a yet more capable, more intelligent AI, with the cycle repeating itself indefinitely. Each cycle not only creates a more intelligent AI, but takes less time than the cycle before it as is the nature of technological evolution. The premise is that once strong AI is achieved, it will immediately become a runaway phenomenon of rapidly escalating super-intelligence.” Or as the AI Agent Smith says to his human adversary in The Matrix, “Evolution, Morpheus, evolution, like the dinosaur.”
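Kurzweil’s clause “takes less time than the cycle before it” is what turns endless repetition into a finite-time runaway. A minimal sketch of the arithmetic, assuming purely for illustration that each cycle’s duration shrinks by a constant factor r:

    t_i = t_0 r^i, \qquad 0 < r < 1
    T = \sum_{i=0}^{\infty} t_0 r^i = \frac{t_0}{1 - r}

With t_0 = 1 year and r = 1/2, infinitely many improvement cycles complete within T = 2 years: indefinitely many repetitions, but a bounded total elapsed time.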

…

Despite all the robots having the same initial software, the researchers are seeing the emergence of “good” robots that cooperate and “bad” robots that constantly attack each other. There was even one robot that became the equivalent of artificially stupid or suicidal, that is, a robot that evolved to constantly make the worst possible decision.
This idea of robots one day being able to problem-solve, create, and even develop personalities past what their human designers intended is what some call “strong AI.” That is, the computer might learn so much that, at a certain point, it is not just mimicking human capabilities but has finally equaled, and even surpassed, its creators’ human intelligence.
This is the essence of the so-called Turing test. Alan Turing was one of the pioneers of AI, whose codebreaking work at Bletchley Park during World War II ran alongside early computers like Colossus that helped crack the German codes. His test is now encapsulated in a real-world prize that will go to the first designer of a computer intelligent enough to trick human experts into thinking that it is human.

…

Wireless capacity doubles every nine months. Optical capacity doubles every twelve months. The cost/performance ratio of Internet service providers is doubling every twelve months. Internet backbone bandwidth is doubling roughly every twelve months. The number of human genes mapped per year doubles every eighteen months. The resolution of brain scans (a key to understanding how the brain works, an important part of creating strong AI) doubles every twelve months. And, as a by-product, the number of personal and service robots has so far doubled every nine months.
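These doubling times can be put on a common footing by converting each into an equivalent annual growth factor, 2^(12/d) for a doubling time of d months. A small Python sketch, illustrative only (the function name is mine; the doubling times come from the list above):

    # Convert a doubling time in months to an equivalent annual growth factor.
    def annual_growth_factor(doubling_months: float) -> float:
        return 2 ** (12.0 / doubling_months)

    for name, months in [("wireless capacity", 9),
                         ("optical capacity", 12),
                         ("brain-scan resolution", 12),
                         ("genes mapped per year", 18)]:
        print(f"{name}: x{annual_growth_factor(months):.2f} per year")

A nine-month doubling time works out to roughly a 2.52x increase per year; an eighteen-month one to about 1.59x.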
The darker side of these trends has been exponential change in our capability not merely to create, but also to destroy. The modern-day bomber jet has roughly half a million times the killing capacity of the Roman legionnaire carrying a sword in hand. Even within the twentieth century, the range and effectiveness of artillery fire increased by a factor of twenty, antitank fire by a factor of sixty.

Introduction
Early AI researchers aimed at what was later called “strong AI,” the simulation of human-level intelligence. One of AI’s founders, Herbert Simon, claimed (circa 1957) that “… there are now in the world machines that think, that learn and that create.” He went on to predict that within 10 years a computer would beat a grandmaster at chess, would prove an “important new mathematical theorem,” and would write music of “considerable aesthetic value.” Science fiction writer Arthur C. Clarke predicted that “[AI] technology will become sufficiently advanced that it will be indistinguishable from magic” [1]. AI research had as its goal the simulation of human-like intelligence.
Within a decade or so, it became abundantly clear that the problems AI had to overcome for this “strong AI” to become a reality were immense, perhaps intractable.

…

The next major step in this direction was the May 2006 AGIRI Workshop, of which this volume is essentially a proceedings. The term AGI, artificial general intelligence, was introduced as a modern successor to the earlier strong AI.
Artificial General Intelligence
What is artificial general intelligence? The AGIRI website lists several features, describing machines
• with human-level, and even superhuman, intelligence.
• that generalize their knowledge across different domains.
• that reflect on themselves.
• and that create fundamental innovations and insights.
Even strong AI wouldn’t push for an intelligence this capable and this general. Can there be such an artificial general intelligence? I think there can be, but that it can’t be done with a brain in a vat, with humans providing input and utilizing computational output.

…

Machine learning algorithms may be applied quite broadly in a variety of contexts, but the breadth and generality in this case is supplied largely by the human user of the algorithm; any particular machine learning program, considered as a holistic system taking in inputs and producing outputs without detailed human intervention, can solve only problems of a very specialized sort.
Specified in this way, what we call AGI is similar to some other terms that have been used by other authors, such as “strong AI” [7], “human-level AI” [8], “true synthetic intelligence” [9], “general intelligent system” [10], and even “thinking machine” [11]. Though no term is perfect, we chose to use “AGI” because it correctly stresses the general nature of the research goal and scope, without committing too much to any theory or technique.
We will also refer in this chapter to “AGI projects.” We use this term to refer to an AI research project that satisfies all the following criteria:
1.

By around 2040 machine brains should, in theory, be able to handle around 100 trillion instructions per second. That’s about the same as a human brain. So what happens when machine intelligence starts to rival that of its human designers?
Before we descend into this rabbit hole, we should first split AI in two. “Strong AI” is the term generally used to describe true thinking machines. “Weak AI” (sometimes known as “Narrow AI”) is intelligence intended to supplement rather than exceed human intelligence. So far most machines are preprogrammed or taught logical courses of action. But in the future, machines with strong AI will be able to learn as they go and respond to unexpected events. The implications? Think of automated disease diagnosis and surgery, military planning and battle command, customer-service avatars, artificial creativity and autonomous robots that predict then respond to crime (a “Department of Future Crime”—see also Chapter 32 and Biocriminology).
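The “around 2040” figure above is simple compounding. A back-of-envelope Python sketch, in which the baseline machine, baseline year, and doubling period are all assumptions chosen for illustration rather than numbers taken from the text:

    import math

    # Back-of-envelope projection; every constant here is an assumption.
    TARGET_IPS = 100e12    # ~100 trillion instructions/sec, the brain-scale figure above
    baseline_ips = 1e10    # assumed: ~10 billion instructions/sec for a circa-2010 machine
    baseline_year = 2010   # assumed baseline year
    doubling_years = 2.25  # assumed doubling period

    doublings = math.log2(TARGET_IPS / baseline_ips)  # about 13.3 doublings needed
    print(f"around {baseline_year + doublings * doubling_years:.0f}")  # ~2040

Change any assumption and the date moves accordingly, which is one reason published estimates vary so widely.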

…

Sumner Redstone, chairman, Viacom and CBS
2002 “There is no doubt that Saddam Hussein has weapons of mass destruction.” Dick Cheney
Glossary
3D printer A way to produce 3D objects from digital instructions and layered materials dispersed or sprayed on via a printer.
Affective computing Machines and systems that recognize or simulate human affects or emotions.
AGI Artificial general intelligence, a term usually used to describe strong AI (the opposite of narrow or weak AI). It is machine intelligence that is equivalent to, or exceeds, human intelligence and it’s usually regarded as the long-term goal of AI research and development.
Ambient intelligence Electronic or artificial environments that recognize the presence of other machines or people and respond to their needs.
Artificial photosynthesis The artificial replication of natural photosynthesis to create or store solar fuels.

The down-to-earth clarity of Chace’s style will help take humanity into what could be a very violent, “Transcendence”-movie-like, real-life phase four. If you want to survive this coming fourth phase in the next few decades and prepare for it, you cannot afford NOT to read Chace’s book.
Prof. Dr. Hugo de Garis, author of The Artilect War, former director of the Artificial Brain Lab, Xiamen University, China.
Advances in AI are set to affect progress in all other areas in the coming decades. If this momentum leads to the achievement of strong AI within the century, then in the words of one field leader it would be “the biggest event in human history”. Now is therefore a perfect time for the thoughtful discussion of challenges and opportunities that Chace provides.
Surviving AI is an exceptionally clear, well-researched and balanced introduction to a complex and controversial topic, and is a compelling read to boot.
Seán Ó hÉigeartaigh, executive director, Cambridge Centre for the Study of Existential Risk
CALUM writes fiction and non-fiction, primarily on the subject of artificial intelligence.

…

Whether intelligence resides in the machine or in the software is analogous to the question of whether it resides in the neurons in your brain or in the electrochemical signals that they transmit and receive. Fortunately we don’t need to answer that question here.
ANI and AGI
We do need to discriminate between two very different types of artificial intelligence: artificial narrow intelligence (ANI) and artificial general intelligence (AGI (4)), which are also known as weak AI and strong AI, and as ordinary AI and full AI.
The easiest way to do this is to say that artificial general intelligence, or AGI, is an AI which can carry out any cognitive function that a human can. We have long had computers which can add up much better than any human, and computers which can play chess better than the best human chess grandmaster. However, no computer can yet beat humans at every intellectual endeavour.

…

But it is daft to dismiss as failures today’s best pattern recognition systems, self-driving cars, and machines which can beat any human at many games of skill.
Informed scepticism about near-term AGI
We should take more seriously the arguments of very experienced AI researchers who claim that although the AGI undertaking is possible, it won’t be achieved for a very long time. Rodney Brooks, a veteran AI researcher and robot builder, says “I think it is a mistake to be worrying about us developing [strong] AI any time in the next few hundred years. I think the worry stems from a fundamental error in not distinguishing the difference between the very real recent advances in a particular aspect of AI, and the enormity and complexity of building sentient volitional intelligence.”
Andrew Ng at Baidu and Yann LeCun at Facebook are of a similar mind, as we saw in the last chapter.
Less sceptical experts
However there are also plenty of veteran AI researchers who think AGI may arrive soon.

“Chapter eight is the deeply intertwined promise and peril in GNR [genetics, nanotechnology, and robotics] and I go into pretty graphic detail on the downsides of those three areas of technology. And the downside of robotics, which really refers to AI, is the most profound because intelligence is the most important phenomenon in the world. Inherently there is no absolute protection against strong AI.”
Kurzweil’s book does underline the dangers of genetic engineering and nanotechnology, but it gives only a couple of anemic pages to strong AI, the old name for AGI. And in that chapter he also argues that relinquishment, or turning our backs on some technologies because they’re too dangerous, as advocated by Bill Joy and others, isn’t just a bad idea, but an immoral one. I agree relinquishment is unworkable. But immoral?
“Relinquishment is immoral because it would deprive us of profound benefits.

…

* * *
So far we’ve explored three drives that Omohundro argues will motivate self-aware, self-improving systems: efficiency, self-protection, and resource acquisition. We’ve seen how all of these drives will lead to very bad outcomes without extremely careful planning and programming. And we’re compelled to ask ourselves, are we capable of such careful work? Do you, like me, look around the world at expensive and lethal accidents and wonder how we’ll get it right the first time with very strong AI? Three Mile Island, Chernobyl, Fukushima—in these nuclear power plant catastrophes, weren’t highly qualified designers and administrators trying their best to avoid the disasters that befell them? The 1986 Chernobyl meltdown occurred during a safety test.
All three disasters are what organizational theorist Charles Perrow would call “normal accidents.” In his seminal book Normal Accidents: Living with High-Risk Technologies, Perrow proposes that accidents, even catastrophes, are “normal” features of systems with complex infrastructures.

…

Yet the analogy doesn’t fit—advanced AI isn’t at all like fire, or any other technology. It will be capable of thinking, planning, and gaming its makers. No other tool does anything like that. Kurzweil believes that a way to limit the dangerous aspects of AI, especially ASI, is to pair it with humans through intelligence augmentation—IA. From his uncomfortable metal chair the optimist said, “As I have pointed out, strong AI is emerging from many diverse efforts and will be deeply integrated into our civilization’s infrastructure. Indeed, it will be intimately embedded in our bodies and brains. As such it will reflect our values because it will be us.”
And so, the argument goes, it will be as “safe” as we are. But, as I told Kurzweil, Homo sapiens are not known to be particularly harmless when in contact with one another, other animals, or the environment.

Like Watson, although vastly less ambitious, ours was a non-thinking, high-performing system. In the language of some AI scientists and philosophers of the 1980s, these systems would be labelled, perhaps a little pejoratively, as ‘weak AI’ rather than ‘strong AI’.8 Broadly speaking, ‘weak AI’ is a term applied to systems that appear, behaviourally, to engage in intelligent human-like thought but in fact enjoy no form of consciousness; whereas systems that exhibit ‘strong AI’ are those that, it is maintained, do have thoughts and cognitive states. On this latter view, the brain is often equated with the digital computer.
Today, fascination with ‘strong AI’ is perhaps more intense than ever, even though really big questions remain unanswered and unanswerable. How can we know if machines are conscious in the way that human beings are? How, for that matter, do we know that consciousness feels the same for all of us as human beings?

…

Undeterred by these philosophical challenges, books and projects abound on building brains and creating minds.9
In the 1980s, in our speeches, we used to joke about the claim of one of the fathers of AI, Marvin Minsky, who reportedly said that ‘the next generation of computers will be so intelligent, we will be lucky if they keep us around as household pets’.10 Today, it is no longer laugh-worthy or science-fictional11 to contemplate a future in which our computers are vastly more intelligent than us—this prospect is discussed at length in Superintelligence by Nick Bostrom, who runs the Future of Humanity Institute at the Oxford Martin School at the University of Oxford.12
Ironically, this growth in confidence in the possibility of ‘strong AI’, at least in part, has been fuelled by the success of Watson itself. The irony here is that Watson in fact belongs in the category of ‘weak AI’, and it is precisely because it cannot meaningfully be said to think that the system is not deemed very interesting by some AI scientists, psychologists, and philosophers. For pragmatists (like us) rather than purists, whether Watson is an example of ‘weak’ or ‘strong’ AI is of little moment. Pragmatists are interested in high-performing systems, whether or not they can think. Watson did not need to be able to think to win.
Nor does a computer need to be able to think or be conscious to pass the celebrated ‘Turing Test’.

The current state of the art in AI does in fact enable systems to also learn from their own experience. The Google self-driving cars learn from their own driving experience as well as from data from Google cars driven by human drivers; Watson learned most of its knowledge by reading on its own. It is interesting to note that the methods deployed today in AI have evolved to be mathematically very similar to the mechanisms in the neocortex.
Another objection to the feasibility of “strong AI” (artificial intelligence at human levels and beyond) that is often raised is that the human brain makes extensive use of analog computing, whereas digital methods inherently cannot replicate the gradations of value that analog representations can embody. It is true that one bit is either on or off, but multiple-bit words easily represent multiple gradations and can do so to any desired degree of accuracy.
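The point about multiple-bit words is just that quantization error falls exponentially with word length: an n-bit word distinguishes 2^n gradations, so each added bit roughly halves the worst-case rounding error. A minimal Python illustration (the helper function is mine):

    # An n-bit word represents 2**n levels of the unit interval, so the
    # worst-case rounding error is about half of one step.
    def quantize(x: float, bits: int) -> float:
        levels = 2 ** bits
        return round(x * (levels - 1)) / (levels - 1)

    x = 0.123456789
    for bits in (4, 8, 16, 32):
        print(f"{bits:2d} bits: error = {abs(x - quantize(x, bits)):.2e}")

“Any desired degree of accuracy” then just means choosing a long enough word.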

The view that free will is compatible with determinism is called compatibilism. One of the strongest challenges to compatibilism is the consequence argument.
What is the consequence argument? What response can you give to the consequence argument based on what you have read in this book?
Example 10-7.
In the philosophy of mind, Strong AI is the position that an appropriately programmed computer could have a mind in the same sense that humans have minds.
John Searle presented a thought experiment called The Chinese Room, intended to show that Strong AI is false. You can read about it at http://en.wikipedia.org/wiki/Chinese_room.
What is the system reply to the Chinese Room argument? How does what you have learned about complexity science influence your reaction to the system response?
Chapter 11. Case Study: Sugarscape
Dan Kearney, Natalie Mattison, and Theo Thompson
The Original Sugarscape
Sugarscape is an agent-based model developed by Joshua M.

Now that we have made solid progress, let us not risk losing our respectability.” One result of this conservatism has been increased concentration on “weak AI”—the variety devoted to providing aids to human thought—and away from “strong AI”—the variety that attempts to mechanize human-level intelligence.73
Nilsson’s sentiment has been echoed by several others of the founders, including Marvin Minsky, John McCarthy, and Patrick Winston.74
The last few years have seen a resurgence of interest in AI, which might yet spill over into renewed efforts towards artificial general intelligence (what Nilsson calls “strong AI”). In addition to faster hardware, a contemporary project would benefit from the great strides that have been made in the many subfields of AI, in software engineering more generally, and in neighboring fields such as computational neuroscience.

The device is inherently of no value to us (internal memo at Western Union, 1878).
Somehow, the impossible always seems to become the possible. In the world of artificial intelligence, that next phase of development is called artificial general intelligence (AGI), or strong AI. In contrast to narrow AI, which cleverly performs a specific limited task, such as machine translation or auto navigation, strong AI refers to “thinking machines” that might perform any intellectual task that a human being could. Characteristics of a strong AI would include the ability to reason, make judgments, plan, learn, communicate, and unify these skills toward achieving common goals across a variety of domains, and commercial interest is growing.
In 2014, Google purchased DeepMind Technologies for more than $500 million in order to strengthen its already strong capabilities in deep learning AI.

…

His algorithmic programming requires him to complete the vessel’s mission near Jupiter, but for national security reasons he cannot disclose the true purpose of the voyage to the crew. To resolve the contradiction in his program, he attempts to kill the crew. As narrow AI becomes more powerful, robots grow more autonomous, and AGI looms large, we need to ensure that the algorithms of tomorrow are better equipped to resolve programming conflicts and moral judgments than was HAL.
It’s not that any strong AI would necessarily be “evil” and attempt to destroy humanity, but in pursuit of its primary goal as programmed, an AGI might not stop until it had achieved its mission at all costs, even if that meant competing with or harming human beings, seizing our resources, or damaging our environment. As the perceived risks from AGI have grown, numerous nonprofit institutes have been formed to address and study them, including Oxford’s Future of Humanity Institute, the Machine Intelligence Research Institute, the Future of Life Institute, and the Cambridge Centre for the Study of Existential Risk.

Simpler survival machines — plants, for instance — never achieve the heights of self-redefinition made possible by the complexities of your robot; considering them just as survival machines for their comatose inhabitants leaves no patterns in their behavior unexplained.
If you pursue this avenue, which of course I recommend, then you must abandon Searle's and Fodor's "principled" objection to "strong AI." The imagined robot, however difficult or unlikely an engineering feat, is not an impossibility — nor do they claim it to be. They concede the possibility of such a robot, but just dispute its "metaphysical status"; however adroitly it managed its affairs, they say, its intentionality would not be the real thing. That's cutting it mighty fine. I recommend abandoning such a forlorn disclaimer and acknowledging that the meaning such a robot would discover in its world, and exploit in its own communications with others, would be exactly as real as the meaning you enjoy.

…

This difficulty had been widely seen as systematically blocking any argument from Gödel's Theorem to the impossibility of AI. Certainly everybody in AI has always known about Gödel's Theorem, and they have all continued, unworried, with their labors. In fact, Hofstadter's classic Gödel, Escher, Bach (1979) can be read as the demonstration that Gödel is an unwilling champion of AI, providing essential insights about the paths to follow to strong AI, not showing the futility of the field. But Roger Penrose, Rouse Ball Professor of Mathematics at Oxford, and one of the world's leading mathematical physicists, thinks otherwise. His challenge has to be taken seriously, even if, as I and others in AI are convinced, he is making a fairly simple mistake. When Penrose's book appeared, I pointed out the problem in a review: his argument is highly convoluted, and bristling with details of physics and mathematics, and it is unlikely that such an enterprise would succumb to a single, crashing oversight on the part of its creator — that the argument could be 'refuted' by any simple observation.

…

As a product of biological design processes (both genetic and individual), it is almost certainly one of those algorithms that are somewhere or other in the Vast space of interesting algorithms, full of typographical errors or "bugs," but good enough to bet your life on — so far. Penrose sees this as a "far-fetched" possibility, but if that is all he can say against it, he has not yet come to grips with the best version of "strong AI."
3. THE PHANTOM QUANTUM-GRAVITY COMPUTER: LESSONS FROM LAPLAND
I am a strong believer in the power of natural selection. But I do not see how natural selection, in itself, can evolve algorithms which could have the kind of conscious judgements of the validity of other algorithms that we seem to have.
— ROGER PENROSE 1989, p. 414
I don't think the brain came in the Darwinian manner.

And there is nothing to prevent an AI’s cognitive capability being expanded simply by increasing its hardware capacity.’
‘This all sounds like an argument for stopping people working on strong AI?’ asked Matt. ‘Although I guess that would be hard to do. There are too many people working in the field, and as you say, a lot of them show no sign of understanding the danger.’
‘You’re right,’ Ivan agreed, ‘we’re on a runaway train that cannot be stopped. Some science fiction novels feature a powerful police force – the Turing Police – that keeps watch to ensure that no-one creates a human-level artificial intelligence. But that’s hopelessly unrealistic. The prize – both intellectual and material – for owning an AGI is too great. Strong AI is coming, whether we like it or not.’
TEN
‘But surely, if you’re right about all this,’ Leo protested, sounding genuinely concerned, ‘people – governments, voters – will wake up when it gets closer, and slow it down or stop it?’

Astronomical sky surveys are the stereotypical example of big data that must be mined to extract discoveries regarding new asteroids or new planets from indirect data.
Eye tracking A skill enabling a robot to visually examine the scene before it, identify the faces in the scene, mark the location of the eyes on each face, and then find the irises so that the gaze directions of the humans are known. Humans are particularly good at this even when we face other people at acute angles.
Hard AI Also known as strong AI, this embodies the AI goal of going all the way toward human equivalence: matching natural intelligence along every possible axis so that artificial beings and natural humans are, at least from a cognitive point of view, indistinguishable.
Laser cutting A rapid-prototyping technique in which flat material such as plastic or metal lies on a table and a high-power laser is able to rapidly cut a complex two-dimensional shape out of the raw material.

Learning to detect a cat in full frontal position after 10 million frames drawn from Internet videos is a long way from understanding what a cat is, and anybody who thinks that we’ve “solved” AI doesn’t realize the limitations of the current technology.
To be sure, there have been exponential advances in narrow-engineering applications of artificial intelligence, such as playing chess, calculating travel routes, or translating texts in rough fashion, but there’s been scarcely more than linear progress in five decades of working toward strong AI. For example, the different flavors of intelligent personal assistants available on your smartphone are only modestly better than Eliza, an early example of primitive natural-language-processing from the mid-1960s. We still have no machine that can, for instance, read all that the Web has to say about war and plot a decent campaign, nor do we even have an open-ended AI system that can figure out how to write an essay to pass a freshman composition class or an eighth-grade science exam.

…

AI can easily look like the real thing but still be a million miles away from being the real thing—like kissing through a pane of glass: It looks like a kiss but is only a faint shadow of the actual concept.
I concede to AI proponents all of the semantic prowess of Shakespeare, the symbol juggling they do perfectly. Missing is the direct relationship with the ideas the symbols represent. Much of what is certain to come soon would have belonged in the old-school “Strong AI” territory.
Anything that can be approached in an iterative process can and will be achieved, sooner than many think. On this point I reluctantly side with the proponents: exaflops in CPU+GPU performance, 10K resolution immersive VR, personal petabyte databases . . . here in a couple of decades. But it is not all “iterative.” There’s a huge gap between that and the level of conscious understanding that truly deserves to be called Strong, as in “Alive AI.”

When pressed, the computer scientists, roboticists, and technologists offer conflicting views. Some want to replace humans with machines; some are resigned to the inevitability—“I, for one, welcome our insect overlords” (later “robot overlords”) was a meme popularized by The Simpsons—and some just as passionately want to build machines to extend the reach of humans. The question of whether true artificial intelligence—the concept known as “Strong AI” or Artificial General Intelligence—will emerge, and whether machines can do more than mimic humans, has also been debated for decades. Today there is a growing chorus of scientists and technologists raising new alarms about the possibility of the emergence of self-aware machines and their consequences. Discussions about the state of AI technology today veer into the realm of science fiction or perhaps religion.

…

The experiment was made possible by Google’s immense computing resources that allowed the researchers to turn loose a cluster of sixteen thousand processors on the problem—which of course is still a tiny fraction of the brain’s billions of neurons, a huge portion of which are devoted to vision.
Whether or not Google is on the trail of a genuine artificial “brain” has become increasingly controversial. There is certainly no question that the deep learning techniques are paying off in a wealth of increasingly powerful AI achievements in vision and speech. And there remains in Silicon Valley a growing group of engineers and scientists who believe they are once again closing in on “Strong AI”—the creation of a self-aware machine with human or greater intelligence.
Ray Kurzweil, the artificial intelligence researcher and barnstorming advocate for technologically induced immortality, joined Google in 2013 to take over the brain work from Ng, shortly after publishing How to Create a Mind, a book that purported to offer a recipe for creating a working AI. Kurzweil, of course, has all along been one of the most eloquent backers of the idea of a singularity.

Thirdly, that intelligence, from its simplest manifestation in a squirming worm to self-awareness and consciousness in sophisticated cappuccino-sipping humans, is a purely material, indeed biological, phenomenon. Finally, that if a material object called ‘brain’ can be conscious then it is theoretically feasible that another material object, made of some other material stuff, can also be conscious. Based on those four propositions, empiricism tells us that ‘strong AI’ is possible. And that’s because, for empiricists, a brain is an information-processing machine, not metaphorically but literally. We have several billion cells in our body.27 If we adopt an empirical perspective, the scientific problem of intelligence – or consciousness, natural or artificial – can be (re)defined as a simple question: how can several billion unconscious nanorobots arrive at consciousness?

…

The pioneers of AI explored many ideas including using algorithms for solving general logical problems, or simulating parts of the brain using artificial neural nets. And although they produced some very capable systems, none of them could arguably be called intelligent. Of course, how one defines intelligence is also crucial. For the pioneers of AI, ‘artificial intelligence’ was nothing less than the artificial equivalent of human intelligence, a position nowadays referred to as ‘strong AI’. An intelligent machine ought to be one that possessed general intelligence, just like a human. This meant that the machine ought to be able to solve any problem using first principles and experience derived from learning. Early models of general problem-solving were built, but could not scale up. Systems could solve one general problem but not any general problem.6 Algorithms that searched data in order to make general inferences failed quickly because of something called ‘combinatorial explosion’: there were simply too many interrelated parameters and variables to calculate after a number of steps.

Suicide by the numbers.” A glass appeared by my right hand. “Way I see it, we’ve been fighting a losing battle here. Maybe if we hadn’t put a spike in Babbage’s gears he’d have developed computing technology on an ad-hoc basis and we might have been able to finesse the mathematicians into ignoring it as being beneath them—brute engineering—but I’m not optimistic. Immunizing a civilization against developing strong AI is one of those difficult problems that no algorithm exists to solve. The way I see it, once a civilization develops the theory of the general purpose computer, and once someone comes up with the goal of artificial intelligence, the foundations are rotten and the dam is leaking. You might as well take off and nuke them from orbit; it can’t do any more damage.”
“You remind me of the story of the little Dutch boy.”

The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible.”9 Gordon Moore, whose name seems destined to be forever associated with exponentially advancing technology, is likewise skeptical that anything like the Singularity will ever occur.10
Kurzweil’s timeframe for the arrival of human-level artificial intelligence has plenty of defenders, however. MIT physicist Max Tegmark, one of the co-authors of the Hawking article, told The Atlantic’s James Hamblin that “this is very near-term stuff. Anyone who’s thinking about what their kids should study in high school or college should care a lot about this.”11 Others view a thinking machine as fundamentally possible, but much further out. Gary Marcus, for example, thinks strong AI will take at least twice as long as Kurzweil predicts, but that “it’s likely that machines will be smarter than us before the end of the century—not just at chess or trivia questions but at just about everything, from mathematics and engineering to science and medicine.”12
In recent years, speculation about human-level AI has shifted increasingly away from a top-down programming approach and, instead, toward an emphasis on reverse engineering and then simulating the human brain.

If this were then subjected to an appropriate course of education one would obtain the adult brain.’
This proposed necessity of having to raise robots might lead you to the conclusion that truly intelligent robots will be few and far between. But the thing about robots is you can replicate them. Once we’ve got one intelligent robot brain, we can copy it to another machine, and another, and another. The robots have finally arrived, bringing an explosion of ‘strong AI’. Of course, it may not just be us (the humans) doing the copying, it might be the robots themselves.
And because technology improves at a startling rate (way faster than biological evolution), one has to consider the possibility that things won’t stop there. Once we achieve a robot with human-level (if not human-like) intelligence, it won’t be very long until robot cognition outstrips the human mind – marrying the human-like intelligence with instant recall, flawless memory and the number-crunching ability of Deep Blue.

I know for a fact that the new economies and new values that we discuss within Innotribe are driven by social media. Social media is creating new currencies and new economic models, and this will be very big and very important in the two to three years downstream from now. The question for the banks is how will they position in this new world of peer-to-peer currencies in social media. That is going to be a key question for banks in innovation for the next few years.
The other area is what I call strong AI. This is a modern way of looking at AI. The old way was mechanical and thought of this as expert systems. Today, we have this enormous computational power in our hands now, and we should make a big splash around this for the next four or five years.
So social data, social media, alternative currencies and peer-to-peer payments will dominate for the near term, and then big data and AI in four or five years from now.

Vendors like IBM, Cognitive Scale, SAS, and Tibco are adding new cognitive functions and integrating them into solutions. Deloitte is working with companies like IBM and Cognitive Scale to create not just a single application, but a broad “Intelligent Automation Platform.”
Even when progress is made on these types of integration, the result will still fall short of the all-knowing “artificial general intelligence” or “strong AI” that we discussed in Chapter 2. That may well be coming, but not anytime soon. Still, these short-term combinations of tools and methods may well make automation solutions much more useful.
Broadening Application of the Same Tools
In addition to employing broader types of technology, organizations that are stepping forward are using their existing technology to address different industries and business functions.

Gödel’s second incompleteness theorem—showing that no consistent formal system rich enough to express arithmetic can prove its own consistency—has been construed as limiting the ability of mechanical processes to comprehend levels of meaning that are accessible to our minds. The argument over where to draw this distinction has been going on for a long time. Can machines calculate? Can machines think? Can machines become conscious? Can machines have souls? Although Leibniz believed that the process of thought could be arithmetized and that mechanism could perform the requisite arithmetic, he disagreed with the “strong AI” of Hobbes that reduced everything to mechanism, even our own consciousness or the existence (and corporeal mortality) of a soul.
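For reference, the theorem being invoked is usually stated as follows (a standard formulation, not a quotation from this text): if T is a consistent, recursively axiomatizable theory containing elementary arithmetic, then

    T \nvdash \mathrm{Con}(T),

where \mathrm{Con}(T) is the arithmetical sentence formalizing “T is consistent.” The consistency hypothesis matters: an inconsistent theory proves everything, including its own consistency statement.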
“Whatever is performed in the body of man and of every animal is no less mechanical than what is performed in a watch,” wrote Leibniz to Samuel Clarke.51 But, in the Monadology, Leibniz argued that “perception, and that which depends upon it, are inexplicable by mechanical causes,” and he presented a thought experiment to support his views: “Supposing that there were a machine whose structure produced thought, sensation, and perception, we could conceive of it as increased in size with the same proportions until one was able to enter into its interior, as he would into a mill.

In these years all the bad trends converged in “perfect storm” fashion, leading to a rise in average global temperature of five K, and sea level rise of five meters—and as a result, in the 2120s, food shortages, mass riots, catastrophic death on all continents, and an immense spike in the extinction rate of other species. Early lunar bases, scientific stations on Mars.
The Turnaround: 2130 to 2160. Verteswandel (Shortback’s famous “mutation of values”), followed by revolutions; strong AI; self-replicating factories; terraforming of Mars begun; fusion power; strong synthetic biology; climate modification efforts, including the disastrous Little Ice Age of 2142–54; space elevators on Earth and Mars; fast space propulsion; the space diaspora begun; the Mondragon Accord signed. And thus:
The Accelerando: 2160 to 2220. Full application of all the new technological powers, including human longevity increases; terraforming of Mars and subsequent Martian revolution; full diaspora into solar system; hollowing of the terraria; start of the terraforming of Venus; the construction of Terminator; and Mars joining the Mondragon Accord.

Say to them rather: “I’m sorry, I’ve never seen a human brain, or any other intelligence, and I have no reason as yet to believe that any such thing can exist. Now please explain to me what your AI does, and why you believe it will do it, without pointing to humans as an example.” Planes would fly just as well, given a fixed design, if birds had never existed; they are not kept aloft by analogies.
So now you perceive, I hope, why, if you wanted to teach someone to do fundamental work on strong AI—bearing in mind that this is demonstrably a very difficult art, which is not learned by a supermajority of students who are just taught existing reductions such as search trees—then you might go on for some length about such matters as the fine art of reductionism, about playing rationalist’s Taboo to excise problematic words and replace them with their referents, about anthropomorphism, and, of course, about early stopping on mysterious answers to mysterious questions