The Kurzweil Phenomenon

A reader has remarked that
the essence of Kelly's
What Technology Wants
is contained in Ray Kurzweil's The Singularity Is Near
[1].
He suggested that it be reviewed here.

All I knew about Kurzweil was
that he was reported to believe
that we are on the verge of eternal life:
recent advances in medicine can lengthen life enough
that exponentially accelerating further advances
will suffice to extend it further still,
and so on into eternity.

The second Kurzweil blip on my radar was the
unfavourable
review
of
[2] by Colin McGinn in the New York Review.

So that was two negative blips.
By the time I had finished reading the McGinn review
my interest was piqued.
You see,
it was clear that McGinn was trying to make the review
of the kind that people call "devastating".
I became curious why he was trying so hard.

A sample of McGinn's review shows
an academic, a world authority
on the Philosophy of Mind, confronted with the impertinence
of a computer engineer barging in where angels
fear to tread:

So he is a computer engineer specializing in word recognition
technology, with a side interest in bold predictions about future
machines. He is not a professional neuroscientist or psychologist
or philosopher. Yet here we have a book purporting to reveal -- no
less -- "the secret of human thought." Kurzweil is going to tell us,
in no uncertain terms, "how to create a mind": that is to say, he
has a grand theory of the human mind, in which its secrets will be
finally revealed.

It turns out that Kurzweil is remarkably well read,
taking into account that he is a mere computer engineer.
To me this computer engineer
comes across as sufficiently well-read and intelligent
for me to suspect
he enjoys provoking those across the chasm that separates
the "professional philosophers" from people who actually do
something. Consider the title of
[2]
How To Create A Mind: the secret of human thought revealed.
After reading some of the philosophers of mind,
I came to the conclusion that "mind" is such an elusive
concept as to be useless
and that the same holds for "thought".
Sending your book into the world under this banner
looks to me like a tongue-in-cheek provocation,
and that is a good start.

It turns out to be a good start for a good read,
if one keeps in mind that nobody seems to be able
to explain the meaning of "mind" or "thought",
even though we all use these words.
Kurzweil indicates where he stands with respect
to Philosophy of Mind by quoting
Marvin Minsky: "Mind is simply what brains do."
Don't misread: I don't think that Minsky believes
that what brains do is simple.

While academics laboured on dissertations showing that
mind is not this and thought is not that,
Kurzweil invented, filed patents, and launched and managed companies
that took computer applications to new heights of sophistication.
For those interested in what has been happening and
what might happen next,
this book is a good introduction.

So, what has been happening in computers?
Here are only some remarks about the role of the general-purpose
computer, about growth of computers,
and about expert systems.

For decades computers have been used to
recognize printed characters or spoken words
by means of general-purpose hardware running software
based on networks of
neuron-like data structures.
The data structures are abstractions based on what was
known about neurons in 1940.
Though much progress has been made in this way,
it is time we update the tried-and-true paradigm,
because by now we know a lot more about neurons.
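The neuron-like data structure referred to above can be sketched in a few lines. The following is a toy illustration of the 1940s-era abstraction (a weighted sum passed through a threshold, in the spirit of McCulloch and Pitts), not code from any real recognition system; the weights and threshold are invented for the example.

```python
# A minimal sketch of the 1940s-style neuron abstraction:
# each unit computes a weighted sum of its inputs and
# "fires" (outputs 1) when the sum reaches a threshold.

def neuron(inputs, weights, threshold):
    """Fire iff the weighted sum of inputs reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# A single such unit suffices to realize an AND gate:
assert neuron([1, 1], [1.0, 1.0], 2.0) == 1
assert neuron([1, 0], [1.0, 1.0], 2.0) == 0
```

Networks of such units, with weights adjusted by a learning rule, are the basis of the character and speech recognizers mentioned above.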

It has always been tempting
to build
special-purpose computers by incorporating
at the level of the silicon more
of the knowledge we have about how neurons work.
For example
Carver Mead has described a way of using computer technology
at the chip level that is more like neurons than simulations
of neurons on general-purpose computers
[3].
Even though this is a quarter of a century old,
it is the most recent work that Kurzweil cites on the possibility
of special-purpose hardware.
So far artificial intelligence has piggy-backed on the
enormous increase in power of general-purpose hardware.
It makes sense not to do anything about this
until the hardware becomes a limiting factor.

Special-purpose hardware is one direction in which there
is room to grow.
Beyond that there is a more radical departure from
general-purpose computers.
Even Mead's radical departure from the purely digital
assumes top-down control over the
layout of the chip.
Biological brains achieve their high degree of miniaturization
by growing,
that is, being built bottom-up at the molecular level.
The equivalent of chip layout is achieved
by being exposed to an environment that causes an effective
pattern of interconnections to form.
That is, by learning.
Doing this in silicon or in more suitable semiconductors
is a possibility when Mead's combination of analog and digital
runs out of steam.

By the 1970s computer programs ("expert systems") were built
that incorporated the knowledge and performance
of an expert in a specialized area, say,
gas chromatography.
The fact that the effort was successful
deservedly received a lot of attention.
Yet the practical impact was limited because
it was not enough to be an expert in
gas chromatography to build such a system.
For that a team was needed that not only included
such an expert, but also experts in other fields.

The additional expertise concerned how to transfer knowledge
from the expert to a form digestible by computer software.
This makes demands on both the software and the form of the
knowledge.
This is a far cry from the expert verbally expressing her knowledge
to a class of students.
So far it has not been possible for a computer system to pick
up anything from the unstructured informal kind of information
that humans learn from.
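The kind of hand-encoding described above can be made concrete with a toy forward-chaining rule system. The rules below are invented for illustration; they are not real gas-chromatography knowledge, and no 1970s expert system is being quoted here.

```python
# A toy rule-based "expert system" in the 1970s style:
# the expert's knowledge must first be hand-encoded as explicit
# if-then rules -- the machine-digestible form whose construction
# made building such systems a team effort.

RULES = [
    ({"peak_is_sharp", "retention_time_short"}, "compound_is_volatile"),
    ({"compound_is_volatile", "mass_is_low"}, "likely_solvent"),
]

def infer(facts):
    """Forward-chain: apply rules until no new conclusions appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = infer({"peak_is_sharp", "retention_time_short", "mass_is_low"})
assert "likely_solvent" in derived
```

Note that every fact and every rule had to be typed in explicitly; nothing in such a system is picked up from informal text.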

This is why a system such as "Watson",
designed to compete in the game show "Jeopardy",
is so intriguing:
not all of its information needed to be explicitly
entered by humans.
A significant part came from "Watson" being set loose
to read massive amounts of text aimed at human readers,
including, in this case, the whole of Wikipedia.
Of course "reading" in this case only meant to pick up
lots of associations;
not what we would call "knowledge"
[4].

The Singularity Is Near
is the earlier of the two Kurzweil books that I'm
reporting on.
It is the one with the wider scope:
Chapter Four is a preview of
How To Create a Mind.
"Singularity" has a wide scope indeed:
it begins long before humans, or even life, existed
and projects to a future in which intelligence
has evolved beyond recognition.
In Chapter One Kurzweil distinguishes the following
six epochs in cosmic history:

1. Physics and Chemistry
2. Biology and DNA
3. Brains
4. Technology
5. The Merger of Human Technology with Human Intelligence
6. The Universe Wakes Up

In Chapter Two Kurzweil starts by remarking on the acceleration
of development that is already apparent in the six epochs:
evolution spent more time in epoch 1 than in epoch 2
and so on down the list.
We are in epoch 4 and within this epoch acceleration is apparent.
By the late 20th century we have Moore's law,
which has predicted the increase in performance
of computer chips remarkably well for decades.
The increase is exponential:
it is not that a constant amount of performance
has been added,
but that performance has been multiplied by a near-constant
factor year over year.
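The distinction between the two kinds of growth is easy to make concrete. The factor 1.6 below is illustrative only: it is roughly what a doubling every 18 months implies per year, not a figure taken from the book.

```python
# Linear growth: a constant amount is ADDED each year.
# Exponential growth: performance is MULTIPLIED by a
# near-constant factor each year.

years = 10
linear = [1 + 0.6 * t for t in range(years + 1)]       # +0.6 per year
exponential = [1.6 ** t for t in range(years + 1)]     # x1.6 per year

print(f"after {years} years: linear {linear[-1]:.1f}, "
      f"exponential {exponential[-1]:.1f}")
```

After ten years the multiplicative series has outgrown the additive one by more than an order of magnitude, which is why Kurzweil's graphs look so dramatic.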
In Chapter Two we find an intriguing variety of graphs
showing that exponential increases in performance
have been found in a variety of other technologies.

Will a computer be able to achieve human intelligence?
This would be an important milestone in the development of
technology.
A simple-minded approach is first to get a computer to
reach the computational capacity of the human brain
(for now, no computer does)
and by that time to make sure we can suitably program
such a computer.
An extrapolation of Moore's law says yes,
computer power will grow to equal that of the human brain,
and says that we don't have to wait very long.
But it is also likely that Moore's law will break down,
as exponential laws always do, and that this will happen
before that time.

It is wise to remember that Moore's law has applied to
a narrow paradigm:
a general-purpose processor with von Neumann architecture
on a single chip with a two-dimensional layout.
Chapter Three explores several alternative paradigms
according to which computer hardware can develop in the future.
These are likely to pick up where the current paradigm leaves off.
It is also wise to remember how the human brain was designed,
or, rather, that it wasn't designed at all,
but got assembled randomly.
With a modicum of design human intelligence will be achieved with
less computing power.

With inexorable logic Chapter Four considers how to program
a computer with the computational capacity of the human brain.
This is a crucial consideration:
where technology has been extremely successful in hardware,
the opposite has been the case in software.

Chapter Five concerns three overlapping technologies:
genetics, nanotechnology, and robotics.
If we substitute computing for robotics and forget about
nanotechnology, then we have just two overlapping technologies
with the striking phenomenon that the same rapidly increasing
performance holds in two different domains.

The thesis of the book is that the various exponential
rates of improvement lead to an imminent discontinuity in
history.
But could it be that the exponential phenomenon is restricted
to the high-technology world?
What's it going to do about the fact that a large proportion
of humanity suffers from a lack of food and other
essentials?

Kurzweil does address this question and points out that
genetics has the potential to drastically improve food production.
He blithely ignores that science is not enough.
For example, in 2002 a drought in Africa threatened the lives
of 15 million people.
A 15,000 ton aid shipment of US corn,
about one third of it genetically modified,
was turned away by Zimbabwe on the grounds
that some kernels might be planted and thus endanger future
exports to the European Union, where sale of
GM corn is forbidden.
Part of the shipment was offered to neighbouring Zambia,
which had accepted such shipments in several earlier years.
In 2002 it was rejected even though three million Zambians
were facing famine.
President Levy Mwanawasa declared: "Simply because my people
are hungry is no justification to give them poison,
to give them food that is intrinsically dangerous to their
health" [5].
Did anybody tell the President that Americans have been eating that
stuff for years with no apparent ill effects?

I recommend The Singularity Is Near: when humans transcend biology, and wonder what it takes for humans to transcend stupidity.

References

[1]
The Singularity Is Near:
when humans transcend biology
by Ray Kurzweil, Viking, 2005.
[2]
How To Create a Mind:
the secret of human thought revealed
by Ray Kurzweil,
Viking, 2012.
[3]
Analog VLSI and Neural Systems
by Carver Mead, Addison-Wesley, 1989.
[4]
I don't believe Kurzweil's assertion on page 159
"Watson has already read hundreds of millions of pages
on the Web and mastered the knowledge
[italics mine] contained in these documents."
I wonder whether the hundreds of millions include
Project Gutenberg's works of Aristotle (mastered that knowledge?).
If only Kurzweil had taken the time to read
the books that go out under his name.
I can only explain lapses like this by the "author" dictating
the text and leaving text entry, copy editing, and proofreading
to the publisher.
[5]
Whole Earth Discipline by Stewart Brand.
Viking Penguin, 2009. This quote on page 154 of the
2010 Penguin edition.