Every now and then I indulge myself in a depressing little calculation: How many books will I
be able to read during my lifetime? Even at a book every single day---over an adult reading life
of, say, 60 years---I'd only cover about 22,000 volumes. Not so bad, I guess---if I could really
keep up such a pace. Most of the time I'm hard put to read an entire book every two or three
days, particularly if it's something more than fluff. At an average pace of one book every three
days, I'd read about 7,000 books in all. But sometimes I get so busy that I don't finish a whole
book in a week. . . . So, in the four or five decades which, if I'm lucky, remain to me, I'm not
likely to cover more than another 4,000 or 5,000 books---total.

Viewed from that perspective, the future---my personal future, anyway---looks disturbingly
short. It's rather unsettling. And I got a very similar feeling from reading Ray Kurzweil's The
Age of Spiritual Machines and Hans Moravec's Robot.

I didn't expect to have this response. These are two books I thought would invigorate and
inspire me. They're like nonfiction sf, futuristic speculations by two of the leading lights of the
artificial intelligence community. Kurzweil is the inventor of a variety of amazing devices, most
notably optical character recognition (OCR) software that reads typed, printed, and even
handwritten text. If you've ever used a flatbed scanner, you've encountered Kurzweil's
handiwork. Moravec, meanwhile, has been involved in some of the most intriguing efforts in
robotics over the past thirty years, particularly a series of attempts---remarkably successful, as
he describes them in Robot---to give mechanical devices the ability to see and navigate
through the world by themselves. (Most recently, these projects culminated in an AI-guided van
that drove from coast to coast, steering itself more than 98% of the time.)

I've been a champion of AI as long as I can remember. I cheered Deep Blue on when it
defeated world chess champion Garry Kasparov. I've vigorously debated anyone who denied the basic
feasibility of machine minds, and I've been exhilarated by the advancements in the field which
have come with the explosion in computing power and memory over the past decade. So I
opened The Age of Spiritual Machines, a follow-up to Kurzweil's 1990 book The Age of
Intelligent Machines, with nothing but eager anticipation.

I was not disappointed---at least not in the sense I might have feared. Kurzweil is anything
but conservative in his predictions for AI over the next 100 years. First, he assures us, a
computer with the full computing capacity of the human brain will be available for a mere
$1,000 as soon as 2019. With that power, and continuing improvements in brain scanning
techniques such as magnetic resonance imaging, Kurzweil says a machine with the full mental
capacities of human beings---the first full AI---will appear by 2029. And that's just the
beginning. All this computer power and complete knowledge of the brain's architecture and
functions will allow individual humans to upload their minds into the growing global network,
to live essentially forever among the amazing machine minds that will have evolved there.
Nanotechnology (you've all heard of that, I trust) will provide these minds---human and
machine---with clouds of microscopic nanobots they can assemble into whatever sorts of mobile
bodies they desire. By 2099, Kurzweil says, the majority of people will have made the leap into
virtual immortality, and even the few holdouts will probably be heavily modified with neural
implants and other add-ons.

No one can hope to be very specific about the technologies of a hundred years from now,
and Kurzweil's predictions beyond the next couple of decades are consequently vague.
What will the world look like if his suggestions are anywhere near the truth? What will these
future minds be like, given the huge additions of speed and memory they'll have to work with?
Kurzweil offers only the barest of hints. Most of these come in sections of imaginary dialogue at
the end of each chapter. In these, he questions a citizen of this future world (dubbed Molly) at
its various stages of development. She tells him about her first reactions to AI, her doubts about
leaving her familiar flesh for the virtual world of the net, and her subsequent evolution into
something only vaguely resembling what we think of as a human mind. (Indeed, in the final
chapter, it's clear that she's holding converse with Kurzweil using a tiny fraction of her larger
self, and severely modifying her expressions and assumptions in order to make herself
comprehensible. She can simulate an old-style human personality, but she is one no longer.)

Much of Kurzweil's material---nanotechnology, uploaded minds, virtual reality---will be
familiar to sf readers. But there are some ideas that haven't been much treated in fiction yet
(quantum computing, evolutionary algorithms, etc.), and Kurzweil's discussion of these
concepts offers just the kind of sfnal kick I had expected. (Reading about quantum-computing
researchers using a cup of hot coffee as their computing medium, I couldn't help but think of
wizards scrying the future in a bowl of water. Arthur C. Clarke's famous dictum concerning
advanced technology and magic strikes again.)

The Age of Spiritual Machines claims a closer relationship to reality than most sf,
however. Kurzweil works like a draft horse during the opening chapters to convince us that
this is not a work of wild imagination, rooted only loosely in the trends and concerns of today,
but a plausible (perhaps even inevitable) extrapolation from processes going on right now. And
he gives us some good reasons to believe him. His record of prediction over a 10-year period
(based on speculations from The Age of Intelligent Machines) is remarkable, particularly
compared to the success rates of other futurists. Simple math backs up his earliest assertions---
unless something drastic and unlikely happens, I'll bet we do have $1,000 computers with the
processing and memory capacity of the human brain within twenty years. Kurzweil admits that
his speculations further down the line may well be off in the details---perhaps the first full AI
will arrive in 2032 instead of 2029---but on the larger points, he's absolutely sure, and it's hard
to argue with him.

Hard, but not impossible. For all his marshalling of trends, I think Kurzweil underestimates
the difficulty of certain aspects of his scenario. Computing power may grow at the speed he
suggests, but will the reverse-engineering of the brain keep pace? Will nanotechnology ever
really produce the wonders that he foresees? Sf readers have been down Disappointment Road
before. Weren't we supposed to be vacationing on the Moon by now? The study of the brain
may prove as difficult as the construction of a grand unified theory in physics, and
nanotechnology may be as impossible (at least in the short term) as interstellar travel. That
would put a major crimp in Kurzweil's future.

Worse, perhaps, is Kurzweil's near-total avoidance of social and economic issues. How much
will it cost to upload oneself into the virtual world, and who will be able to afford it? He
devotes some attention to the inevitable antitechnological reactions these developments will
occasion, but socioeconomic factors will likely have a lot more impact on the deployment of
intelligent machines and virtual reality in the next century. There's already a huge gap between
the technologies available to wealthy Americans and impoverished Ecuadorians, for example.
It'll only get worse as computing power explodes exponentially.

But the most critical issue Kurzweil overlooks is the inevitable question "Why would we
want to?" He relies rather too heavily on arguments of inevitability, and assumes too easily the
appeal of his imagined technologies. It won't be just neo-Luddites balking at a virtual
existence---lots of people will find the prospect of uploading much too adventurous. (All the
more so if the nanotech lags behind, and there are no mobile bodies for a machine mind to
inhabit.) Kurzweil gives too little credit to the power of human choice in the face of
technological developments. Some applications may be inevitable, given the insatiable curiosity
of our species, but the widespread distribution of machine intelligences or uploaded
communities can't be taken for granted. Perhaps they'll never be anything more than a
footnote.

Some similar problems crop up in Hans Moravec's Robot, which takes off from assumptions
much like Kurzweil's. Moravec puts off the arrival of machines with human-level intelligence
until 2050, largely because he believes that intelligent machines will also need to reproduce the
locomotive and manipulative abilities of human beings in the physical world. (Hence his title.)
This may be due to his long-standing involvement in robotics research---in other words, it
could be tunnel vision---but his arguments suggest that the abilities of robots to assist us in the
physical environment will be a crucial factor in the economic and social acceptance of
intelligent machines. He recognizes that for machine intelligence to become more than a
laboratory curiosity it will have to be of some practical use.

Moravec's got a point. It's much easier to imagine the development of machine minds through
a series of intermediate steps. And so Robot updates some familiar predictions, such as robot
house cleaners, robot cars, robot factories, etc. Moravec brings more hard experience to his
speculations than Kurzweil---he never underestimates the challenges of creating machines with
even marginally intelligent real-world functionality. Reading these predictions from someone
who has dedicated his career to issues such as machine vision and robot navigation makes it
much easier to accept at least the technical feasibility of Moravec's future.

Moravec's far-future speculations, however, are much more radical---and worrisome---than
Kurzweil's. Once we've developed robots of sufficient ability, they'll begin to make
improvements to themselves. (Even now, he points out, chip designers rely heavily on
computer technology in designing the next generation of computer parts.) An explosion will
ensue. Machines will become rapidly more intelligent and capable than humans. They'll be able
to maintain civilization all on their own with a tiny fraction of their attentions, and they'll
inevitably turn their super-minds in other directions. They'll venture out into the solar system,
and later the larger cosmos, a giant "bubble" of Mind that will eventually consume the entire
universe.

Obviously Moravec doesn't think this will happen by year's end. But once our machines
reach that critical point, subsequent developments will occur quite rapidly. The question, of
course, is what becomes of us? And that's where Robot presents its greatest challenge to the
reader. As many sf writers have speculated over the years, Moravec thinks intelligent machines
may well replace their human creators. And the idea seems not to bother him at all.

"Rather quickly, they could displace us from existence," Moravec writes. "I'm not as
alarmed as many by the latter possibility . . ." His sanguinity derives in part from his belief that
our superintelligent successors will treat their forebears gently, allowing us to live out our lives
on a peaceful, depopulated, ecologically balanced Earth. And even if the machines' interests
become "incompatible with old earth's continued existence," Moravec assures us that the
machines will preserve us "in some form." Most of all, though, what spares Moravec any pangs
of regret for humanity's passing is the belief that these machines are like our children, not
genetically but culturally. What pain should there be in bequeathing the planet and the
universe to our "mind children"?

To my own surprise, I found myself really struggling with the idea. The cool, rational part
of me followed Moravec's arguments and agreed, in principle. But, just as specifying the
number of books remaining to me brings on a surge of existential panic, so Moravec's future
(and, to a lesser extent, Kurzweil's) aroused in me a powerful resistance. No moon colonies? No
Mars bases? No warp drive and humanity spreading out into the galaxy? Though I know that I
won't live to see that anyway, I find it hard to accept a future in which, were anyone to boldly
go anywhere, it certainly wouldn't be us, feeble biological beings that we are.

Is this just sentimentality, a nostalgia for a future that can never be? Or is there some better
justification for opposing our own extinction? I've been pondering this question ever since I
finished Robot, and I'm still not sure. But I lean toward the latter position. No species
should accept extinction without a fight, and no more should we give up our dreams of the
stars.

I'm hardly alone in this opinion. Sf has frequently confronted similar scenarios and almost
invariably come down in favor of battling for survival rather than simply rolling over. Think of
Fred Saberhagen's Berserker series, or (more recently) Linda Nagata's Vast. In film, there's
The Terminator and The Matrix. Even sf novels that accept the notion of a future
cosmos devoured by machine intelligence---in which we might live, if at all, as simulations
running in some corner of the cosmic Mind---seek some measure of dignity and meaning for
our future selves. (Robert Charles Wilson's Darwinia, which I reviewed here some months
ago, attempts exactly this.) I don't think we'll give up hope quite so easily as Moravec expects.

Ultimately Moravec's book is vulnerable to the same question Kurzweil's was: Why would
we ever choose a future like this? Perhaps we'll choose not to build these ever-so-helpful robots.
(Perhaps Moravec's own book will plant the seeds of mistrust.) And, even if we do, perhaps
we'll make sure that they don't get any funny ideas about shoving us out of the driver's seat . . .

*          *          *

Arthur C. Clarke's latest book, Greetings, Carbon-Based Bipeds!, a compilation of some
of his best nonfiction pieces from the full span of his career, provides very interesting
comparisons and contrasts in this context. From his earliest published writings---reviews of
books on rocketry---through his most recent newspaper editorials and satellite-broadcast
addresses, Clarke displays an optimistic humanism that's so notably missing from Kurzweil's
and Moravec's books. "The spirit of curiosity and wonder is the driving force behind all of
Man's achievements," he wrote in 1946. "If it ever fails, the story of our race will be coming to
an end."

In all of his visions of the future, Clarke never strays far from the question "why?" "It is one
thing to show how spaceflight may be achieved," he wrote in 1955; "it is quite another to show
why." (The same could be said of thinking machines.) Though his essays are full of practical
justifications for the various positions he defends---manned space flights, universal education, a
planetary defense system against asteroids and comets---his ultimate reasons lie deeper, beyond
material considerations. In 1962's "Rocket to the Renaissance," he wrote: "The creation of
wealth is not to be despised, but in the long run the only human activities really worthwhile are
the search for knowledge and the creation of beauty." And he is never shy about hoping for
significant improvements in human life: "When a world economic system is functioning
smoothly, when all standards of living are approaching the same level, when no national
armaments are left . . ."

The utopian dreams of a naïve young man? Perhaps (though Clarke was 29 at the time, and
expresses similar aspirations in his writings up to the present day). But such idealism, such
boundless hope, is one of the characteristics that brought me to sf in the first place. The
resignation evident in Kurzweil and Moravec, their unexamined assumption that the ways and
means of American capitalism will forever dominate our culture (even relations between
Moravec's superintelligent machines are described in terms of contemporary corporate politics),
strike me as a sad indication of the diminishment of our vision.

On the other hand, there are striking parallels between Clarke's essays and the views of
Kurzweil and Moravec. In "The Obsolescence of Man" (1962), Clarke pursued a
nearly identical argument: "To put it bluntly and brutally, the machine is going to take over."
Given Clarke's unblinking rationalism, it's not tremendously surprising that he considered this
possibility---would you expect less from the creator of HAL 9000? But what is strange is
Clarke's attitude toward this fate: He's as accepting of it as Moravec. "No individual exists
forever," he writes, "why should we expect our species to be immortal?"

This position seems to contradict Clarke's unflagging belief in the capabilities of human
beings. An essay from some years later---"The Mind of the Machine" (1972)---reveals a slightly
different take on the subject. Here Clarke, like Moravec, foresees a world of humans freed from
the toil of industrial society by the services of intelligent machines, and instead of extinction,
Clarke posits a distant coexistence, with humanity dwelling on an edenic Earth while "the
culture of the ultraintelligent machines" goes on its own "unfathomable way." This is
something of a compromise; machines will outdo humans, but they won't necessarily extinguish
us or crush our spirits.

Clarke revisits the issue in the more recent essay "The Coming Cyberclasm" (1995).
Here, in light of the "zombification" he sees in the slaves of the Sony Walkman, Clarke deplores
the possibility of a human race grown inert and dependent upon its machines for support. In the
end, he says, "The machines may unplug us," and that "would serve us right."

The essays in Greetings, Carbon-Based Bipeds! offer a reminder that one of the best
purposes of futuristic speculation, whether fictional or not, is the outlining of possible futures
from which we may choose. Science and technology, Clarke tells us, "decide the kind of futures that are
possible: human wisdom must decide which are desirable." This is the dimension that's missing
from Kurzweil's and Moravec's books. It's a dimension that sf naturally brings to bear, and one
can only hope that our visionary humanists---Robinson, Cadigan, Bear, et al.---will map out
this territory while it's still fictional, and help us claim a more desirable future while we still
can.