It’s the time of year when many of you are renewing your Foresight memberships, and helping us meet our $30,000 goal for our Challenge Grant by December 31: http://www.foresight.org/challenge

I believe that the next decade or two will be the period when nanotechnology and AI, along with some of the other technologies of the kind Foresight was founded to watch, really begin to have major effects on the world outside the labs. Here are some thoughts on the subject (this essay is also posted on Nanodot). We hope to expand and deepen the analysis here over the coming year, and we hope you can be a part of it.

Foresight — with Peripheral Vision

Back in the 60s, Marvin Minsky, John McCarthy, and others presided over
a burgeoning field of study, Artificial Intelligence. Using machines that were
pitifully small and underpowered by today’s standards, they made remarkable
strides toward a visionary goal: creating a machine that could think and converse
like a human being.

Then an unfortunate thing happened. In the 70s, the amount of money
going to AI research began to attract political attention, as money will do.
The people not getting the money used political skills to have it redistributed.
The result was that funding shifted from the core of AI to applications — in
fact, the infamous Mansfield Amendment made it illegal for ARPA to fund any
basic research at all! The decade of the 80s was seen as the decade of the
expert system, where techniques developed in AI were used to tackle real-world
problems — but within the field, it was known as “the AI winter.”

What had happened was that in the shift to applications, work shifted to
concerns that were peripheral to the key elements of the original vision. A
machine that plays a good game of chess is not necessarily intelligent. We call
a good human chess player intelligent because the human learned the game by
watching, imitating, modeling, and in general building up a skill. The machine
got the skill by having human programmers figure it out and build it in. It’s the
ability to learn and build skills that constitutes intelligence, not simply having
them. And AI had shifted to be largely a field which built skills directly, instead
of one which studied how to build a machine that could learn them.

Does this sound familiar? It’s essentially the same thing that happened to
nanotechnology two decades later. The core of the generative vision — productive
machinery built to atomic precision — fell out of favor, and was even attacked
by the people who thought they could, or at least claimed they could, produce
the same results by short-circuiting the process.

But of course in both cases the result is something evolutionary instead of
revolutionary. And in both cases, perhaps surprisingly, the missing element is
the same: autogeny, the capacity of a system to build, improve, or reproduce
itself.

Walk into any consumer electronics store and you can buy a GPS unit that
would have flabbergasted any AI researcher in the 60s. It knows the map of the
whole continent. It can plan routes, estimate times, and plot your course while
you drive, about as well as a good human navigator — the errors are different
but comparable. It speaks to you in English, and you can speak to it and, for a
limited set of commands, it understands. The GPS seems a tour de force of the
kind of capabilities AIers were trying to build, and it is a damned useful gadget.
But having a GPS won’t help you one single bit when you try to build the next
“AI” device. Neither will a chess-playing program or house-cleaning robot.

Similarly, the products of current-day “nanotechnology” are beginning to
approach, in some respects, some of the possibilities pointed out by Feynman and
Drexler. The density of memory and circuitry is rapidly approaching molecular
scale. An iPod can hold the text of ten tons of books. New materials are being
fabricated whose properties will likely enable single-stage-to-orbit spacecraft.
But while both are damned useful gadgets, neither is going to help you build a
cell repair machine.
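The iPod claim is easy to sanity-check with a back-of-envelope calculation. The figures below (book weight, text size per book) are my own rough assumptions for illustration, not numbers from the essay:

```python
# Back-of-envelope check of "an iPod can hold the text of ten tons of books".
# All figures are rough illustrative assumptions.

BOOK_WEIGHT_KG = 0.5    # a typical paperback
BOOK_TEXT_MB = 1.0      # a few hundred pages of plain text
TEN_TONS_KG = 10_000    # metric tons

books = TEN_TONS_KG / BOOK_WEIGHT_KG      # 20,000 books
total_gb = books * BOOK_TEXT_MB / 1024    # roughly 20 GB of plain text

print(f"{books:.0f} books, about {total_gb:.0f} GB of text")
```

On these assumptions, ten tons of books comes to roughly 20 GB of text, comfortably within the capacity of the larger hard-disk iPods of the time.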

But evolutionary advance, along with general scientific and technological
progress, ultimately accumulates enough new capabilities for autogenous
systems to be built. One can’t be certain, and wishcasting is always a
pitfall to guard against, but there seems to be some movement back to the
center. In AI, Marvin Minsky could say “AI has been brain-dead since the 70s” and
then be invited to keynote the leading AI conference. Ben Goertzel, originator
of the term “artificial general intelligence” and leading proponent of a return to
AI’s roots, reports that he is no longer laughed off the stage at mainstream AI
meetings. There is an AAAI-sanctioned AGI conference series now going into
its second year.

In nanotechnology, the cracks in the glass ceiling are appearing in the form
of the Battelle/Foresight Roadmap for Productive Nanosystems and some grant
funding for mechanosynthesis work.

I’ll go out on a limb and say I expect the loop to close in AI sometime in the
next decade and in nanotech in the decade after that. The world will become
an interesting place. To foresee with even a cloudy lens, we need to look at
a variety of technologies that could support an autogenous feedback loop and
thus have a revolutionary impact. Here are some candidates:

• Biotech is already built on an autogenous base, the reproductive capacity
of life.

• Software likewise is a substrate capable of supporting autogeny at a
number of levels short of true AI. The intersection of software, datacomm, and
human-based memetics over the past couple of decades has been explosive.

• Robotics: replacing human workers in physical factories, particularly in
factories that make humanoid worker robots, would take humans out of the
productive loop, enabling a takeoff.

• Desktop fabricators: the same idea in a smaller package. I expect the
2010s to be a decade of experimentation with them the way the 80s were
for PCs. There seems to be a fairly straightforward path from fabs to
nanofactories, with increased value added at each stage.
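Why autogeny, rather than any particular gadget, is the revolutionary ingredient can be shown with a toy model (my own illustration, not from the essay): improvement applied from outside grows a capability linearly, while improvement a system applies to itself compounds.

```python
# Toy model contrasting evolutionary (externally driven) progress with an
# autogenous loop, where each generation's capability feeds the next.
# The rate and step count are arbitrary illustrative values.

RATE = 0.10   # 10% improvement per generation
STEPS = 50

evolutionary = 1.0   # external effort adds a fixed increment each step
autogenous = 1.0     # the system's own capability does the improving

for _ in range(STEPS):
    evolutionary += RATE       # linear: gains do not feed back
    autogenous *= 1 + RATE     # compounding: gains do feed back

print(f"evolutionary: {evolutionary:.1f}x, autogenous: {autogenous:.1f}x")
```

After fifty generations the externally improved system is a few times better, while the self-improving one is better by two orders of magnitude; the gap only widens from there.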

The convergence of these, along with AI and nanotech and who knows how
many others I haven’t thought of, will form the core of technological capability
in the twenty-first century. I suspect that a study of the properties of general
autogenous systems will be invaluable in understanding it.

Once this takes hold, virtually everything will begin changing at Moore’s
Law rates. Let’s hope we have enough foresight that the changes will be
improvements.

————————————————————————

Comments are welcome — you can email me at josh@foresight.org, or go to our blog Nanodot (http://foresight.org/nanodot) and respond in the comments field for “Foresight — with Peripheral Vision.”

Please, if you can, chip in on the Challenge Grant at http://www.foresight.org/challenge. Every dollar you donate will be automatically doubled, so Foresight can do twice as much to influence our future in the positive direction that we all hope for.

Interesting stuff. This kind of mirrors my own thinking, only I don’t think nanorobotics will come a decade after AI. With AI, it will be simple to design and simulate nanorobotics in silico; once that happens, people will pull out all the stops to make nanorobotics. I’m certain of this. But we’ll see soon enough.

Dr. Hall’s vision of Nanofuture is provocative. He leads me to wonder why we have not yet reached Nanofuture. In an attempt to answer my own question, I think we need to look harder at the present and stop trying to predict the future of nanotechnology. We’re starting to believe that our dreams are reality – they are just our dreams. 1984 never happened as George Orwell envisioned, and Stanley Kubrick’s 2001 was light-years off on the level of technology we hoped to achieve. Nanofuture may not unfold as we envision. Maybe it’s our way of keeping hope alive when we don’t see any solution in our mind’s eye? Maybe we’re just trying to justify our failings and feel good about continuing business as usual? If we keep this up, we may never see our dreams realized.

Perhaps we should look more closely at non-traditional solutions that exist today, sitting outside the traditional channels of discovery? If you were starving, you wouldn’t care where your next meal came from, or how it was served. Maybe we should consider reengineering our system of innovation from top to bottom? How much of the peer review system is self-serving? Perhaps we need to learn a few lessons from the recent economic circus? For starters, most people will take care of their own interests before any social obligations; it’s human nature. New financial rules are needed to work with reality, to prevent a recurrence of this mayhem. With nanotech, like any innovation, the needs of the many outweigh the needs of the few. But human nature says that scientists, like any other people, will promote their self-interests before those of society.

In the hyper-connectivity of our society, new knowledge grows. Innovation can come from any direction, so we need to be more open, with less prejudice. We cannot be aware of what we are not aware of. Take innovation for what it is and worry about its source later. A good place to start would be the financial seeding of many nanotech startups with many small grants, not a few big ones for ‘the deserving’. Water the whole garden of possibilities and see which seeds show promise, because the rules for innovation are changing.