Learning Machines

by Andy Oram

I started to get interested in robotics when I realized that an
intimate relationship with a robot would probably be part of my life
at some point. In Japan, robots are already being used in elder
care, and the Neurobotics Laboratory
at Carnegie Mellon University conducts research to develop robots
that can aid disabled people.

Why is this so important? By the time my generation retires, there may be
about three working people for each one of us, and two of those three
could well be employed taking care of us. We need to automate in order
to remain productive.

I will explore the social implications of what journalist Phillip Longman
calls the global baby bust (Foreign Affairs, May/June 2004)
at the end of this article, but first I'll report on some of the
interesting activities going on currently in robotics, based on a
conversation I had last weekend with Geoffrey Gordon,
a research scientist focusing on reinforcement learning at CMU. To
me, Geoff described his specialty as applying algorithms found in
Artificial Intelligence to robotics.

To get some idea of the mind-boggling constellation of skills that go
into robotics, try browsing the current projects
of the CMU Robotics Institute.

Software and networking

One major branch of CMU research goes into helping robots understand
their environment by communicating with other robots and drawing
conclusions from their combined data. Examples include figuring out
how an area is lit or heated from the values of sensors scattered
across the area. (Lighting is a popular research subject because it's
so easy for the researcher to control.)

Algorithms for this kind of distributed machine learning can get
surprisingly complicated. Some traditional AI algorithms assume that
samples can be collected in arbitrary order, but in dynamic robotic
systems, with actors in constant motion, the order in which events
occur must be respected.
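To make the contrast concrete, here is a toy illustration of my own (not an algorithm from the conversation): a batch mean treats all samples as interchangeable, while an exponential moving average respects arrival order by weighting recent readings more heavily, which matters when the robot's environment keeps changing.

```python
# Illustrative contrast between order-blind and order-aware estimates.

def batch_mean(samples):
    """Order-blind: every sample counts equally, old or new."""
    return sum(samples) / len(samples)

def online_ema(samples, alpha=0.5):
    """Order-aware: an exponential moving average discounts stale data."""
    estimate = samples[0]
    for x in samples[1:]:
        estimate = alpha * x + (1 - alpha) * estimate
    return estimate

readings = [10.0, 10.0, 10.0, 20.0, 20.0]  # the environment shifted midway
batch = batch_mean(readings)   # 14.0: blind to the shift
recent = online_ema(readings)  # 17.5: tracks the recent change
```

The same sequence in a different order would give the batch mean an identical answer but change the moving average, which is exactly the point.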

Among the constraints is keeping communication to a minimum, because
wireless communication consumes far more power than the CPU does.

In adapting typical AI algorithms such as neural networks, one has to
deal with unusually high failure rates of individual network
nodes. Sensors in real-life environments are fragile, and their
wireless communications are subject to noise and interference.

Links between nodes in real-life sensor networks are asymmetric, and
their quality fluctuates over time. It's important to know
the quality of connections in order to find the most robust network
architecture, with the least transmission costs, that gives each node
the greatest chance of getting the data it needs.
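One simple way to turn measured link qualities into a backbone is a maximum-reliability spanning tree. This is my own sketch, not an algorithm from CMU: it greedily keeps the most reliable links first (Kruskal's algorithm with union-find), which maximizes the product of link reliabilities along the chosen tree.

```python
# Toy sketch: pick a robust spanning backbone from estimated link
# reliabilities (values in (0, 1]) using Kruskal's algorithm.

def robust_spanning_tree(nodes, links):
    """links: list of (reliability, a, b) tuples; returns the kept links."""
    parent = {n: n for n in nodes}

    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]  # path compression
            n = parent[n]
        return n

    tree = []
    # Consider the most reliable links first.
    for rel, a, b in sorted(links, reverse=True):
        ra, rb = find(a), find(b)
        if ra != rb:          # keep the link only if it joins two fragments
            parent[ra] = rb
            tree.append((rel, a, b))
    return tree

links = [
    (0.9, "s1", "s2"), (0.4, "s1", "s3"),
    (0.8, "s2", "s3"), (0.7, "s3", "s4"),
    (0.2, "s2", "s4"),
]
backbone = robust_spanning_tree(["s1", "s2", "s3", "s4"], links)
# backbone keeps the 0.9, 0.8, and 0.7 links; the flaky 0.2 link is dropped
```

Because maximizing a product of reliabilities is the same as maximizing the sum of their logarithms, the standard maximum-spanning-tree greedy step gives the optimal tree here; a real deployment would also have to re-run this as link estimates drift.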

While mesh networks (relatively unstructured collections of systems
with multiple redundant connections) are getting a lot of publicity, Geoff's colleague Carlos
E. Guestrin has found that hierarchical trees work better for many
applications. Each node exchanges its data with nodes above and
beneath it; experimentation shows that each node ends up with a
reasonably accurate approximation of what is happening in its
environment. The tree scales better than a mesh, and one can predict
the overhead required for each operation. A tree, however, has to be
programmed to reconfigure itself quickly, because the node failures
already mentioned can cause major breaks in the tree.
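The up-and-down exchange can be sketched in a few lines. This is an illustrative simplification of tree aggregation in general, not Guestrin's actual algorithm: readings flow up to the root as (sum, count) pairs, and the combined estimate flows back down, so every node learns a global quantity while talking only to its parent and children.

```python
# Minimal tree aggregation: one upward pass, one downward pass.

class SensorNode:
    def __init__(self, reading):
        self.reading = reading    # this node's local sensor value
        self.children = []
        self.estimate = None      # filled in by the downward pass

    def aggregate_up(self):
        """Return (sum, count) over this subtree: one message per link."""
        total, count = self.reading, 1
        for child in self.children:
            s, c = child.aggregate_up()
            total += s
            count += c
        return total, count

    def broadcast_down(self, estimate):
        """Push the global estimate back down the tree."""
        self.estimate = estimate
        for child in self.children:
            child.broadcast_down(estimate)

root = SensorNode(20.0)
a, b = SensorNode(22.0), SensorNode(18.0)
c = SensorNode(24.0)
root.children = [a, b]
a.children = [c]

total, count = root.aggregate_up()
root.broadcast_down(total / count)  # every node now holds 21.0
```

The fragility Geoff describes shows up here directly: if the link to node `a` dies, its entire subtree vanishes from the aggregate until the tree reconfigures.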

The problems of peer-to-peer data collection in robotic systems are
philosophically interesting, because one realizes that no single actor
can possess the whole truth, and that one's understanding of the truth
is always restricted and distorted. In fact, the term "distorted" is
misleading, because it suggests that there is some absolute, ideal
truth we are approximating, a concept that is not particularly
helpful.

Hardware

Commoditization is a common theme in business and consumer
electronics, but sometimes it invades academia as well. Lab
researchers don't always want to build everything from scratch, even
if doing so makes a machine that's cheaper or more appropriate to the
application. Geoff Gordon would rather install something off the shelf
and get down to the work of his specialty faster.

Intel makes a low-cost chip called the XScale
for embedded systems, used in many robots. But it is not well suited
to Geoff's applications because it lacks a floating-point unit.
Activities such as mapping need trigonometric functions. While Geoff
could work around the missing FPU with such things as tables, that
would waste time and introduce the risk of bugs. So Pentiums rule the
robots. Linux is plenty small for these embedded systems, even without
custom recompilation.
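The lookup-table workaround Geoff mentions looks roughly like this. On a real XScale it would be fixed-point C with a table precomputed offline; plain Python serves here just to show the idea of trading memory and a little accuracy for the missing hardware.

```python
# Sketch of a sine lookup table with linear interpolation, the classic
# substitute for trig instructions on FPU-less hardware. The table is
# built with math.sin for convenience; an embedded target would ship
# the precomputed values instead.
import math

TABLE_SIZE = 256
SINE_TABLE = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def table_sin(angle):
    """Approximate sin(angle) by interpolating between table entries."""
    pos = (angle / (2 * math.pi)) % 1.0 * TABLE_SIZE  # position in table units
    i = int(pos)
    frac = pos - i
    a = SINE_TABLE[i % TABLE_SIZE]
    b = SINE_TABLE[(i + 1) % TABLE_SIZE]
    return a + frac * (b - a)
```

With 256 entries the interpolation error stays below about 10^-4, which is fine for mapping; the real costs are the table memory and, as Geoff notes, the time spent writing and debugging code like this instead of doing research.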

I asked for impressive examples of commercially available robots, and
Geoff expressed a high opinion of the Sony AIBO
toy. He said it was gratifying to see such a powerful and
well-designed robot in everyday settings.

Applications

I asked Geoff whether any big breakthroughs seemed to be imminent in
robotics. He said ruefully that one aspect of AI and robotics is that
big breakthroughs often seem imminent, only to prove much more
difficult than researchers expected. Progress tends to affect some
deep aspect of the problem. For instance, he suggested that, in his
own field of reinforcement learning, researchers were learning how to
abstract elements of reinforcement. This essentially is the production
of generalized algorithms and libraries that can be applied to many
areas that have, up to now, reinvented each other's wheels.

Navigation is a well-known robotic task. Various kinds of robots are
tested in regular RoboCup
soccer matches. On a much larger scale, the Department of Defense
sponsored a robot race last year from Los Angeles to Las Vegas. (Don't
worry, regular traffic is banned from the roads used by robots.) CMU's Red Team
did the best of all the contestants last year, going seven miles
before its Humvee went over a bump and was left with its drive wheels in the
air. I noted that much of the race took place in the desert, and
wondered whether, had the DoD sponsored such a race forty years ago,
it would have taken place in the Florida Everglades. Don't the historians say
one always fights the last war?

I'll end by describing some of the research on aid to disabled people
with which this article began. Robots are being incorporated into motorized
walkers, so that a small pressure can direct the walker in the desired
direction. Some researchers are even trying to figure out how to tell
whether someone is falling and to support her.

A deeper intervention into a disabled person's environment is a set of
sensors that can tell if the person is disoriented and wandering
around an apartment. Hopefully these sensors can kick off some kind of
intervention to help anchor the person. We are used to thinking of
robots understanding and reacting to physical space and events. But
some research goes further, aiming at robots that understand and react
to psychological states as well.

AI may provide guidance here. One of Geoff's students created a model
that reproduced how rats were found to learn new behaviors in some of
the classic experiments. Machine logic and biological logic here
converged. But it will not be within my lifetime, I expect, that a machine
can reproduce the way I have woven the rambles of my lunchtime
conversation with Geoff into this blog.

The Need for Robots

Robotics is a fascinating field, and my lunch with Geoff just opened
up new questions for me. The title of this article, therefore, refers
not so much to machines that learn as to my personal learning about
machines.

As I mentioned at the beginning of this article, the world's
population is expected to peak in the next century. The peak has
already occurred in the economically developed countries, and the
shortfall in labor is being made up with immigration. Within a couple
generations, if civilization survives, the rest of the world is
expected to take the same course.

In a well-known book, The End of Work, Jeremy Rifkin raised
the fear of billions of people displaced from the economy by
automation. That could well cause a catastrophe in the first half of
this century, but in the second half we might find that the potential
workforce is far too small.

The current debate over Social Security is a distant ripple from that
tsunami. Congress argues over funding. And while that issue is
important, it won't solve the problem unless young people adapt by
buying a lot of cans of food and storing them in their basements for
fifty years. Realistically, somebody in future generations is going to
have to produce all the goods that the huge numbers of old people need
to buy. The only way to provide a decent lifestyle for a population of
declining size is to vastly increase productivity, and robotics
promises to play a key role.

2 Comments

flursn
2005-01-13 02:39:35

I, Robot?
What I haven't seen addressed so far is the socio-historical aspect of robotics. In Japan we had 50 years of film-making where robots (and computers) saved mankind from space invaders, disasters, or mankind from itself. In the Western hemisphere, however, robots have a record of death, destruction and despair (the Terminator trilogy, the Matrix trilogy, Rifkin). The new friend of our old enemy is our old enemy.

So while Japanese society had been prepared for robots taking over vital parts of care, we're still scared of them. Perhaps "I, Robot" will mark the cornerstone for a new kind of robot, but I fear it'll take us decades, too, to make friends with them.

Xapp
2005-01-18 06:12:42

I, Robot?
It's a shame more people don't read. It took 35+ years for Asimov's robot ideologies to reach 'the masses' (not the geek masses) via a movie.
