The next decade promises another qualitative shift in the way we view technology, when old ideas condense into firm, available objects--robots, cyborg parts, and the many variations in between.

The coming of the robots will be obvious, just a few years away now. They will first work inside buildings with well-defined geometries, carrying paperwork or packages. Then they will be security guards, prowling company corridors through the night, using infrared vision in dark areas. We will leave a robot house-sitter behind when we go on trips, checking in over the internet to see--literally--what's going on.

Soon robots will be everywhere, performing surgery, exploring hazardous places, making rescues, fighting fires, and handling heavy goods. After a decade or two, they will be as unremarkable as the computer screen is now in offices, airports, or restaurants. Each new advance will create a momentary flurry of excitement, but the robots will increasingly blend in.
The coming of the cyborgs will be subtle. At first these additions to the human body will be interior, as rebuilt hips, elbows, and hearts are now. Then larger adjuncts will appear, perhaps on people's heads or limbs. Soon we will cross the line between repair and augmentation, probably first in sports medicine, then spreading to everyone who wants to make a body perform better than it ordinarily could. Controversy will arise, with many saying we are assuming more power than people should have over themselves. But such sentiments will not stop the desire to be better than we are; they never have.

Human self-change and robotic development are poles of the same general phenomenon, though the terrain between them is ever-shifting. Some fields will surge ahead while others simply fade away, and new, more promising approaches will emerge. But the broader implications of imminent change will remain.

Science has often followed cultural anticipation, not led it. Fiction and film have meditated upon the upcoming social issues of robots and cyborgs for well over a century.

Much technology, and even science, is connected firmly with its social context, and has even arisen from it. For example, in 1932 physicist Leo Szilard read H. G. Wells's 1914 novel, The World Set Free, which predicted the discovery of artificial radioactivity in 1933--a direct hit, as it turned out. The novel depicted atomic power, bombs, and a world war between an alliance of England, France, and perhaps the U.S., against Germany and Austria. Wells's fictional bombs probably began the misnomer "atomic" instead of "nuclear," but they did work by "chain reactions." The novel was dedicated to Frederick Soddy, whose study of radium gave Wells the idea. Szilard saw the possibility of such weapons, and for decades was a central driver toward first making and then controlling them. He got Einstein's signature on the letter to President Roosevelt that launched the Manhattan Project.

Often research in human augmentation and robotics stems from speculations and projections in the surrounding culture. Many cultures have imagined altering ourselves and duplicating human abilities in machines. Much modern science fiction has imagined and thought through the personal and social effects of doing this, in well realized, realistic narratives. Such work can be a valuable guide to navigating the ever-rushing waters of change in the next century.

* * *

Can we be better?

The urge to improve upon the human body is at least as old as our species. It dates back to the first time a proto-human picked up a rock to smash a bone to reach the hidden marrow, or used a stick to help dig down to a tender root.

That edge allowed us not only to survive but to dominate the planet. We use other objects, now called technology, and treat our tools and machines as though they were separate from us. But they are simply augmentations of our bodies, and always have been. They make us stronger and faster, and they restore hearing and vision.

The first question any seer into the future should ask is not what is technically possible, but rather, what is acceptable to people. Regarding cyborgs, two common assumptions seem dubious. First, that people will readily take to invasive technology. Second, that wedding the human brain to computer assistance will be simple, and will come soon.

By advancing beyond our current forms, we can better look back and define ourselves--a theme we meet often in sf. All the terms used to describe the coming forms that will go beyond the human norm--androids, cyborgs, bionic people, robots--summon up the question of what it means, deep down, to be human.

Is it athletic prowess? (Probably not fundamental, but often depicted in films because it is easy to show.)

Intellect? (But computers press our abilities in many areas already.)

Creativity? (How creative is chess?—yet the world champion is now a computer.)

Ever since the 19th-century song "John Henry," we have given grudging ground to machine capabilities. ("He was a steel drivin' man, and he drove that steel as fast as he can . . . " but after beating the new rail-laying machine, he fell dead of exhaustion.)

A still greater anxiety lies below all these terms: the description of human parts as machinelike, and replacing them with actual machines, summons up the question of how much we are already essentially mechanical, right down to our minds. Hovering behind this nervousness lie questions of free will and just how much we truly control ourselves. This weds with Freud's revelations about unglimpsed impulses emerging from our unconscious. Who wants to go there?

Some do—if only to make a buck. In myriad science fiction films, autonomous humanlike beings betray all manner of human responses to the world. Some get away with it.

In 1984's film The Terminator, the killer robot gets more adept at imitating humans as it moves through Los Angeles. When someone addresses it, a pull-down menu of possible responses appears in its field of view, ranging from swear words to polite evasions. It learns to imitate voices from a single sentence, so it can fool others over the telephone, even getting right the tones of a mother's concern so that her daughter suspects nothing (a truly terrifying scene, quietly delivered).

Such adaptability and skill at seeming human, while not being truly so, calls up a fundamental fear that we will not be able to use our ancient primate skills of sight, sound, and smell to detect machine deceptions. In the film, dogs can smell killer robots, but not after the robots learn to grow human skin over their metal bodies.

Similarly, the robotic women in the 1975 film The Stepford Wives still seem sexy to men who know full well they are fake. This raises the uneasy question of whether we will be willing to go along with machines pretending to be human, if they satisfy our hungers and are reasonably adept. Some of us will simply not care whether others are robots or cyborgs, and will treat them all as objects to be used.

Analyzing ourselves as machines quickly calls forth those who argue that something mystically human transcends our physical basis. This blends into the concept of "emergent phenomena"--recognizing that very complex behavior can come from simple rules. The seemingly infinite complexity of human culture could emerge, this suggests, from fairly simple patterns of logic, laid down long ago in the human mind and body by evolutionary pressures.

Increasingly, science fiction writers have spoken for the widespread, gut-level stresses we feel. Androids as sexual partners appeared in such titles as The Silver Metal Lover (Tanith Lee, 1982) and The Hormone Jungle (Robert Reed, 1988). The literature resounds with the earlier themes of robot revolution, machine takeover of society, and duplication of people without anyone knowing. These make for simple plot structures but seldom explore the depth that coming technology seems likely to force us to face.

Even so influential a film as 1982's Blade Runner expressed profound anxiety about "androids" who were in fact completely fabricated humans, apparently made of organic parts, though with short life spans. They are termed "replicants"--as if they were mere replicas of us. This future rain-drenched Los Angeles features artificial animals (because the real ones are extinct) and many sickly humans (because most of humanity has left the polluted planet). Android replicants can be spotted with an empathy test, because they don't have any "real" feelings toward the natural world; they aren't part of it, after all.

The mere fact that these terms are routinely scrambled--'droids, 'bots, replicants, bionics--and blithely misused in the popular culture tells us something also. All artificial forms are suspect.

People assembled from cadaver parts are certainly not to be trusted. Moviegoers have consistently called the most famous monster 'Frankenstein,' but in Mary Shelley's 1818 novel, Frankenstein, or The Modern Prometheus, it is the name of the creator. This most famous of all modern humanlike creations, fashioned from dead body parts, is the work of horrific over-ambition. It expresses the fear that our creations can put us in their dark, looming shadow. The enormous cultural legacy of Frankenstein speaks to us now, nearly two centuries since a young woman created those images and ideas, of how much we distrust any attempt to ape or surpass us.

Can we go too far in making ourselves machine-like, or making machines resemble us? Many feel so already. And once made, will such creatures be like us, yet not end up liking us?

These questions will arise in myriad ways in the next few decades, as we press today against boundaries that only a short while ago we met only in works of the imagination.

* * *

Take a concrete example--running. Ever since the seven-league boots of fairy tales, people have dreamed of being able to run faster than humanly possible. The first modern superhero, Superman, can "leap tall buildings at a single bound," besides being able to fly by no visible mechanism at all.

Okay, it's a metaphor. Taking it semi-realistically, though, all this is clearly impossible if we are confined to flesh and blood--but what about a bionic assist?

The bionic man and woman from the old TV series had super-speed, but we never saw the nuts and bolts of how it was accomplished. Without limits, things get crazy. Nick Park's zany 1993 clay-animation film, "The Wrong Trousers," postulated a pair of unstoppable robotic trousers.

But seriously, how close are we to super speed, or even walking robots? Human walking, it turns out, is very difficult for machines to master. Walking on two legs demands movable joints, a pelvis, precise co-ordination among major muscle groups in the legs, and the action of stretchy tendons. If any of these components is missing or diminished, people have various problems with mobility, or can't walk at all.

To achieve upright balance, the body has gravity sensors in the inner ear (the vestibular system) and mechanoreceptors (stretch receptors) in skeletal muscles. Together they tell the brain which way is up and which muscles are working, and enable it to program the legs to walk. At a minimum, eight leg muscles are required to stand; another eight are needed to walk. Graceful walking requires the help of even more muscles. Walking is actually a series of short forward falls, and the body is saved by moving a leg to the front at just the right time.
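The walking-as-falling idea can be made concrete with a toy simulation--my own illustration, with assumed numbers, not anything from the research described here. A walker is modeled as an inverted pendulum that tips forward under gravity; whenever the lean grows too large, a "step" catches the fall and the lean resets:

```python
import math

# Toy model of walking as a series of short forward falls.
# All values (leg length, lean angles, step threshold) are assumptions
# chosen only to make the sketch run, not measured human parameters.
g, leg = 9.8, 0.9          # gravity (m/s^2), leg length (m)
theta, omega = 0.05, 0.0   # forward lean angle (rad) and angular velocity
dt = 0.001                 # integration time step (s)
steps = 0

for _ in range(int(3.0 / dt)):                  # simulate 3 seconds
    omega += (g / leg) * math.sin(theta) * dt   # falling accelerates the lean
    theta += omega * dt
    if theta > 0.3:            # leaned too far: plant the swing leg,
        theta, omega = 0.05, 0.0  # "catching" the fall and starting the next one
        steps += 1

print(steps)   # a handful of catches ("steps") in 3 simulated seconds
```

The point of the sketch is the control problem: the upright pose is unstable, so doing nothing always ends in a fall, and locomotion emerges only from repeatedly catching that fall at the right moment--exactly what makes two-legged robots so hard to build.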

The balancing required in upright walking is still difficult for the human brain, even though our ancestors started doing it several million years ago. It's easy to lose the ability to balance if the muscles aren't exercised regularly, and falling is one of the most common symptoms of old age or failing health.
It's extraordinarily difficult to construct a mechanical device that can walk smoothly on two legs. (The old joke about being able to walk and chew gum simultaneously was not far wrong.) In Japan, the Honda Motor Company spent more than a decade, and millions of dollars, on a robot that can walk as well as climb and descend stairs.

But that's all it can do. Other robotics experts wonder if it was worth all the trouble. Joe Engelberger, one of the founding fathers of robotic devices, feels that wheeled robots are the most practical, due to the design problems of walking ones. Many other roboticists agree with him.

Nonetheless, the work at MIT's Leg Lab and elsewhere continues, in part because of the interest in helping amputees and paralyzed victims of spinal cord injuries. Designing mechanical devices that walk naturally helps the scientists understand how a person walks. Then they can help create better prostheses for amputees, as well as sensor and control systems for paraplegics.

* * *

[[FIGURE TO ACCOMPANY THIS SENT AS SEPARATE FILE]]

Take the most profound human limitation—mortality.

The oldest person whose age is reliably known was Jeanne Louise Calment of Arles, France, who lived from 1875 to 1997, achieving 122 years, 5 months. She had sold pencils to the young, unknown Vincent van Gogh. As the mortality curve here shows, there are two major stories taught by history about improving longevity.

First, the dramatic improvement has come mostly from the better survival rates of children. In a state of nature, children fall prey to cold, disease, and accident at high rates. Better sanitation, medicine, and nutrition have made their gains throughout the 20th century. This explains why the curve for Mexico in the 1920s differs greatly from that of British India in the same era: India had yet to enjoy the improvements slowly diffusing from the advanced nations, while Mexico had. Note that the difference between the England of the 1960s and the Mexico of the 1920s came mostly from lower child mortality.

Second, the elderly death rate has shown some improvement, but not a lot. There is still a fairly solid "wall" around age 80, and beyond it, the population declines roughly exponentially. One might term this the "fragility wall," where people become prey to any passing microbe or severe accident. Their resistance and resilience have eroded until they are easy marks.

We all take extending longevity as something good to be sought. Some will carp about increased costs to Social Security, or population growth, but getting more out of a single life also promises huge gains, many of them economic, as people work to greater age. Given the search for more years, it is surprising that there is so little research into the deeper questions of how to push back the age frontier. Is there a basic limit? And is it the same for us all?

Notably, since 1900--when the death rates of the sexes were just about the same--women have consistently gained extra years of longevity over men, until now they live about 10% longer in advanced societies. Few take note of this remarkable inequality, which is still increasing after a century. Females consume more than two-thirds of health care budgets, and are consistently heavier users of health services throughout their lives. Upgrading male longevity to the level of females' in the advanced societies would improve the average human survival more than, say, completely eliminating cancer. This strongly suggests that social forces have a great deal to do with improving our expected lifespans, beyond the reach of technologies alone.

The other major factor affecting longevity is prosperity. Wealthy societies fare better. Medical technology is a major cause, leading one to ask if augmenting people will lead to longer lives. Certainly radical technologies like nanotech would profoundly affect longevity, allowing replacement of cellular materials and direct, pointed interventions in major causes of death today, diseases like cancer and arterial blockage. Even simple mechanical aids like better legs and hips could prevent the often-devastating falls among the elderly.

The curve labeled "Present best possible" is a guess at what might be achieved by present technologies on all fronts. Shoring up the elderly could plausibly lead to the dashed curve within about fifty years.

After all, in the advanced nations the average longevity has increased 50% in the last century. A similar improvement would take us to a curve that terminates somewhere between 100 and 110--a cotton-topped future. But technology changes, and the advances from augmentation plausibly can keep making inroads on the many causes of our mortality.

The curve labeled "2100?" is of course my pure guess, building on the successes of the past century. It assumes the "fragility wall" around age 80 has been thoroughly broken down, with another 50% increase beyond the "Present best possible" curve. This line is not a serious prediction, but a suggestion of how much augmentation would change our views of the human condition.

We have no true idea of an upper limit on lifespan. If we eliminated all aging, so that we faced no "fragility wall," eliminated diseases, and could avoid all causes of death except accident (including suicide), how long could we live? Most people, when asked, guess at ages like 120, or 150. The answer gathered from death rate tables is astonishing: close to 1500 years! This seems more plausible when one reflects upon how few friends die in accidents. Typically, one knows only a few who die in accidents before age 50, from a total of, say, 1000 friends. This translates to a death rate of order 1/1000 per year, or an average expected lifespan of about 1000 years.
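The arithmetic behind that estimate is simple enough to check. If the only cause of death is accident, at a roughly constant rate, the survival curve is a pure exponential and the mean lifespan is just the reciprocal of the death rate. A back-of-envelope sketch, using the text's rough figure of 1 death in 1000 per person per year:

```python
import math

# Assumed figure, from the rough count of friends above:
# accidental deaths of order 1 in 1000 per person per year.
hazard = 1.0 / 1000.0            # deaths per person-year

# With a constant hazard, survival is exponential: S(t) = exp(-hazard * t),
# and the mean lifespan is simply 1 / hazard.
mean_lifespan = 1.0 / hazard
print(mean_lifespan)             # 1000.0 years

# Fraction of such a population still alive at milestone ages:
for age in (100, 500, 1000, 1500):
    print(age, round(math.exp(-hazard * age), 3))
# 100 -> 0.905, 500 -> 0.607, 1000 -> 0.368, 1500 -> 0.223
```

Note there is no "wall" in such a population: a third are still alive at 1000 years, and almost a quarter at 1500--which is why the number from the death-rate tables sounds so outlandish to intuitions trained on aging populations.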

With only a century or less of life, humans have developed many social forms to deal with this span, and nearly none that look beyond it. Take just a small step into that immensity: Imagine living to 150. How would you plan a career? A marriage?

With augmented bodies, new social problems will arise. I'll take up those next time, as they may be the most profound implication of the path the cyborg will lead us to follow.

Comments on this column welcome at gbenford@uci.edu, or Physics Dept., Univ. Calif., Irvine, CA 92717
This column was based in part on the PBS TV show and book Beyond Human by Gregory Benford and Elisabeth Malartre.