THE HUMAN GESTURE, Foreword to Humanoid

"What does it mean to be human?" I asked myself. I was staring at an android head in the former family room turned workshop and laboratory of Hanson Robotics. David Hanson, his wife, Amanda, and son, Zeno, had moved upstairs, ceding the ground floor to his start-up robot design and engineering firm known for making some of the most humanlike androids in the world. The research and development in robot consciousness and artificial intelligence that has taken place in this two-story home in Plano, Texas, a suburb of Dallas, was equal to that of any university or research center. But I'd been there all day and didn't have my shot yet. I'd placed my lens at every conceivable angle, 360°, had traced the light on the android's face as the sun made its transit from east to west;

I'd taken a lot of Polaroids but not shot a single piece of film. I could just snap a picture and leave. But it's not that easy. Humanoids, androids—they make them look like humans. Why make a robot with a head and eyes? Certainly, there are robots that don't have a head or a pair of eyes, robots that could be made or that already exist that are safe for humans to be around, that have the ability to perform various jobs, that are probably cheaper to build, and that may be even better at doing whatever we'd want or need a robot to do. But scientists have found it's the eye contact that matters: just as important between robot and human as it is between humans, eye contact is the glue that makes human-robot social interaction work.

Like many androids and humanoids, Hanson's android has a pair of video cameras for eyes that feed data to the robot's artificial intelligence. Every time someone on Facebook uploads a photo, it gets tagged and linked to others like it, and the Facebook face recognition software gets smarter. Similar code has been written into many photo-editing apps and is in Hanson's androids too. It's how these robots see faces, make eye contact, and track you, making you feel like you're being watched by a real person; other algorithms (voice recognition) in the code allow his robots to understand speech, remember interactions, and recognize your face. Over time, Hanson's robots get smarter too.

Eerie, creepy. When you scan the Internet for the latest advances in robotics, you'll almost inevitably encounter such words in the headline or the first sentence describing the next new development in robot engineering. CB2, one of the most advanced research robots (for its time), is known on the Internet as "creepy baby." There's nothing creepy about CB2 at all. When Minoru Asada, director of the division of cognitive neuroscience robotics at Osaka University in Japan, asked how smarter robots could be built, he decided to build CB2, an infant robot, and study how robots learn. The experiment he designed asked the question, How does a child robot acquire knowledge and gain the ability to perform functions? The first thing Asada's team did was teach CB2 to crawl. But the data they collected said little about robots; rather, it revealed how humans learn. This was a paradigm shift in humanoid science—robots could be used to teach us about ourselves. What's creepy about that?

Yet I would be lying to myself, and you, if I didn't acknowledge that a general fear of robots exists. It's a pervasive theme in science fiction—Metropolis, Westworld, Blade Runner, Terminator—it's the same story over and over again. But what does this tell us, what power do these stories have over us? Are they like Joseph Campbell myths? Do they have the power to shape our thoughts and the culture we live in? It's Saturday morning, you're a child again, you're watching TV. If you're a baby boomer like me, you'll remember that for a long time Nazis and evil-looking Japanese were the number one villains and all-around bad guys in the movies and in Saturday morning cartoons. Then, as I remember, in the early 1980s (possibly coinciding with the Iran hostage crisis, or maybe earlier, in the aftermath of the Arab oil embargo and the oil and gas crisis of the early '70s), I started to notice Arabs supplanting the Nazi or the bucktoothed Japanese soldier with exaggerated slanted eyes in the role of bad guy and supervillain. One wonders how much of the anti-Arab/anti-Muslim sentiment held by some, particularly post-9/11, was cultivated by this shift of cartoon bad guy. Whether subconsciously or consciously, Hollywood has indisputably played a part in creating stereotypes of the other.

Enter the Uncanny Valley, a hypothesis first posited in 1970 by Masahiro Mori, a professor of engineering at the Tokyo Institute of Technology, and published in Energy, the corporate magazine of the Japanese subsidiary of Exxon. As depicted in his graph, the Uncanny Valley plots human emotional response against the degree to which a robot is anthropomorphized. Putting the graph in words, Mori's theory states that as a robot approaches looking human, we experience an increased affinity for and comfort with it, but when it comes close yet falls short of being human and alive (a zombie or a prosthetic hand, for example), our response abruptly turns to fear, disgust, revulsion at the failure, the schism. The curve jumps back up to positive and toward infinity when a healthy human is encountered.

At the time, the editor of Energy had asked Mori to write an article for a special issue on robotics, but apart from the fictional robots of movies and science fiction, in 1970 there were no androids or humanoids, no robots approaching human likeness, no ambulatory, talking robots such as we have now. With no actual robots to write about for his article, Mori thought about the electronic prosthetic hand, which had just been introduced. It made him feel uneasy, very much like his phobia of wax figures, which he'd had since he was a child; "they always looked creepy to me," Mori said. This was the basis for his hypothesis and the seminal article he wrote. That's it—there was no data collection, no research, no study on which he based his idea. What would later become a central tenet in robot theory and design was based on nothing more than a childhood fear. "There's no detailed scientific evidence," says Cynthia Breazeal, director of the Personal Robots Group at MIT. "It's an intuitive thing."

Throughout her career Breazeal has gone to great lengths to avoid the uncanny valley, using big eyes and oversized heads, making robots short enough that an adult human can look down on them, and going so far as to hire cartoon animators to assist in the design of her bots. It was Breazeal who, on meeting David Hanson in 2002, introduced him to the uncanny valley, warning him that he "should not make robots humanlike," lest he fall into it. "The robots will not work," she told him. "Humans will not respond." Hanson remembers feeling "a bit sad" but "defiant," like he was on to something, and soon afterward his robots and scientific papers went viral. Although the uncanny valley theory had largely lain dormant for some thirty years, until science, robotics (Ishiguro and Hanson), and computer graphics (The Polar Express) caught up with it, Mori marks a scientific conference in 2005 as the point when the Uncanny Valley began to gain traction. Mori had been invited to the conference but was unable to attend, so he wrote a letter that was published and disseminated at the meeting. Reading the letter today, I find it remarkable for two comments he made—first, his acknowledgment that while he'd come up with the "notion of uncanny valley," he'd "not examined it closely"; second, his statement, "Once I positioned living human beings on the highest point of the curve in the right-hand side of the uncanny valley. Recently, however, I came to think that there is something more attractive and amiable than human beings in the further right-hand side of the valley. It is the face of a Buddhist statue as the artistic expression of the human ideal." Ahhh, yes, the Buddha, and there Mori had just shot his theory in the foot. The first step to satori, enlightenment, the Buddha teaches, is the acceptance of our imperfection. As human beings, we cannot escape our imperfection, our humanness, the human condition.
Follow Mori's thinking and see where you find yourself on the curve—you might not like what you see when you look in the mirror; the uncanny valley is steep. Surf the Internet—people call Cynthia Breazeal's robots creepy too.

Junk science has been used in the past to rationalize fears and justify prejudice. Eugenics, the pseudoscience the Nazi party embraced in the pursuit of a superior master race, had the support of German scientists and has long been regarded as scientific racism. We need not look even that far back, but only to the mid-1990s, when a different curve and graph, The Bell Curve, was published. Presented by its authors as the rational explanation for differences in intelligence between races, the book was widely condemned by the scientific community, by rational-thinking people, and by the world at large.

In the spring of 2010, I started researching humanoid robotics in preparation for the photography work I was about to begin. On the eve of photographing my very first robot, I watched Spielberg's A.I. Artificial Intelligence. The main theme and plot revolve around an android son's unrequited love for a mother who fears, even hates, him. In the film, androids are in constant fear of being hunted down by the police, rounded up, and destroyed; they live in hiding, always on the run. Prominent in the news at the time I first watched A.I. was a law that had just been passed in the state of Arizona. The law, SB 1070, was considered the most draconian immigration law ever passed (several other states followed suit; HB 56, passed in Alabama in 2011, is considered the worst). SB 1070 required law enforcement officers to determine an individual's immigration status during any "lawful stop, detention, or arrest"—in effect legalizing racial profiling. Another section of the law gave ordinary citizens the right to sue the police and government if they believed the laws weren't being enforced rigorously enough. It was not hard to connect the state-sanctioned fear and hatred depicted in A.I. to the xenophobia being reported in the news. The fear and loathing of strangers or of anything foreign, directed against Latinos, African Americans, Jews, Muslims, homosexuals, is, in regard to robots, just plain old fear of the "other."

A humanoid is a machine. It's just a toaster. You don't get afraid of your refrigerator, or your microwave, so why get upset about a robot? It's an inanimate object—there's nothing there. Now, if you are afraid, I understand. It's okay. I've learned to respect people's fears. When I began to photograph surgery more than twenty years ago and first started to show the photographs to people, to be honest, I was kind of shocked at the reactions I got: people telling me the photographs were "gross," or "disgusting," or worse. I didn't understand, because for me to see inside the human body, this incredible mystery we walk around in but are blind to, was like going to the moon. I was in awe. But the negative reactions kept coming, not all but enough; there were people who wouldn't even look at the photographs, and over the years many horrible things were said to me, things that were about me, not just about my work. After a while I stopped taking pictures in the OR and took time to just think about what I was doing and why I was doing it. On the verge of quitting altogether, I realized I had to continue. Not for the people who, like myself, saw beauty and found solace in the photographs, but for those who were frightened or even horrified, who had said the most awful things. What I had realized was that these photographs of surgery were like a Rorschach test, and what people saw in them and said about them had nothing to do with me but everything to do with themselves. I had to continue so that the work could be finished, so that someday, when they were ready to see, the photographs would be there for them.

They could open a book and close it, open it again and go through it at their own pace. The photographs would be there so that, one day, they might help people get over their fear, which I later realized was no more than the fear of death, our inescapable reality.

My experiences photographing surgery led me to a decision to attend medical school, and in 2004 I earned my MD. As a doctor, I cared for the dying and had the responsibility, when called upon, to pronounce people dead. It was a privilege and an obligation I performed with great respect. Some of those I pronounced dead I had gotten to know over the course of a long illness and hospitalization, and I had gotten to know their families as well; others I did not know, but I had been on call in the hospital as the senior or only doctor present who could perform these duties. I will tell you, when someone has died—and I've seen that transition from one moment to the next many times—when they are dead, there is no one there. They are gone. I'm not religious; I don't believe in spirits or the religious concept of a soul. I'm just telling you, no one is there. I mean no disrespect, but a corpse is an inanimate object; it might as well be a rock. And being with the dead, there is nothing to fear. It's the same with the robots; there's nothing more to it than being in a room with your toaster. It's just a machine.

You walk into a room, you see a humanoid there, you suspend disbelief—you don't even realize it, but it happens. All of a sudden it has a gender: CB2 is a he, Bina48 is a she, and Valkyrie is a girl; they call her Val for short. You start talking to the robot, or it starts talking to you; you're talking to a machine and it's talking back to you. There is no faster or easier means of achieving complex communication than human-to-human communication. As robots increasingly become a part of our daily lives and function in and around our surroundings, speaking to a robot is ideal: far better to have a robot with a head and eyes, arms and legs, than to perform complex data entry on a keypad, switch on one terminal and off another, or operate a series of switches you'd have to turn on or off to perform a single task. Why humanoids, why androids? Robots with heads and eyes allow for more than just speech; nonverbal communication is made possible as well, such as when the eyes say one thing and the words another, or simply when the two agree. Nonverbal communication—conscious or unconscious gestures and signals, the mediation of personal space—is just as important and essential as trust is between any two individuals, whether they be human and human, or human and machine.

. . .

What does it mean to be human? I ask myself. I'm staring at an android head. I could just snap the picture and leave. But it's not that easy. It's the light. Finding that right place, the moment between the subject and me... that gesture, finding the right angle—is it here, is it there, right there—where; yeah, right, I've seen that someplace, on the street, on the beach, in a crowd, I've seen that, that's a human expression right there, and that's what I'm looking for, that is a human gesture, that is a person right there.