THE AGE OF INTELLIGENT MACHINES | Growing Up in the Age of Intelligent Machines: Reconstructions of the Psychological and Reconsiderations of the Human

April 25, 2001

The cultural fascination with computation and artificial intelligence has two faces. There is excitement about the artifacts themselves: their power, their ability to act as extensions of our minds much as the machines of earlier generations acted as extensions of our bodies. But also and equally important, there is an involvement with computers as mirrors that confront us with ourselves. Here the question is not whether we will ever build machines that will think like people but whether people have always thought like machines. And if this is the case, if we are in some important sense kin to the computer, is this the most important thing about us? Is this what is most essential about being human?

Such questions have long been the province of philosophers. But in recent years something has changed. Intelligent machines have entered the public consciousness not just as actors in science-fiction scenarios but as real objects, objects you can own as well as read about. This has put the philosophical debate in a new place. Artificial Intelligence has moved out from the world of the professionals into the life of the larger culture. But unlike the way in which a theory like psychoanalysis was able to move out into the wider culture, the new popular considerations of mind and machine arise from people’s actual relationships with an object they can touch and use. New ideas are carried by relationships with computers as objects to think with.

A simple example makes the point. Twenty years ago the question of how well computers could play chess was a subject of controversy for AI researchers and their philosophical interlocutors. Some writers felt that there was no limit to what computers could achieve in this domain. Others responded that there would be absolute limits to the powers of machines. “Real” chess, they argued, was the kind of chess only humans could play, since it called upon powers of synthesis and intuition that were uniquely rooted in human capacities.

That dialogue with the strengths and limitations of the machine continues within philosophy, but it has been joined by other, more informal conversations: those of master players who sit across from computers in tournament play, those of recreational chess players who compete with chess computers at home. The chess computers have gotten very good; chess players respond by trying to determine what is special about their play, even if they cannot always exploit this specialness to actually beat the machine. And of course, the players include the first generation of children who have grown up playing chess with computers. A thirteen-year-old, Alex, plays daily with a chess computer named Boris. He comments that although he always loses if he “puts the setting high enough” (that is, if he asks the computer to play the best game it can), “it doesn’t really feel like I’m losing.” Why? Because “chess with Boris is like chess with somebody who is cheating. He can have all the most famous, all the best chess games right there to look at; I mean, they are inside of him. I can read about them, but I can’t remember them all, not every move. I don’t know if this is how he works, but it’s like in between every move he could read all the chess books in the world.” Here, human uniqueness is defined in terms not of strengths but of a certain frailty. “Real” chess for this child is human chess, the kind of chess that works within the boundaries of human limitations.

Thus, the presence of intelligent machines in the culture provokes a new philosophy in everyday life. Its questions are not so different from the ones posed by professionals: If the mind is (at least in some ways) a machine, who is the actor? Where is intention when there is program? Where is responsibility, spirit, soul? In my research on popular attitudes toward artificial intelligence I have found that the answers being proposed are not very different either. Faced with smart objects, both professional and lay philosophers are moved to catalog principles of human uniqueness. The professionals find it in human intentionality, embodiment, emotion, and biology. They find it in the fact that the human life cycle confronts each of us with the certainty of death. There are clear echoes of these responses within the larger culture. As children tell it, we are distinguished from machine intelligence by love and affection, by spiritual urges and sensual ones, and by the warmth and familiarity of domesticity. In the words of twelve-year-old David, “When there are computers who are just as smart as people, the computers will do a lot of the jobs, but there will still be things for the people to do. They will run the restaurants, taste the food, and they will be the ones who will love each other, have families and love each other. I guess they’ll still be the ones who go to church.”

One popular response to the presence of computers is to define what is most human as what computers can’t do. But this is a fragile principle when it stands alone, because it leaves one trying to run ahead of what clever engineers will come up with next. An increasingly widespread attitude, at least among people who have sustained contact with computers, is to admit that human minds are some kind of computer, and then to find ways to think of themselves as something more as well. When they do so, people’s thoughts usually turn to their feelings. Some find it sufficient to say, as did David, that machines are reason, and people are sensuality and emotion. But others split the human capacity for reason. They speak of those parts of our reason that can be simulated and those parts that are not subject to simulation.

In all of this, the computer plays the role of an evocative object, an object that disturbs equanimity and provokes self-reflection. That it should do so for adults is not surprising: after all, intelligent machines strike many of us as childhood science fiction that has become real. But I have already suggested that computers also play this role for children. You may give a computer to a child hoping that it will teach mathematics or programming skills or French verbs. But independent of what the computer teaches the child, it does something else as well. For the child, as for the adult, the machine is evocative. It creates new occasions for thinking through the philosophical questions to which childhood must give a response, among them the question of what is alive.

The Swiss psychologist Jean Piaget first systematized our understanding of the “child as philosopher.” Beginning in the 1920s, Piaget studied children’s emerging way of coming to terms with such aspects of the world as causality, life, and consciousness. He discovered that children begin by understanding the world in terms of what they know best: themselves. Why does the ball roll down the slope? “To get to the bottom,” says the young child, as though the ball, like the child, had its own desires. But childhood animism, this attribution of the properties of life to inanimate objects, is gradually displaced by new ways of understanding the physical world in terms of physical processes. In time the child learns that the stone falls because of gravity; intentions have nothing to do with it. And so a dichotomy is constructed: physical and psychological properties stand opposed to one another in two great systems. The physical properties are used to understand things; the psychological to understand people and animals. But the computer is a new kind of object, a psychological object. It is a thing (“just a machine”), yet it has something of a mind. The computer is betwixt and between, an object with no clear place in the sharply dichotomized system, and as such it provokes new reflection on matter, life, and mind.

Piaget argued that children develop the concept of life by making finer and finer distinctions about the kinds of activities that are evidence of life. In particular, Piaget described how the notion of life is built on progressive refinements of children’s concept of physical motion. At age six a child might see a rolling stone, a river, and a bicycle as alive for the same reason: “They go.” By age eight the same child might have learned to make a distinction between movement that an object can generate by itself and movement imposed by an outside agent. This allows “alive” to be restricted to things that seem to move of their own accord: a dog, of course, but also perhaps a cloud. An object drops out of the “alive” category when the child discovers an outside force that accounts for its motion. So at eight the river may still be alive, because the child cannot yet understand its motion as coming from outside of itself, but the stone and the bicycle are not alive, because the child can. Finally, the idea of motion from within is refined to a mature concept of life activity: growth, metabolism, breathing.

There are two key elements in the story as Piaget told it. First, children build their theories of what is alive and what is not alive as they build all other theories. They use the things around them: toys, people, technology, the natural environment. Second, in Piaget’s world, children’s sorting out the concept of life presented a window onto the child’s “construction of the physical.” Thinking about the idea of life required the child to develop distinctions about the world of physics. The motion theory for distinguishing the living from the nonliving corresponds to the world of “traditional” objects that have always surrounded children: animate objects (people and animals who act and interact on their own) and all the other objects, pretty well inert until given a push from the outside. In recent years there has been an important change. The new class of computational objects in children’s worlds has provoked a new language for theory building about the concept of life.

Today children are confronted with highly interactive objects that talk, teach, play, and win. Their computers and computer toys do not move but are relentlessly active in their “mental lives.” Children are not always sure about whether these objects are alive, but it is clear to even the youngest children that thinking about motion won’t take one very far toward settling the question. Children perceive the relevant criteria not as physical or mechanical but as psychological: Are the computers smart? Can they talk? Are they aware? Are they conscious? Do they have feelings? Even, do they cheat? The important question here is not whether children see intelligent machines as alive. Some do, some do not, and, of course, in the end all children learn the “right answer.” What is important is the kind of reasoning the child uses to sort out the question. The child knows that the computer is “just a machine,” but it presents itself with lifelike, psychological properties. Computers force the child to think about how machine minds and human minds are different. In this way, the new world of computational objects becomes a support for what Piaget might have called the child’s construction not of the physical but of the psychological.

In the adult world, experts argue about whether or not computers will ever become true artificial intelligences, themselves capable of autonomous, humanlike thought. But irrespective of future progress in machine intelligence, computational objects are even now affecting how today’s children think. The language of physics gives way to the language of psychology when children think about computers and the question of what is alive.

This change in discourse, this new use of machines in the construction of the psychological, is important for many reasons. Here, I mention three that are particularly striking. First, children are led to a new way of talking about the relationship between life and consciousness. In Piaget’s studies, the idea of consciousness evolved side by side with the idea of life. Generally, when children ascribed life to inanimate objects, they ascribed consciousness too; when life became identified with the biological, consciousness became a property unique to animals. But when today’s children reflect on computational objects, the pattern is very different. Many children allow intelligent machines to be conscious long after they emphatically deny them life. When one child remarks that the computer is not alive, but it cheats “without knowing it is cheating,” he is corrected by another who insists that “knowing is part of cheating.” Children talk about the nonliving computer as aware, particularly when it presents itself as “smarter” than they are, for example, in math or spelling or French. They talk about the nonliving computer as having malicious intent when it consistently beats them at games. Adults hold onto the fact that computers are not aware as a sign of their fundamental difference from people. But today’s children take a different view. The idea of an artificial consciousness does not upset them. They find it a very natural thing. They may be the first generation to grow up with such a radical split between the concepts of consciousness and life, the first generation to grow up believing that human beings are not necessarily alone as aware intelligences. The child’s splitting of consciousness and life is a clear case of where it does not make sense to think of adult ideas filtering down to children. Rather, we should think of children’s resolutions as prefiguring new positions for the next generation of adults, whose psychological culture will be marked by the computer culture.

Second, children are led to make increasingly nuanced distinctions about the psychological. Younger children, from around six to eight, sometimes say that computers are like people in how they think but unlike people in their origins. (“The machine got smart because people taught it.”) But this is not a stable position. It is unsatisfying because it leaves the essential difference between computers and people tied to something that happened in the past, almost as though the computers’ minds and the children’s minds are alike; they differ only in their parents. Older children reach for a more satisfying way to describe what people share and do not share with the psychology of the machine. When younger children talk about the computer’s psychology, they throw together such undifferentiated observations as that a computer toy is happy, is smart, cheats, and gets angry. Older children make finer distinctions within the domain of the psychological. In particular, they divide it in two. They comfortably manipulate such ideas as “it thinks, but it doesn’t feel.” They comfortably talk about the line between the affective and the cognitive.

With the splitting of the psychological, the issue is no longer whether something has a psychology or does not. By developing a distinct idea of the cognitive, children find a way to grant to the computer that aspect of psychology which they feel compelled to acknowledge by virtue of what the machines do, while they reserve other aspects of the psychological for human beings. This response to machine intelligence of splitting psychology is particularly marked in children who use computers a great deal at school or at home. Katy, eleven, after a year of experience with computer programming, said, “People can make computers intelligent: you just have to find out about how people think and put it in the machine,” but emotions are a different matter. For Katy, the kinds of thinking the computer can do are the kinds that “all people do the same. So you can’t give computers feelings, because everybody has different feelings.”

The distinction between thought and feeling is not the only line that children draw across mental life in the course of coming to terms with the computer’s nature in the light of human nature. Discussions about whether computers cheat can lead to conversations about intentions and awareness. Discussions about the computer’s limitations can lead to distinctions between free will and autonomy as opposed to programming and predetermination. This often brings children to another distinction, the one between rote thinking, which computers can do, and originality, which is a human prerogative. Finally, discussion about how computers think at all can lead to the distinction between brain and mind. All of these are elements of how computers evoke an increasingly nuanced construction of the psychological.

Finally, children’s new psychologized appreciation of machines influences how they articulate what is most special, most important about being a person. While younger children may say that the machine is alive “because it has feelings,” older children tend to grant that the machine has intelligence and is thus “sort of alive” but then distinguish it from people because it lacks feelings. The Aristotelian definition of man as a rational animal (a powerful definition even for children when it defined people in contrast to their nearest neighbors, the animals) gives way to a different distinction. Today’s children come to understand computers through a process of identification with them as psychological entities. And they come to see them as our new nearest neighbors. From the point of view of the child, this is a neighbor that seems to share, or even excel in, our rationality. People are still defined in contrast to their nearest neighbors. But now people are special because they feel. The notion of a rational animal gives way to the paradoxical construction of people as emotional machines.

This last point brings me full circle to where I began: with the image of computers as evocative objects in the lives of adults. My studies show that many adults follow essentially the same path as do children when they talk about human beings in relation to the new intelligent machines. The child’s version is that to be human is to be emotional. The adult’s version is that to be human is to be unprogrammable. People who say that they are perfectly comfortable with the idea of mind as machine assent to the idea that simulated thinking is thinking but often cannot bring themselves to propose further that simulated feeling is feeling.

There is a sense in which both of these reactions, the child’s and the adult’s, contrast with a prevalent fear that involvement with machine intelligence leads to a mechanical view of people. Instead, what I find is something of a romantic reaction. There is a tendency to cede to the computer the power of reason but at the same time, in defense, to construct a sense of identity that is increasingly focused on the soul and the spirit in the human machine.

On a first hearing, many people find these observations reassuring. But it is important to underscore that there is a disturbing note in this technology-provoked reconsideration of human specificity. Thought and feeling are inseparable. When they are torn from their complex relationship with each other and improperly defined as mutually exclusive, the cognitive can become mere logical process, and the affective is reduced to the visceral. What was most powerful about Freud’s psychological vision was its aspiration to look at thought and feeling as always existing together in interaction with each other. In psychoanalytic thinking there is an effort to explore the passion in the mathematician’s proof as well as an effort to use reason in understanding the most primitive fantasy. The unconscious has its own highly structured language, which can be deciphered and analyzed. Logic has an affective side, and affect has a logic. Computational models of mind may in time deepen our appreciation of these complexities. But for the moment, the popular impact of intelligent machines on our psychological culture goes in the other direction. The too easy acceptance of the idea that computers closely resemble people in their thinking and differ only in their lack of feelings supports a dichotomized and oversimplified view of human psychology. The effort to think against this trend will be one of our greatest challenges in the age of intelligent machines.