ARE HUMAN BRAINS UNIQUE?
By Michael Gazzaniga

MICHAEL GAZZANIGA, one of the world's leading neuroscientists, is a Professor of Psychology and the Director of the SAGE Center for the Study of Mind at the University of California, Santa Barbara, and is a member of the President's Council on Bioethics.

He is the author of several books including Human: The Science Behind What Makes Us Unique (Ecco; June 24, 2008).

[MICHAEL GAZZANIGA:] I always smile when I hear Garrison Keillor say, "Be well, do good work, and keep in touch." It is such a simple sentiment, yet so full of human complexity. Other apes don't have that sentiment. Think about it. Our species does like to wish people well, not harm. No one ever says, "Have a bad day" or "Do bad work," and keeping in touch is what the cell phone industry has discovered all of us do, even when there is nothing going on.

There in one sentence Keillor captures humanness. The familiar cartoon that makes its way around evolutionary biologists' circles shows an ape at one end of a line and then several intermediate early humans culminating in a standing-tall, erect human. We now know the line isn't so direct, but the metaphor still works. We did evolve, and we are what we are through the forces of natural selection. And yet I would like to amend that cartoon. I see the human turning around with a knife in his hand and cutting the imaginary cord, liberating himself to do things no other animal comes close to realizing.

We humans are special. All of us solve problems effortlessly and routinely. When we approach a screen door with our hands full of bags of groceries, we instantly know how to stick out our pinky and hook it around the door handle to open it up. The human mind is so generative and given to animation that we map agency onto almost anything: our pets, our old shoes, our cars, our world. It is as if we don't want to be alone up here at the top of the cognitive chain, the smartest things on earth. We want to see our dogs charm us and appeal to our emotions; we imagine they too can suffer and have pity, love, hate, and all the rest. We are a big deal and we are a little scared about it.

Thousands of scientists and philosophers over hundreds of years have either recognized this uniqueness of ours or have denied it and looked for the antecedents of everything human in other animals. In recent years clever scientists have found antecedents to all kinds of things that we had assumed were purely human constructions. We used to think that only humans had the ability to reflect on their own thoughts, to possess what is called "metacognition." Well, think again. Two psychologists at the University of Georgia have shown that rats also have this ability. It turns out rats "know" what they don't know. Does that mean we should do away with our rat traps? I don't think so.

Everywhere I look I see tidbits of differences, and one can always say a particular tidbit can be found in other aspects of biological life. Ralph Greenspan, the very talented neuroscientist and geneticist at the Neurosciences Institute in La Jolla, studies, of all things, sleep in the fruit fly. Someone asked him at lunch one day, "Do flies sleep?" He quipped, "I don't know and I don't care." But then he got to thinking about it and realized maybe he could learn something about the mysterious process of sleep, a process that has eluded understanding. The short version of this story is that flies do sleep, just like we do, and more importantly, flies express the same genes during sleep and waking hours that we do. Indeed, his current research suggests even protozoans sleep! Good grief. Maybe when I get up at night to urinate, I actually get up because of other forces.

The point is that any human activity can seemingly be atomized. But to swoon over such a fact is to miss the point of human experience. In the following chapters, we will comb through facts about our brains, our minds, our social world, our feelings, our artistic endeavors, our capacity to confer agency, our consciousness, and indeed our growing knowledge that our brain parts can be replaced with silicon parts. From this jaunt one clear fact emerges. Although we are made up of the same chemicals, with the same physiological reactions, we are very different from other animals. Just as gases can become liquids, which can become solids, phase shifts occur, shifts so large in their implications that it becomes almost impossible to think of a foggy mist being made up of the same stuff that makes up an iceberg. And yet the different substances have the same chemical structure. In a complex relationship with the environment, very similar stuff can become quite different in its reality and structure. Indeed, I have decided something like a phase shift has occurred in becoming human. There simply is no one thing that will ever account for our spectacular abilities, aspirations, and capacity to travel mentally in time to almost the infinite world beyond our present existence. Even though we have all of these connections with the biologic world from which we came, and we have in some instances similar mental structures, we are hugely different. While most of our genes and brain architecture are held in common with animals, there are always differences to be found. And while we can use lathes to mill fine jewelry, and chimps can use stones to crack open nuts, the differences are light-years apart. And while the family dog may appear empathetic, no pet understands the difference between sorrow and pity.

A phase shift occurred and it occurred as the consequence of many things changing in our brains and minds. Personally, I love our species, and always have. I have never found it necessary to lessen our success and domination of this universe.

The brain is the organ that sets us apart from any other species. It is not the strength of our muscles or of our bones that makes us different, it is our brain.—Pasko T. Rakic

The great psychologist David Premack once lamented, "Why is it that the (equally great) biologist E. O. Wilson can spot the difference between two different kinds of ants at a hundred yards, but can't see the difference between an ant and a human?" The quip underlines strong differences of opinion on the issue of human uniqueness. It seems that half of the scientific world sees the human animal as on a continuum with other animals, while the other half sees a sharp break between animals and humans, two distinct groups. The argument has been raging for years, and it surely won't be settled in the near future. After all, we humans are either lumpers or splitters. We either see the similarities or prefer to note the differences.

At the same time, I hope to illuminate the issue from a particular perspective. I think it is rather empty to argue that because, say, social behavior exists in humans and in ants, there is nothing unique about human social behavior. Both the F-16 and the Piper Cub are planes, both obey the laws of physics, and both can get you from place A to place B, but they are hugely different and unique. I want to begin by simply recognizing the huge differences between the human mind and brain and other minds and brains, and then see what structures, processes, and capacities are uniquely human.

It has always been a puzzle to me why so many neuroscientists become agitated when someone raises the question of whether or not there might be unique features to the human brain. Why is it that one can easily accept that there are visible physical differences that make us unique, but to consider differences in our brains and how they work is so touchy? Recently, I queried a few with the question, “If you were recording electrical impulses from a slice of the hippocampus in a dish and you were not told if the slice came from a mouse, a monkey or a human, would you be able to tell the difference? Put differently, is something unique about the human neuron? Would a future brain carpenter have to use that kind of neuron to build a human brain or would a monkey or mouse neuron do? Don't we all assume there is nothing unique about the neuron per se, that the special tricks of being human will come in the subtleties of the wiring diagram itself?”

The intensity of the response can be captured with just a couple of the many responses, “A cell is a cell is a cell. It's a universal unit of processing that only scales in size between the bee and the human. If you scale appropriately a mouse, monkey, or human pyramidal cell you won't be able to say the difference even if you had Pythia to help you.” So there! When we are studying the neurons of a mouse or an ant we are studying mechanisms no different from a human neuron, period, end of story.

Or, another response: “There are differences in the types of neurons within a brain, and response properties of neurons within a brain. But across mammals - I think a neuron is a neuron. The inputs and outputs of that neuron (and synaptic composition) determine its function”. Bang! Once again the physiology of the animal neuron is identical to that of a human. Without this assumption, it makes little sense to be studying these neurons so arduously. Of course there are similarities. But are there no differences?

Humans are unique. It is the "how" and the "why" that have been intriguing scientists, philosophers, and even lawyers for centuries. When trying to distinguish between animals and humans, controversies arise and battles are fought over ideas and the meaning of data, and when the smoke clears, we are left with more information on which to build stronger, tighter theories. Interestingly, in this quest, it appears that many opposing ideas are proving to be partially correct.

Although it is obvious to everyone that humans are physically unique, it is also obvious that we differ from other animals in far more complex aspects. We create art, pasta Bolognese, complex machines, and some of us understand quantum physics. We don’t need a neuroscientist to tell us that our brains are calling the shots, but we do need them to explain how it is done. How unique are we and how are we unique?

How the brain drives our thoughts and actions has remained elusive. Among the many unknowns is the great mystery of how a thought moves from the depths of the unconscious to become conscious. As methods for studying the brain have become more sophisticated, some mysteries are solved, but it seems that solving one mystery often leads to the creation of many more. Brain imaging studies have caused some commonly accepted tenets to come into question and others to be completely discounted. For example, the idea that the brain works as a generalist, processing all input information equally and in the same manner and meshing it together, is less well accepted than it was even fifteen years ago. Brain imaging studies have revealed that specific parts of the brain are active for specific types of information. When you look at a tool (a man-made artifact created with a specific purpose in mind), your entire brain is not engaged in the problem of studying it; rather, a specific area is activated for tool inspection.

Findings in this realm lead to further questions: How many types of information have their own regions? What specific information activates a given region? Why do we have specific regions for some types of activity and not others? And if we don't have a specific region for some type of information, what happens then? Although we have sophisticated imaging techniques available to us that can show us what part of the brain is involved with specific types of thoughts or actions, these scans tell us nothing of what is going on in that part of the brain. Today the cerebral cortex is thought to be "perhaps the most complex entity known to science."[i]

The brain is complicated enough on its own, but the sheer number of different disciplines[1] that are interested in it and are studying it has produced thousands of bits of information. It is a wonder that order can be put to the mountain of data. Words used in one discipline often carry different meanings in others. Findings can become distorted through poor or incorrect interpretation and become unfortunate foundations for, or inaccurate rebuttals of, theories that may take decades to be questioned and re-evaluated. Politicians or other public figures can oftentimes misinterpret or ignore findings to support or further a particular agenda, or stifle politically inconvenient research altogether. There is no need to be dispirited, though! Scientists are like a dog with a bone: they keep gnawing away, and sense is being made.

Let’s start on our quest into human uniqueness the way it has been done in the past: by just looking at that brain. Can its appearance tell us anything special?

BIG BRAINS AND BIG IDEAS?

Comparative neuroanatomy does what the name implies. It compares the brains of different species for size and structure. This is important, because in order to know what is unique in the human brain, or any other brain for that matter, one needs to know how brains are alike and how they differ. This used to be an easy job and didn't take much in the way of equipment: maybe a good saw and a scale, which was about all that was available up until the middle of the nineteenth century. Then Charles Darwin published his Origin of Species, and the question of whether man had descended from apes was front and center. Comparative anatomy was in the limelight, and the brain was center stage.

Throughout the history of neuroscience certain presumptions have been made. One of these is that the development of increased cognitive capacity is related to increased brain size over evolutionary time. This was the view held by Charles Darwin, who wrote "the difference between man and the higher animals, great as it is, is certainly one of degree and not of kind,"[ii] and by his ally, the neuroanatomist T. H. Huxley, who denied humans had any unique brain features other than size.[iii] The general acceptance of this notion, that all mammalian brains have the same components but that performance became more complex as the brain grew larger, led to the construction of the phylogenetic scale that some of us learned in school, with man sitting at the top of an evolutionary ladder rather than out on the branch of a tree.[i] However, Ralph Holloway, now a professor of anthropology at Columbia University, disagreed. In the mid-1960s, he suggested that evolutionary changes in cognitive capacity are the result of brain reorganization, rather than changes in size alone.[iv] This disagreement about how the human brain differs from those of other animals, and indeed how the brains of other animals differ from each other, whether the difference is one of quantity or of quality, continues. Todd M. Preuss, a neuroscientist at the Yerkes National Primate Research Center, points out why this disagreement is so controversial, and why new discoveries of differences in connectivity have been "inconvenient"[i]: many generalizations about cortical organization have been based on the "quantity" assumption. It has led scientists to believe that findings using models of brain structure found in other mammals, such as rats and monkeys, can be extrapolated to humans. If this is not correct, there are repercussions that reverberate into many other fields, such as anthropology, psychology, paleontology, sociology, and beyond.
Preuss advocates the need for comparative studies of mammalian brains, rather than treating the brain of a rat, say, as a smaller-scale model of how a human brain functions. He and many others now have findings showing that, on the microscopic level, mammalian brains differ widely from one another.[v]

Is this assumption about quantity correct? It would appear not. Many mammals have larger brains than humans; this measure is known as absolute brain size. The blue whale has a brain that is five times larger than a human brain.[vi] Is it five times smarter? Doubtful. It has a larger body to control and a simpler brain structure. Although Captain Ahab may have found a whale intellectually stimulating (albeit he was dealing with a sperm whale, whose brain is also larger than a human's), it has not been a universal experience. So perhaps proportional (allometric) brain size is important: that is, the size of the brain compared to the size of the body. Calculating brain size differences this way puts the whale in its place, with a brain that is only 0.01% of its body weight, as compared to a human brain at 2%. At the same time, consider the pocket mouse's brain, which is 10% of its body weight. In fact, in the early nineteenth century the anatomist Georges Cuvier stated, "All things being equal, the small animals have proportionately larger brains."[vi] As it turns out, proportional brain size increases predictably as body size decreases.
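The allometric comparison above is simple arithmetic. As a sketch in Python, using rough illustrative masses rather than measured values, the percentages work out like this:

```python
# Proportional (allometric) brain size: brain mass as a percentage of body mass.
# The masses below are rough illustrative values, not measured figures; the
# exact percentages in the text depend on the masses assumed.
SPECIES = {
    # name: (brain mass in grams, body mass in grams)
    "blue whale":   (6_500, 1.3e8),    # brain ~5x a human's, body ~130 tonnes
    "human":        (1_300, 65_000),   # works out to exactly 2%
    "pocket mouse": (1.0,   10.0),     # the extreme small-animal case, 10%
}

for name, (brain_g, body_g) in SPECIES.items():
    ratio = 100 * brain_g / body_g
    print(f"{name:>12}: brain is {ratio:.3g}% of body mass")
```

The pattern Cuvier noticed falls out directly: as body mass shrinks, the ratio climbs.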

What can be said for human brains is that they are four to five times larger than would be expected for an average mammal of our body size.[vii] This is termed relative brain size. In fact, in the hominid (ape) line in general (from which humans evolved), brain size has increased much faster than body size; this is not true for other groups of primates, and the human brain has rocketed in size since the divergence from chimps.[viii] Whereas a chimp's brain weighs about 400 grams, a human's brain is about 1300 grams.[vi] So we do have big brains. Is this what is unique, and can it explain our intellect?

Remember Neanderthals? Homo neanderthalensis had a body mass comparable to Homo sapiens'[ix], but with slightly larger cranial volumes, measuring 1520 cubic centimeters (cc) compared to the 1340 cc typical of modern humans, so they had a somewhat larger relative brain size than modern humans. Did they have a similar intelligence? Neanderthals made tools and apparently "imported" raw materials from distant sites, invented standardized techniques for making spears and tools[x], and about 50,000 years ago began to paint their bodies and inter their dead.[xi] These activities are considered by many researchers to indicate some self-awareness and the beginnings of symbolic thought[vi], which is important because symbolic thought is held to be the essential component of human speech.[xii] No one knows the extent of their speech capabilities, but what is clear is that Neanderthal material culture was not nearly as complex as that of their contemporary Homo sapiens.[xiii][xiv] Although the bigger brain of the Neanderthals was not as capable as that of Homo sapiens, it was clearly more advanced than that of a chimp. The other problem with the big-brain theory is that Homo sapiens' brain size has decreased by about 150 cc over the species' history, while its culture and social structures have become more complex. So perhaps relative brain size is important, but it is not the whole story, and since we are dealing with "perhaps the most complex entity known to science,"[i] that should not surprise us at all.

From my own perspective on this issue, I have never been taken with the brain size argument. For the past 45 years I have been studying split-brain patients. These are patients who have had the two hemispheres of their brain surgically separated in an effort to control their epilepsy. Following the surgery, the left brain can no longer communicate meaningfully with the right brain, thus isolating one from the other. In effect, a 1340-gram interconnected brain has become a 670-gram brain. What happens to intelligence?

Well, not much. What one sees is the specialization that we humans have developed over years of evolutionary change. The left hemisphere is the smart half of the brain. It speaks, thinks, and generates hypotheses. The right brain does not and is a poor symbolic cousin to the left. It does, on the other hand, have some skills that remain superior to those of the left, especially in the domain of visual perception. Yet, for present purposes, the overarching point is that the left hemisphere remains as cognitively adept as it was before it was disconnected from the right brain, leaving its 670 grams in the dust. Smart brains are derived from more than mere size.

Before we leave the question of brain size, there is some exciting new information from the field of genetics. Genetics research is revolutionizing many fields of study, including neuroscience. For those of us who are natural selection fans, it seems reasonable to assume that the explosion in human brain size is the result of natural selection, which works through many mechanisms. Genes are regions on chromosomes (microscopic thread-like structures found in the nucleus of all cells that are the carriers of hereditary characteristics), and those regions have DNA sequences.[2] Sometimes these sequences vary slightly, and as a result, what that particular gene controls can vary in some way. These variations are called alleles. Thus, a gene coding for flower color can vary in its DNA base pairs and result in differing flower colors. When an allele has a highly important and positive effect on an organism, such that it improves the organism's fitness or allows it to reproduce more, there is what is called positive selection, or directional selection, for that allele. Natural selection would favor such an event, and that particular allele would be selected for quickly.

While not all genes' functions are known, there are many genes involved with the development of the brain that differ from those of other mammals, and specifically from those of other primates.[3] These genes are involved in determining how many neurons there are, as well as brain size, during embryonic development. There is not much difference between species in the genes that do routine "house-keeping" in the nervous system, those involved in the most basic cellular functions such as metabolism and protein synthesis.[xv] However, two genes have been identified (the story of which is fascinating in itself) that are specific regulators of brain size: microcephalin[xvi] and ASPM (abnormal spindle-like microcephaly associated).[4][xvii] These genes were discovered because a defect in them caused a problem that was passed on through birth to other family members. Defects in either of these genes lead to primary microcephaly, an autosomal recessive[5] neurodevelopmental disorder. Two principal features characterize this disorder: a markedly reduced head size that is the consequence of a small but architecturally normal brain, and nonprogressive mental retardation. The genes were named for the disease they cause when defective.[6] It is the cerebral cortex (remember this point) that shows the greatest size reduction. In fact, the brain size is so markedly decreased (three standard deviations below normal) that it is comparable to that of early hominids![xviii]

Recent research from Bruce Lahn, a professor of genetics at the University of Chicago and the Howard Hughes Medical Institute, and his lab has shown that both of these genes have undergone significant changes under the pressure of natural selection during the evolution of Homo sapiens. Microcephalin (without the defect) shows evidence of accelerated evolution along the entire primate lineage[xix], and ASPM (also without the defect) has evolved most rapidly since the divergence of humans and chimps[xx], implicating these genes in the rapidly exploding brain size of our ancestors.

Accelerated evolution means what it sounds like. The gene was a hot item that produced a characteristic giving its owners an obvious competitive advantage. Whoever had it had more offspring, and it became the dominant gene. Not content with these findings, the researchers wondered if these genes could be used to answer the question of whether the human brain is continuing to evolve. It turns out that they could, and it is. They reasoned that if a gene has evolved adaptively in the making of the human species, as these brain-size genes have, then it may still be doing so. How do you figure this out?

Scientists compared the genetic sequences of ethnically and geographically diverse people from around the world and found that the genes that code for the nervous system had some sequence differences (known as polymorphisms) among individuals. By analyzing human and chimpanzee polymorphism patterns, genetic probabilities (along with various other genetic tools), and geographical distributions, they found evidence that some of these genes are experiencing ongoing positive selection in humans. They calculated that one genetic variant of microcephalin arose approximately 37,000 years ago, which coincides with the emergence of culturally modern humans, and that it increased in frequency too rapidly to be compatible with random genetic drift or population migration. This suggests that it underwent positive selection.[xxi] An ASPM variant arose about 5800 years ago, coincident with the spread of agriculture and cities and the first record of written language. It too is found at such high frequencies in the population that it indicates strong positive selection.[xxii]

This all sounds promising. We've got the big brains. Some of those big brains have discovered at least some of the genes that code for the big brains, and the genes appear to have changed at key times in our evolution. Doesn't this mean they caused it all to happen and that they are what make us unique? What we don't know is whether the genetic changes caused the cultural changes or were synergistic with them[xxiii], and even if they did, what exactly is going on in those big brains, and how is it happening? Is it happening just in ours, or is it happening, to a lesser extent, in our relatives the chimps?[7]

[1] Not only has the brain drawn the interest of anthropologists, psychologists, sociologists, philosophers, and politicians, it has intrigued biologists of all sorts (microbiologists, anatomists, biochemists, geneticists, paleobiologists, physiologists, evolutionary biologists, neurologists), as well as chemists, pharmacologists, and computer engineers. More recently, even marketers and economists are jumping in.

[2] Deoxyribonucleic acid, or DNA, is a double-stranded helical structure with a backbone made up of sugars and phosphates. Each sugar has one of four types of bases attached to it: adenine (abbreviated A), cytosine (C), guanine (G), and thymine (T). These bases then attach to each other (A with T, C with G) and hold the helix together. It is the sequence of these bases that carries the genetic code.
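The pairing rule in this footnote (A with T, C with G) is easy to make concrete. Here is a minimal Python sketch; the function name and sample sequence are my own illustration, not from the text:

```python
# Watson-Crick base pairing from the footnote: A bonds with T, C bonds with G.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand: str) -> str:
    """Return the partner strand implied by the pairing rule."""
    return "".join(PAIR[base] for base in strand.upper())

print(complement("ATCG"))  # prints TAGC
```

Applying the rule twice returns the original sequence, which is exactly what lets the two strands of the helix specify each other.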

[3] These include the genes named ASPM, Microcephalin, CDK5RAP2, CENPJ, Sonic Hedgehog, APAF1, and CASP3.

[4] The origins of this gene story are fascinating. Pakistan built the Mangla Dam in the 1960s on the Jhelum River to generate power and store water for irrigation. The lake that was created behind the dam flooded the valley, and 20,000 families lost their homes and fertile farms in the region of Mirpur in Kashmir. Many of these families moved to Yorkshire, England, where there was a shortage of skilled textile workers. Many years later, C. Geoffrey Woods, a physician and clinical geneticist from St James's University Hospital in Leeds, England, noticed that he was seeing several Pakistani families with children who had primary microcephaly. He began to study the DNA of the affected children and their unaffected relatives, which led to the discovery of these two genes. The Mangla Dam was a controversial project at the time, and is once again: the Pakistani government is currently trying to increase the size of the dam and again displace somewhere between 44,000 and 100,000 people. A short review of the detective work that went into the discovery of these two genes can be found in:

[5] Every person has two copies of every gene on non-sex-linked chromosomes, one from the mother and one from the father. A recessive gene means that in order for it to cause a visible or detectable characteristic, there must be a copy of it from both the mother and the father. If there is only one copy, say from the mother, then the dominant gene from the father would determine the visible characteristic. Both parents have to be carriers of a recessive trait in order for a child to have the visible characteristic. If both parents are carriers, each child has a 25% chance of showing the recessive trait.
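The 25% figure follows from enumerating the four equally likely ways a child can inherit one allele from each carrier parent. A small Python sketch of that enumeration (the allele labels are my own illustration):

```python
# Two carrier parents (Aa x Aa): each passes one allele to the child.
# "A" is the dominant allele, "a" the recessive one.
from itertools import product

mother = ("A", "a")
father = ("A", "a")

# The four equally likely inheritance outcomes: AA, Aa, aA, aa.
offspring = list(product(mother, father))

# The recessive trait shows only when the child gets "a" from both parents.
affected = [genes for genes in offspring if genes == ("a", "a")]

print(len(affected) / len(offspring))  # prints 0.25
```

Only one of the four combinations is "aa", hence the one-in-four chance per child.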

[6] If you are interested in the nomenclature of genes, check out this website: gene.ucl.ac.uk/nomenclature.

[7] We sit on a branch of an evolutionary tree, not on the top of a ladder. Chimpanzees are our closest living relatives, and we share a common ancestor. Oftentimes in animal studies, comparisons are made with chimpanzees because they are the animals most likely to share similar abilities.

JOSEPH WEIZENBAUM, 1923 – 2008

What is the compelling urgency of the machine that it can so intrude itself into the very stuff out of which man builds his world?

— Joseph Weizenbaum

Somehow I managed to miss, until just a few days ago, the news that Joseph Weizenbaum had died. He died of cancer on March 5, in his native Germany, at the age of 85. Coincidentally, I was in Germany that same day, giving a talk at the CeBIT technology show, and — strange but true — one of the books I had taken along on the trip was Weizenbaum's Computer Power and Human Reason.

Born in 1923, Weizenbaum left Germany with his family in 1936, to escape the Nazis, and came to America. After earning a degree in mathematics and working on programming some of the earliest mainframes, he spent most of his career as a professor of computer science at MIT. He became — to his chagrin — something of a celebrity in the 1960s when he wrote the Eliza software program, an early attempt at using a computer to simulate a person. Eliza was designed to mimic the conversational style of a psychotherapist, and many people who used the program found the conversations so realistic that they were convinced that Eliza had a capacity for empathy.

The reaction to Eliza startled Weizenbaum, and after much soul-searching he became, as John Markoff wrote in his New York Times obituary, a "heretic" in the computer-science world, raising uncomfortable questions about man's growing dependence on computers. Computer Power and Human Reason, published in 1976, remains one of the best books ever written about computing and its human implications. It's dated in some of its details, but its messages seem as relevant, and as troubling, as ever. Weizenbaum argued, essentially, that computers impose a mechanistic point of view on their users — on us — and that that perspective can all too easily crowd out other, possibly more human, perspectives.

The influence of computers is hard to resist and even harder to escape, wrote Weizenbaum:

The computer becomes an indispensable component of any structure once it is so thoroughly integrated with the structure, so enmeshed in various vital substructures, that it can no longer be factored out without fatally impairing the whole structure. That is virtually a tautology. The utility of this tautology is that it can reawaken us to the possibility that some human actions, e.g., the introduction of computers into some complex human activities, may constitute an irreversible commitment. . . . The computer was not a prerequisite to the survival of modern society in the post-war period and beyond; its enthusiastic, uncritical embrace by the most "progressive" elements of American government, business, and industry quickly made it a resource essential to society's survival in the form that the computer itself had been instrumental in shaping.

The machine's influence shapes not only society's structures but the more intimate structures of the self. Under the sway of the ubiquitous, "indispensable" computer, we begin to take on its characteristics, to see the world, and ourselves, in the computer's (and its programmers') terms. We become ever further removed from the "direct experience" of nature, from the signals sent by our senses, and ever more encased in the self-contained world delineated and mediated by technology. It is, cautioned Weizenbaum, a perilous transformation:

Science and technology are sustained by their translations into power and control. To the extent that computers and computation may be counted as part of science and technology, they feed at the same table. The extreme phenomenon of the compulsive programmer teaches us that computers have the power to sustain megalomaniac fantasies. But the power of the computer is merely an extreme version of a power that is inherent in all self-validating systems of thought. Perhaps we are beginning to understand that the abstract systems — the games computer people can generate in their infinite freedom from the constraints that delimit the dreams of workers in the real world — may fail catastrophically when their rules are applied in earnest. We must also learn that the same danger is inherent in other magical systems that are equally detached from authentic human experience, and particularly in those sciences that insist they can capture the whole man in their abstract skeletal frameworks.

His own invention, Eliza, revealed to Weizenbaum the ease with which we will embrace a fabricated world. He spent the rest of his life trying to warn us away from the seductions of Eliza and her many friends. The quest may have been quixotic, but there was something heroic about it too.

We have lost a lion of Computer Science. Joseph Weizenbaum’s life is proof that someone can be an absolute alpha-geek and a compassionate, soulful person at the same time. He displayed innovative courage in recognizing the seductive dangers of computation.

History will remember Weizenbaum as the clearest thinker about the philosophy of computation. A metaphysical confrontation dominated his interactions with the non-human centered mainstream. There were endless arguments about whether people were special in ways that cybernetic artifacts could never be. The mainstream preferred to sprinkle the magic dust of specialness on the “instruments,” as Weizenbaum put it, instead of people.

But there was a less metaphysical side of Weizenbaum’s thinking that is urgently applicable to the most pressing problems we all face right now. He warned that if you believe in computers too much, you lose touch with reality. That’s the real danger of the magic dust so liberally sprinkled by the mainstream. We pass this fallacy from the lab out into the world. This is apparently what happened to Wall Street traders, fomenting a series of massive financial failures. Computers can be used rather too easily to improve the efficiency with which we lie to ourselves. This is the side of Weizenbaum that I wish were better known.

We wouldn’t let a student become a professional medical researcher without learning about double-blind experiments, control groups, placebos, the replication of results, and so on. Why is computer science given a unique pass that allows us to be soft on ourselves? Every computer science student should be trained in Weizenbaumian skepticism, and should try to pass that precious discipline along to the users of our inventions.

Weizenbaum’s legacy includes an unofficial minority school in computer science that has remained human-centered. A few of the other members, in my opinion, are David Gelernter, Ted Nelson, Terry Winograd, Alan Kay, and Ben Shneiderman.

Everything about computers has become associated with youth. Turing’s abstractions have been woven into a theater in which we can enjoy fantasies of eternal youth. We are fascinated by whiz kids and the latest young billionaires in Silicon Valley. We fantasize that we will be uploaded when the singularity arrives in order to become immortal, and so on. But when we look away from the stage for a moment, we realize that we computer scientists are ultimately people. We die.