
Monkey See, Monkey Do.

8 monkeys are in a room. There is a ladder in the room, and at the top of the ladder is a bunch of bananas. The monkeys are fed adequate but unappetizing food.

For the first week, any time a monkey climbs the ladder, a nozzle drenches all the monkeys in icy cold water. Soon, any time a monkey starts to climb the ladder, all the other monkeys beat him senseless to avoid being punished.

One monkey is removed, and another takes his place. Not knowing about the ice water punishment, he approaches the ladder. All the other monkeys beat him senseless. Every time he gets near the ladder, they all gang up on him.

A second monkey is replaced. Same thing. The first replacement monkey joins in the beating.

A third is replaced. A fourth. Eventually all the original monkeys are replaced. The new 8 monkeys have never experienced the ice water punishment, yet any time any of them approaches the ladder, the rest of them gang up on him and beat him senseless.

I’m reminded of another story. A woman is teaching her daughter how to cook a roast. She cuts a small portion off each end of the roast, and the daughter asks her why. The woman replies, “I don’t know, but my mother always cut the ends off her roast, so I do it.” The daughter calls up her grandmother and asks why she always cut the ends off the roasts. Grandmother replies, “I don’t know; my mother always did it.” The next time Daughter visits great-grandma in the nursing home, she asks why she cut the ends off the roasts. Great-Grandma replies, “Because my pan wasn’t large enough to cook a whole one.”

These two concepts are dogma. They are memes. The behaviour outlined in these tales poses a significant danger to an intellectual society.

Let’s add some ideas to the monkey problem. Now we have a ladder leading to one bunch, and a rope leading to another. One group of monkeys is punished any time they go up the ladder, the other is punished any time they go up the rope.

Take one monkey from the first group and put him in the second. Initially, he will try to fight any monkey going up the ladder, and the rest of the monkeys will punish him for going up the rope. For a while, at least. Eventually, he’ll learn the new rules and conform to his new society.

What about when you put all 16 monkeys in a room with a rope and a ladder?

Chaos. An “Us vs Them” mentality develops. Each group continues to reinforce its own dogma. Peace only exists when no monkey tries to get a banana.

On a related topic, we recently had some discussion about whether the Precepts represent just common sense, or innate human wisdom. This story is worth reading in that regard.

Gassho, Jundo

Born to Be Good
Review by RICHARD RORTY

Nazi parents found it easy to turn their children into conscientious little monsters. In some countries, young men are raised to believe that they have a moral obligation to kill their unchaste sisters. Gruesome examples like these suggest that morality is a matter of nurture rather than nature — that there are no biological constraints on what human beings can be persuaded to believe about right and wrong. Marc Hauser disagrees. He holds that “we are born with abstract rules or principles, with nurture entering the picture to set the parameters and guide us toward the acquisition of particular moral systems.” Empirical research will enable us to distinguish the principles from the parameters and thus to discover “what limitations exist on the range of possible or impossible moral systems.”

Hauser is professor of psychology, organismic and evolutionary biology, and biological anthropology at Harvard. He believes that “policy wonks and politicians should listen more closely to our intuitions and write policy that effectively takes into account the moral voice of our species.” Biologists, he thinks, are in a position to amplify this voice. For they have discovered evidence of the existence of what Hauser sometimes calls “a moral organ” and sometimes “a moral faculty.” This area of the brain is “a circuit, specialized for recognizing certain problems as morally relevant.” It incorporates “a universal moral grammar, a toolkit for building specific moral systems.” Now that we have learned that such a grammar exists, Hauser says, we can look forward to “a renaissance in our understanding of the moral domain.”

The exuberant triumphalism of the prologue to “Moral Minds” leads the reader to expect that Hauser will lay out criteria for distinguishing parochial moral codes from universal principles, and will offer at least a tentative list of those principles. These expectations are not fulfilled. The vast bulk of “Moral Minds” consists of reports of experimental results, but Hauser does very little to make clear how these results bear on his claim that there is a “moral voice of our species.”

Many of the experiments Hauser tells us about are intended to delimit stages in child development. Three-year-olds already know, for example, that “if an act causes harm, but the intention was good, then the act is judged less severely.” Hauser takes this fact to support the claim that “rather than a learned capacity ... our ability to detect cheaters who violate social norms is one of nature’s gifts.” But do such facts as that children learn to use expressions like “didn’t mean to do it” at roughly the same time as they learn “shouldn’t have done it” help us draw a line between nature and nurture? Hauser does not spell out the relevance of data about child development to the question of whether internalizing a moral code requires a dedicated area of the brain.

To convince us that such an organ exists, Hauser would have to start by drawing a bright line separating what he calls “the moral domain” — one that nonhuman species cannot enter — from other domains. But he never does. The closest he comes is saying things like “a central difference between social conventions and moral rules is the seriousness of an infraction.” He takes this to suggest “that moral rules consist of two ingredients: a prescriptive theory or body of knowledge about what one ought to do, and an anchoring set of emotions.” Apparently both rules of etiquette and moral rules embody knowledge about what ought to be done. All that is distinctive about morality is added emotional freight. But, as Hauser tells us, many nonhuman species obey social conventions. (For example, “Do not start tearing at the carcass before the alpha male has eaten his fill.”) It is hard to see why evolution had to carve out a new, specialized organ just to generate the extra emotional intensity that differentiates guilt from chagrin.

Perhaps Hauser does not mean to say that greater seriousness is the only, or the most important, mark of the moral domain. But the reader is left guessing about how he proposes to distinguish morality not just from etiquette, but also from prudential calculation, mindless conformity to peer pressure and various other things. This makes it hard to figure out what exactly his moral module is supposed to do. It also makes it difficult to envisage experiments that would help us decide between his hypothesis and the view that all we need to internalize a moral code is general-purpose learning-from-experience circuitry — the same circuitry that lets us internalize, say, the rules of baseball.

Hauser thinks that Noam Chomsky has shown that in at least one area — learning how to produce grammatical sentences — the latter sort of circuitry will not do the job. We need, Hauser says, a “radical rethinking of our ideas on morality, which is based on the analogy to language.” But the analogy seems fragile. Chomsky has argued, powerfully if not conclusively, that simple trial-and-error imitation of adult speakers cannot explain the speed and confidence with which children learn to talk: some special, dedicated mechanism must be at work. But is a parallel argument available to Hauser? For one thing, moral codes are not assimilated with any special rapidity. For another, the grammaticality of a sentence is rarely a matter of doubt or controversy, whereas moral dilemmas pull us in opposite directions and leave us uncertain. (Is it O.K. to kill a perfectly healthy but morally despicable person if her harvested organs would save the lives of five admirable people who need transplants? Ten people? Dozens?)

Hauser hopes that his book will convince us that “morality is grounded in our biology.” Once we have grasped this fact, he thinks, “inquiry into our moral nature will no longer be the proprietary province of the humanities and social sciences, but a shared journey with the natural sciences.” But by “grounded in” he does not mean that facts about what is right and wrong can be inferred from facts about neurons. The “grounding” relation in question is not like that between axioms and theorems. It is more like the relation between your computer’s hardware and the programs you run on it. If your hardware were of the wrong sort, or if it got damaged, you could not run some of those programs.

Knowing more details about how the diodes in your computer are laid out may, in some cases, help you decide what software to buy. But now imagine that we are debating the merits of a proposed change in what we tell our kids about right and wrong. The neurobiologists intervene, explaining that the novel moral code will not compute. We have, they tell us, run up against hard-wired limits: our neural layout permits us to formulate and commend the proposed change, but makes it impossible for us to adopt it. Surely our reaction to such an intervention would be, “You might be right, but let’s try adopting it and see what happens; maybe our brains are a bit more flexible than you think.” It is hard to imagine our taking the biologists’ word as final on such matters, for that would amount to giving them a veto over utopian moral initiatives.

The humanities and the social sciences have, over the centuries, done a great deal to encourage such initiatives. They have helped us better to distinguish right from wrong. Reading histories, novels, philosophical treatises and ethnographies has helped us to reprogram ourselves — to update our moral software. Maybe someday biology will do the same. But Hauser has given us little reason to believe that day is near at hand.

Richard Rorty recently retired from teaching at Stanford. He is the author of “Philosophy and Social Hope.”