A thousand years ago, a group of Vikings led by Erik the Red set sail from Norway for the vast Arctic landmass west of Scandinavia which came to be known as Greenland. It was largely uninhabitable -- a forbidding expanse of snow and ice. But along the southwestern coast there were two deep fjords protected from the harsh winds and saltwater spray of the North Atlantic Ocean, and as the Norse sailed upriver they saw grassy slopes flowering with buttercups, dandelions, and bluebells, and thick forests of willow and birch and alder. Two colonies were formed, three hundred miles apart, known as the Eastern and Western Settlements. The Norse raised sheep, goats, and cattle. They turned the grassy slopes into pastureland. They hunted seal and caribou. They built a string of parish churches and a magnificent cathedral, the remains of which are still standing. They traded actively with mainland Europe, and tithed regularly to the Roman Catholic Church. The Norse colonies in Greenland were law-abiding, economically viable, fully integrated communities, numbering at their peak five thousand people. They lasted for four hundred and fifty years -- and then they vanished.

The story of the Eastern and Western Settlements of Greenland is told in Jared Diamond's "Collapse: How Societies Choose to Fail or Succeed" (Viking; $29.95). Diamond teaches geography at U.C.L.A. and is well known for his best-seller "Guns, Germs, and Steel," which won a Pulitzer Prize. In "Guns, Germs, and Steel," Diamond looked at environmental and structural factors to explain why Western societies came to dominate the world. In "Collapse," he continues that approach, only this time he looks at history's losers -- like the Easter Islanders, the Anasazi of the American Southwest, the Mayans, and the modern-day Rwandans. We live in an era preoccupied with the way that ideology and culture and politics and economics help shape the course of history. But Diamond isn't particularly interested in any of those things -- or, at least, he's interested in them only insofar as they bear on what to him is the far more important question, which is a society's relationship to its climate and geography and resources and neighbors. "Collapse" is a book about the most prosaic elements of the earth's ecosystem -- soil, trees, and water -- because societies fail, in Diamond's view, when they mismanage those environmental factors.

There was nothing wrong with the social organization of the Greenland settlements. The Norse built a functioning reproduction of the predominant northern-European civic model of the time -- devout, structured, and reasonably orderly. In 1408, right before the end, records from the Eastern Settlement dutifully report that Thorstein Olafsson married Sigrid Bjornsdotter in Hvalsey Church on September 14th of that year, with Brand Halldorstson, Thord Jorundarson, Thorbjorn Bardarson, and Jon Jonsson as witnesses, following the proclamation of the wedding banns on three consecutive Sundays.

The problem with the settlements, Diamond argues, was that the Norse thought that Greenland really was green; they treated it as if it were the verdant farmland of southern Norway. They cleared the land to create meadows for their cows, and to grow hay to feed their livestock through the long winter. They chopped down the forests for fuel, and for the construction of wooden objects. To make houses warm enough for the winter, they built their homes out of six-foot-thick slabs of turf, which meant that a typical home consumed about ten acres of grassland.

But Greenland's ecosystem was too fragile to withstand that kind of pressure. The short, cool growing season meant that plants developed slowly, which in turn meant that topsoil layers were shallow and lacking in soil constituents, like organic humus and clay, that hold moisture and keep soil resilient in the face of strong winds. "The sequence of soil erosion in Greenland begins with cutting or burning the cover of trees and shrubs, which are more effective at holding soil than is grass," he writes. "With the trees and shrubs gone, livestock, especially sheep and goats, graze down the grass, which regenerates only slowly in Greenland's climate. Once the grass cover is broken and the soil is exposed, soil is carried away especially by the strong winds, and also by pounding from occasionally heavy rains, to the point where the topsoil can be removed for a distance of miles from an entire valley." Without adequate pastureland, the summer hay yields shrank; without adequate supplies of hay, keeping livestock through the long winter got harder. And, without adequate supplies of wood, getting fuel for the winter became increasingly difficult.

The Norse needed to reduce their reliance on livestock -- particularly cows, which consumed an enormous amount of agricultural resources. But cows were a sign of high status; to northern Europeans, beef was a prized food. They needed to copy the Inuit practice of burning seal blubber for heat and light in the winter, and to learn from the Inuit the difficult art of hunting ringed seals, which were the most reliably plentiful source of food available in the winter. But the Norse had contempt for the Inuit -- they called them skraelings, "wretches" -- and preferred to practice their own brand of European agriculture. In the summer, when the Norse should have been sending ships on lumber-gathering missions to Labrador, in order to relieve the pressure on their own forestlands, they instead sent boats and men to the coast to hunt for walrus. Walrus tusks, after all, had great trade value. In return for those tusks, the Norse were able to acquire, among other things, church bells, stained-glass windows, bronze candlesticks, Communion wine, linen, silk, silver, churchmen's robes, and jewelry to adorn their massive cathedral at Gardar, with its three-ton sandstone building blocks and eighty-foot bell tower. In the end, the Norse starved to death.

Diamond's argument stands in sharp contrast to the conventional explanations for a society's collapse. Usually, we look for some kind of cataclysmic event. The aboriginal civilization of the Americas was decimated by the sudden arrival of smallpox. European Jewry was destroyed by Nazism. Similarly, the disappearance of the Norse settlements is usually blamed on the Little Ice Age, which descended on Greenland in the early fourteen-hundreds, ending several centuries of relative warmth. (One archeologist refers to this as the "It got too cold, and they died" argument.) What all these explanations have in common is the idea that civilizations are destroyed by forces outside their control, by acts of God.

But look, Diamond says, at Easter Island. Once, it was home to a thriving culture that produced the enormous stone statues that continue to inspire awe. It was home to dozens of species of trees, which created and protected an ecosystem fertile enough to support as many as thirty thousand people. Today, it's a barren and largely empty outcropping of volcanic rock. What happened? Did a rare plant virus wipe out the island's forest cover? Not at all. The Easter Islanders chopped their trees down, one by one, until they were all gone. "I have often asked myself, 'What did the Easter Islander who cut down the last palm tree say while he was doing it?'" Diamond writes, and that, of course, is what is so troubling about the conclusions of "Collapse." Those trees were felled by rational actors -- who must have suspected that the destruction of this resource would result in the destruction of their civilization. The lesson of "Collapse" is that societies, as often as not, aren't murdered. They commit suicide: they slit their wrists and then, in the course of many decades, stand by passively and watch themselves bleed to death.

This doesn't mean that acts of God don't play a role. It did get colder in Greenland in the early fourteen-hundreds. But it didn't get so cold that the island became uninhabitable. The Inuit survived long after the Norse died out, and the Norse had all kinds of advantages, including a more diverse food supply, iron tools, and ready access to Europe. The problem was that the Norse simply couldn't adapt to the country's changing environmental conditions. Diamond writes, for instance, of the fact that nobody can find fish remains in Norse archeological sites. One scientist sifted through tons of debris from the Vatnahverfi farm and found only three fish bones; another researcher analyzed thirty-five thousand bones from the garbage of another Norse farm and found two fish bones. How can this be? Greenland is a fisherman's dream: Diamond describes running into a Danish tourist in Greenland who had just caught two Arctic char in a shallow pool with her bare hands. "Every archaeologist who comes to excavate in Greenland . . . starts out with his or her own idea about where all those missing fish bones might be hiding," he writes. "Could the Norse have strictly confined their munching on fish to within a few feet of the shoreline, at sites now underwater because of land subsidence? Could they have faithfully saved all their fish bones for fertilizer, fuel, or feeding to cows?" It seems unlikely. There are no fish bones in Norse archeological remains, Diamond concludes, for the simple reason that the Norse didn't eat fish. For one reason or another, they had a cultural taboo against it.

Given the difficulty that the Norse had in putting food on the table, this was insane. Eating fish would have substantially reduced the ecological demands of the Norse settlements. The Norse would have needed fewer livestock and less pastureland. Fishing is not nearly as labor-intensive as raising cattle or hunting caribou, so eating fish would have freed time and energy for other activities. It would have diversified their diet.

Why did the Norse choose not to eat fish? Because they weren't thinking about their biological survival. They were thinking about their cultural survival. Food taboos are one of the idiosyncrasies that define a community. Not eating fish served the same function as building lavish churches, and doggedly replicating the untenable agricultural practices of their land of origin. It was part of what it meant to be Norse, and if you are going to establish a community in a harsh and forbidding environment, all those little idiosyncrasies that define and cement a culture are of paramount importance. "The Norse were undone by the same social glue that had enabled them to master Greenland's difficulties," Diamond writes. "The values to which people cling most stubbornly under inappropriate conditions are those values that were previously the source of their greatest triumphs over adversity." He goes on:

To us in our secular modern society, the predicament in which the Greenlanders found themselves is difficult to fathom. To them, however, concerned with their social survival as much as their biological survival, it was out of the question to invest less in churches, to imitate or intermarry with the Inuit, and thereby to face an eternity in Hell just in order to survive another winter on Earth.

Diamond's distinction between social and biological survival is a critical one, because too often we blur the two, or assume that biological survival is contingent on the strength of our civilizational values. That was the lesson taken from the two world wars and the nuclear age that followed: we would survive as a species only if we learned to get along and resolve our disputes peacefully. The fact is, though, that we can be law-abiding and peace-loving and tolerant and inventive and committed to freedom and true to our own values and still behave in ways that are biologically suicidal. The two kinds of survival are separate.

Diamond points out that the Easter Islanders did not practice, so far as we know, a uniquely pathological version of South Pacific culture. Other societies, on other islands in the Pacific, chopped down trees and farmed and raised livestock just as the Easter Islanders did. What doomed the Easter Islanders was the interaction between what they did and where they were. Diamond and a colleague, Barry Rollett, identified nine physical factors that contributed to the likelihood of deforestation -- including latitude, average rainfall, aerial-ash fallout, proximity to Central Asia's dust plume, size, and so on -- and Easter Island ranked at the high-risk end of nearly every variable. "The reason for Easter's unusually severe degree of deforestation isn't that those seemingly nice people really were unusually bad or improvident," he concludes. "Instead, they had the misfortune to be living in one of the most fragile environments, at the highest risk for deforestation, of any Pacific people." The problem wasn't the Easter Islanders. It was Easter Island.

In the second half of "Collapse," Diamond turns his attention to modern examples, and one of his case studies is the recent genocide in Rwanda. What happened in Rwanda is commonly described as an ethnic struggle between the majority Hutu and the historically dominant, wealthier Tutsi, and it is understood in those terms because that is how we have come to explain much of modern conflict: Serb and Croat, Jew and Arab, Muslim and Christian. The world is a cauldron of cultural antagonism. It's an explanation that clearly exasperates Diamond. The Hutu didn't just kill the Tutsi, he points out. The Hutu also killed other Hutu. Why? Look at the land: steep hills farmed right up to the crests, without any protective terracing; rivers thick with mud from erosion; extreme deforestation leading to irregular rainfall and famine; staggeringly high population densities; the exhaustion of the topsoil; falling per-capita food production. This was a society on the brink of ecological disaster, and if there is anything that is clear from the study of such societies it is that they inevitably descend into genocidal chaos. In "Collapse," Diamond quite convincingly defends himself against the charge of environmental determinism. His discussions are always nuanced, and he gives political and ideological factors their due. The real issue is how, in coming to terms with the uncertainties and hostilities of the world, the rest of us have turned ourselves into cultural determinists.

For the past thirty years, Oregon has had one of the strictest sets of land-use regulations in the nation, requiring new development to be clustered in and around existing urban development. The laws have meant that Oregon has done perhaps the best job in the nation in limiting suburban sprawl, and protecting coastal lands and estuaries. But this November Oregon's voters passed a ballot referendum, known as Measure 37, that rolled back many of those protections. Specifically, Measure 37 said that anyone who could show that the value of his land was affected by regulations implemented since its purchase was entitled to compensation from the state. If the state declined to pay, the property owner would be exempted from the regulations.

To call Measure 37 -- and similar referendums that have been passed recently in other states -- intellectually incoherent is to put it mildly. It might be that the reason your hundred-acre farm on a pristine hillside is worth millions to a developer is that it's on a pristine hillside: if everyone on that hillside could subdivide, and sell out to Target and Wal-Mart, then nobody's plot would be worth millions anymore. Will the voters of Oregon then pass Measure 38, allowing them to sue the state for compensation over damage to property values caused by Measure 37?

It is hard to read "Collapse," though, and not have an additional reaction to Measure 37. Supporters of the law spoke entirely in the language of political ideology. To them, the measure was a defense of property rights, preventing the state from unconstitutional "takings." If you replaced the term "property rights" with "First Amendment rights," this would have been indistinguishable from an argument over, say, whether charitable groups ought to be able to canvass in malls, or whether cities can control the advertising they sell on the sides of public buses. As a society, we do a very good job with these kinds of debates: we give everyone a hearing, and pass laws, and make compromises, and square our conclusions with our constitutional heritage -- and in the Oregon debate the quality of the theoretical argument was impressively high.

The thing that got lost in the debate, however, was the land. In a rapidly growing state like Oregon, what, precisely, are the state's ecological strengths and vulnerabilities? What impact will changed land-use priorities have on water and soil and cropland and forest? One can imagine Diamond writing about the Measure 37 debate, and he wouldn't be very impressed by how seriously Oregonians wrestled with the problem of squaring their land-use rules with their values, because to him a society's environmental birthright is not best discussed in those terms. Rivers and streams and forests and soil are a biological resource. They are a tangible, finite thing, and societies collapse when they get so consumed with addressing the fine points of their history and culture and deeply held beliefs -- with making sure that Thorstein Olafsson and Sigrid Bjornsdotter are married before the right number of witnesses following the announcement of wedding banns on the right number of Sundays -- that they forget that the pastureland is shrinking and the forest cover is gone.

When archeologists looked through the ruins of the Western Settlement, they found plenty of the big wooden objects that were so valuable in Greenland -- crucifixes, bowls, furniture, doors, roof timbers -- which meant that the end came too quickly for anyone to do any scavenging. And, when the archeologists looked at the animal bones left in the debris, they found the bones of newborn calves, meaning that the Norse, in that final winter, had given up on the future. They found toe bones from cows, equal to the number of cow spaces in the barn, meaning that the Norse ate their cattle down to the hoofs, and they found the bones of dogs covered with knife marks, meaning that, in the end, they had to eat their pets. But not fish bones, of course. Right up until they starved to death, the Norse never lost sight of what they stood for.

These Vikings still seem to be better than we are. If Western civilization collapses, the primary reasons won't be cultural factors (the culture is already dying), or even nuclear war. Greed makes us look like a snake eating its own tail.

There is likely a sense on this circular that I tend to interject economics into every topic, but let's think about that for a minute.

Look, economics is the study of "human action in the face of scarcity"; it deserves to be part of every discussion. Especially this one, which is about people consuming every last tree or cow until they starve to death.

Did you know that early settlers in America starved to death, with losses higher than 50% in the first few months or the first year? The following quotes and ideas are from DiLorenzo's book "How Capitalism Saved America."

At the time, one of the Virginia colonists described the problem this way: the cause of the starvation, he said, was "want of providence, industrie and government, and not the barennesse and defect of the Countrie, as is generally supposed." The colonial expeditions had been set up as communal organizations; everything was owned by the funding institution, and individual colonists were simply indentured servants. Sir Thomas Dale was sent by the British government to serve as the high marshal of the Virginia colony. DiLorenzo: "Dale noted that although most of the settlers had starved to death, the remaining ones were spending much of their time playing games in the streets and he immediately identified the problem: the system of communal ownership. He determined, therefore, that each man in the colony would be given three acres of land and be required to work no more than one month per year ... to contribute to the treasury of the colony."

DiLorenzo quotes historian Mathew Page Andrews: "As soon as the settlers were thrown upon their own resources, and each freeman had acquired the right of owning property, the colonists quickly developed what became the distinguishing characteristic of Americans -- an aptitude for all kinds of craftsmanship coupled with an innate genius for experimentation and invention." The colony immediately began to prosper.

Now, obviously the Greenland example is different from this history -- but the nature of property in the Eastern and Western Greenland settlements may be crucial to understanding how a people could destroy their grassland, cut every last tree, and eat every cow. Private owners do not do that to their property -- but people do it all the time to communal property.

It would be interesting to see if historians know any more about these communities. I am out of time this morning, but this, I propose, is a key question that is all too often ignored by historians. It deserves some discussion, IMO.

Dozens of new genes identified

By intensely and systematically comparing the human X chromosome to genetic information from chimpanzees, rats and mice, a team of scientists from the United States and India has uncovered dozens of new genes, many of which are located in regions of the chromosome already tied to disease.

Regions of the X chromosome, one of the two sex chromosomes (Y is the other), have been linked to mental retardation and numerous other disorders, but finding the particular genetic abnormalities involved has been difficult.

The team's accomplishment, described in the April issue of Nature Genetics, should speed research into diseases associated with the X chromosome and encourage similar analyses of other chromosomes.

"To our knowledge, this is the first time critical analysis of an entire chromosome has been done by a group that wasn't involved in determining the chromosome's genetic sequence," says study leader Akhilesh Pandey, M.D., Ph.D., an assistant professor in the McKusick-Nathans Institute of Genetic Medicine at Johns Hopkins and chief scientific adviser to the Institute of Bioinformatics (IOB) in Bangalore, India, where the analyses took place. "We didn't start small. We wanted to prove that complete annotation can be done, and done in a way that lets you find new and unexpected things."

For 18 months, 26 Indian scientists pored through the publicly available sequence of the X chromosome (information generated by the Wellcome Trust Sanger Institute in England and others) to identify genes and other important parts of its DNA.

But unlike other efforts, the team didn't just "mine the data" by using computers to search for known patterns in the genetic sequence. Instead, Pandey decided they would look for similarities between the human X chromosome's protein-encoding instructions and corresponding regions in the mouse. Regions that were identical or nearly so were then examined carefully by IOB biologists.

"We didn't want to start out by saying that genes had to look a certain way," says Pandey. "So our only initial assumption was that if a genetic region is important and codes for a protein, the sequence will be conserved at the protein level. Thus, even if the genetic sequence is different here and there, the protein sequence could still be the same."

Essentially, the researchers took advantage of the redundancy inherent in the genetic code. DNA's four building blocks -- A, T, C and G -- act as instructions for proteins in select three-block sets. These three-block sets each "code" for just one of the 20 possible protein building blocks, or amino acids, but some of the sets code for the same amino acid. For example, the DNA sequences TTGAGGAGC and CTACGATCA are quite different, but both specify the same three amino acids -- leucine, arginine and serine, in that order.
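That redundancy is easy to see in code. Here is a minimal Python sketch using only the six codons from the example above (the real genetic code maps all 64 codons onto 20 amino acids plus stop signals):

```python
# Partial DNA-codon -> amino-acid lookup table (coding strand only).
# Just the six codons needed for the article's example are listed.
CODON_TABLE = {
    "TTG": "Leu", "CTA": "Leu",  # different codons, same amino acid
    "AGG": "Arg", "CGA": "Arg",
    "AGC": "Ser", "TCA": "Ser",
}

def translate(dna):
    """Translate a DNA string three bases at a time."""
    return [CODON_TABLE[dna[i:i + 3]] for i in range(0, len(dna), 3)]

print(translate("TTGAGGAGC"))  # ['Leu', 'Arg', 'Ser']
print(translate("CTACGATCA"))  # ['Leu', 'Arg', 'Ser'] -- same protein
```

The two DNA strings differ at seven of their nine positions yet specify identical amino acids, which is why aligning human and mouse sequences at the protein level can reveal conserved genes that a DNA-level pattern search would miss.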

"Instead of telling the computer what to look for, we let nature tell the computer what was important," says Pandey. "When you align the protein-encoding instructions of the human and mouse, the genes jump out at you."

In the regions that were the same between species, the scientists found 43 new "gene structures" that encode proteins. Some of the newly identified genes sit in regions long tied to X-linked mental retardation syndromes, which appear only in boys, or other disorders. Quite remarkably, Pandey says, almost half of the new genes don't look like any previously known genes, nor do they look like each other.

"These would not be found any other way, because no one knew to look for them," he says. "No one had ever identified any aspect of their sequences as being important."

The IOB scientists and the U.S. members of the team experimentally investigated a few of the new genes to confirm the comparative approach's validity. Their results, as well as data created by other scientists since the U.S.-India team started working, confirm the existence of some of the newly identified genes. The team's work also showed that some so-called pseudogenes on the X chromosome are actually expressed, or transcribed, which contradicts the widespread idea that they are functionless.

"We're really trying to show that complete annotation of chromosomes can be done, and that doing it this way means you can find things you don't expect to find," says Pandey. "It's long, painstaking work, but it's worth it."

Pandey hopes that researchers will take the initiative to annotate sequenced genetic information and validate regions used in their work.

I note tangentially that Konrad Lorenz made this point decades ago (in a subtler form than the one found here, in that this piece seems to argue for nurture over nature, which was NOT KL's point at all), but he was belittled for thinking that genes were changed by environment (Buddenbrookism?).

Hells Angels have nothing on some water fleas. While these tiny crustaceans are best known for their uncanny ability to skim atop the water's surface, some also boast a "helmet" that makes them tough for a predator to swallow. But other fleas with the same DNA -- clones of the helmeted ones -- have no such armor. And the reason is shaking up the world of genetics.

The helmeted fleas live in a lab aquarium to which scientists added the chemical scent of fish, fleas' main predator. The fleas without helmets come from an aquarium with no fish in sight (or smell). The difference between genetic duplicates reflects the power of environment: It can elicit markedly different traits from the same DNA.

I have written in the past about how environment -- ranging from experiences to diet -- can alter DNA, putting the molecular version of a "not in service" sign on our genes so they remain silent and, as geneticists say, unexpressed. The water flea and other examples of "developmental plasticity" show that a given genotype can develop in any of several ways depending on what environment it's in. And that makes the notion of "innate" look more and more inane.

"If you have a gene with some purported effect, that effect depends on the environment in which it's expressed," says Eric Turkheimer of the University of Virginia. "Anything that looks genetic, because people with that gene always turn out a certain way, might not really be a genetic effect but an artifact of how few environments people with that gene have been exposed to. Once a new environment comes along it can change everything, so what you thought was a fixed effect of a gene isn't."

Oak-tree caterpillars that hatch in the spring, for instance, eat oak blossoms and grow up to look a bit like flowers. Caterpillars with the same genome, but which hatch in the summer, eat leaves and grow up to look like twigs. The different composition of blossoms and leaves affects what traits the caterpillars' genes produce. If you had never seen spring caterpillars, you would think their genome produces only twiggy caterpillars. But the twiggy look is, as Prof. Turkheimer says, only an artifact of how few environments those caterpillars have been exposed to, not genetic determinism.

In the past few years, scientists have found the first examples of such an effect in people, discovering how life experiences can alter gene-based traits once thought to be innate.

A certain form of a gene called MAOA, for instance, was so closely linked to aggression and criminality that it became known as a "violence gene." In a 2002 study, however, an international team of researchers followed 442 male New Zealanders who carried either of two versions of the MAOA gene. One version produces small amounts of MAOA, an enzyme active in the brain; a dearth of MAOA had been linked to criminality. The other produces high amounts of MAOA, as in a normal brain.

But the study found that men with the low-activity ("violent") form of the gene were no more likely to grow up to be antisocial or violent -- unless they had also been neglected or abused as children. In that case, they were about twice as likely to engage in persistent fighting, bullying, theft and vandalism. If they had the "violence gene" but were raised in a loving and nonabusive family, they turned out fine. A 2004 study by different scientists confirmed this.
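What the study describes is a gene-by-environment interaction: the genotype's effect appears only under a particular environment. A toy Python sketch makes the shape of the finding concrete -- the baseline rate here is invented purely for illustration; the only figure taken from the study is the roughly twofold increase in the one combined condition:

```python
# Illustrative gene-by-environment interaction, loosely modeled on the
# MAOA finding described above. BASELINE is a made-up number; only the
# "about twice as likely" multiplier comes from the reported result.
BASELINE = 0.15  # hypothetical rate of persistent antisocial behavior

def risk(low_maoa, maltreated):
    # The genotype raises risk only in the presence of the
    # environmental exposure -- neither factor alone does.
    if low_maoa and maltreated:
        return BASELINE * 2
    return BASELINE

for gene in (False, True):
    for env in (False, True):
        print(f"low-MAOA={gene}, maltreated={env}: risk={risk(gene, env):.2f}")
```

Averaged across environments the gene looks weakly predictive, but the entire effect lives in one cell of the two-by-two table -- which is Turkheimer's point above: what looks like a fixed genetic effect can appear or vanish as the range of environments changes.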

In a 2003 study, geneticists examined claims that one form of a gene called 5-HTT is associated with depression and suicide. Instead, they found that people who carry this form are no more likely to suffer from depression than people with the "healthy" variant -- unless they also experience deeply stressful events. Two papers in 2004 confirmed this.

"These genes were not connected with aggression or depression, respectively, in the absence of exposure to environmental risk," says behavioral geneticist Terrie Moffitt of the University of Wisconsin, Madison, and King's College London. "That different environments can produce different [traits] from the same genotype is now emerging in many fields of health research."

For example, she says, studies show that "the effect of a gene on cholesterol levels depends on environmental risk -- high or low dietary fat. The effect of a gene on gum disease depends on whether you smoke or not."

Exactly how life experiences affect DNA has been most precisely worked out in lab animals. Last summer, Michael Meaney of McGill University, Montreal, and colleagues reported that a gene that shapes how fearful, jumpy and neurotic a rat is can be altered by how regularly its mother licks and grooms it. Maternal care changes the chemistry of a "neuroticism gene," and the rat grows up to be mellow and curious. The genetic trait of neuroticism -- deemed innate because scientists had found a gene "for" it -- is reversible by environment.

"The whole subject of what counts as innate has just exploded," says science historian and physicist Evelyn Fox Keller of the Massachusetts Institute of Technology. "Historically, nature/nurture divided what was fixed from what could be changed. But what our biology really gives us is our plasticity, our ability to respond to our experiences. That's what's innate."

Empathy allows us to feel the emotions of others, to identify and understand their feelings and motives and see things from their perspective. How we generate empathy remains a subject of intense debate in cognitive science.

Some scientists now believe they may have finally discovered its root. We're all essentially mind readers, they say.

The idea has been slow to gain acceptance, but evidence is mounting.

Mirror neurons

In 1996, three neuroscientists were probing the brain of a macaque monkey when they stumbled across a curious cluster of cells in the premotor cortex, an area of the brain responsible for planning movements. The cluster of cells fired not only when the monkey performed an action, but likewise when the monkey saw the same action performed by someone else. The cells responded the same way whether the monkey reached out to grasp a peanut, or merely watched in envy as another monkey or a human did.

Because the cells reflected the actions that the monkey observed in others, the neuroscientists named them "mirror neurons."

Later experiments confirmed the existence of mirror neurons in humans and revealed another surprise. In addition to mirroring actions, the cells reflected sensations and emotions.

"Mirror neurons suggest that we pretend to be in another person's mental shoes," says Marco Iacoboni, a neuroscientist at the University of California, Los Angeles School of Medicine. "In fact, with mirror neurons we do not have to pretend, we practically are in another person's mind."

Since their discovery, mirror neurons have been implicated in a broad range of phenomena, including certain mental disorders. Mirror neurons may help cognitive scientists explain how children develop a theory of mind (ToM), which is a child's understanding that others have minds similar to their own. Doing so may help shed light on autism, in which this type of understanding is often missing.

Theory theory

Over the years, cognitive scientists have come up with a number of theories to explain how ToM develops. The "theory theory" and "simulation theory" are currently two of the most popular.

Theory theory describes children as budding social scientists. The idea is that children collect evidence -- in the form of gestures and expressions -- and use their everyday understanding of people to develop theories that explain and predict the mental state of people they come in contact with.

Vittorio Gallese, a neuroscientist at the University of Parma in Italy and one of the original discoverers of mirror neurons, has another name for this theory: he calls it the "Vulcan Approach," after the Star Trek character Spock, a member of an alien race, the Vulcans, who suppress their emotions in favor of logic. Spock was often unable to understand the emotions that underlie human behavior.

Gallese himself prefers simulation theory over this Vulcan approach.

Natural mind readers

Simulation theory states that we are natural mind readers. We place ourselves in another person's "mental shoes," and use our own mind as a model for theirs.

Gallese contends that when we interact with someone, we do more than just observe the other person's behavior. He believes we create internal representations of their actions, sensations and emotions within ourselves, as if we are the ones that are moving, sensing and feeling.

Many scientists believe that mirror neurons embody the predictions of simulation theory. "We share with others not only the way they normally act or subjectively experience emotions and sensations, but also the neural circuits enabling those same actions, emotions and sensations: the mirror neuron systems," Gallese told LiveScience.

Gallese points out, however, that the two theories are not mutually exclusive. If the mirror neuron system is defective or damaged, and our ability to empathize is lost, the observe-and-guess method of theory theory may be the only option left. Some scientists suspect this is what happens in autistic people, whose mental disorder prevents them from understanding the intentions and motives of others.

Tests underway

The idea is that the mirror neuron systems of autistic individuals are somehow impaired or deficient, and that the resulting "mind-blindness" prevents them from simulating the experiences of others. For autistic individuals, experience is more observed than lived, and the emotional undercurrents that govern so much of our human behavior are inaccessible. They guess the mental states of others through explicit theorizing, but the end result is a list -- mechanical and impersonal -- of actions, gestures and expressions void of motive, intent, or emotion.

Several labs are now testing the hypothesis that autistic individuals have a mirror neuron deficit and cannot simulate the mental states of others.

One recent experiment by Hugo Theoret and colleagues at the University of Montreal showed that mirror neurons normally active during the observation of hand movements in non-autistic individuals are silent in those who have autism.

"You either simulate with mirror neurons, or the mental states of others are completely precluded to you," said Iacoboni.

As with the helmeted water flea noted above, I find these sorts of comparative studies fascinating.

Genetic divergence of man from chimp has aided human fertility but could have made us more prone to cancer, Cornell study finds

By Krishna Ramanujan

ITHACA, N.Y. -- Chimpanzees and humans share a common ancestor, and even today 99 percent of the two species' DNA is identical. But since the paths of man and chimp diverged 5 million years ago, that one percent of genetic difference appears to have changed humans in an unexpected way: It could have made people more prone to cancer.

A comparative genetic study led by Cornell University researchers suggests that some mutations in human sperm cells might allow them to avoid early death and reproduce, creating an advantage that ensures more sperm cells carry this trait. But this same positive selection could also have made it easier for human cancer cells to survive.

"If we are right about this, it may help explain the high prevalence of cancer," says Rasmus Nielsen, lead author of the paper, and a former assistant professor of the Department of Biological Statistics and Computational Biology at Cornell who is now a professor at the University of Copenhagen, Denmark. The study, published in a recent issue of PLoS Biology (Vol. 3, Issue 6), a peer-reviewed, open-access journal published by the Public Library of Science (PLoS), focuses on identifying biological processes where positive selection -- adaptations that lead to new directions -- produced evolutionary changes that can be identified in the genomes of both humans and chimps.

To make these comparisons, the researchers used chimpanzee DNA sequence data generated by Celera Genomics of Rockville, Md. The chimpanzee and human versions of each gene were aligned, and on average they differ at only slightly more than 1 percent of the positions in the DNA.
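The "slightly more than 1 percent" figure is simply the fraction of aligned positions at which the two sequences differ. A minimal sketch of that computation (the function name and toy sequences below are illustrative, not taken from the study):

```python
# Hypothetical sketch: per-site divergence between two aligned DNA sequences.
def divergence(a: str, b: str) -> float:
    """Fraction of aligned positions at which the sequences differ.
    Positions with an alignment gap ('-') in either sequence are skipped."""
    compared = differing = 0
    for x, y in zip(a, b):
        if x == '-' or y == '-':
            continue  # gaps are not counted as comparable sites
        compared += 1
        if x != y:
            differing += 1
    return differing / compared

# Toy example: 2 mismatches across 20 comparable sites -> 10% divergence.
print(divergence("ACGTACGTACGTACGTACGT",
                 "ACGTACGAACGTACGTACGA"))  # 0.1
```

Run over whole aligned genes, the same tally is what yields the roughly one-in-a-hundred difference the researchers report.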

The researchers searched out the relatively few genes (among 13,731 sequences compared) that have diverged the most since sharing a common ancestor, most likely a primate that looked like a cross between a gorilla, chimp and human. While the scientists more or less expected to see that immune defense systems in each species have rapidly evolved separately to keep pace with attacking, mercurial bacteria and viruses, they were surprised to find that genes associated with the brain were practically the same.

One of the more interesting observations occurred in some genes that govern cell death in sperm cells and tumor cells alike. Both types of cells use a mechanism called apoptosis -- a pathway that includes genes that program a cell's demise and death. During the production of sperm cells, for example, apoptosis kills many of the cells before they reach maturity. But mutations in these genes could inhibit apoptosis in some sperm cells, allowing more sperm to reach maturity, reproduce again and ensure that future cells will carry the gene that defuses early self-destruction.

Unfortunately, this same machinery also allows cancer cells to live on. The researchers suspect that some mutations that allow sperm cells to increase their chances of reproduction might also diminish an organism's ability to turn off tumor cell growth and fight cancer.

"Eliminating cancer cells by apoptosis is one of the main processes used by the organism to fight cancer," says Nielsen.

According to the study, immune defense genes also have evolved quickly, creating greater genetic differences between humans and chimps.

"The main reason why immune- and defense-related genes have diverged is probably because they are involved in an evolutionary arms race with pathogens," says Nielsen. "Viruses and other pathogens evolve very quickly, and the human immune system is constantly being challenged by the emergence of new pathogenic threats." Pathogens such as the bubonic plague, AIDS and influenza put constant pressures on the human immune system to adapt by positive selection.Surprisingly, the study found that genes associated with the brain could not explain apparent differences in brain form, function and power between humans and chimps. The researchers wonder if a few small genetic changes had big effects on how the brains of each species have developed.

"It could be relatively few switch genes that account for the difference," says Andrew Clark, a co-author on the study and a professor of molecular biology and genetics at Cornell. Carlos Bustamante, an assistant professor of Biological Statistics and Computational Biology at Cornell, was also a major contributor to this study.

washingtonpost.com

Inventing Our Evolution

We're almost able to build better human beings. But are we ready?

The Washington Post, Monday, May 16, 2005; A01

The surge of innovation that has given the world everything from iPods to talking cars is now turning inward, to our own minds and bodies. In an adaptation from his new book, Washington Post staff writer Joel Garreau looks at the impact of the new technology.

Some changes in what it means to be human:

• Matthew Nagel, 25, can move objects with his thoughts. The paralyzed former high school football star, whose spinal cord was severed in a stabbing incident, has a jack coming out of the right side of his skull. Sensors in his brain can read his neurons as they fire. These are connected via computer to a robotic hand. When he thinks about moving his hand, the artificial thumb and forefinger open and close. Researchers hope this technology will, within our lifetimes, allow the wheelchair-bound to walk. The military hopes it will allow pilots to fly jets using their minds.

• Around the country, companies such as Memory Pharmaceuticals, Sention, Helicon Therapeutics, Saegis Pharmaceuticals and Cortex Pharmaceuticals are racing to bring memory-enhancing drugs to market before the end of this decade. If clinical trials continue successfully, these pills could be a bigger pharmaceutical bonanza than Viagra. Not only do they hold the promise of banishing the senior moments of aging baby boomers; they might improve the SAT scores of kids by 200 points or more.

• At the Defense Sciences Office of the Defense Advanced Research Projects Agency (DARPA) in Arlington, programs seek to modify the metabolisms of soldiers so as to allow them to function efficiently without sleep or even food for as much as a week. For shorter periods, they might even be able to survive without oxygen. Another program seeks to allow soldiers to stop bleeding by focusing their thoughts on the wound. Yet another program is investigating ways to allow veterans to regrow blown-off arms and legs, like salamanders.

Traditionally, human technologies have been aimed outward, to control our environment, resulting in, for example, clothing, agriculture, cities and airplanes. Now, however, we have started aiming our technologies inward. We are transforming our minds, our memories, our metabolisms, our personalities and our progeny. Serious people, including some at the National Science Foundation in Arlington, consider such modification of what it means to be human to be a radical evolution -- one that we direct ourselves. They expect it to be in full flower in the next 10 to 20 years.

"The next frontier," says Gregory Stock, director of the Program on Medicine, Technology and Society at the UCLA School of Medicine, "is our own selves."

The process has already begun. Prozac and its ilk modify personality. Viagra alters metabolism. You can see deep change in the basics of biology most clearly, however, wherever you find the keenest competition. Sport is a good example.

"The current doping agony," says John Hoberman, a University of Texas authority on performance drugs, "is a kind of very confused referendum on the future of human enhancement." Some athletes today look grotesque. Curt Schilling, the All-Star pitcher, in 2002 talked to Sports Illustrated about the major leagues. "Guys out there look like Mr. Potato Head, with a head and arms and six or seven body parts that just don't look right."

Steroids are merely a primitive form of human enhancement, however. H. Lee Sweeney of the University of Pennsylvania suggests that the recent Athens Olympics may have been the last without genetically enhanced athletes. His researchers have created super-muscled "Schwarzenegger rats." They're built like steers, with necks wider than their heads. They live longer and recover more quickly from injuries than do their unenhanced comrades. Sweeney sees it as only a matter of time before such technology seeps into the sports world.

Human enhancement is hardly limited to sport. In 2003, President Bush signed a $3.7 billion bill to fund research at the molecular level that could lead to medical robots traveling the human bloodstream to fight cancer or fat cells. At the University of Pennsylvania, ordinary male mouse embryo cells are being transformed into egg cells. If this science works in humans, it could open the way for two gay males to make a baby -- blurring the standard model of parenthood. In 2004, a new technology for the first time allowed women to beat the biological clock. Portions of their ovaries, frozen when they are young and fertile, can be reimplanted in their sixties, seventies or eighties, potentially allowing them to bear children then.

The genetic, robotic and nano-technologies creating such dramatic change are accelerating as quickly as has information technology for the past four decades. The rapid development of all these fields is intertwined.

It was in 1965 that Gordon E. Moore, director of Fairchild's Research and Development Laboratories, noted, in an article for the 35th-anniversary issue of Electronics magazine, that the complexity of "minimum cost semiconductor components" had been doubling every year since the first prototype microchip was produced six years before. And he predicted this doubling would continue every year for the next 10 years.

Carver Mead, a professor at the California Institute of Technology, would come to christen this claim "Moore's Law."

Over time it has been modified. As the core faith of the entire global computer industry, it is now stated this way: The power of information technology will double every 18 months, for as far as the eye can see.

Sure enough, in 2002, the 27th doubling occurred right on schedule with a billion-transistor chip. A doubling is an amazing thing. It means the next step is as great as all the previous steps put together. Twenty-seven consecutive doublings of anything man-made, an increase of well over 100 million times -- especially in so short a period -- is unprecedented in human history.

This is exponential change. It's a curve that goes straight up.
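The arithmetic behind those claims is easy to check (this is just the math of the statement, not anything additional from the article): 27 doublings multiply the starting quantity by 2^27, and each doubling exceeds all previous growth combined.

```python
# Sanity check on the article's Moore's Law arithmetic.
doublings = 27
growth = 2 ** doublings
print(growth)  # 134217728 -- "well over 100 million times"

# "The next step is as great as all the previous steps put together":
# 2**n exceeds the sum of all smaller powers of two (by exactly 1).
assert growth == sum(2 ** k for k in range(doublings)) + 1
```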

Optimists say that culture and values can control the impact of these advances.

"You have to make a distinction between the science and the technological applications," says Francis Fukuyama, a member of the President's Council on Bioethics and director of the Human Biotechnology Governance Project. "It's probably true that in terms of the basic science, it's pretty hard to stop that. It's not one guy in a laboratory somewhere. But not everything that is scientifically possible will actually be technologically implemented and used on a large scale. In the case of human cloning, there's an abstract possibility that people will want to do that, but the number of people who are going to want to take the risk is going to be awfully small."

Taboos will play an important role, Fukuyama says. "We could really speed up the whole process of drug improvement if we did not have all the rules on human experimentation. If companies were allowed to use clinical trials in Third World countries, paying a lot of poor people to take risks that you wouldn't take in a developed country, we could speed up technology quickly. But because of the Holocaust -- "

Fukuyama thinks the school of hard knocks will slow down a lot of attempts. "People may in the abstract say that they're willing to take that risk. But the moment you have a deformed baby born as a result of someone trying to do some genetic modification, I think there will be a really big backlash against it."

Today, nonetheless, we are surrounded by the practical effects of this curve of exponential technological change. IBM this year fired up a new machine called Blue Gene/L. It is ultimately expected to be 1,000 times as powerful as Deep Blue, the machine that beat world chess champion Garry Kasparov in 1997. "If this computer unlocks the mystery of how proteins fold, it will be an important milestone in the future of medicine and health care," said Paul M. Horn, senior vice president of IBM Research, when the project was announced.

Proteins control all cellular processes in the body. They fold into highly complex, three-dimensional shapes that determine their function. Even the slightest change in the folding process can turn a desirable protein into an agent of disease. Blue Gene/L is intended to investigate how. Thus, breakthroughs in computers today are creating breakthroughs in biology. "One day, you're going to be able to walk into a doctor's office and have a computer analyze a tissue sample, identify the pathogen that ails you, and then instantly prescribe a treatment best suited to your specific illness and individual genetic makeup," Horn said.

What's remarkable, then, is not this computer's speed but our ability to use it to open new vistas in entirely different fields -- in this case, the ability to change how our bodies work at the most basic level. This is possible because at a thousand trillion operations per second, this computer might have something approaching the raw processing power of the human brain.

Nathan Myhrvold, the former technology chief of Microsoft, points out that it cost $12 billion to sequence the first human genome. You will soon be able to get your own done for $10, he expects.

If an implant in a paralyzed man's head can read his thoughts, if genes can be manipulated into better versions of themselves, the line between the engineered and the born begins to blur.

For example, in Silicon Valley, there is a biotech company called Rinat Neuroscience. DARPA provided critical early funding for its "pain vaccine," a substance designed to block intense pain in less than 10 seconds. Its effects last for 30 days. Tests show it doesn't stifle reactions. If you touch a hot stove, your hand will still automatically jerk away. But after that, the torment is greatly reduced. The product works on the inflammatory response that is responsible for the majority of subacute pain. If you get shot, you feel the bullet, but after that, the inflammation and swelling that trigger agony are substantially reduced. The company is deep into animal testing, is preparing reports for scientific conferences, and has now attracted venture capital funding.

Another DARPA program, originally christened Regenesis, started with the observation that if you cut off the tail of a tadpole, the tail will regrow. If you cut off an appendage of an adult frog, however, it won't, because certain genetic signals have been switched off. This process is carried out by a mass of undifferentiated cells called a blastema, also called a regeneration bud. The bud has the capability to develop into an organ or an appendage, if it gets the right signals. Early results in mice indicate that such blastemas might be generated in humans. The program, now called Restorative Injury Repair, is aimed at allowing regrowth of a blown-off hand or a breast removed in a mastectomy. (Instances of amputated fingertips regenerating in children under 12 have long been noted in scientific journals.) "We had it; we lost it; we need to find it again" was Regenesis's original slogan.

Snooze and Lose?

There are three groups of people usually attracted to any new enhancement. In order, they are the sick, the otherwise healthy with a critical need, and the enterprising. This became immediately obvious when a drug called modafinil entered the market earlier this decade. It is intended to shut off the urge to sleep, without the jitter, buzz, euphoria, crash, or potential for paranoid delusion of stimulants such as amphetamines, cocaine or even caffeine.

The FDA originally approved modafinil for narcoleptics who fall asleep frequently and uncontrollably. But this widely available prescription drug, with the trade name Provigil, immediately was tested on healthy young U.S. Army helicopter pilots. It allowed them to stay up safely for almost two days while remaining practically as focused, alert and capable of dealing with complex problems as the well rested. Then, after a good eight hours' sleep, it turned out they could get up and do it again for another 40 hours, before finally catching up on their sleep.

But it's the future of the third group -- the millions who, in the immortal words of Kiss, "wanna rock-and-roll all night and party every day" -- that holds the potential for changing society. Will people feel that they need to routinely control their sleep in order to be competitive? Will unenhanced people get fewer promotions and raises than their modified colleagues? Will this start an arms race over human consciousness?

Consider the case of a little boy born in Germany at the turn of this century. As reported in the New England Journal of Medicine last year, his doctors immediately noticed he had unusually large muscles bulging from his tiny arms and legs. By the time he was 4 1/2 , it was clear that he was extraordinarily strong. Most children his age can lift about one pound with each arm. He could hold a seven-pound dumbbell aloft with each outstretched hand. He is the first human confirmed to have a genetic variation that builds extraordinary muscles. If the effect can be duplicated, it could treat or cure muscle-wasting diseases.

Wyeth Pharmaceuticals is testing a drug designed to do just that as a treatment for the most common form of muscular dystrophy. Will athletes try to exploit the discovery to enhance their abilities?

"Athletes find a way of using just about anything," says Elizabeth M. McNally of the University of Chicago, who wrote an article accompanying the findings in the New England Journal of Medicine. "This, unfortunately, is no exception."Views of the Future

Ray Kurzweil, an artificial-intelligence pioneer and winner of the National Medal of Technology, shrugs at the controversy over the use of stem cells from human embryos: "All the political energy that has gone into this issue -- it is not even slowing down the most narrow approach." It is simply being pursued outside the United States -- in China, Korea, Taiwan, Singapore, Scandinavia and Great Britain, where scientists will probably achieve success first, he notes.

In the next couple of decades, Kurzweil predicts, life expectancy will rise to at least 120 years. Most diseases will be prevented or reversed. Drugs will be individually tailored to a person's DNA. Robots smaller than blood cells -- nanobots, as they are called -- will be routinely injected by the millions into people's bloodstreams. They will be used primarily as diagnostic scouts and patrols, so if anything goes wrong in a person's body, it can be caught extremely early.

As James Watson, co-winner of the Nobel Prize for discovering the structure of DNA, famously put it: "No one really has the guts to say it, but if we could make better human beings by knowing how to add genes, why shouldn't we?"

Gregory Stock of UCLA sees this as the inevitable outcome of the decoding of the human genome. "We have spent billions to unravel our biology, not out of idle curiosity, but in the hope of bettering our lives," he said at a 2003 Yale bioethics conference. "We are not about to turn away from this."

Stock sees humanity embracing artificial chromosomes -- rudimentary versions of which already exist. Right now, the human body has 23 chromosome pairs, 46 chromosomes in all. Messing with them is tricky -- you never know when you're going to inadvertently step on unanticipated interactions. By adding a new chromosome pair (Nos. 47 and 48) to the embryo, however, the possibilities appear endless. Stock, in his book "Redesigning Humans: Our Inevitable Genetic Future," describes it as the safest way to substantially modify humans because, he says, it would minimize unintended consequences. On top of that, the chromosome insertion sites could have an off switch activated by an injection if we wanted to stop whatever we'd started. This would give future generations a chance to undo whatever we did.

Stock offers this analysis to counter the argument offered by some bioethicists that inheritable germline engineering should be unconditionally banned because future generations harmed by wrongful or unsuccessful modifications would have no control over the matter.

But the very idea of aspiring to such godlike powers is blasphemous to some. "Genetic engineering," writes Michael J. Sandel, a professor of political philosophy at Harvard, is "the ultimate expression of our resolve to see ourselves astride the world, the masters of our nature. But the promise of mastery is flawed. It threatens to banish our appreciation of life as a gift, and to leave us with nothing to affirm or behold outside our own will."

Stock rejects this view. "We should not just accept but embrace the new technologies, because they're filled with promise," he says. Within a few years, he writes, "traditional reproduction may begin to seem antiquated, if not downright irresponsible." His projections, he asserts, are not at all out of touch with reality.

Adapted from the book "Radical Evolution: The Promise and Peril of Enhancing Our Minds, Our Bodies -- and What It Means to Be Human" by Joel Garreau, to be published May 17 by Doubleday, a division of Random House Inc. © 2005 by Joel Garreau. Reprinted with permission.

The high intelligence of Ashkenazi Jews may be a result of their persecuted past

THE idea that some ethnic groups may, on average, be more intelligent than others is one of those hypotheses that dare not speak its name. But Gregory Cochran, a noted scientific iconoclast, is prepared to say it anyway. He is that rare bird, a scientist who works independently of any institution. He helped popularise the idea that some diseases not previously thought to have a bacterial cause were actually infections, which ruffled many scientific feathers when it was first suggested. And more controversially still, he has suggested that homosexuality is caused by an infection.

Even he, however, might tremble at the thought of what he is about to do. Together with Jason Hardy and Henry Harpending, of the University of Utah, he is publishing, in a forthcoming edition of the Journal of Biosocial Science, a paper which not only suggests that one group of humanity is more intelligent than the others, but explains the process that has brought this about. The group in question are Ashkenazi Jews. The process is natural selection.

History before science

Ashkenazim generally do well in IQ tests, scoring 12-15 points above the mean value of 100, and have contributed disproportionately to the intellectual and cultural life of the West, as the careers of Freud, Einstein and Mahler affirm. They also suffer more often than most people from a number of nasty genetic diseases, such as Tay-Sachs and breast cancer. These facts, however, have previously been thought unrelated. The former has been put down to social effects, such as a strong tradition of valuing education. The latter was seen as a consequence of genetic isolation. Even now, Ashkenazim tend to marry among themselves. In the past they did so almost exclusively.

Dr Cochran, however, suspects that the intelligence and the diseases are intimately linked. His argument is that the unusual history of the Ashkenazim has subjected them to unique evolutionary pressures that have resulted in this paradoxical state of affairs.

Ashkenazi history begins with the Jewish rebellion against Roman rule in the first century AD. When this was crushed, Jewish refugees fled in all directions. The descendants of those who fled to Europe became known as Ashkenazim.

In the Middle Ages, European Jews were subjected to legal discrimination, one effect of which was to drive them into money-related professions such as banking and tax farming which were often disdained by, or forbidden to, Christians. This, along with the low level of intermarriage with their gentile neighbours (which modern genetic analysis confirms was the case), is Dr Cochran's starting point.

He argues that the professions occupied by European Jews were all ones that put a premium on intelligence. Of course, it is hard to prove that this intelligence premium existed in the Middle Ages, but it is certainly true that it exists in the modern versions of those occupations. Several studies have shown that intelligence, as measured by IQ tests, is highly correlated with income in jobs such as banking.

What can, however, be shown from the historical records is that European Jews at the top of their professions in the Middle Ages raised more children to adulthood than those at the bottom. Of course, that was true of successful gentiles as well. But in the Middle Ages, success in Christian society tended to be violently aristocratic (warfare and land), rather than peacefully meritocratic (banking and trade).

Put these two things together (a correlation of intelligence and success, and a correlation of success and fecundity) and you have circumstances that favour the spread of genes that enhance intelligence. The questions are, do such genes exist, and what are they if they do? Dr Cochran thinks they do exist, and that they are exactly the genes that cause the inherited diseases which afflict Ashkenazi society.

That small, reproductively isolated groups of people are susceptible to genetic disease is well known. Constant mating with even distant relatives reduces genetic diversity, and some disease genes will thus, randomly, become more common. But the very randomness of this process means there should be no discernible pattern about which disease genes increase in frequency. In the case of Ashkenazim, Dr Cochran argues, this is not the case. Most of the dozen or so disease genes that are common in them belong to one of two types: they are involved either in the storage in nerve cells of special fats called sphingolipids, which form part of the insulating outer sheaths that allow nerve cells to transmit electrical signals, or in DNA repair. The former genes cause neurological diseases, such as Tay-Sachs, Gaucher's and Niemann-Pick. The latter cause cancer.

That does not look random. And what is even less random is that in several cases the genes for particular diseases come in different varieties, each the result of an independent original mutation. This really does suggest the mutated genes are being preserved by natural selection. But it does not answer the question of how evolution can favour genetic diseases. However, in certain circumstances, evolution can.

West Africans, and people of West African descent, are susceptible to a disease called sickle-cell anaemia that is virtually unknown elsewhere. The anaemia develops in those whose red blood cells contain a particular type of haemoglobin, the protein that carries oxygen. But the disease occurs only in those who have two copies of the gene for the disease-causing haemoglobin (one copy from each parent). Those who have only one copy have no symptoms. They are, however, protected against malaria, one of the biggest killers in that part of the world. Thus, the theory goes, the pressure to keep the sickle-cell gene in the population because of its malaria-protective effects balances the pressure to drive it out because of its anaemia-causing effects. It therefore persists without becoming ubiquitous.
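The balance described above is the textbook population-genetics model of heterozygote advantage: if the sickle-cell homozygote pays a fitness cost s (anaemia) and the normal homozygote pays a cost t (malaria), the sickle allele settles at an equilibrium frequency of t/(s+t). A minimal sketch of that model; the fitness numbers are illustrative assumptions, not measurements from the article:

```python
# Standard one-locus selection model with heterozygote advantage.
def next_freq(q, w_ss, w_as, w_aa):
    """One generation of selection on sickle-allele frequency q,
    assuming random mating; w_* are the three genotype fitnesses."""
    p = 1 - q
    mean_w = q*q*w_ss + 2*p*q*w_as + p*p*w_aa
    return (q*q*w_ss + p*q*w_as) / mean_w

# Illustrative fitnesses: SS severe anaemia (0.2), AS protected (1.0),
# AA exposed to malaria (0.85); so s = 0.8 and t = 0.15.
q = 0.01  # start the sickle allele rare
for _ in range(500):
    q = next_freq(q, 0.2, 1.0, 0.85)
print(round(q, 3))  # converges near t/(s+t) = 0.15/0.95, about 0.158
```

The allele neither disappears nor takes over, which is exactly the persistence-without-ubiquity the theory predicts.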

Dr Cochran argues that something similar happened to the Ashkenazim. Genes that promote intelligence in an individual when present as a single copy create disease when present as a double copy. His thesis is not as strong as the sickle-cell/malaria theory, because he has not proved that any of his disease genes do actually affect intelligence. But the area of operation of some of them suggests that they might.

The sphingolipid-storage diseases, Tay-Sachs, Gaucher's and Niemann-Pick, all involve extra growth and branching of the protuberances that connect nerve cells together. Too much of this (as caused in those with double copies) is clearly pathological. But it may be that those with single copies experience a more limited, but still enhanced, protuberance growth. That would yield better linkage between brain cells, and might thus lead to increased intelligence. Indeed, in the case of Gaucher's disease, the only one of the three in which people routinely live to adulthood, there is evidence that those with full symptoms are more intelligent than the average. An Israeli clinic devoted to treating people with Gaucher's has vastly more engineers, scientists, accountants and lawyers on its books than would be expected by chance.

Why a failure of the DNA-repair system should boost intelligence is unclear -- and is, perhaps, the weakest part of the thesis, although evidence is emerging that one of the genes in question is involved in regulating the early growth of the brain. But the thesis also has a strong point: it makes a clear and testable prediction. This is that people with a single copy of the gene for Tay-Sachs, or that for Gaucher's, or that for Niemann-Pick should be more intelligent than average. Dr Cochran and his colleagues predict they will be so by about five IQ points. If that turns out to be the case, it will strengthen the idea that, albeit unwillingly, Ashkenazi Jews have been part of an accidental experiment in eugenics. It has brought them some advantages. But, like the deliberate eugenics experiments of the 20th century, it has also exacted a terrible price.

As an Ashkenazi Jew I note that the theory that the Ashkenazim are the descendants of the Semitic tribes of Israel has been intriguingly challenged by the historian Arthur Koestler in his hard-to-find book "The Thirteenth Tribe," which hypothesizes that we came principally from the Khazar Empire. Situated between the Black and Caspian Seas, the Khazar Empire stopped the northern expansion of Islam at the same time that Charles Martel was stopping it in France. At some point the rulers of the Khazars converted to Judaism and suggested their people do so as well. Koestler argues that when the empire broke down several hundred years later, these Jews -- the thirteenth tribe, if you will -- were the ones who moved into eastern Europe.

A very interesting book. Stating the matter plainly, he is saying that Ashkenazi Jews are Aryans, not Semites.

Public participation, including yours, is critical to the Genographic Project's success. Here's how you can get involved:

Purchasing a Public Participation Kit will fund important research around the world -- and open the door to the ancient past of your own genetic background.

With a simple and painless cheek swab you can sample your own DNA. You'll submit the sample through our secure, private, and completely anonymous system, then log on to the project Web site to track your personal results online.

This is not a genealogy test and you won't learn about your great grandparents. You will learn, however, of your deep ancestry, the ancient genetic journeys and physical travels of your distant relatives.

To ensure total anonymity you will be identified at all times only by your kit number, not by your name. There is no record and no database that links test results with the names of their contributors. If you lose the kit number there will be no way to access your genetic results.

As your own genetic ancestry is revealed you'll also see worldwide samples map humankind's shared genetic background around the world and through the ages.

If you'd like to contribute your own results to the project's global database you'll be asked to answer a dozen "phenotyping" questions that will help place your DNA in cultural context.

This process is optional and completely anonymous, but it's also important. Each of us has a part in the ancient story of humankind's genetic journey. Together we can tell the whole story before it's too late.

Order a Kit

The Participation Kit costs U.S. $99.95 (plus shipping, handling, and tax if applicable). The kit includes:

1. DVD with a Genographic Project overview hosted by Dr. Spencer Wells, visual instructions on how to collect a DNA sample using a cheek scraper, and a bonus feature program: the National Geographic Channel/PBS production The Journey of Man.
2. Exclusive National Geographic map illustrating human migratory history, created especially for the launch of the Genographic Project.
3. Buccal swab kit, instructions, and a self-addressed envelope in which to return your cheek swab sample. (You can download a PDF of the instructions or the consent form. You will need Acrobat Reader.)
4. Detailed brochure about the Genographic Project, featuring stunning National Geographic photography.
5. Confidential Genographic Project ID number (GPID) to anonymously access your results at this Web site.

The purchase price also includes the cost of the testing and analysis -- an expensive process -- that will take place once your sample is sent in.

Return Your Kit

Once you have completed the cheek scraping process, you will secure the scrapers inside the transport tubes, sign the informed consent form, and mail the tubes and form off to the lab.

That's it! In about 4 to 6 weeks -- the time necessary for the laboratory to correctly analyze your DNA -- your results will be ready. In the meantime, visit the Web site to see where your sample is in the analysis process.

Get Your Results

Samples will be analyzed for genetic "markers" found in mitochondrial DNA and on the Y chromosome. We will be performing two tests for the public participants:

To be clear -- these tests are not conventional genealogy. Your results will not provide names for your personal family tree or tell you where your great grandparents lived. Rather, they will indicate the maternal or paternal genetic markers your deep ancestors passed on to you and the story that goes with those markers.

Once your results are posted, you will be able to learn something about that story and the journey of your ancestors. The genetic profile you receive is more than a static set of data. It is like an ongoing subscription to your genetic history. Your profile might become more detailed as the Genographic Project amasses more data from around the world, so be sure to return to the Genographic Project Web site for project updates.

Public participation is critical to the Project's success. By purchasing a Genographic Project Public Participation Kit, you will not only contribute to the impact of this great endeavor, but you may discover something about your own genetic past as well.

A Note on Privacy

To ensure the privacy of participants, we have built an anonymous analysis process. Your Participation Kit will be mailed with a randomly generated, non-sequential Genographic Participant ID number (GPID). Although we will have mailed a Participation Kit to your address, we do not know the random code included in the kit. When you send in your DNA sample with your consent form, both will be identified only by your GPID. Your cheek cells will therefore be analyzed completely anonymously.

To view your test results, you will need to visit the Genographic Project Web site and enter your GPID, so it is very important that you do not lose it. See the Genographic Project Terms and Conditions for more information. Also, be sure to visit our FAQs.

For International Participants (outside the United States and Canada)

Public participation may be restricted in some countries where the export of genetic material requires government approval. China is one country that has such restrictions in place. The Genographic Project will work with the relevant authorities to achieve the broadest level of public participation possible.

Jared Diamond, author of the bestselling Collapse: How Societies Choose to Fail or Succeed, has been dazzling colleagues with his expertise in a wide range of subjects for decades. For the past few years, he's been dazzling the general public as well. And now the new PBS series based on his previous book, the Pulitzer Prize-winning Guns, Germs and Steel: The Fates of Human Societies, should bring him an even broader audience when it begins airing July 11.

Diamond, currently a UCLA geography professor, was for more than 30 years a professor of physiology at UCLA's medical school, with specialized research in the evolutionary process of snake digestion. You can see an echo of this in his description of Collapse: "Its plan resembles a boa constrictor that has swallowed two very large sheep," with one long section about (so-far un-collapsed) Montana, and another about the disasters of Easter Island, Norse Greenland, and other vanished societies.

His books draw upon knowledge of seemingly unconnected topics, such as (to name just a few) the domestication of animals, the development of the Indo-European family of languages, the primitive tribes of New Guinea, the reason for menopause, the history of Japan, the origins of horsemanship, the latitude-related features of climate, and the unfortunate ecological consequences of humans' encountering previously uninhabited worlds. Collapse may sound depressing, but Diamond cautions that not all societies fail and that in any case all of them have a choice.

Guns, Germs and Steel sprang from a simple question Diamond was asked a quarter-century ago by a New Guinean friend (Diamond is also an ornithologist, specializing in the birds of New Guinea): Why did Europeans and Asians conquer the indigenous peoples of Africa, the New World, Australia, and the South Pacific instead of the reverse? A key part of the answer, Diamond argues, was the availability of large, domesticable animals in Eurasia and the lack of them (with minor exceptions such as the South American llama) elsewhere, with vital effects on the development -- or not -- of civilization.

Perhaps because of his background in hard science, Diamond is reticent about how all this affects his political views. "Gosh, I don't think there's any easy way to sum them up," he said amiably, during an interview at his UCLA office, when he'd just started work on Collapse. "On some things I would rate as conservative; probably on most things I would rate as reasonable liberal." However, as a naturalist he scoffs at the crunchy-granola notion that what's natural is therefore good.

"Genocide is natural! Rape is natural!" he exclaims in response. "No, what's natural is not necessarily good -- often it's repulsive. One of the most important functions of human society, and the driving force behind most political institutions, is to prevent humans from doing what comes naturally."

But a point he emphasizes in Guns, Germs and Steel is that, "contrary to what white racists believe," advanced societies didn't develop because of innate genetic ability but because of their luck of the draw in biogeography. On the other hand, he undermines the tender-hearted conventional wisdom that aboriginal peoples are ecological saints.

"Every human colonization of a land mass formerly lacking humans has been followed by a wave of extinction of large animals," Diamond writes in Collapse, a point he's made in his other books. The problem isn't that American Indians or New Zealand Maoris were particularly bad managers, but that, like us, they were human -- and thus prone to wiping out strange species before settling into a new environment.

Humans have also had a habit of exterminating other humans ever since Cain and Abel. It's impossible to take the currently fashionable notion of a "people of color" brotherhood seriously after reading Diamond; his chapters on the globally genocidal history of the human race in The Third Chimpanzee, his first and in some ways most accessible book, are devastating.

The Norse were unable to sustain their Greenland settlement partly because they refused to hunt seals like the local Inuit, whom they dismissed as "skraelings," or wretches. "If you regard people as wretches," Diamond noted dryly, "you are not likely to learn from them."

"Having been born in 1937, I grew up with the view that the Nazis were unique," he told me. "And yes, the efficiency of [the Holocaust] was unique, but the effort was totally mundane. All the groups I work with in New Guinea, they've got their own stories of what they did to someone else." Diamond has found the remote island so dangerous that he won't let his twin teenage sons accompany him on expeditions. His stock response to their requests to go? "Once you learn to be really careful. Maybe when you're 42 years old."

As a longtime Los Angeles resident, Diamond worries that problems like traffic congestion and overcrowded public schools have increased so gradually that people have gotten used to them, a phenomenon he describes in Collapse as "creeping normalcy." Worse, the rich often remove themselves from the problems of ordinary citizens by living in gated communities and sending their children to expensive private schools.

"A blueprint for disaster in any society is when the elite are capable of insulating themselves," Diamond says. Still, there are signs of hope, particularly on the environmental front.

"A few years ago Home Depot realized it would be in their best interest to phase out wood from old-growth forests," he adds. "That was a big surprise to me. The Chevron oil fields in Papua New Guinea are managed more rigorously than any national park I've ever been in -- Chevron figured out it could make more money in the long run by adhering to rigorous environmental standards. Choice is certainly not a delusion."

Perhaps better filed under political rants, Derbyshire makes mincemeat of arguments that "intelligent design" should be taught as part of a science curriculum.

Teaching Science
The president is wrong on Intelligent Design.
John Derbyshire, August 30, 2005, 8:23 a.m.

Catching up on back news this past few days -- I was out of the country for the first two weeks of August -- I caught President Bush's endorsement of teaching Intelligent Design in public school science classes. "Both sides ought to be properly taught," President Bush told a reporter August 2, "so people can understand what the debate is all about."

This is Bush at his muddle-headed worst, conferring all the authority of the presidency on the teaching of pseudoscience in science classes. Why stop with Intelligent Design (the theory that life on earth has developed by a series of supernatural miracles performed by the God of the Christian Bible, for which it is pointless to seek any naturalistic explanation)? Why not teach the little ones astrology? Lysenkoism? Orgonomy? Dianetics? Reflexology? Dowsing and radiesthesia? Forteanism? Velikovskianism? Lawsonomy? Secrets of the Great Pyramid? ESP and psychokinesis? Atlantis and Lemuria? The hollow-earth theory? Does the president have any idea, does he have any idea, how many varieties of pseudoscientific flapdoodle there are in the world? If you are going to teach one, why not teach the rest? Shouldn't all sides be "properly taught"? To give our kids, you know, a rounded picture? Has the president scrutinized Velikovsky's theories? Can he refute them? Can you?

And every buncombe theory -- every one of those species of twaddle that I listed -- has, or at some point had, as many adherents as Intelligent Design. The hollow-earth theory was taken up by the Nazis and taught, as the Hohlweltlehre, in German schools. It still has a following in Germany today. Velikovsky's theories -- he believed that Jupiter gave birth to a giant comet which, after passing close to earth and causing the miracles of the Book of Exodus, settled down as the planet Venus -- were immensely popular in the 1950s and generated heated controversy, with angry accusations by the Velikovskians that they were being shut out by closed-minded orthodox astronomers determined to protect their turf, etc., etc. Lysenkoism was state doctrine in Stalin's Russia and was taught at the most prestigious universities. Expressing skepticism about it could get you shot. (Likewise with the bizarre linguistic theories of Stalin's protégé N.Y. Marr, who believed that every word in every human language derived from one of four basic elements, pronounced "sal," "ber," "yon," and "rosh." I tell you, the house of pseudoscience has many, many mansions.) Dianetics was rebranded as Scientology and is now a great force in the land -- try criticizing it, and you'll find out.

Nor is any of these theories lacking in a certain appeal, as Martin Gardner, from whose book Fads and Fallacies in the Name of Science I compiled that list, is charitable enough to point out. Of Lawsonomy -- "The earth is a huge organism operating by Suction and Pressure..." -- Gardner says generously: "This makes more sense than one might think." Pseudoscience is in fact a fascinating study, though as sociology, not as science. Gardner's book, now 50 years old, is still an excellent introduction, and great fun to read.

What, then, should we teach our kids in high-school science classes? The answer seems to me very obvious. We should teach them consensus science, and we should teach it conservatively. Consensus science is the science that most scientists believe ought to be taught. "Conservatively" means eschewing theories that are speculative, unproven, require higher math, or even just are new, in favor of what is well settled in the consensus. It means teaching science unskeptically, as settled fact.

Consider physics, for example. It became known, in the early years of the last century, that Newton's physics breaks down at very large or very tiny scales of distance, time, and speed. New theories were cooked up to explain the discrepancies: the special and general theories of relativity, quantum theory and its offspring. By the 1930s these new theories were widely accepted, though some of the fine details remained (and some still remain!) to be worked out.

Then, in the late 1950s, along came your humble correspondent, to study physics to advanced level at a good English secondary school. What did they teach us? Newtonian mechanics! I didn't take a class in relativity theory until my third year at university, age 21. I never have formally studied quantum mechanics, though I flatter myself I understand it well enough.

My schoolmasters did the right thing. Newton's mechanics is the foundation of all physics. "But it's wrong!" you may protest. Well, so it is; but it is right enough to form that essential foundation; right enough that you cannot understand the nature of its wrongness until you have mastered it. (Along with some college-level math.) Furthermore, it is consensus science. By that I mean, if you were to poll 10,000 productive working physicists and ask them what ought to be taught in our high schools, I imagine that upwards of 9,900 of them would say: "Well, you have to get Newtonian mechanics into their heads..." No doubt you'd find the odd Velikovskian or adherent of the Hohlweltlehre, but Newtonism would be the consensus. Intelligent high-school seniors should, I think, be encouraged to read popular books about relativity and quantum mechanics. Perhaps, nowadays -- I couldn't say, I am out of touch -- teachers have even figured out how to make some of that higher stuff accessible to young minds, and are teaching it. If so, that's great. The foundation, though, must be consensus science, conservatively taught.

I think intelligent teenagers should also be given some acquaintance with pseudoscience, just so that they might learn to spot it when they see it. A copy of that excellent magazine Skeptical Inquirer ought to be available in any good high school library, along with books like Gardner's. I am not sure that either pseudoscience or its refutation has any place in the science classroom, though. These things properly belong in social studies, if anywhere outside the library.

And what should we teach our kids in biology classes, concerning the development of living things on earth? We should teach them Darwinism, on exactly the same arguments. There is no doubt this is consensus science. When the Intelligent Design people flourished a list of 400 scientists who were skeptical of the theory of evolution, the National Center for Science Education launched "Project Steve," in which they asked for affirmation of the contrary view, but only from scientists named Steve (a name they estimate is shared by about one percent of all U.S. scientists). The Steve-O-Meter stands at 577 as of this July 8, implying around 57,000 scientists on the orthodox side. That's consensus science. When the I.D. support roster has 57,000 names on it, drop me a line.
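The Steve-O-Meter arithmetic above is a simple scaling: if roughly one U.S. scientist in a hundred is named Steve, each signatory stands in for about a hundred scientists. Made explicit, using only the figures the column cites:

```python
# Back-of-envelope scaling behind the "around 57,000" figure.
# Inputs are the numbers cited in the column, not independent data.
steves_signed = 577      # Steve-O-Meter count as of July 8
steve_fraction = 0.01    # NCSE estimate: ~1% of U.S. scientists are Steves

implied_scientists = round(steves_signed / steve_fraction)
print(implied_scientists)  # 57700 -- "around 57,000" as the column puts it
```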

And Darwinism ought to be taught conservatively, without skepticism or equivocation, which will only confuse young minds. Darwinism is the essential foundation for all of modern biology and genomics, and offers a convincing explanation for all the phenomena we can observe in the life sciences. It may be that, as we get to finer levels of detail, we shall find gaps and discrepancies in Darwinism that need new theories to explain them. This is a normal thing in science, and new theories will be worked out to plug the gaps, as happened with Newtonism a hundred years ago. If this happens, nobody -- no responsible scientist -- will be running round tearing his hair, howling "Darwinism is a theory in crisis!" any more than the publication of Einstein's great papers a hundred years ago caused physicists to make bonfires of the Principia. The new theories, once tested and validated, will be welcomed and incorporated, as Einstein's and Planck's were. And very likely our high schools will just go on teaching Darwinism, as mine taught me Newtonism fifty years after Einstein's revolution. They will be right to do so, in my opinion, just as my schoolmasters were right.

If you are afraid that your children, being confronted with science in school, will turn into atheists and materialists, you have a wide variety of options available to you in this free nation. Most obviously, you should take your kids to church regularly, encourage them to pray, say grace before meals, and respond to those knotty questions that children sometimes ask with answers from your own faith. Or you could homeschool them, or send them to a religious school, and make sure they are not exposed to the science you fear so much.

You really shouldn't be afraid of science, though. Plenty of fine scientists have been religious. The hero of my last book, one of the greatest mathematicians of the 19th century, was a very devout man, as I took pains to make clear. The same can be said of many Darwinists. I am currently researching the life of the Victorian writer Charles Kingsley, who was a keen naturalist, an early and enthusiastic supporter of Darwin, and also a passionate Christian, who preached the last of his many fine sermons from the pulpit of Westminster Abbey. (The last words of that sermon were: "Come as thou seest best, but in whatsoever way thou comest, even so come, Lord Jesus." I suppose this man would be considered impious by the Intelligent Design merchants.)

A great deal of nonsense is being talked in this zone recently. Science is science, and ought to be taught in our public schools conservatively, from the professional consensus, as settled fact. Religion is quite a different thing. It is not entirely unconnected with science. Many scientists have believed that in their inquiries, they were engaging with God's thoughts. Faraday certainly thought so; probably Newton did, too; possibly Einstein did. This has even been a strong motivation for scientific research, and it is probable that in a world with no religion, we should have much less science than we have. Those are matters psychological and motivational, though. They don't -- they can't -- inform the content of scientific theories, because those theories are naturalistic by definition. Whether miracles happen in the world is a thing you must decide for yourself, based on your own faith, study, and life experiences. To admit miracles into a scientific theory, however, turns it into pseudoscience at once; and while pseudoscience can be fun, it is not science. Nor is it religion, except in the widest and loosest possible sense of that word, a sense that includes every kind of supernatural baloney that any clever crackpot can come up with -- a sense I personally will not accept.

An international research team led by scientists at the Broad Institute of MIT and Harvard announced today the completion of a high-quality genome sequence of the domestic dog, together with a catalog of 2.5 million specific genetic differences across several dog breeds. Published in the December 8 issue of Nature, the dog research sheds light on both the genetic similarities between dogs and humans and the genetic differences between dog breeds. Comparison of the dog and human DNA reveals key secrets about the regulation of the master genes that control embryonic development. Comparison among dogs also reveals the structure of genetic variation among breeds, which can now be used to unlock the basis of physical and behavioral differences, as well as the genetic underpinnings of diseases common to domestic dogs and their human companions.

"Of the more than 5,500 mammals living today, dogs are arguably the most remarkable," said senior author Eric Lander, director of the Broad Institute, professor of biology at MIT and systems biology at Harvard Medical School, and a member of the Whitehead Institute for Biomedical Research. "The incredible physical and behavioral diversity of dogs -- from Chihuahuas to Great Danes -- is encoded in their genomes. It can uniquely help us understand embryonic development, neurobiology, human disease and the basis of evolution."

Similarities to humans

Dogs not only occupy a special place in human hearts, they also sit at a key branch point in the evolutionary tree relative to humans. By tracking evolution's genetic footprints through the dog, human and mouse genomes, the scientists found that humans share more of their ancestral DNA with dogs than with mice, confirming the utility of dog genetics for understanding human disease.

Most importantly, the comparison revealed the regions of the human genome that are most highly preserved across mammals. Roughly 5% of the human genome has been well preserved by evolution over the past 100 million years and must encode important biological functions. The researchers discovered that the most highly conserved of these sequences are not randomly distributed throughout the genome. Instead, they are crowded around just a tiny fraction (about 1%) of the genes that encode crucial regulatory proteins involved in development (such as transcription factors or axon guidance receptors). "The clustering of regulatory sequences is incredibly interesting," said Kerstin Lindblad-Toh, first author of the Nature paper and co-director of the genome sequencing and analysis program at Broad. "It means that a small subset of crucial human genes is under much more elaborate control than we had ever imagined."

Differences between dog breeds

Dogs were domesticated from gray wolves as long as 100,000 years ago, but selective breeding over the past few centuries has made modern dog breeds a testament to biological diversity. Obvious examples include the contrasting body sizes of 6-pound Chihuahuas and 120-pound Great Danes, the hyperactivity of Jack Russell terriers relative to mild-mannered basset hounds, and the herding instincts of Shetland sheepdogs compared with the protective proclivity of dalmatians.

Efforts to create the genetic tools needed to map important genes in dogs have gained momentum over the last 15 years, and already include a partial survey of the poodle genome. More than two years ago, Lindblad-Toh, Lander, and their colleagues embarked on a two-part project to assemble a complete map of the dog genome. First, they acquired high-quality DNA sequence from a female boxer named "Tasha," covering nearly 99% of the dog's genome. Using this information as a genetic 'compass,' they then sampled the genomes of 10 different dog breeds and other related canine species, including the gray wolf and coyote. By comparing these dogs, they pinpointed ~2.5 million individual genetic differences among breeds, called single nucleotide polymorphisms (SNPs), which serve as recognizable signposts that can be used to locate the genetic contributions to physical and behavioral traits, as well as disease.
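The SNP-discovery step described above -- comparing sampled genomes against a reference and recording single-base differences -- can be sketched in miniature. The sequences and breed names below are made up for illustration; the actual Broad Institute pipeline works on whole-genome alignments, not toy strings:

```python
# Toy sketch of SNP discovery: compare aligned sequences from several
# breeds against a reference and record positions where a single base
# differs. Illustrative only -- sequences here are invented stand-ins.
reference = "ACGTACGTACGT"   # stand-in for the boxer reference sequence
samples = {
    "poodle": "ACGTACGAACGT",
    "wolf":   "ACCTACGTACGT",
    "coyote": "ACCTACGAACGT",
}

snps = {}  # position -> set of alternative bases observed at that site
for name, seq in samples.items():
    for pos, (ref_base, base) in enumerate(zip(reference, seq)):
        if base != ref_base:
            snps.setdefault(pos, set()).add(base)

for pos in sorted(snps):
    print(f"position {pos}: {reference[pos]} -> {sorted(snps[pos])}")
```

Each recorded position is a signpost in the sense the press release describes: a site where breeds differ, which can then be tested for association with a trait or disease.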

Finally, the scientists used the SNP map to reconstruct how intense dog breeding has shaped the genome. They discovered that selective breeding carried large genomic regions of several million bases of DNA into breeds, creating 'haplotype blocks' that are ~100 times larger than seen in the human population. "The huge genomic regions should make it much easier to find the genes responsible for differences in body size, behavior and disease," said Lander. "Such studies will need many fewer markers than for human studies. It should be like hitting the side of a barn."

Mapping human disease-related genes in dogs

Breeding programs not only selected for desired traits, they also had the unintended consequence of predisposing many dog breeds to genetic diseases, including heart disease, cancer, blindness, cataracts, epilepsy, hip dysplasia and deafness. With the dog genome sequence and the SNP map, scientists around the world now have the tools to identify these disease genes.

Humans suffer from many of the same illnesses as their four-legged friends and even show similar symptoms, but the genetic underpinnings have proved difficult to trace. "The genetic contributions to many common diseases appear to be easier to uncover in dogs," said Lindblad-Toh. "If so, it is a significant step forward in understanding the roots of genetic disease in both dogs and humans."

For this work, the dog-owner community is an essential collaborator. "We deeply appreciate the generous cooperation of individual dog owners and breeders, breed clubs and veterinary schools in providing blood samples for genetic analysis and disease gene mapping," said Lindblad-Toh. "Without their interest and help we could not be doing this work."

Funding and data access

Sequencing of the dog genome began in June 2003, funded in large part by the National Human Genome Research Institute (NHGRI). The Broad Institute is part of NHGRI's Large-Scale Sequencing Research Network. NHGRI is one of 27 institutes and centers at the National Institutes of Health (NIH), an agency of the Department of Health and Human Services. The NHGRI Division of Extramural Research supports grants for research and for training and career development at sites nationwide. Information about NHGRI, including the dog genome initiative, can be found at: www.genome.gov.

In alignment with the mission of both NHGRI and the Broad Institute, all of these data can be accessed through the following public databases: for the boxer genome sequence, www.ncbi.nlm.nih.gov, www.ensembl.org, and genome.ucsc.edu.

Durham, NC - Researchers have discovered the first brain regulatory gene that shows clear evidence of evolution from lower primates to humans. They said the evolution of humans might well have depended in part on hyperactivation of the gene, called prodynorphin (PDYN), that plays critical roles in regulating perception, behavior and memory.

They reported that, compared to lower primates, humans possess a distinctive variant in a regulatory segment of the prodynorphin gene, which is a precursor molecule for a range of regulatory proteins called "neuropeptides." This variant increases the amount of prodynorphin produced in the brain.

While the researchers do not yet understand the physiological implications of the activated PDYN gene in humans, they said their finding offers an important and intriguing piece of the puzzle of how humans evolved from lower primates.

They also said that the discovery of this first evolutionarily selected gene is likely only the beginning of a new pathway of exploring how the pressure of natural selection influenced evolution of other genes.

They also said their finding demonstrates how evolution can act more efficiently to alter the regulatory segments, or "promoters," that determine genes' activity, rather than on the gene segment that determines the structure of the protein it produces. Such regulatory alteration, they said, can more readily generate variability than the hit-or-miss mutations that alter protein structure and function.

Proteins constitute the molecular machinery of the cell, for example, catalyzing the multitude of chemical reactions in the cell. DNA genes constitute the blueprints for such proteins, with the regulatory segments of these genes determining how actively the genes churn out proteins.

The researchers published their findings in the December 2005 issue of PLoS Biology, a journal of the Public Library of Science. They were Gregory Wray and David Goldstein of Duke University; Matthew Rockman of Princeton University; Matthew Hahn of Indiana University; Nicole Soranzo of University College London; and Fritz Zimprich of the Medical University of Vienna in Austria. The research was sponsored by the National Science Foundation and NASA.

"We focused on the prodynorphin gene because it has been shown to play a central role in so many interesting processes in the brain," said Wray. "These include a person's sense of how well they feel about themselves, their memory and their perception of pain. And it's known that people who don't make enough of prodynorphin are vulnerable to drug addiction, schizophrenia, bipolar disorders and a form of epilepsy. So, we reasoned that humans might uniquely need to make more of this substance, perhaps because our brains are bigger, or because they function differently.

"Also importantly, the part of the gene that produces the prodynorphin protein shows no variation within humans, or even between humans and any of the great apes," said Wray, who is a professor of biology. "So, if we found any variation in this gene due to evolution, it was likely to be in its regulation. And our premise is that the easiest way to generate evolutionary change is to alter regulation."

In their studies, the researchers analyzed the sequence structure of the PDYN promoter segment in humans and in seven species of non-human primates -- chimpanzees, bonobos, gorillas, orangutans, baboons, pig-tailed macaques and rhesus monkeys. They found significant mutational changes in the regulatory sequence leading to humans that indicated preservation due to positive evolutionary selection. They also found an "evolution-by-association," in which sequences near the regulatory segment showed greater mutational change -- as if they were "dragged along" with the evolving regulatory sequence.
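One way to build intuition for this kind of analysis is to compare the divergence observed in a candidate regulatory region against a putatively neutral background rate, and ask how surprising the excess is. The sketch below is a simplified, hypothetical stand-in for the much more careful statistics the authors used; the sequence lengths, counts, and background rate are invented for illustration.

```python
from math import comb

def count_diffs(seq_a, seq_b):
    """Number of mismatching sites between two aligned, equal-length sequences."""
    return sum(x != y for x, y in zip(seq_a, seq_b))

def binom_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): chance of k or more changes by drift alone."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Hypothetical numbers: 9 substitutions in a 200-bp promoter, against a
# neutral background rate of ~1.2% per site (so only ~2.4 changes expected).
excess_p = binom_tail(9, 200, 0.012)
print(f"P(>= 9 changes by chance) = {excess_p:.1g}")  # small => candidate for selection
```

A region that accumulates far more substitutions than the neutral expectation, as in this toy example, is flagged as a candidate for positive selection; regions with far fewer are candidates for the negative selection described below.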

In contrast, the researchers found that the DNA segment that coded for the PDYN protein itself -- as well as other sites spread around the genome -- showed evidence of "negative selection" that would preserve their original structure.

A key experiment, said Wray, was a laboratory demonstration that such regulatory mutations did have functional significance. When the researchers cultured human neural cells with either the human or chimpanzee regulatory PDYN segments, they found that the human segments caused the cells to produce more PDYN neuropeptide.

"So, these experiments told us that those mutations that we flagged by a statistical method as being likely to be under selection actually do something important in terms of function," said Wray. "The human version increases expression of the gene and production of prodynorphin, which is the direction of change we predicted."

The researchers also found evidence of evolutionary selection when they compared the regulatory sequences in people from different populations -- including those from Cameroon, China, Ethiopia, India, Italy and Papua New Guinea. Those analyses showed higher differences among the individual populations, but reduced variation within them. Such a pattern is a signature of evolutionary selection acting on the genetic sequence, said Wray.
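The population-comparison pattern described here -- more variation between groups, less within them -- is what the fixation index (Fst) quantifies. A minimal sketch, assuming a single biallelic site and equally sized populations (the allele frequencies below are invented for illustration, not taken from the study):

```python
def expected_het(p):
    """Expected heterozygosity at a biallelic site with allele frequency p."""
    return 2 * p * (1 - p)

def fst(pop_freqs):
    """Wright's Fst: (Ht - Hs) / Ht, assuming equally sized populations."""
    hs = sum(expected_het(p) for p in pop_freqs) / len(pop_freqs)  # mean within-group
    p_bar = sum(pop_freqs) / len(pop_freqs)                        # pooled frequency
    ht = expected_het(p_bar)                                       # total heterozygosity
    return (ht - hs) / ht

# Frequencies that differ sharply between populations give high Fst
# (the signature described in the text); similar frequencies give low Fst.
print(fst([0.9, 0.5, 0.1]))    # high: strong between-group differentiation
print(fst([0.5, 0.55, 0.45]))  # low: populations look alike
```

High Fst at one locus against a low genome-wide background is the sort of pattern the article describes as a signature of selection acting on that sequence.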

Still mysterious, he said, is how the prodynorphin gene changes affect human neural development.

"All we can conclude now is that this gene is a very strong candidate for having a functional role in human evolution, and that its role probably has something to do with cognition. But beyond that, it's very hard to make a clear argument about specifically what that role is.

"We do know that not making enough prodynorphin causes clinical problems, but we don't know what having more of it did for us humans. We're hoping the clinical psychiatrists and psychologists can give us more insight into that aspect."

Wray and his colleagues have already identified a collection of some 250 other candidate genes -- mainly those active in the brain -- that they are beginning to analyze for evidence of evolutionary selection. They plan to perform the same basic analyses, in which they compare sequence information between humans and non-human primates for signatures of evolutionary selection.

You've Got Male!

By LIONEL TIGER
December 17, 2005; Page A10

Male resentment of the self-righteous and automatic public support for women's interests and issues has been increasingly on the boil for some time. Civic celebrations of antipathy to men such as the Violence Against Women Act are finally generating specific and pointed responses by men fatigued, if still baffled, by the knee-jerk assumption that they suffer irredeemably from what I call Male Original Sin.

At my university, as at countless others, one of the very first official greetings to students is a rape seminar predicated on the intrinsic danger which males carry with them. And in family courts, the presumption of male behavioral malefaction has yielded heartbreakingly numerous cases in which men are charged with domestic violence and courts overwhelmingly -- often in brief hearings at which the male is not even present -- issue temporary "restraining orders." These frequently segue into permanence, and award women the dwelling they've shared, financial support and the all-important privilege of custody -- mothers gain custody in 66% of uncontested cases and 75% of contested ones. Less than a quarter of parents are awarded joint custody.

Judges issue such orders based only on the word of the alleged victim. It is small wonder the overwhelming majority of such actions are sought and achieved by women. It has been legitimately argued that there is a merciless post-marital racket of therapists, lawyers, judges and governmental advocates who prosper because it is so easy to define males as guilty.

Meanwhile, the publicly financed educational system is at least 20% better at producing successful female students than male, yet hardly anyone sees this as remarkable gender discrimination. While there is a vigorous national program to equalize male and female rates of success in science and math, there is not a shred of equivalent attention to the far more central practical impact of the sharp deficit males face in reading and writing.

There are countless thriving "women's studies" programs and only a paltry number of male equivalents. The graduates of such programs (which rarely pass the laxest test for gender diversity) staff the offices of politicians and judges, and assert the obligation of society to redress centuries of dominance by that gaseous overgeneralization -- "patriarchy."

When it comes to health status, the disparity in favor of women is enhanced by such patterns as seven times more Federal expenditure on breast cancer than on the prostate variety. And no one is provoked into action because vaunted male patriarchs commit suicide between four and 10 times more frequently than oppressed and brainwashed women. This isn't simply carping about invidious comparison, or reluctance to support legitimate social responses to the needs of women as workers, parents, citizens and virtuosi of their private lives. It is solely about inequity in law, funding and productive public attention. There is scant acknowledgment of the fact that we face a generation of young men increasingly failing in a school system seemingly calibrated to female rhythms.

A consequence is that male income falls and female income rises. Nothing wrong with that, except that men inexorably withdraw from domestic life: they become out-laws rather than in-laws. Legions of women despair of finding a mate compatible in function and vibrancy. So they go it alone: a third of babies are born to unmarried women, perhaps making a sage choice given the feckless, demoralized chaps from whom they must choose. We lead the world in fatherless families -- 40% of children fall asleep without a resident father regularly within reach.

* * *

Into this acrimonious climate has whispered a breath of spring air in winter -- an extraordinary document which may have surprising impact because of its severe countercultural implications and its almost sweet innocence of purpose. In early November, the New Hampshire Commission on the Status of Men issued its first report (www.nh.gov/csm). The commission was proposed in a 1999 bill by N.H. Rep. David Bickford. The House passed the bill, awarding a budget of $69,561. But months later, the state Senate stripped away funding. The commission was finally established in 2002. According to its report, the Senate's effort to defund it reflects "the inaction of good people who apparently have been led to believe that legislative activity designed to primarily benefit men is somehow not appropriate politically, financially, or otherwise."

To the contrary, the commission's report frontally accepts that there are intrinsic differences in how men and women cope with health, education, responsibility and violence. It concludes that social policies must not begin by denying differences. If you're running a zoo, know the real nature of your guests. This applies nationally, not only in New Hampshire. The clout of female voters has been transmuted into a strangely pervasive inattention to the legitimate needs of boys and men. While there remain grating sources of unfairness to women, the community is in the process of steadily creating a new legal and educational structure which generates new gender unfairness: 90% of the victims of Ritalin and similar drugs prescribed for schoolkids are boys; but even drugged they perform less well than girls. A 2005 study at Yale found nationally that even in prekindergarten boys are nearly five times more likely to be expelled than girls.

What is going on in this country?

Of course those who can do the work should receive the rewards. However, the broader question is: Who defines the work and evaluates it? The drastic occupational and familial situation of especially minority males suggests the urgency of a hard review of this issue. Were females the victims of such apparent sex-based unfairness, the legal paper attacking the matter would cloud the air like flakes of New Hampshire snow. But since it's only males . . .

The report is an innovative 44 pages focused on life in one state. It grips the macrocosm of stunning changes in American sociosexual and family experience. Like those which affect the terrain of a delta, the changes are gradual and barely perceptible, and yet suddenly it becomes clear there is a new barrier, a new channel, a new uncertainty. So with the issue of men in America. The New Hampshire report may not be a full map of the delta, but it alerts us to the large reality of implacable changes. And we may not like them.

Mr. Tiger, Charles Darwin Professor of Anthropology at Rutgers, is the author of "The Decline of Males" (St. Martin's, 1999).

Elizabeth Gould overturned one of the central tenets of neuroscience. Now she's building on her discovery to show that poverty and stress may not just be symptoms of society, but bound to our anatomy.

Professor Elizabeth Gould has a picture of a marmoset on her computer screen. Marmosets are New World monkeys, and Gould has a large colony living just down the hall. Although her primate population is barely three years old, Gould is clearly smitten, showing off these photographs like a proud parent. Marmosets are the ideal experimental animal: a primate brain trapped inside the body of a rat. They recognize themselves in the mirror, form elaborate dominance hierarchies and raise their young cooperatively. If you can look past their rodent-like stature and punkish pompadour, marmosets can seem disconcertingly human.

In her laboratory at Princeton University's Department of Psychology, Gould is determined to create a marmoset environment that takes full advantage of their innate intelligence. She doesn't believe in metal cages. "We are housing our marmosets in large, enriched enclosures," she says, "and with a variety of objects to support foraging. These are social animals, and it's important to let them be social. Basically, we want to bring our experimental conditions closer to the wild."

But Gould is not a primatologist. She doesn't give her marmosets adorable names, or spend time cuddling with their young. In fact, these marmosets don't even know she exists: Gould prefers to observe them remotely, on a little video screen. Staring at the televised frenzy of this little marmoset world, it is poignant to know how their lives will end. Their brains will be cut into thousands of transparent slices. Their dissected neurons will be stained neon green and the density of their dendritic connections will be quantified under a powerful microscope. They will live on as data.

The naturalistic habitat that Gould has created for these marmosets is essential to her studies, which involve understanding how the environment affects the brain. Eight years after Gould defied the entrenched dogma of her science and proved that the primate brain is always creating new neurons, she has gone on to demonstrate an even more startling fact: The structure of our brain, from the details of our dendrites to the density of our hippocampus, is incredibly influenced by our surroundings. Put a primate under stressful conditions, and its brain begins to starve. It stops creating new cells. The cells it already has retreat inwards. The mind is disfigured.

The social implications of this research are staggering. If boring environments, stressful noises, and the primate's particular slot in the dominance hierarchy all shape the architecture of the brain -- and Gould's team has shown that they do -- then the playing field isn't level. Poverty and stress aren't just an idea: they are an anatomy. Some brains never even have a chance.

Viewed through the magnified eyes of a confocal microscope, a newborn neuron looks fragile, almost lonely. Everything around it is connected to everything else, but the new cell is all alone, just a seed of soma and a thin stalk of axon desperately trying to plug itself into the network. If it doesn't, it will die. Staring at this tenuous neuron, it is hard to believe that so much depends upon its presence.

Dr. Gould insists on being called Liz. She wears faded jeans to work and ties back her long dark hair in a loose braid. She smiles easily, and intersperses discussions of marmoset families with stories about her own children. Gould doesn't talk about her research in listless sentences full of acronyms. Instead, she takes you through the experimental process, confessing all the difficulties and ambiguities along the way.

Gould's casual air conceals a necessary tenaciousness: It is not easy to shift a paradigm. Four days after giving birth to her third child, Gould was back at work, lecturing to a room full of undergraduates. She has always worked long hours, and expects nothing less of her employees. (Saturdays in the Gould lab are indistinguishable from Mondays.) And even though her research has set off a frenzy of activity -- neurogenesis is now one of the hottest topics in neuroscience -- Gould has managed to remain at the cutting edge of the field she helped to invent.

For such a high-profile scientist, Gould's lab at Princeton is surprisingly small. Lavishly outfitted (she has her own $400,000 confocal microscope and large marmoset colony), the lab consists of just two post-docs and two grad students. They are a close-knit group, and work on overlapping problems. "When I first began at Princeton," Gould says, "I had tunnel vision. I was just so determined to answer my critics and prove that adult neurogenesis was real. But now I'm finally able to think about neurogenesis in a broader context. We are free to figure out what all these new cells actually do."

To understand how neurogenesis -- the process of creating new brain cells -- works, Gould's lab studies the effect of two separate variables: stress and enriched environments. Chronic stress, predictably enough, decreases neurogenesis. As Christian Mirescu, one of Gould's post-docs, put it, "When a brain is worried, it's just thinking about survival. It isn't interested in investing in new cells for the future."

On the other hand, enriched animal environments -- enclosures that simulate the complexity of a natural habitat -- lead to dramatic increases in both neurogenesis and the density of neuronal dendrites, the branches that connect one neuron to another. Complex surroundings create a complex brain.

Gould's field is a new one. Only a decade ago, the idea that the primate brain is constantly creating new neurons, and that these new neurons are not only functional but responsive to changes in the environment, was unimaginable. Indeed, the fact that neurogenesis did not exist was one of modern neuroscience's founding principles. This theory, first articulated by Santiago Ramón y Cajal at the start of the 20th century, held that brain cells -- unlike every other cell in our body -- don't divide. They don't die, and they are never reborn. We emerge from the womb with the only brain we will ever have.

The most convincing modern defender of this theory was Pasko Rakic, the chairman of Yale University's neurobiology department and among the most respected neuroscientists of his generation. In the early 1980s, Rakic realized that neurogenesis had never been properly tested in primates. He set out to investigate. Rakic studied 12 rhesus monkeys, injecting them with radioactively labeled thymidine, which allowed him to trace the development of neurons in the brain. Rakic then killed the monkeys at various stages after the injection of the thymidine, and searched for any signs of new neurons. There were none.

"All neurons of the rhesus monkey brain are generated during pre-natal and early post-natal life," Rakic wrote in his 1985 paper, "Limits of Neurogenesis in Primates." "Not a single" new neuron "was observed in the brain of any adult animal." While Rakic admitted that his proof was limited, he persuasively defended the dogma. He even went so far as to construct a plausible evolutionary theory as to why neurons can't divide: Rakic imagined that at some point in our distant past, primates had traded the ability to give birth to new neurons for the ability to retain plasticity in our old neurons. According to Rakic, the "social and cognitive" behavior of primates required the absence of neurogenesis. His paper, with its thorough demonstration of what everyone already believed, seemed like the final word on the matter. No one bothered to verify his findings.

The genius of the scientific method, however, is that it accepts no permanent solution. Skepticism is its solvent, for every theory is imperfect. Scientific facts are meaningful precisely because they are ephemeral, because a new observation, a more honest observation, can always alter them. This is what happened to Rakic's theory of the fixed brain. It was, to use Karl Popper's verb, falsified.

The subject of stress has been the single continuous thread running through Gould's research career. From the brain's perspective, stress is primarily signaled by an increase in the bloodstream of a class of steroids called glucocorticoids, which put the body on a heightened state of alert. But glucocorticoids can have one nasty side-effect: They are toxic for the brain. When stress becomes chronic, neurons stop investing in themselves. Neurogenesis ceases. Dendrites disappear. The hippocampus, a part of the brain essential for learning and memory, begins withering away.

Gould's insight was that understanding how stress damages the brain could illuminate the general mechanisms -- especially neurogenesis -- by which the brain is affected by its environmental conditions. For the last several years, she and her post-doc, Mirescu, have been depriving newborn rats of their mother for either 15 minutes or three hours a day. For an infant rat, there is nothing more stressful. Earlier studies had shown that even after these rats become adults, the effects of their developmental deprivation linger: They never learn how to deal with stress. "Normal rats can turn off their glucocorticoid system relatively quickly," Mirescu says. "They can recover from the stress response. But these deprived rats can't do that. It's as if they are missing the 'off' switch."

Gould and Mirescu's disruption led to a dramatic decrease in neurogenesis in their rats' adult brains. The temporary trauma of childhood lingered on as a permanent reduction in the number of new cells in the hippocampus. The rat might have forgotten its pain, but its brain never did. "This is a potentially very important topic," Gould says. "When you look at all these different stress disorders, such as PTSD [post-traumatic stress disorder], what you realize is that some people are more vulnerable. They are at increased risk. This might be one of the reasons why."

Subsequent experiments have teased out a host of other ways stress can damage the developing brain. For example, if a pregnant rhesus monkey is forced to endure stressful conditions -- like being startled by a blaring horn for 10 minutes a day -- her children are born with reduced neurogenesis, even if they never actually experience stress once born. This pre-natal trauma, just like trauma endured in infancy, has life-long implications. The offspring of monkeys stressed during pregnancy have smaller hippocampi, suffer from elevated levels of glucocorticoids and display all the classical symptoms of anxiety. Being low in a dominance hierarchy also suppresses neurogenesis. So does living in a bare environment. As a general rule of thumb, a rough life -- especially a rough start to life -- strongly correlates with lower levels of fresh cells.

Gould's research inevitably conjures up comparisons to societal problems. And while Gould, like all rigorous bench scientists, prefers to focus on the strictly scientific aspects of her data -- she is wary of having it twisted for political purposes -- she is also acutely aware of the potential implications of her research.

"Poverty is stress," she says, with more than a little passion in her voice. "One thing that always strikes me is that when you ask Americans why the poor are poor, they always say it's because they don't work hard enough, or don't want to do better. They act like poverty is a character issue."

Gould's work implies that the symptoms of poverty are not simply states of mind; they actually warp the mind. Because neurons are designed to reflect their circumstances, not to rise above them, the monotonous stress of living in a slum literally limits the brain.

In 1989, Gould was a young post-doc working in the lab of Bruce McEwen at Rockefeller University, investigating the effect of stress hormones on rat brains. Chronic stress is devastating to neurons, and Gould's research focused on the death of cells in the hippocampus. (Rakic's declaration that there was no such thing as neurogenesis was still entrenched dogma.) While the idea was exciting -- stress research was a booming field -- the manual labor was brutal. She had to kill her rats at various time points, pluck the tiny brain out of its cranial encasing, cut through the rubbery cortex, slice the hippocampus thinner than a piece of paper, and painstakingly count the dying neurons under a microscope. But while Gould was documenting the brain's degeneration, she happened upon something inexplicable: evidence that the brain also healed itself. "At first, I assumed I must be counting [the neurons] incorrectly," Gould said. "There were just too many cells."

Confused by this anomaly, Gould assumed she was making some simple experimental mistake. She went to the library, hoping to figure out what she was doing wrong. But then, looking through a dusty, 27-year-old science journal buried in the Rockefeller stacks -- this was before the Internet -- Gould found the explanation she needed, though not the one she was looking for.

Beginning in 1962, a researcher at MIT named Joseph Altman published several papers claiming that adult rats, cats, and guinea pigs all formed new neurons. Although Altman used the same technique that Rakic would later use in monkey brains -- the injection of radioactive thymidine -- his results were at first ridiculed, then ignored, and soon forgotten.

As a result, the field of neurogenesis vanished before it began. It would be another decade before Michael Kaplan, at the University of New Mexico, would use an electron microscope to image neurons giving birth. Kaplan discovered new neurons everywhere in the mammalian brain, including the cortex. Yet even with this visual evidence, science remained stubbornly devoted to its doctrine. Kaplan remembers Rakic telling him that "Those [cells] may look like neurons in New Mexico, but they don't in New Haven." Faced with this debilitating criticism, Kaplan, like Altman before him, abandoned the field of neurogenesis.

The Connecticut Mental Health Center is a drab brick building a mile from the Yale campus. After passing through a metal detector and walking by a few armed guards, a visitor enters a working mental institution. The cramped halls are an uneasy mixture of scientists, social workers and confined patients. The lights are bright and sterile.

Ronald Duman, a professor of Psychiatry and Pharmacology at Yale, has a lab on the third floor, opposite a ward for the mentally ill. His lab is isolated from the rest of the building by a set of locked doors. There is the usual clutter of solutions (most of them just salt buffers), the haphazard stacks of science papers and the soothing hum of refrigerators set well below zero. It is here, in these rooms with a view of New Haven, that Duman is trying to completely change the science of depression and antidepressants.

For the last 40 years, medical science has operated on the understanding that depression is caused by a lack of serotonin, a neurotransmitter that plays a role in just about everything the mind does, thinks or feels. The theory is appealingly simple: sadness is simply a shortage of chemical happiness. The typical antidepressant -- like Prozac or Zoloft -- works by increasing the brain's access to serotonin. If depression is a hunger for neurotransmitter, then these little pills fill us up.

Unfortunately, the serotonergic hypothesis is mostly wrong. After all, within hours of swallowing an antidepressant, the brain is flushed with excess serotonin. Yet nothing happens; the patient is no less depressed. Weeks pass drearily by. Finally, after a month or two of this agony, the torpor begins to lift.

But why the delay? If depression is simply a lack of serotonin, shouldn't the effect of antidepressants be immediate? The paradox of the Prozac lag has been the guiding question of Dr. Ronald Duman's career. Duman likes to talk with his feet propped up on his desk. He speaks with the quiet confidence of someone whose ideas once seemed far-fetched but are finally being confirmed.

"Even as a graduate student," Duman says, "I was fascinated by how antidepressants work. I always thought that if I can just figure out their mechanism of action -- and identify why there is this time-delay in their effect -- then I will have had a productive career."

When Duman began studying the molecular basis of antidepressants back in the early '90s, the first thing he realized was that the serotonin hypothesis made no sense. A competing theory, which was supposed to explain the Prozac lag, was that antidepressants increase the number of serotonin receptors. However, that theory was also disproved. "It quickly became clear that serotonin wasn't the whole story," Duman says. "Our working hypothesis at the time just wasn't right."

But if missing serotonin isn't the underlying cause of depression, then how do antidepressants work? As millions will attest, Prozac does do something. Duman's insight, which he began to test gradually, was that a range of antidepressants trigger a molecular pathway that has little, if anything, to do with serotonin. Instead, this chemical cascade leads to an increase in the production of a class of proteins known as trophic factors. Trophic factors make neurons grow. What water and sun do for trees, trophic factors do for brain cells. Depression was like an extended drought: It deprived neurons of the sustenance they need.

Duman's discovery of a link between trophic factors and antidepressant treatments still left the essential question unanswered: What was causing depressed brains to stop producing trophins? Why was the brain hurting itself? It was at this point that Duman's research intersected the work of Robert Sapolsky and Bruce McEwen (Gould's advisor at Rockefeller), who were both studying the effects of stress on the mammalian brain. In an influential set of studies, Sapolsky and McEwen had shown that prolonged bouts of stress were devastating to neurons, especially in the hippocampus. In one particularly poignant experiment, male vervet monkeys bullied by their more dominant peers suffered serious structural brain damage. Furthermore, this neural wound seemed to be caused by a decrease in the same trophic factors that Duman had been studying. From the perspective of the brain, stress and depression produced eerily similar symptoms. They shared a destructive anatomy.

Just as Duman was beginning to see the biochemical connections between trophins, stress, and depression, Gould was starting to document neurogenesis in the hippocampus of the primate brain. Reading Altman's and Kaplan's papers, Gould had realized that her neuron-counting wasn't erroneous: She was just witnessing an ignored fact. The anomaly had been suppressed. But the final piece of the puzzle came when Gould heard about the work of Fernando Nottebohm, who was, coincidentally, also at Rockefeller. Nottebohm, in a series of beautiful studies on birds, had shown that neurogenesis was essential to birdsong. To sing their complex melodies, male birds needed new brain cells. In fact, up to 1% of the neurons in the bird's song center were created anew, every day.

Despite the elegance of Nottebohm's data, his science was marginalized. Bird brains were seen as irrelevant to the mammalian brain. Avian neurogenesis was explained away as an exotic adaptation, a reflection of the fact that flight required a light cerebrum. In The Structure of Scientific Revolutions, the philosopher Thomas Kuhn wrote about how pre-paradigm-shift science excludes its contradictions: "Until the scientist has learned to see nature in a different way, the new fact is not quite a scientific fact at all." Evidence of neurogenesis was excluded from the world of "normal science."

But Gould, motivated by the strangeness of her own observations, connected the dots. She realized that Altman, Kaplan, and Nottebohm had all gathered strong evidence for adult neurogenesis. Faced with this mass of ignored data, Gould began pursuing cell birth in the adult brains of rats.

She would spend the next eight years quantifying endless numbers of radioactive rat hippocampi. But the tedious manual labor paid off. Gould's data would shift the paradigm. More than thirty years had passed since Altman first traced the ascent of new neurons in the adult brain, but neurogenesis had finally become a real science.

After her wearisome post-doc, during which her data was continually criticized, Gould was offered a job at Princeton. The very next year, in a series of landmark papers, Gould began documenting neurogenesis in primates, thus confronting Rakic's data directly. She demonstrated that adult marmosets created new neurons in their brains, especially in the olfactory cortex and the hippocampus. The mind, far from being stagnant, is actually in a constant state of cellular upheaval. By 1999, even Rakic had admitted that neurogenesis was real. He published a paper in Proceedings of the National Academy of Sciences that reported seeing new neurons in the hippocampus of macaques, an Old World primate. The textbooks were rewritten. The brain, Elizabeth Gould had now firmly established, is always giving birth. The self is continually reinventing itself.

Gould's finding, and the work Duman has since built on it, has led to a rash of R&D aimed at stimulating neurogenesis in the brain. Duman had an epiphany reading Gould's papers. He realized that stress and depression didn't simply kill cells; they might also prevent new cells from being born. "I was reading these papers by McEwen and Gould," Duman says, "and they were showing this relationship between stress and the adrenal hormones and neurogenesis. It just sort of all gradually came together." Perhaps the time lag of antidepressants was simply the time it took for new cells to be created.

He immediately set to work to test this hypothesis. In December 2000, Duman's lab published a paper in the Journal of Neuroscience demonstrating that antidepressants increased neurogenesis in the adult rat brain. In fact, the two most effective treatments they looked at (electroconvulsive therapy and fluoxetine, the generic name for Prozac) increased neurogenesis in the hippocampus by 75% and 50%, respectively. Subsequent studies showed that the treatments did this by increasing the exact same molecules, especially trophic factors, that are suppressed by stress.

Duman was surprised by his own data. Fluoxetine, after all, had been invented by accident. (It was originally studied as an antihistamine.) "The idea that Prozac triggers all these different trophic factors that ultimately lead to increased neurogenesis is just totally serendipitous," Duman says. "Pure luck."

But demonstrating a connection between antidepressants and increased neurogenesis was the easy part. It is much more difficult to prove that increased neurogenesis causes the relief provided by antidepressants, and is not just another of the drugs' many side effects. To answer this question, Duman partnered with the lab of René Hen at Columbia.

The research team, led by post-doc Luca Santarelli, effectively erased neurogenesis with low doses of radiation. All other cellular processes remained intact. If the relief from depression were due simply to changes in serotonin, then halting neurogenesis with radiation should have had no effect.

But it did. Hen and Duman's data was unambiguous. If there is no increase in neurogenesis, then antidepressants don't work in rodents. They stay "depressed."

Duman and Hen's work was greeted, as expected, by a howl of criticism. Mice aren't people. The experiment was flawed. The radiation wasn't specific enough. Robert Sapolsky, whose work on stress paved the way for much of Duman's own research, is one of the most incisive skeptics. He argues that neurogenesis researchers have no plausible model for how decreased neurogenesis might cause the symptoms of depression. Why would having a handful fewer new cells in the hippocampus have such an effect? "The more expertise someone has about the hippocampus," Sapolsky wrote in a review in Biological Psychiatry, "the less plausible they find this novel role."

Duman himself is reluctant to discuss the clinical implications of his data. He imagines that neurogenesis in humans is just a single part of the antidepressant effect. "It's a long way from looking at mice in cages to talking about depression in humans. All of these connections are very exciting, but we still don't understand what's actually going on inside the brain. We don't know what the function of all these new cells is, and we have no idea how they might relate, if they do, to the mechanism of action of antidepressants in humans."

Nevertheless, Duman's research is completely changing the way neuroscience imagines depression. Several major drug companies and a host of startups are now frantically trying to invent the next generation of antidepressants (a $12-billion-a-year business). Many expect these future drugs to selectively target the neurogenesis pathway. If these pills are successful, they will be definitive proof that antidepressants work by increasing neurogenesis. Depression is not simply the antagonist of happiness. Instead, despair might be caused by the loss of the brain's essential plasticity. A person's inability to change herself is what drags her down.

Scientists who pursue neurogenesis are audacious by definition (they have staked their careers on a lark), and Dr. Jonas Frisén is no exception. He is probably the only person in Stockholm who wears a cowboy hat. "Super-exciting" is his favorite superlative. (He speaks English fluently, with a singsong Scandinavian accent.) Occasionally, Frisén gives his scientific papers titles lifted from Bob Dylan songs, as in his 2003 paper "Blood on the tracks: a simple twist of fate?" He thanks Dylan in the acknowledgments for "inspiration."

Frisén has never known a brain that wasn't filled with new cells. He became a neuroscientist after med school, just as neurogenesis was becoming a genuine fact. Although he is now a full professor in stem-cell research at the Karolinska Institute, the university in charge of administering the Nobel Prize for Medicine, Frisén began his career as a doctor. When he started medical school, he assumed he would become a brain surgeon, or perhaps a psychiatrist. That, after all, was how you healed the brain back then: either with a scalpel or with words. The few drugs that worked on the mind, like antidepressants, performed their job mysteriously.

Frisén has helped to change that. He has pursued the neurogenesis hypothesis into the realm of clinical medicine, and his rise has been astonishingly swift. In 1998, only three years after becoming a doctor, Frisén was a tenured professor, in charge of a 15-person lab. He has a long list of influential papers to his name, published in frequently cited journals like Cell and Nature.

Frisén first leapt to the attention of the neuroscience community in 1999, when his lab announced that they had identified stem cells in the brain. Stem cells are the source of neurogenesis: It is their mitotic divisions that create new neurons.

Subsequent experiments in Frisén's lab have explored exactly how these neural stem cells are regulated. His ambition is to decipher the complicated and convoluted cascade of proteins that connects the feeling of stress to a decrease in neurogenesis. Only then, Frisén says, "will we be able to create drugs that selectively target neurogenesis. And that is what everybody wants to do. Just think of all the things you can heal."

To achieve this, Frisén founded a biotech firm, NeuroNova, dedicated to pursuing drugs that stimulate neurogenesis. When it launched, neurogenesis was still a controversial concept; founding an entire company on its therapeutic promise seemed like an imprudent gamble. In Frisén's case, the gamble is paying off.

The first disease NeuroNova targeted for treatment was Parkinson's disease. Parkinson's is caused by the death of dopamine-producing neurons, and doctors have repeatedly tried to compensate for this selective cell death by surgically transplanting embryonic brain tissue into patients' brains, often with disappointing results. Frisén realized that the Parkinson's brain was capable, at least in theory, of healing itself. Perhaps increased neurogenesis might compensate for the rapid death of dopamine neurons. Driven by this radical hypothesis, NeuroNova began screening thousands of potential compounds for their effect on neurogenesis.

The results so far have exceeded everyone's expectations. In November 2005, NeuroNova announced that one of their leading drug candidates, cryptically named sNN0031, restored normal bodily movement in rodent models of Parkinson's. Rats that were barely able to walk had their symptoms erased after only five weeks of treatment. Furthermore, initial results suggest that the drug worked by rapidly increasing neurogenesis, thus restoring normal dopamine signaling in the rat brain. "The results really are spectacular," Frisén says.

The next step is to begin testing in primate models of Parkinson's, beginning early this year. If the drug doesn't produce toxic side effects (and that's unlikely, since it is already approved as a human treatment for an unrelated condition), human clinical trials are expected to begin shortly thereafter.

Neurogenesis is an optimistic idea. Though Gould's lab has thoroughly demonstrated the long-term consequences of deprivation and stress, the brain, like skin, can heal itself, and Gould is now beginning to document that recovery, finding hopeful antidotes to neurogenesis-inhibiting injuries. "My hunch is that a lot of these abnormalities [caused by stress] can be fixed in adulthood," she says. "I think that there's a lot of evidence for the resiliency of the brain."

On a cellular level, the scars of stress can literally be healed by learning new things. Genia Kozorovitskiy, an effusive graduate student who began working with Gould as a Princeton undergrad, has studied the effects of various environments on the lab's colony of marmosets. As predicted, putting marmosets in a plain cage, the kind typically used in science labs, led to plain-looking brains. The primates suffered from reduced neurogenesis, and their neurons had fewer interconnections.

However, if these same marmosets were transferred to an enriched enclosure, complete with branches, hidden food, and a rotation of toys, their adult brains began to recover rapidly. In under four weeks, the brains of the deprived marmosets underwent radical renovations at the cellular level. Their neurons demonstrated significant increases in the density of their connections and in the amount of protein in their synapses.

The realization that typical laboratory conditions are debilitating for animals has been one of the accidental discoveries of the neurogenesis field. Nottebohm, for example, only witnessed neurogenesis in birds because he studied them in their actual habitat. Had he kept his finches and canaries in metal cages, depriving them of their natural social context, he would never have observed such an abundance of new cells. The birds would have been too stressed to sing. As Nottebohm has said, "Take nature away and all your insight is in a biological vacuum."

Gould has also become concerned about the details of experimental design. She now stresses the importance, for both rodents and primates, of living in a naturalistic setting. An artificial cage creates artificial data.

(Precisely how artificial the prior data from studies of animals kept in unnaturalistic settings might be remains to be determined. Gould said that studying neurogenesis had led her to "reflect much more on the question of experimental design. This really should be a concern for all neuroscientists.")

The mind is like a muscle: it swells with exercise. Gould's and Kozorovitskiy's work reminds us not only how easy it is to hurt a brain, but how little it takes for that brain to heal. Give a primate just a few extra playthings, and its neurons are capable of escaping the downward cycle of stress.

When Gould first presented at the Society for Neuroscience's annual meeting, there was no such thing as the field whose birth she was there to announce; she was filed away in the "spinal cord rejuvenation" section. Today, she is almost frightened that her field has grown so big: "I do get worried sometimes that neurogenesis has gotten overblown. The science of it still isn't clear. But at the same time I understand why there is so much enthusiasm for the idea. It's a new way of looking at a lot of old problems."

Neurogenesis is a field that doubts itself. Because it has been scorned from the start, its proponents talk most emphatically about what they don't know, about all the essential questions that remain unanswered. Their modesty is accurate: The purpose of all of our new cells remains obscure. No one knows how experiments done in rodents will relate to humans, or whether neurogenesis is just a small part of our mind's essential plasticity.

Nevertheless, it is startling how much has been accomplished since Liz Gould, confused by her counting, went to the library in search of an answer. In 1989, no one would have dared to imagine that the environment we live in can profoundly influence the actual structure of our brain, or that childhood stress might have permanent neurological effects. No scientist could have guessed that Prozac modulates cellular division, or that a Swedish start-up would one day get a rodent brain to repair itself. If neurogenesis has taught us anything, it is that these extraordinary new facts aren't simply answers to an old set of questions. The paradigm has shifted: what Gould and others are working on now is a whole new list of mysteries. And like the newborn neurons in our brain, these scientists are only beginning.

The pharmaceutical company Schering-Plough is excluding African-American patients from the Phase II trial of its new hepatitis C virus (HCV) antiviral drug. Activist groups are denouncing this exclusion as racist. For its part, Schering-Plough argues that it has valid scientific grounds for limiting the research at this stage to other racial groups. Company researchers point out that, for unknown reasons, black people do not respond as well to HCV treatments as do members of other racial groups. One prominent activist, Judith Dillard, told the Newark Star-Ledger, "The bottom line is that African-Americans have been left out of this study to make the drug look good." Which is precisely the point.

In the past, drug trials would generally include members from diverse racial and ethnic groups. If the drug being tested was effective in all groups, then that was great; the company testing it had a potential blockbuster. If, however, some groups in the trial did not respond well to a treatment, then it would appear to be ineffective compared to placebo, and it would not be approved. Eventually researchers began to notice that not all groups respond the same way to the same medicines.

For example, researchers at the pharmaceutical company NitroMed ran clinical trials testing a combination therapy of two drugs on patients suffering from heart failure. The idea of the treatment was that the drugs together would boost nitric oxide in patients' bloodstreams. This would dilate their arteries and veins, allowing for more blood flow. In the company's initial trials NitroMed researchers were disappointed to find that their treatment seemed relatively ineffective. But they went back to the data and noted that black patients appeared to respond well to it.

So NitroMed decided to run a drug trial using just African-American heart patients. It was a success. Black patients taking the combination therapy, called BiDil, had a 43-percent reduction in death and a 39-percent decrease in hospitalization for heart failure. Researchers suspect that BiDil works well in black patients because they tend to have lower average amounts of nitric oxide. This may be the result of an evolutionary adaptation that helped their African ancestors retain scarce salt in tropical climates.

NitroMed encountered opposition when it asked the U.S. Food and Drug Administration to approve BiDil as a treatment for blacks suffering from heart failure. Some objected to approving BiDil for blacks only on the grounds that all racial and ethnic distinctions are invidious. Nevertheless, the FDA, focusing on real patient benefits, brushed those objections aside and approved BiDil for use by African-American patients last year. Resistance to BiDil continues, and sales have not met NitroMed's expectations.

Now, other drugs are turning out to have differential effects in patients from diverse racial and ethnic groups. For example, this month researchers at the University of Alabama at Birmingham are reporting a study of the effects of ramipril, an angiotensin-converting enzyme (ACE) inhibitor used to treat high blood pressure and prevent kidney failure, in black patients. The study found that ramipril treatment is associated with significantly lower rates of diabetes in African Americans with hypertensive kidney disease than conventional treatment with other drugs. Interestingly, an earlier study found that one side effect of ACE inhibitors in many Asian women is a persistent cough so bad that it often drove them to stop taking the medication.

So is the Schering-Plough HCV anti-viral study "racist"? Not really. The researchers have identified a patient subpopulation that they believe is more likely to benefit from the new treatment. If Schering-Plough can demonstrate that the medicine does work for whites, then the company can get it approved for sale by the FDA for that patient population. Admittedly race is a crude biomarker, but it would surely be bowing to political correctness about race to deny patients the benefit of treatments that are more likely to help them. In any case, the use of race to identify and classify patient subpopulations will doubtless be superseded by tests for more selective genetic markers before long.

Journalist Andrew Gumbel, a BiDil critic, argued, "Now, there may well be some genetic predisposition to respond to BiDil that shows up in a large number of people who happen to be African-Americans. But in that case it is the genetic predisposition which should be identified, and tested in all patients regardless of skin color." Clearly that day is coming. Researchers are forging ahead with the HapMap project which is sequencing the genomes of people drawn from a diverse mix of racial and ethnic groups. The goal of the project is to identify "genes that affect health, disease, and individual responses to medications and environmental factors."

As genetic biomarkers associated with various diseases are identified and tests devised to detect those differences in individual patients, race and ethnicity will fall away as a treatment biomarker. Therapies will then go beyond "whites only" or "blacks only," and become personalized: "Joe only" or "Julie only." In the meantime, it would be stupid and immoral not to take advantage of the crude genetic knowledge that racial and ethnic differences provide in making current treatments more effective.

Ronald Bailey is Reason's science correspondent. His book Liberation Biology: The Scientific and Moral Case for the Biotech Revolution is now available from Prometheus Books.

Brain Scans Get at Roots of Prejudice

Liberals' minds lit up differently when thinking about conservatives

By E.J. Mundell, HealthDay Reporter

WEDNESDAY, May 17 (HealthDay News) -- The human brain may have a built-in mechanism for keeping racially or politically distinct groups apart, a new Harvard study suggests.

U.S. researchers observed the brain activity of liberal college students who were asked to think about Christian conservatives. As they did so, a brain region strongly linked to the self and to empathy with others nearly shut down, while another center -- perhaps linked to stereotypic thoughts -- swung into high gear.

"It's as if you think that 'they' don't think like you do -- it's like you believe they are governed by a different set of rules when they think," explained study author Dr. Jason Mitchell, a postdoctoral researcher at Harvard University's department of psychology.

His team published its findings in the May 18 issue of Neuron.

According to Mitchell, social psychologists have long known that people engage different mental criteria when thinking about the possible thoughts and actions of people within their own ethnic, cultural or political group, vs. those outside that group.

The neurological mechanisms governing this process have been much less clear, however.

"Our work is about 'other-ness,' " Mitchell said. "There's this question of 'How do I figure out what's going on inside your head? How do I make inferences about what you are feeling?' "

One theory that's gained credence among social neuroscientists is that people look to themselves when thinking about people they already include in their "group."

"If you and I are similar, then I can use what I know about myself to figure out what you are thinking," said Mitchell, who will become an assistant professor of psychology at Harvard in July.

Previous studies have shown that an area toward the front of the brain, called the medial prefrontal cortex (mPFC), always lights up when people think about themselves or people they consider similar to themselves.

But which part of the mPFC activates when people think about those outside their group?

To find out, the Harvard team hooked up a group of liberal Boston college students to a functional MRI machine, which tracks real-time changes in brain energy use.

They then asked the students to read detailed profiles of two people: one, a liberal-minded person much like themselves, and the other, a fundamentalist Christian conservative with views and activities very different from their own.

"We showed that there are distinct brain regions active in the mPFC," depending on the political stripe of the object in question, Mitchell said.

When the students thought about the liberal person, the mPFC's ventral region -- strongly associated with thoughts about the self -- got very active. But it quieted down when the subject was the Christian conservative -- instead, the mPFC's dorsal area took over.

"The dorsal region is a lot more mysterious," Mitchell said. "It's more engaged when I think about a dissimilar other."

"These data challenge the naive view that we bring the same mental orientation to bear when we think about those who are similar or different from us," said study co-author Dr. Mahzarin Banaji, a professor of social ethics in Harvard's department of psychology. "In particular, it raises questions about who can, objectively speaking, sit in judgment on whom."

Mitchell stressed that scientists currently have no way to tell what kinds of thoughts get processed in the dorsal mPFC. But he suspects it could be responsible for stereotypic thoughts that fail to take similarities between people into account, and instead stress their dissimilarities. So, people may consult the dorsal mPFC when they make snap judgments that assume that the "other" does not think or act like they do.

That's unfortunate, Mitchell said, because when people of different political, racial or cultural backgrounds focus on what they have in common, tensions ease. "If I can find a way to reach common ground -- for example, we both love baseball -- that might be enough to trump our dissimilarities," he said.

Another social neuroscientist who's worked in this field praised the study.

"We already knew much of this from psychology, but what we know now is more about how this is represented in the prefrontal cortex," said Dr. Elizabeth Phelps, a professor of psychology and neural science at New York University in New York City.

She said it's natural that humans lean on stereotypes when thinking about those outside their circle.

"We evolved to be in groups, and somebody who is part of your group is seen as less of a threat than somebody who is not," she said. "And it's a natural thing to assume that people who are not like you are going to have a different set of qualities."

But Phelps also believes that we might be able to override our ingrained "dorsal" response to strangers. "I imagine that you can think compassionately, highlighting similarities between you and another person that will change your interpretation of their actions," she said.

Mitchell agreed. He said a new set of fMRI experiments will soon get under way to see if that neural switch can easily occur. But, he said, there are limits to empathy, of course.

For example, for most people, finding out that Adolf Hitler loved dogs "isn't going to be enough" to mentally allow him into one's group -- even for the most hardened dog-lover, Mitchell said.

"It's not every day that you interact with Hitler, however," he added. "Hopefully, in your everyday life you'll encounter less extreme examples."


Mahzarin Banaji can show how we connect "good" and "bad" with biased attitudes we hold, even if we say we don't. Especially when we say we don't

By Sally Lehrman

Mahzarin Banaji wrestled with a slide projector while senior executives filed grumpily into the screening room at New Line Cinema studios in Los Angeles. They anticipated a pointless November afternoon in which they would be lectured on diversity, including their shortcomings in portraying characters on-screen. "My expectations were of total boredom," admitted Camela Galano, president of New Line International.

By the break, though, executives for New Line and its fellow Time Warner subsidiary HBO were crowding around Banaji, eager for more. The 50-year-old experimental social psychologist from Harvard University had started with a series of images that showed the tricks our minds play. In one video clip, a team passed around a basketball. Of the 45 executives watching, just one noticed the woman who walked slowly right through the game, carrying an open white umbrella. After a few more examples, Banaji had convinced the audience that these kinds of mistakes in perception, or "mind bugs," operate all the time, especially in our unconscious responses to other people.

"It's reasonable and rational," Banaji told them. "And it's an error." We may intend to be fair, she explained, but underneath our awareness, our minds automatically make connections and ignore contradictory information. Sure enough, in a paper quiz, the executives readily associated positive words with their parent firm, Time Warner, but they found it harder to link them to their top competitor, the Walt Disney Company. To their chagrin, they discovered the same tendency to pair positive terms with faces that have European features and negative ones with faces that have African features.

Banaji has been studying these implicit attitudes and their unintended social consequences since the late 1980s, when she first teamed up with Anthony Greenwald of the University of Washington. Greenwald created the very first implicit association test (IAT). He measured how quickly people tapped keys on a computer keyboard in response to prompts on the screen. Would they more easily associate positive words such as "happy" or "peace" with pictures of flowers and negative words such as "rotten" or "ugly" with insects? Predictably, they did. Then he began testing responses to words and images associated with ethnicity and race. Participants' automatic reactions did not match the attitudes they said they held. Among social psychologists seeking investigative instruments, "the IAT just took off in a flash," Greenwald recalls.
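The logic of Greenwald's test can be made concrete with a little arithmetic: the measure of an implicit association is how much faster people respond when two concepts share a response key than when they don't. The sketch below uses invented reaction-time data and a simplified variant of the published scoring procedure (dividing the mean latency difference by the pooled standard deviation); the numbers and variable names are illustrative only, not real IAT data.

```python
# Simplified IAT-style scoring sketch. Latencies are hypothetical
# reaction times in milliseconds for two pairing conditions.
from statistics import mean, stdev

# Block A: e.g., "flower" and "pleasant" share a response key (congruent)
congruent = [612, 580, 655, 598, 634, 601]
# Block B: e.g., "insect" and "pleasant" share a response key (incongruent)
incongruent = [745, 802, 768, 791, 733, 810]

# A D-style score: the mean latency difference scaled by the pooled
# standard deviation of all trials. Larger positive values indicate a
# stronger automatic association in the congruent direction.
pooled_sd = stdev(congruent + incongruent)
d_score = (mean(incongruent) - mean(congruent)) / pooled_sd

print(round(d_score, 2))
```

A participant who genuinely held no differential association would produce latencies that are statistically indistinguishable across the two blocks, and the score would hover near zero; it is the reliably slower responses in the incongruent pairing that the test interprets as implicit bias.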

In the decades since, Banaji, Greenwald and a third collaborator, Brian Nosek of the University of Virginia, have continued to find fresh ways to use the IAT and other tools to probe bias: its nature, where it comes from and how it works. With neuroscientists, for instance, Banaji combined classical fear conditioning, implicit attitude measures and people's own descriptions of interracial dating to study how social groups come to fear one another. Banaji hopes next to work with primatologists to learn about our predisposition as a species to build bias into our perceptions.

Even in people with genuinely egalitarian views, Banaji and her colleagues find that bias is ordinary and ingrained and remains active outside our awareness. When the team realized the power of unconscious attitudes in everyday decision making, she says, "we knew the right thing was to take this to the public." On an IAT Web site (implicit.harvard.edu/implicit/), users can try 14 measures to find out, for instance, whether they automatically favor young over old or prefer thin to overweight. Ten new sections include country-specific IATs, such as Muslim-Hindu and Pakistan-India associations.

At least two million people have tried the tests online so far, and many have offered suggestions. "Once you put it out there, you have to listen to what people are saying--and their ideas are brilliant," Banaji finds. She has begun venturing from the lab to teach people about prejudice, employing humor, intellect and kindness as she alerts investment bankers, media executives and lawyers to the buried biases that lead to mistakes.

As a research tool, the IAT has fed close to 300 papers in fields ranging from neuroscience to marketing. It has also fueled academic challenge and debate, with a few social psychologists accusing the team of liberal bias and overinterpretation of the results. Some critics insist that the test does not really measure unconscious prejudice, only harmless cultural knowledge that differs from true racism. Psychologists argue over the underlying cognitive mechanism. One project found that some people will show bias just because they fear they will.

After finishing a meta-analysis across 61 studies, however, Greenwald and Banaji decided that the validity of the IAT holds. The test predicted judgments, behavior, and physiological reactions linked to stereotyping and prejudice better than expressed attitudes could. "In my own field, subtle prejudice, the IAT has helped crystallize ideas that we've been talking about for years," observes Jack Dovidio of the University of Connecticut. And it is an excellent teaching tool, he adds. When users experience their own discomfort and slowness in making associations, it is hard to ignore the message, agrees Princeton University social psychologist Susan Fiske. "Part of Mahzarin's genius was to see the IAT's potential impact on real-world issues," she points out.

Most recently, Banaji has been trying to discern when race attitudes first form and when conscious beliefs begin to diverge from those below the surface. In child-friendly tests, Banaji discovered that Japanese and white New England children as young as six both openly and implicitly preferred people like themselves. By age 10, their unconscious and conscious attitudes started to split. Despite expressing more egalitarian views as they grew older, people in the two societies continued to show automatic bias against black faces. For Japanese participants, both implicit and explicit attitudes toward European faces became more positive.

Banaji now suspects that if she could test for prejudice in babies, she would find it. But that does not mean that we are born with bias. Certainly we have the mental machinery to generalize and rank across social categories, she says, but culture fills in the necessary information. And humans absorb ideas about racial status early. In a study of 234 Hispanic-Americans, for instance, children compared themselves favorably with African-Americans. But when they used the IAT to compare themselves with white children, the natural preference for their own group fell away. "This work suggests that what we value, what we think is good, is in the air," Banaji remarks. It might develop through things like the warnings that a parent conveys to a child, in a tightening grip on a little hand. As adults, we continue to observe our environment and unintentionally adapt the stereotypes we hold to match.

Fortunately, our brains do not seem permanently stuck on bias. Powerful cultural signals push in one direction, but awareness, close relationships and experience can push back. Banaji, Greenwald and Nosek are starting a nonprofit to help people apply their research. They envision seminars and lectures, followed by "booster shots" of online exercises.

By weaving awareness into our day, Banaji states, we can help our conscious attitudes take charge. It is like exercising regularly and eating healthfully, she explained to the filmmakers. And she suggested that they could build protective measures into their lives and work, much like fluoride in drinking water. "In every movie where you can do things counter to stereotype," she told them, "you are likely to produce change."

Published: July 22, 2006

Thanks to museum dioramas and magazine illustrations, most of us can close our eyes and envision a stylized Neanderthal: the muscular build, the broad, projecting face, the low, rounded skull. Some of us would swear we have actually seen one on the street, though the species died out, of course, some 30,000 years ago. The link between Neanderthals and modern humans has always been visually obvious (we are clearly kin of some kind) but imprecise in most other ways. That may change abruptly. Scientists from the Max Planck Institute for Evolutionary Anthropology and 454 Life Sciences have announced that they plan to reconstruct the Neanderthal genome.

The task is not simple. It means sifting the bits of Neanderthal DNA, brittle with age, from 45,000-year-old bones that have been contaminated by bacteria and by the humans who have handled them. The resulting string of genetic information will then be compared with the genomes of humans and chimpanzees. The hope is that the Neanderthal genome will answer basic questions about those hominids that scientists have been unable to resolve using only the fossil evidence. Could Neanderthals talk? Did they interbreed with humans? These are some of the things the genome might reveal.

But the real question to be answered is this: Which genes in our own genome do we share with Neanderthals and which belong uniquely to us? The answers are likely to change the way we think about ourselves as radically as they do the way we think about Neanderthals. Nearly everything we humans have chosen to know about Homo sapiens over time has emphasized how separate we are from the rest of nature. Genetic research has begun to contribute the precise details (part of the broader evolutionary argument advanced by Darwin) that show how surprisingly unseparate we are. The 1 percent genetic difference between us and chimpanzees still feels, to most of us, like a whopping difference. We'll see how it feels to know our exact relationship to a far closer cousin.

WASHINGTON - If you look into the eyes of someone who is frightened, your brain will pick up on the fear in a split second, well before you can consciously put a name to the emotion, scientists say.

Why such a hair-trigger response to what someone else is feeling? Recognizing a fearful expression on another person's face might save your skin some day, because whatever has spooked your friend might also be a danger to you.

According to a new study in the journal Science, published by AAAS, the nonprofit science society, seeing the enlarged whites of fear-widened eyes is enough to activate a fear-related structure in the brain called the amygdala.

Fear Central

The amygdala, an almond-shaped nugget buried deep in the brain, is an ancient structure found in all vertebrates. Scientists have learned much about the amygdala by studying human patients with damage to this part of the brain. These patients are remarkably normal in most respects. When they look at fearful facial expressions, however, they often have difficulty recognizing that fear is the emotion being expressed. Or they can't distinguish between expressions of mild fright and sheer terror.

Some evidence suggests that faulty signaling by the amygdala may be involved in autism, a disorder that affects, in part, verbal and nonverbal communication abilities. Likewise, too much activity by this structure may play a role in anxiety disorders, according to Paul Whalen of the University of Wisconsin at Madison.

Because fear can help you avoid danger, it's no surprise that the amygdala is "Fear Central" for the brain. It's involved in other emotions too, but it's crucial for feeling fear and priming the body to respond physically to danger.

If an object comes flying at your head, "you don't need to know that it's a brick or a tennis ball, you need to start buckling your knees," said Whalen.

Whalen also thinks the amygdala is important for learning to avoid dangerous situations in the future, which would help explain why it's so highly attuned to fearful and surprised facial expressions. In response to these expressions, the amygdala basically says, "That person has detected an important event close by and I should really find out what it is."

The amygdala on autopilot

Whalen described the amygdala as somewhat like an idiot savant: it's neither versatile nor flexible, but it does its job exceedingly well. It's so good at its task, in fact, that it senses fear in others and responds automatically, perhaps even before parts of the brain involved in conscious thought have figured out what's going on. "People automate thousands of things, like driving. Sensing fear is important enough that we've automated it," Whalen said.

Scientists have been investigating the automatic nature of the amygdala using a technology called functional magnetic resonance imaging, or fMRI, to measure the blood flow in specific regions of the brain.

Whalen and his colleagues, for example, have found that volunteers' amygdalae responded to images of fearful faces, even though the faces appeared too quickly for the volunteers to be aware of them. The reason for showing the images so quickly was to eliminate conscious thoughts that would confuse the results. With this subliminal-image approach, known as "backward masking," the researchers could be sure that the fMRI device wasn't recording the brain activity associated with the volunteers' thinking "Hey, that face looks like my Uncle Ed." As with other fMRI studies, more research will be needed to uncover the actual nature of the brain activity that the device picks up. The notion that the amygdala might be part of a circuit that could bypass detailed processing through the cortex in humans remains controversial.

The whites tell the story

Facial expressions can be extremely complex and subtle. Often it's difficult to tell just what part of a face is sending a particular message, though the eye region of the face seems to be most important.

In order to determine what part of a fearful face was stimulating the amygdala, Whalen and his colleagues considered other facial expressions. Angry faces don't activate the amygdala as strongly, for example, but surprised faces do. The eyes, specifically the whites of the eyes, were indeed the key. A pair of fearful or surprised eyes has larger whites than eyes from other expressions; happy expressions tend to have eyes showing the smallest amount of white, according to Whalen.

In their new Science study, Whalen's team examined whether eye whites alone were sufficient to trigger the amygdala. They used the backward-masking approach on their volunteers, interspersing pictures of fearful or happy eye whites for a split second in between longer images of neutral faces. The eye-white stimuli were derived from actual photographs of actors making afraid and happy faces, developed by Paul Ekman at the University of California at San Francisco.

The volunteers' amygdalae responded only to the fearful eyes, the results suggested. Thus, instead of having to sort through many subtle variations in facial expression to locate a fear signal, the brain can just use information about eye-white size. "We suspected that the amygdala was not all that bright. The eye-white finding offers a simple rule that the amygdala could handle," Whalen said.

While some experts might be tempted to say this automatic fear-sensing ability is something we're born with, Whalen said it's too early to tell for sure. Just because a brain response is automatic doesn't mean it can't be something we learn how to do with experience. Understanding facial expressions is essential for human communication, so Whalen thinks we may learn to do this very early on, even as infants staring into our parents' faces.

Interestingly, most monkeys and apes don't have lighter-colored scleras (or "eye whites"). So if this type of fear sensing is hard-wired in humans, it must have evolved relatively recently.

Robert J. Bunker & John P. Sullivan

Gangs and Iraqi insurgents, militias, and other non-state groups share common origins based in tribalism, and it is therefore expected that they will exhibit similar structures and behaviors. It is our belief that further insight into Iraq's present situation and future prospects may be derived from a perspective utilizing 3rd generation gang (3 GEN Gangs) studies, which present lessons learned from the emergence and spread of gangs within the United States, and other parts of the world, over roughly the last four decades. (1) Basically, from a 3 GEN Gangs perspective, three generations of gangs have been found to exist: turf based, drug based, and mercenary based. The first generation gangs, comprising the vast majority, focus on protecting their turf. These gangs, the least developed of the three generational forms, provide both protection and identity to their members and little more. While some drug dealing is evident, it tends with these gangs to be a sideline activity.

The more evolved second and third generation gangs provide more tangible economic-based and, later, political-based rewards to their members. Far fewer second generation gangs exist in relation to first generation gangs and, in turn, an even smaller number of third generation gangs exist in relation to second generation gangs—at least with regard to gangs found in the Americas. Second generation gangs focus on drug market development and exploitation and are far more sophisticated than turf based gangs. Third generation gangs are the most politicized, international in reach, and sophisticated of the gang generational forms. They will readily engage in mercenary endeavors and actively seek political power and financial gain from their activities. Certain terrorist groups (such as the Red Brigades in Italy), drug cartels, and local warlords all have attributes and organizational structures akin to third generation gangs. (2)

From a 3 GEN Gangs perspective, Iraq has been essentially overrun by 3rd generation gangs and their criminal-soldier equivalents. This is reminiscent of the nightmare scenario for the US already starting to develop in Central and South America (and, to a lesser extent, within the US) with the emergence, growth, and expansion of Mara Salvatrucha (MS-13) and other Maras. In many ways, the ‘Gangs of Iraq’ are a prelude to the ‘Gangs of the Americas’ that we will be increasingly facing in the Western Hemisphere.

Gangs emerge, prosper, and solidify their position as a viable social organizational form in housing projects, neighborhoods, prisons, slums, cities, urban regions, and even entire countries that have undergone (or are undergoing) varying forms of societal failure. The rise of newer forms of tribalism leading to gang emergence may be derived from combinations that include lack of jobs, high levels of poverty and drug abuse, low educational levels, an absence of functional families, along with high levels of crime and lawlessness, including that generated by domestic internal strife, which result in a daily threat of bodily injury. Further, newer forms of tribalism may readily mingle with older pre-existing forms of tribalism based on kinship, clan, and other extended family groupings.

Iraq’s current situation, at least for the middle and southern sections, is far from hopeful. Currently somewhere between 1,000 and 5,000 people are being killed throughout Iraq each month because of sectarian violence, gang wars, and rampant criminal activity. Total post-invasion deaths in Iraq during the American and allied stability and support operations (SASO) period range anywhere from 50,000 to more than 100,000. (3) Societal strife generated by ethnic and religious intolerance—derived from older forms of Middle Eastern tribalism—has resulted in neighborhood ethnic cleansing and the emergence of fortified enclaves. Extra-judicial killings and torture (i.e., street justice) have become the norm, as have home invasion robberies, carjackings, petty theft, assaults, and kidnappings for ransom. Shifting coalitions of former regime loyalists, foreign Jihadi fighters linked to al Qaeda, Shia and Sunni militiamen tied to local clerics, criminal gangs of numerous types, competing Iraqi ministries and even active military and police units, along with foreign operatives promoting the interests of Iran, Hizballah, and Syria, make for a chaotic and ever-changing threat landscape.

Americans, once universally hailed as liberators except by the most hardened former regime loyalists, are now viewed by many Iraqis at best as unwanted foreigners who will hopefully leave soon and at worst as hated crusaders who should be actively singled out, tortured, and killed. The northern, Kurd-dominated region of the country is far more stable and supportive of American forces than the two other sections of Iraq, but it is still not free of sectarian violence in the urban centers, and sabotage, improvised explosive device (IED) attacks, suicide bombings, and assassinations occur throughout the region.

Insight can be gained by juxtaposing strife-ridden Iraq with the US and other regions of the world, specifically Central and South America, with their high levels of gang emergence and activity. Gangs are very much a social cancer within American society and are a by-product of the new form of tribalism that has emerged nationally—possibly as a partial result of the demise of the older melting pot culture and an overemphasis on cultural relativism and heterogeneity.

As a consequence, gangs have spread at an alarming rate throughout American society. In the US, about 58 cities had gangs in 1960. By 1992, the number of cities with gangs had jumped to 769. (4) Luckily, the vast majority of gangs in the US are the relatively less-evolved turf gangs—though second generation drug gangs have been common for decades now, and third generation mercenary gangs, in the current form of the Maras, have just recently started to appear within our borders.

Still, even though most gangs in the US are turf-based gangs, gang-related homicides in our country have probably totaled about 100,000 over the last 20 years. This is an educated guess based on an extrapolation of Los Angeles County gang homicide data, as no national gang homicide statistics exist. (5) The daily attrition rate on America’s streets due to gang violence has either gone unrecognized or is not yet viewed as a national security threat by our federal government. To its credit, however, FBI-led national task forces to contend with the criminal activities and atrocities (e.g. torture and machete attacks) committed by MS-13 and other violent gang members have now been put in place. (6)
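The kind of per-capita extrapolation described above can be sketched as follows. Note that every number in this sketch is a hypothetical placeholder for illustration only; the authors do not publish the underlying Los Angeles County figures, and no actual statistics are reproduced here.

```python
# Sketch of a per-capita extrapolation from one county's gang homicide
# data to a rough national total. ALL figures below are hypothetical
# placeholders, not actual Los Angeles County or US statistics.

county_gang_homicides_per_year = 500        # hypothetical county count
county_population = 10_000_000              # hypothetical county population
comparable_us_population = 100_000_000      # hypothetical at-risk national population
years = 20

# Per-capita rate in the sample county, scaled to the comparable population.
rate = county_gang_homicides_per_year / county_population
national_per_year = rate * comparable_us_population
national_total = national_per_year * years

print(round(national_total))  # 100000 with these placeholder inputs
```

The method's weakness, which the authors acknowledge by calling their figure an educated guess, is that it assumes the sample county's rate is representative of the wider population being scaled to.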

In Central and South America, gangs are now nothing less than out of control. Honduras, El Salvador, Nicaragua, and Guatemala are all being directly threatened by the Maras. (7) In addition, Brazilian society was recently brought to its knees by a powerful prison gang that instigated a limited-duration statewide insurgency that resulted in numerous civilian and law enforcement deaths and temporarily paralyzed the national economy. (8) Mexico, furthermore, is seeing a fusion of its powerful drug cartels and gangs, with an ensuing drug war that is resulting in numerous killings and decapitations—much like the ritual Jihadi beheadings witnessed in Iraq. (9) No statistics or even estimates exist for the number of gang-related homicides that have taken place in Central and South America, but they must surely be on par with, if not far greater than, those estimated to have taken place in America over the last twenty years. If this is the case, gang killings for all of the Americas would now number, at the very least, in the low hundreds of thousands for that time span.

Of direct interest is the continuum of environmental modification represented by gang activities in the US at one extreme and in parts of the Americas and Iraq at the other. Even the most basic level US gangs will attempt to culturally influence and modify their surroundings with drive-by shootings, the use of gang graffiti to mark their territory, and the takeover of selected public spaces. Iraqi gangs and groups, on the other hand, are engaging in full-scale ethnic cleansing, neighborhood takeovers, and direct political control of those individuals living within their sphere of influence. Early intervention can prevent gangs from taking over a neighborhood, city, urban region, or other environment. However, if allowed to evolve and engage in unchecked activities for too long, they promise to replace legitimate political authority. As such, 3 GEN Gangs readily fill the vacuum left by the absence of legitimate authority.

Iraq’s future prospects, given this scenario, are bleak. The domination of Iraq by 3 GEN Gangs and other non-state entities (e.g. insurgent and terrorist groups, the militias of the clerics, and renegade police, military, and private security forces) has destroyed any chance of a free and democratically unified country emerging anytime soon, or possibly even for decades to come. The Iraqi operational environment has now seen the total blurring of crime and war. Perhaps it is now even too far gone to salvage from a traditional policing or military perspective—only time will tell in this regard. (10)

This brings us some measure of concern with regard to future prospects vis-à-vis the gang situation in the Americas. As more and more 3 GEN Gangs begin to emerge, thrive, and expand their networks in the Western Hemisphere, the long-term prospects for large regions of the Americas may very well, at some point, also come into question. Currently, 3 GEN Gangs have already taken control in slums and other urban no-go zones, prisons, and some provinces and territories of various states, including Brazil, Colombia, Honduras, Nicaragua, El Salvador, Guatemala, and Mexico. That such gangs are now starting to emerge within the United States should also give cause for concern. These developments in global context may ultimately cause us to re-examine our policies in the Americas and elevate our concerns over the “Gangs of the Americas” to the same level as that currently afforded the “Gangs of Iraq.”

Notes

1. For an overview and literature survey of this topic see John P. Sullivan and Robert J. Bunker, “Third Generation Gang Studies: An Introduction”, Journal of Gang Research. Forthcoming.

2. A perspective on the Red Brigades as a 3 GEN Gang can be found in Max G. Manwaring, “Gangs and Coups D’ Streets in the New World Disorder: Protean Insurgents in Post-Modern War”, Robert J. Bunker, ed., Criminal-States and Criminal-Soldiers, special double issue of Global Crime, Vol. 7, No. 3-4, August/November 2006; for drug cartel and warlord similarities to 3 GEN Gangs see John P. Sullivan and Robert J. Bunker, “Drug Cartels, Street Gangs, and Warlords”, Robert J. Bunker, ed., Non-State Threats and Future Wars, special issue of Small Wars & Insurgencies, Vol. 13, No. 2, Summer 2002, pp. 40-53.

3. Actual numbers of Iraqis killed each month and total figures are unknown. Sources are unreliable and typically inflated or deflated in order to benefit the policies or agenda of the group providing the statistics. We can safely say that 1,000 to 2,000 people are being killed each month, but the upper limit of 5,000 people is no longer out of the range of possibility given the high levels of violence now generated by the simultaneous insurgency and civil war taking place. Iraqi casualty reports and tracking websites offer total numbers killed upwards of 50,000.

4. Malcolm W. Klein, The American Street Gang, Oxford: Oxford University Press, 1995, pp. 92-95.

5. Los Angeles County gang homicide information provided by Sgt. Wes McBride, Los Angeles Sheriff’s Department, Retired, Safe Streets Bureau.

6. Statement of Chris Swecker, Assistant Director, Criminal Investigative Division, Federal Bureau of Investigation, before the Subcommittee on the Western Hemisphere, House International Relations Committee, April 20, 2005.

7. See Ana Arana, “How the Street Gangs Took Central America,” Foreign Affairs, Vol. 84, No. 3, May/June 2005, pp. 98-110.

8. See Andrew Downie, “Police Are Targeted in Deadly Attacks, Prison Riots in Brazil”, Los Angeles Times, Sunday, May 14, 2006, p. A25; Marcelo Soares and Patrick J. McDonnell, “Inmates Unleash a Torrent of Violence on Brazilian City”, Los Angeles Times, Tuesday, May 16, 2006, pp. A1, A16; and Marcelo Soares and Patrick J. McDonnell, “Death Toll in Sao Paulo Rise to 133; City is Calm”, Los Angeles Times, Wednesday, May 17, 2006, p. A16.

9. See Lisa J. Campbell, “The Use of Beheadings by Fundamentalist Islam”, Robert J. Bunker, ed., Criminal-States and Criminal-Soldiers, special double issue of Global Crime, Vol. 7, No. 3-4, August/November 2006.

10. The US military seems to think that temporarily raising troop levels in order to neutralize Muqtada al-Sadr’s ‘Mahdi Army’ (Shia militia) and possibly launching an offensive into the Sunni stronghold of Al Anbar province in support of the Iraqi government offer the best hopes for victory. This plan is being debated within the government and already criticized in some quarters. See Julian E. Barnes, “Larger U.S. effort in Iraq is proposed”, Los Angeles Times, Wednesday, December 13, 2006, pp. A1, A16; and Maura Reynolds, “Majority support pullout timeline”, Los Angeles Times, Wednesday, December 13, 2006, p. A17.

----------

Dr. Robert J. Bunker is CEO of the Counter-OPFOR Corporation. John P. Sullivan is senior research fellow at the Center for Advanced Studies on Terrorism and a lieutenant with the Los Angeles Sheriff's Department.