The conventional explanation for controversy over climate change emphasizes impediments to public understanding: limited popular knowledge of science, the inability of ordinary citizens to assess technical information, and the resulting widespread use of unreliable cognitive heuristics to assess risk. A large survey of U.S. adults (N = 1540) found little support for this account. On the whole, the most scientifically literate and numerate subjects were slightly less likely, not more, to see climate change as a serious threat than the least scientifically literate and numerate ones. More importantly, greater scientific literacy and numeracy were associated with greater cultural polarization: respondents predisposed by their values to dismiss climate change evidence became more dismissive, and those predisposed by their values to credit such evidence more concerned, as science literacy and numeracy increased. We suggest that this evidence reflects a conflict between two levels of rationality: the individual level, which is characterized by citizens’ effective use of their knowledge and reasoning capacities to form risk perceptions that express their cultural commitments; and the collective level, which is characterized by citizens’ failure to converge on the best available scientific evidence on how to promote their common welfare. Dispelling this “tragedy of the risk-perception commons,” we argue, should be understood as the central aim of the science of science communication.
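The abstract’s central finding is an interaction effect: the gap in perceived risk between cultural groups widens as science literacy and numeracy increase, rather than shrinking as a knowledge-deficit account would predict. Here is a minimal simulation sketch of that pattern; the coefficients and group labels are invented for illustration and are not taken from the study’s data:

```python
import random

random.seed(0)

def perceived_risk(worldview, literacy):
    """Simulated risk perception for one respondent.

    worldview: +1 (egalitarian-communitarian) or -1 (hierarchical-individualist);
    literacy: science literacy/numeracy score in [0, 1].
    Coefficients are hypothetical, chosen only to mimic the reported pattern:
    the worldview effect is amplified by literacy (an interaction term).
    """
    base, lit_effect, world_effect, interaction = 5.0, -0.2, 1.0, 2.5
    noise = random.gauss(0, 0.5)
    return (base + lit_effect * literacy + world_effect * worldview
            + interaction * literacy * worldview + noise)

def group_gap(literacy, n=2000):
    """Mean difference in perceived risk between the two cultural groups."""
    egal = sum(perceived_risk(+1, literacy) for _ in range(n)) / n
    hier = sum(perceived_risk(-1, literacy) for _ in range(n)) / n
    return egal - hier

low_gap = group_gap(literacy=0.1)   # polarization among the least literate
high_gap = group_gap(literacy=0.9)  # polarization among the most literate
print(f"gap at low literacy:  {low_gap:.2f}")
print(f"gap at high literacy: {high_gap:.2f}")
```

Under these made-up coefficients the between-group gap widens sharply from low to high literacy: the signature of cultural polarization, rather than the convergence on a shared estimate that a simple knowledge-deficit model would produce.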


Over at the new Law & Mind Blog, several Harvard Law students have been blogging about a chapter (forthcoming in Ideology, Psychology, and Law, edited by Situationist Contributor Jon Hanson) by Mitchell Callan and Situationist Contributor Aaron Kay. In the second post on the topic (copied below), LLM candidate David Simon discusses legal socialization.

* * *

Imagine you and your neighbor share a fence along a common border, part of which demarcates the boundary between both properties and “the wilderness.” The fence benefits both of you because it keeps out the livestock-killing coyotes. One day, a shared and critical part of the fence collapses onto your property, leaving your yard open to coyotes, who may eat your livestock. Without legal recourse, how might you resolve that dispute? Would you work with your neighbor to help reconstruct the fence? Would the solution be cooperative or adversarial? (For more on the resolution of land disputes without the aid of law, see Robert Ellickson, Order Without Law: How Neighbors Settle Disputes.)

Did Jack McCoy's role on Law & Order influence your perception of people as self-interested?

If we introduce law into the equation–say, by inventing a right that allowed you to sue your neighbor–how would the resolution of that dispute change? Might you claim that your neighbor ought to fix the fence herself, even if an unrepaired fence might harm you?

Mitchell J. Callan & Aaron Kay think that the answer to that last question may be “yes”: the law may in fact alter how we think about situations and how we interact (cooperatively or not) with others. This occurs, they argue, through a process called legal socialization: the process by which exposure to law can reinforce conceptions of individuals as self-interested and competitive. (This occurs, for example, by exposure to popular depictions of the legal system, such as those on Law & Order, as Beth describes in her post.) If you’re curious about how they reach this hypothesis, Becky’s blog post explains it for you. But the basic idea is this: if exposure to certain ideas influences how one thinks and acts, exposure to systems embedded with latent ideas might do the same. Because the U.S. legal system conceptualizes people as self-interested and competitive, exposure to it can reinforce notions of people as competitive and self-interested.

Identifying this phenomenon in everyday events is a bit more difficult than it sounds–largely because legal socialization seems to be gradual rather than punctuated. Nevertheless, there are instances where we can view the law as reinforcing certain conceptions of the individual.

SB-1070

Humor is one way to defuse conceptions of people as self-interested and competitive.

Take, for example, Arizona’s recent enactment of SB-1070, one of the strictest immigration laws in recent history. Among other things, the law criminalized both attempts by illegal immigrants to work, and attempts by others to solicit work from illegal immigrants (Sec. 13-2928). That provision alone seems to have “competitive” or “self-interested” overtones. In some ways, though, the law might be a product–rather than an example–of legal socialization. That is, the law represents how people perceive others are likely to act. In this case, the law conceptualizes an “outgroup” (immigrants) as a competitive threat, and seeks to neutralize that threat by preserving the “ingroup’s” (Arizona residents’) interests. The law is both shaped by, and an embodiment of, visions of individuals as competitive and pursuing selfish aims.

What’s troubling is not so much the characterization of the individual, but the effect it has on social behavior and thinking. It may, for example, engender actual competitiveness where none existed before; that is, it may decompose social relations rather than strengthen them. Interestingly, some groups seem to have picked up on this dynamic already. When Arizona signed the bill into law,

the Mexican American Legal Defense and Educational Fund . . . predict[ed] that the law would create “a spiral of pervasive fear, community distrust, increased crime and costly litigation, with nationwide repercussions.”

Though potentially exaggerated, those are the kinds of results we would expect given Callan & Kay’s findings. A law that reinforces stereotypes of the individual as competitive and self-interested will strengthen and propagate that stereotype. That, in turn, can have anti-social effects: less cooperation, more stratification, and enhanced hostility between “groups.”

School Board Meetings & the Open Access Law

For those unfamiliar with school board meetings, they can be nasty affairs. Disputes between the board and superintendents often are bitter–the board is an “outsider” to the superintendent, who “runs the school.” Disputes also can arise between the public–which wants to know what the board is discussing–and the board, which wants to run the meeting in a particular way. The latter dispute recently arose in a school board meeting in Oklahoma. The superintendent of schools in the district apparently has a knack for “poor behavior,” such as yelling. But beyond incivility, the author of the hyperlinked editorial is concerned with the law:

No one can force the grown-ups to act as such. But they can and should be compelled to follow the law.

Khrushchev would have fit in well with some members of a recent Oklahoma school board meeting.

What law, you ask? Laws ensuring public access to meetings of public bodies–so-called “open access laws.” The overarching purpose of such laws is to prevent secrecy among public bodies. Because public schools are accountable to, well, the public, most states have open access laws that govern their board meetings. In Illinois, for example, the Open Meetings Act requires most public school board meetings to be open to the public. When the board conducts closed or private meetings, it must videotape or record them.

Back to the author’s request: “follow the law.” Now, requiring people to follow the law is not an unreasonable request–and, indeed, it may be just the right thing to do. But notice how, in this case, the law itself is being used to quell what the author sees as the school board’s self-interested behavior. What, exactly, was that behavior? The author thinks it was a deliberate attempt to avoid disclosing issues to the public:

Much of the initial ruckus at the meeting involved three potential employees Barresi recommended for hire. But none of the names nor the positions they were to fill were listed on the agenda. Instead, the meeting’s posted agenda listed a “Report on Department personnel changes” as an item on the consent agenda.

I don’t know the board’s motives for issuing such a vague description. Maybe it was trying to be sneaky, or maybe it was just issuing a general topic to be discussed at the meeting. That’s not the point. What matters here is that the author’s sense of the activity (some kind of impropriety) is shaped by the law–more specifically, the author sees a person whose acts conform to the image the law projects.

Let’s see exactly how that is so. The law here is a mechanism to prevent the board from pursuing self-interested ends. Indeed, the law “sets up” this conclusion. The law treats school boards and administrators as likely to meet in secret–as pursuing self-interest. By trying to prevent certain behavior, the law makes assumptions about how people will behave; in this case, it assumes they will behave in a self-interested fashion.

That assumption may or may not be accurate, but it certainly colors the author’s analysis of the issue. The author assumes–as does the law–that the board was providing vague descriptions because it had self-interested ends. Why? Because that is exactly what the law assumes. The author’s assessment may be correct, but the law’s conception of the individual certainly influences how one thinks about the situation. One might say, “Well, if the law says you have to do X because, if you don’t, you’re probably pursuing self-interested aims, then you likely are pursuing self-interested aims when you fail to do X.”

Pushing for people to follow “the law” may not be a bad thing, but when the law leads to perceptions about people’s nature, it can have unintended and potentially harmful consequences. The law presumes the school board will act in self-interested ways–and that may have socialized the author to view the board’s actions as violating the law (i.e., as self-interested) when they are not. Might the situation have been different if no law had existed? Would different norms have developed? Would the situation have been viewed the same way?

Callan & Kay aren’t just concerned with cognition, though. Recall that they hypothesize that our conception of the law can actually influence our behavior. Maybe this is just such an instance. Instead of working with the board in a cooperative way, the author seeks legal recourse simply because the law leads the author to see the board’s behavior as self-interested.

A Question

I want to close with some thoughts on the legal socialization hypothesis, which I find interesting. I can’t help but wonder whether the law’s default preference in many cases is necessitated by actual self-interested or competitive behavior. It is often difficult to separate laws that merely socialize people into competitive and self-interested conceptions of human behavior from laws that actually protect people from such behavior. Callan & Kay do note that the influence of the law on cognition and social relations is likely individual-relative–more research needs to be done. Even when controlling for such individual differences, though, I find the distinction a bit fuzzy. Do securities laws (false advertising laws, trademark laws, etc.), for example, protect people from actual (anti)competitive and self-interested behavior, or do they merely reinforce such conceptions of human nature? The answer in that case is probably both, which is not particularly satisfying.

On Monday, October 18th, the HLS Student Association for Law and Mind Sciences (SALMS) and the American Constitution Society (ACS) are hosting a talk by Yale professor Dan Kahan entitled “The Laws of Cultural Cognition, and the Cultural Cognition of Law.”

Professor Kahan is the Elizabeth K. Dollard Professor of Law at Yale Law School. A graduate of Harvard Law School, Professor Kahan clerked for Justice Thurgood Marshall and for Judge Harry T. Edwards of the United States Court of Appeals for the District of Columbia Circuit.

Professor Kahan is well-known for his work in the area of cultural cognition, or the study of how people assess the degree of risk in a given situation based on their culturally engrained concepts of good behavior. He leads the Cultural Cognition Project, which researches the history and impact of this phenomenon along with its mechanistic underpinnings. His work has had a profound impact upon criminal legal scholarship, particularly in relation to his theory that shame-based penalties should be implemented in criminal law.

Professor Kahan will be speaking in Austin North. Lunch will be provided!


A common theme of The Situationist and of the scholarship of Situationist Contributors is the “choice myth” in western culture. Here is a video of Professor Renata Salecl, who employs sociology, psychoanalysis, and philosophy to offer a slightly different version of that familiar theme.

From Ted Talks: “[Situationist friend] Sheena Iyengar studies how we make choices — and how we feel about the choices we make. At TEDGlobal, she talks about both trivial choices (Coke v. Pepsi) and profound ones, and shares her groundbreaking research that has uncovered some surprising attitudes about our decisions.”

This article, the first of a multipart series, argues that a major rift, based on our attributional tendencies, runs through many of our policy debates: the less accurate dispositionist approach, which explains outcomes and behavior with reference to people’s dispositions (i.e., personalities, preferences, and the like), and the more accurate situationist approach, which bases attributions of causation and responsibility on unseen influences within us and around us. Given that situationism offers a truer picture of our world than the alternative, and given that attributional tendencies are largely the result of elements in our situations, identifying the relevant elements should be a major priority of legal scholars. With such information, legal academics could predict which individuals, institutions, and societies are most likely to produce situationist ideas – in other words, which have the greatest potential for developing the accurate attributions of human behavior that are so important to law.

Joe Keohane wrote an outstanding article, “How Facts Backfire: Researchers discover a surprising threat to democracy: our brains,” for the Boston Globe last week. Here are some excerpts.

* * *

It’s one of the great assumptions underlying modern democracy that an informed citizenry is preferable to an uninformed one. “Whenever the people are well-informed, they can be trusted with their own government,” Thomas Jefferson wrote in 1789. . . . Mankind may be crooked timber, as Kant put it, uniquely susceptible to ignorance and misinformation, but it’s an article of faith that knowledge is the best remedy. If people are furnished with the facts, they will be clearer thinkers and better citizens. If they are ignorant, facts will enlighten them. If they are mistaken, facts will set them straight.

In the end, truth will out. Won’t it?

Maybe not. Recently, a few political scientists have begun to discover a human tendency deeply discouraging to anyone with faith in the power of information. It’s this: Facts don’t necessarily have the power to change our minds. In fact, quite the opposite. In a series of studies in 2005 and 2006, researchers at the University of Michigan found that when misinformed people, particularly political partisans, were exposed to corrected facts in news stories, they rarely changed their minds. In fact, they often became even more strongly set in their beliefs. Facts, they found, were not curing misinformation. Like an underpowered antibiotic, facts could actually make misinformation even stronger.

This bodes ill for a democracy, because most voters — the people making decisions about how the country runs — aren’t blank slates. They already have beliefs, and a set of facts lodged in their minds. The problem is that sometimes the things they think they know are objectively, provably false. And in the presence of the correct information, such people react very, very differently than the merely uninformed. Instead of changing their minds to reflect the correct information, they can entrench themselves even deeper.

“The general idea is that it’s absolutely threatening to admit you’re wrong,” says political scientist Brendan Nyhan, the lead researcher on the Michigan study. The phenomenon — known as “backfire” — is “a natural defense mechanism to avoid that cognitive dissonance.”

These findings open a long-running argument about the political ignorance of American citizens to broader questions about the interplay between the nature of human intelligence and our democratic ideals. Most of us like to believe that our opinions have been formed over time by careful, rational consideration of facts and ideas, and that the decisions based on those opinions, therefore, have the ring of soundness and intelligence. In reality, we often base our opinions on our beliefs, which can have an uneasy relationship with facts. And rather than facts driving beliefs, our beliefs can dictate the facts we choose to accept. They can cause us to twist facts so they fit better with our preconceived notions. Worst of all, they can lead us to uncritically accept bad information just because it reinforces our beliefs. This reinforcement makes us more confident we’re right, and even less likely to listen to any new information. And then we vote.

This effect is only heightened by the information glut, which offers — alongside an unprecedented amount of good information — endless rumors, misinformation, and questionable variations on the truth. In other words, it’s never been easier for people to be wrong, and at the same time feel more certain that they’re right.

“Area Man Passionate Defender Of What He Imagines Constitution To Be,” read a recent Onion headline. Like the best satire, this nasty little gem elicits a laugh, which is then promptly muffled by the queasy feeling of recognition. The last five decades of political science have definitively established that most modern-day Americans lack even a basic understanding of how their country works. In 1996, Princeton University’s Larry M. Bartels argued, “the political ignorance of the American voter is one of the best documented data in political science.”

On its own, this might not be a problem: People ignorant of the facts could simply choose not to vote. But instead, it appears that misinformed people often have some of the strongest political opinions. A striking recent example was a study done in the year 2000, led by James Kuklinski of the University of Illinois at Urbana-Champaign. He led an influential experiment in which more than 1,000 Illinois residents were asked questions about welfare — the percentage of the federal budget spent on welfare, the number of people enrolled in the program, the percentage of enrollees who are black, and the average payout. More than half indicated that they were confident that their answers were correct — but in fact only 3 percent of the people got more than half of the questions right. Perhaps more disturbingly, the ones who were the most confident they were right were by and large the ones who knew the least about the topic. (Most of these participants expressed views that suggested a strong antiwelfare bias.)

Studies by other researchers have observed similar phenomena when addressing education, health care reform, immigration, affirmative action, gun control, and other issues that tend to attract strong partisan opinion. Kuklinski calls this sort of response the “I know I’m right” syndrome, and considers it a “potentially formidable problem” in a democratic system. “It implies not only that most people will resist correcting their factual beliefs,” he wrote, “but also that the very people who most need to correct them will be least likely to do so.”

What’s going on? How can we have things so wrong, and be so sure that we’re right?

* * *

To read the rest of the article, including Keohane’s answers to those questions, click here.

In 1994, Congress passed legislation stating that Presidents elected to office after January 1, 1997, would no longer receive lifetime Secret Service protection. Such legislation was unremarkable until the first Black President – Barack Obama – was elected. From the outset of his campaign until today, and likely beyond, President Obama has received unprecedented death threats. These threats, we argue, are at least in part tied to critics’ and commentators’ use of symbols, pictures, and words to characterize Obama as a primate, in various forms – including cartoonist Sean Delonas’ controversial New York Post cartoon. Against this backdrop and looking to history, cultural critique, federal case law, as well as cognitive and social psychology, we explore how the use of seemingly harmless imagery may still be racially laden and evoke violence against its object.

Over the past few months, polls show that fewer Americans say they believe humans are making the planet dangerously warmer, and that is despite a raft of scientific reports that say otherwise. And that puzzles many climate scientists, but not social scientists.

As NPR’s Christopher Joyce reports, some of their research suggests that when people encounter new information, facts may not be as important as beliefs.

CHRISTOPHER JOYCE: The divide between climate believers and disbelievers can be as wide as a West Virginia valley, and that’s where two of them squared off recently at a public debate on West Virginia Public Radio.

Mr. DON BLANKENSHIP (CEO, Massey Energy Company): It’s a hoax because clearly anyone that says that they know what the temperature of the earth is going to be in 2020 or 2030 needs to be put in an asylum because they don’t.

Mr. ROBERT KENNEDY JR. (Environmentalist): Ninety-eight percent of the research, climatologists in the world say that global warming is real, that its impacts are going to be catastrophic. There are 2 percent who disagree with that. I have a choice of believing the 98 percent or the 2 percent.

JOYCE: For social scientist and lawyer Don Braman, it’s not surprising that two people can disagree so strongly over science. Braman is on the faculty at George Washington University and a part of a research group called Cultural Cognition.

Professor DON BRAMAN (George Washington University Law School/The Cultural Cognition Project): People tend to conform their factual beliefs to ones that are consistent with their cultural outlook, their worldview.

JOYCE: Braman’s group has conducted several experiments to back that up. First, they ask people to describe their cultural beliefs. Some embrace new technology, authority and free enterprise – the so-called individualistic group. Others are suspicious of authority, or of commerce and industry. Braman calls them communitarians.

In one experiment, Braman then queried his subjects about something unfamiliar: nanotechnology, new research into tiny, molecule-sized objects that could lead to novel products.

Prof. BRAMAN: These two groups start to polarize as soon as you start to describe some of the potential benefits and harms.

JOYCE: The individualists tended to like nanotechnology; the communitarians generally viewed it as dangerous – all based on the same information.

Prof. BRAMAN: It doesn’t matter whether you show them negative or positive information, they reject the information that is contrary to what they would like to believe, and they glom on to the positive information.

JOYCE: So what’s going on here?

Professor DAN KAHAN (Yale University Law School/The Cultural Cognition Project): Basically, the reason that people react in a close-minded way to information is that the implications of it threaten their values.

JOYCE: That’s Dan Kahan, a law professor at Yale University and a member of Cultural Cognition. He says people test new information against their preexisting view of how the world should work.

Prof. KAHAN: If the implication, the outcome, can affirm your values, you think about it in a much more open-minded way.

JOYCE: And if the information doesn’t, you tend to reject it.

In another experiment, people read a United Nations study about the dangers of global warming. Then the researchers said, okay, the solution is to regulate pollution from industry. Many in the individualistic group then rejected the climate science. But when more nuclear power was offered as the solution…

Prof. BRAMAN: They said, you know, it turns out global warming is a serious problem.

JOYCE: And for the communitarians, climate danger seemed less serious if the only solution was more nuclear power.

Then there’s the Messenger Effect. In an experiment dealing with the dangers versus benefits of a vaccine, the scientific information came from several people. They ranged from a rumpled and bearded expert to a crisply business-like one. And people tended to believe the message that came from the person they considered to be more like them – which brings us back to climate.

Prof. BRAMAN: If you have people who are skeptical of the data on climate change, you can bet that Al Gore is not going to convince them at this point.

Why do members of the public disagree – sharply and persistently – about facts on which expert scientists largely agree? We designed a study to test a distinctive explanation: the cultural cognition of scientific consensus. The “cultural cognition of risk” refers to the tendency of individuals to form risk perceptions that are congenial to their values. The study presents both correlational and experimental evidence confirming that cultural cognition shapes individuals’ beliefs about the existence of scientific consensus, and the process by which they form such beliefs, relating to climate change, the disposal of nuclear wastes, and the effect of permitting concealed possession of handguns. The implications of this dynamic for science communication and public policy-making are discussed.

As part of my new commitment to posting more of my work on SSRN, I’ve just put up another forthcoming article that may be of interest to some readers. It offers a law and mind sciences (situationist / critical realist) perspective on Yale Law School’s Cultural Cognition Project (CCP) using a great recent article by CCP scholars Dan M. Kahan, David A. Hoffman, and Donald Braman as a case study. That article has been referenced in two recent New York Times pieces (including one that listed it as among the most important ideas of 2009).

If your interest is not yet piqued, I should also mention that the new SSRN post also has police chases and scandalous pictures of Angelina Jolie . . . or, well, at least one of those things.

The Cultural Cognition Project (CCP) at Yale Law School and the Project on Law and Mind Sciences (PLMS) at Harvard Law School draw on similar research and share a similar goal of uncovering the dynamics that shape risk perceptions, policy beliefs, and attributions underlying our laws and legal theories. Nonetheless, the projects have failed to engage one another in a substantial way. This Article attempts to bridge that gap by demonstrating how the situationist approach taken by PLMS scholars can crucially enrich CCP scholarship. As a demonstration, the Article engages the case of Scott v. Harris, 127 S. Ct. 1769 (2007), the subject of a recent CCP study.

In Scott, the Supreme Court relied on a videotape of a high-speed police chase to conclude that an officer did not commit a Fourth Amendment violation when he purposefully caused the suspect’s car to crash by ramming the vehicle’s back bumper. Challenging the Court’s conclusion that “no reasonable juror” could see the motorist’s evasion of the police as anything but extremely dangerous, CCP Professors Dan M. Kahan, David A. Hoffman, and Donald Braman showed the video to 1,350 people and discovered clear rifts in perception based on ideological, cultural, and other lines.

Despite the valuable contribution of their research in uncovering the influence of identity-defining characteristics and commitments on perceptions, Kahan, Hoffman, and Braman failed to engage what may well be a more critical dynamic shaping the cognitions of their subjects and the members of the Supreme Court in Scott: the role of situational frames in guiding attributions of causation, responsibility, and blame. As social psychologists have documented—and as PLMS scholars have emphasized—while identities, experiences, and values matter, their operation and impact is not stable across cognitive tasks, but rather is contingent on the way in which information is presented and the broader context in which it is processed.

In large part, the Scott video is treated—both by the Supreme Court and by Kahan, Hoffman, and Braman—as if it presents a neutral, unfiltered account of events. This is incorrect. Studies of viewpoint bias suggest that the fact that the video offers the visual and oral perspective of a police officer participating in the chase—rather than that of the suspect or a neutral third party—likely had a significant effect on both the experimental population and members of the Court.

Had the Supreme Court watched a different video of the exact same events taken from inside the suspect’s car, this case may never have been taken away from the jury. Any discussion of judicial “legitimacy”—in both the descriptive and normative sense—must start here. The real danger for our justice system may not ultimately be the “visible fiction” of a suspect’s version of events, as Justice Scalia would have it, or cognitive illiberalism, as Kahan, Hoffman, and Braman would have it, but the invisible influence of situational frames systematically prejudicing those who come before our courts.

Situationist Contributor Dan Kahan was recently interviewed for the National Science Foundation website. In the interview, which you can watch in the video below, Kahan discusses how people’s values shape perceptions of the HPV vaccine. Here’s the abstract.

* * *

The “cultural cognition thesis” argues that individuals form risk perceptions based on often-contested personal views about what makes a good society. Now, Yale University law professor Dan Kahan and his colleagues reveal how people’s values shape their perceptions of one of the most hotly debated health care proposals in recent years: vaccinating elementary-school girls, ages 11-12, against human papillomavirus (HPV), a widespread sexually transmitted disease.

The Rosenhan experiment was a famous experiment into the validity of psychiatric diagnosis conducted by psychologist David Rosenhan in 1973. It was published in the journal Science under the title “On being sane in insane places.” The study is considered an important and influential criticism of psychiatric diagnosis.

Rosenhan’s study consisted of two parts. The first part involved the use of healthy associates or “pseudopatients” who briefly simulated auditory hallucinations in an attempt to gain admission to 12 different psychiatric hospitals in five states across the United States. All were admitted and diagnosed with psychiatric disorders. After admission, the pseudopatients acted normally and told staff that they felt fine and had not experienced any more hallucinations. Hospital staff failed to detect a single pseudopatient, and instead believed that all of the pseudopatients exhibited symptoms of ongoing mental illness. Several were confined for months. All were forced to admit to having a mental illness and agree to take antipsychotic drugs as a condition of their release.

The second part involved asking staff at a psychiatric hospital to detect non-existent “fake” patients. The staff falsely identified large numbers of genuine patients as impostors.

The study concluded, “It is clear that we cannot distinguish the sane from the insane in psychiatric hospitals” and also illustrated the dangers of depersonalization and labeling in psychiatric institutions. It suggested that the use of community mental health facilities which concentrated on specific problems and behaviors rather than psychiatric labels might be a solution and recommended education to make psychiatric workers more aware of the social psychology of their facilities.

Claudia Hammond revisits . . . David Rosenhan’s Pseudo-Patient Study, gaining access to his unpublished personal papers to discover how it changed our understanding of the human mind, and its impact 40 years on.

After Rosenhan published On Being Sane in Insane Places in the journal Science in 1973, the psychiatric profession went on the defensive to protest its diagnostic competence. The study struck at the heart of their attempts to medicalise psychiatry and be accepted as proper doctors. Its impact was felt when the third edition of the profession’s bible, the Diagnostic and Statistical Manual, came out in 1980: changes had been made which brought more rigour to the diagnostic process.

However, as Claudia discovers from Rosenhan’s unpublished papers, for him the study was less an experiment of diagnostic efficacy than an anthropological survey of psychiatric wards. In a chapter of the book he never finished, she reads his poignant account of his own first admission, and his sense that “minimal attention was paid to my presence, as if I hardly existed.”

Now suffering ill health and unable to speak, Rosenhan delegates his friends and colleagues, Stanford University professor of social psychology Lee Ross and clinical psychologist Florence Keller, to speak to Claudia and show her the box of previously unpublished material which throws new light on one of the most famous and controversial psychology experiments.

Americans, particularly if they are of a certain leftward-leaning, college-educated type, worry about our country’s blunders into other cultures. In some circles, it is easy to make friends with a rousing rant about the McDonald’s near Tiananmen Square, the Nike factory in Malaysia or the latest blowback from our political or military interventions abroad. For all our self-recrimination, however, we may have yet to face one of the most remarkable effects of American-led globalization. We have for many years been busily engaged in a grand project of Americanizing the world’s understanding of mental health and illness. We may indeed be far along in homogenizing the way the world goes mad.

This unnerving possibility springs from recent research by a loose group of anthropologists and cross-cultural psychiatrists. Swimming against the biomedical currents of the time, they have argued that mental illnesses are not discrete entities like the polio virus with their own natural histories. These researchers have amassed an impressive body of evidence suggesting that mental illnesses have never been the same the world over (either in prevalence or in form) but are inevitably sparked and shaped by the ethos of particular times and places. In some Southeast Asian cultures, men have been known to experience what is called amok, an episode of murderous rage followed by amnesia; men in the region also suffer from koro, which is characterized by the debilitating certainty that their genitals are retracting into their bodies. Across the fertile crescent of the Middle East there is zar, a condition related to spirit-possession beliefs that brings forth dissociative episodes of laughing, shouting and singing.

The diversity that can be found across cultures can be seen across time as well. In his book “Mad Travelers,” the philosopher Ian Hacking documents the fleeting appearance in the 1890s of a fugue state in which European men would walk in a trance for hundreds of miles with no knowledge of their identities. The hysterical-leg paralysis that afflicted thousands of middle-class women in the late 19th century not only gives us a visceral understanding of the restrictions set on women’s social roles at the time but can also be seen from this distance as a social role itself — the troubled unconscious minds of a certain class of women speaking the idiom of distress of their time.

“We might think of the culture as possessing a ‘symptom repertoire’ — a range of physical symptoms available to the unconscious mind for the physical expression of psychological conflict,” Edward Shorter, a medical historian at the University of Toronto, wrote in his book “Paralysis: The Rise and Fall of a ‘Hysterical’ Symptom.” “In some epochs, convulsions, the sudden inability to speak or terrible leg pain may loom prominently in the repertoire. In other epochs patients may draw chiefly upon such symptoms as abdominal pain, false estimates of body weight and enervating weakness as metaphors for conveying psychic stress.”

In any given era, those who minister to the mentally ill — doctors or shamans or priests — inadvertently help to select which symptoms will be recognized as legitimate. Because the troubled mind has been influenced by healers of diverse religious and scientific persuasions, the forms of madness from one place and time often look remarkably different from the forms of madness in another.

That is until recently.

For more than a generation now, we in the West have aggressively spread our modern knowledge of mental illness around the world. We have done this in the name of science, believing that our approaches reveal the biological basis of psychic suffering and dispel prescientific myths and harmful stigma. There is now good evidence to suggest that in the process of teaching the rest of the world to think like us, we’ve been exporting our Western “symptom repertoire” as well. That is, we’ve been changing not only the treatments but also the expression of mental illness in other cultures. Indeed, a handful of mental-health disorders — depression, post-traumatic stress disorder and anorexia among them — now appear to be spreading across cultures with the speed of contagious diseases. These symptom clusters are becoming the lingua franca of human suffering, replacing indigenous forms of mental illness.

* * *

What is being missed, . . .[some doctors] have suggested, is a deep understanding of how the expectations and beliefs of the sufferer shape their suffering. “Culture shapes the way general psychopathology is going to be translated partially or completely into specific psychopathology. . . . When[, for example,] there is a cultural atmosphere in which professionals, the media, schools, doctors, psychologists all recognize and endorse and talk about and publicize eating disorders, then people can be triggered to consciously or unconsciously pick eating-disorder pathology as a way to express that conflict.”

* * *

THE IDEA THAT our Western conception of mental health and illness might be shaping the expression of illnesses in other cultures is rarely discussed in the professional literature. Many modern mental-health practitioners and researchers believe that the scientific standing of our drugs, our illness categories and our theories of the mind have put the field beyond the influence of endlessly shifting cultural trends and beliefs. After all, we now have machines that can literally watch the mind at work. We can change the chemistry of the brain in a variety of interesting ways and we can examine DNA sequences for abnormalities. The assumption is that these remarkable scientific advances have allowed modern-day practitioners to avoid the blind spots and cultural biases of their predecessors.

Modern-day mental-health practitioners often look back at previous generations of psychiatrists and psychologists with a thinly veiled pity, wondering how they could have been so swept away by the cultural currents of their time. The confident pronouncements of Victorian-era doctors regarding the epidemic of hysterical women are now dismissed as cultural artifacts. Similarly, illnesses found only in other cultures are often treated like carnival sideshows. . . .

* * *

Of course, we can become psychologically unhinged for many reasons that are common to all, like personal traumas, social upheavals or biochemical imbalances in our brains. Modern science has begun to reveal these causes. Whatever the trigger, however, the ill individual and those around him invariably rely on cultural beliefs and stories to understand what is happening. . . . It means that a mental illness is an illness of the mind and cannot be understood without understanding the ideas, habits and predispositions — the idiosyncratic cultural trappings — of the mind that is its host.

* * *

CROSS-CULTURAL psychiatrists have pointed out that the mental-health ideas we export to the world are rarely unadulterated scientific facts and never culturally neutral. “Western mental-health discourse introduces core components of Western culture, including a theory of human nature, a definition of personhood, a sense of time and memory and a source of moral authority. None of this is universal,” Derek Summerfield of the Institute of Psychiatry in London observes. He has also written: “The problem is the overall thrust that comes from being at the heart of the one globalizing culture. It is as if one version of human nature is being presented as definitive, and one set of ideas about pain and suffering. . . . There is no one definitive psychology.”

* * *

* * *

To read the entirety of Watters’s fascinating article (including an illuminating discussion of how the “brain disease” concept of mental illness may have increased, not decreased, the stigma of mental illness and thus hurt the very people it was supposed to help), click here. (Thanks to Situationist friend Andrew Perlman for suggesting this article to us.)

Situationist Contributor Dan Kahan posted his recent paper, “Cultural Cognition as a Conception of the Cultural Theory of Risk,” on SSRN. Here’s the abstract.

* * *

Cultural cognition refers to the tendency of individuals to form beliefs about societal dangers that reflect and reinforce their commitments to particular visions of the ideal society. Cultural cognition is one of a variety of approaches designed to empirically test the cultural theory of risk associated with Mary Douglas and Aaron Wildavsky. This commentary discusses the distinctive features of cultural cognition as a conception of cultural theory, including its cultural worldview measures; its emphasis on social psychological mechanisms that connect individuals’ risk perceptions to their cultural outlooks; and its practical goal of enabling self-conscious management of popular risk perceptions in the interest of promoting scientifically sound public policies that are congenial to persons of diverse outlooks.

John F. McCarthy, Carl A. Scheraga, and Donald E. Gibson recently posted their interesting paper, titled “Culture, Cognition and Conflict: How Neuroscience Can Help to Explain Cultural Differences in Negotiation and Conflict Management,” on SSRN. Here’s the abstract.

* * *

In negotiation and conflict management situations, understanding cultural patterns and tendencies is critical to whether a negotiation will accomplish the goals of the involved parties. While differences in cultural norms have been identified in the current literature, what is needed is a more fine-grained approach that examines differences below the level of behavioral norms. Drawing on recent social neuroscience approaches, we argue that differing negotiating styles may not only be related to differing cultural norms, but to differences in underlying language processing strategies in the brain, suggesting that cultural difference may influence neuropsychological processes. If this is the case, we expect that individuals from different cultures will exhibit different neuropsychological tendencies. Consistent with our hypothesis, using EEG measured responses, native German-speaking German participants took significantly more time to indicate when they understood a sentence than did native English-speaking American participants. This result is consistent with the theory that individuals from different cultures develop unique language processing strategies that affect behavior. A deliberative cognitive style used by Germans could account for this difference in comprehension reaction time. This study demonstrates that social neuroscience may provide a new way of understanding micro-processes in cross-cultural negotiations and conflict resolution.

Situationist friend Dan Gilbert, who will be speaking today at Harvard Law School (details here), recently completed another fascinating TED Talk. Here is TED’s summary: “Dan Gilbert presents research and data from his exploration of happiness — sharing some surprising tests and experiments that you can also try on yourself. Watch through to the end for a sparkling Q&A with some familiar TED faces.” Here’s the video.

At the Third Annual Law and Mind Sciences Conference at Harvard Law School, titled “The Free Market Mindset: History, Psychology, and Consequences” (March 7, 2009), Christine Desan’s presentation was titled “Legal Categories of Thought.” Desan is a Professor of Law at Harvard Law School, where she has taught since 1992. Her areas of interest include American constitutional history, legal and political thought, civil procedure, and statutory interpretation.

In her presentation, Professor Desan describes the rich variety of ways that the law categorizes different kinds of liquidity — including coin, banknotes, bonds, dollars, and securities — and explores some of the ways that legal doctrine has disciplined our thought, including our assumptions about money and the way it is made, about public and private, and about free choice in the marketplace. Below you can watch her talk in three videos (roughly 9 minutes each).

* * *

* * *

* * *

* * *

To watch similar videos, visit the video libraries on The Project on Law and Mind Sciences Website (here) or visit PLMSTube.

If you’re craving a quick hit of optimism, reading a news magazine is probably not the best way to go about finding it. As the life coaches and motivational speakers have been trying to tell us for more than a decade now, a healthy, positive mental outlook requires strict abstinence from current events in all forms. Instead, you should patronize sites like Happynews.com, where the top international stories of the week include “Jobless Man Finds Buried Treasure” and “Adorable ‘Teacup Pigs’ Are Latest Hit with Brits.”

Or of course you can train yourself to be optimistic through sheer mental discipline. Ever since psychologist Martin Seligman coined the phrase “learned optimism” in 1991 and started offering optimism training, there’s been a thriving industry in the kind of thought reform that supposedly overcomes negative thinking. You can buy any number of books and DVDs with titles like Little Gold Book of YES! Attitude, in which you will learn mental exercises to reprogram your outlook from gray to the rosiest pink: “affirmations,” for example, in which you repeat upbeat predictions over and over to yourself; “visualizations,” in which you post on your bathroom mirror pictures of that car or boat you want; “disputations” to refute any stray negative thoughts that may come along. If money is no object, you can undergo a three-month “happiness makeover” from a life coach or invest $3,575 for three days of “optimism training” on a Good Mood Safari on the coast of New South Wales. . . .

* * *

Americans have long prided themselves on being “positive” and optimistic — traits that reached a manic zenith in the early years of this millennium. Iraq would be a cakewalk! The Dow would reach 36,000! Housing prices could never decline! Optimism was not only patriotic, it was a Christian virtue, or so we learned from the proliferating preachers of the “prosperity gospel,” whose God wants to “prosper” you. In 2006, the runaway bestseller The Secret promised that you could have anything you wanted, anything at all, simply by using your mental powers to “attract” it. The poor listened to upbeat preachers like Joel Osteen and took out subprime mortgages. The rich paid for seminars led by motivational speakers like Tony Robbins and repackaged those mortgages into securities sold around the world. . . .

* * *

Below are some excerpts from the introduction of Barbara Ehrenreich’s new book explaining that, optimism notwithstanding, Americans are not necessarily better off.

* * *

Surprisingly, when psychologists undertake to measure the relative happiness of nations, they routinely find that Americans are not, even in prosperous times and despite our vaunted positivity, very happy at all. A recent meta-analysis of over a hundred studies of self-reported happiness worldwide found Americans ranking only twenty-third, surpassed by the Dutch, the Danes, the Malaysians, the Bahamians, the Austrians, and even the supposedly dour Finns. In another potential sign of relative distress, Americans account for two-thirds of the global market for antidepressants, which happen also to be the most commonly prescribed drugs in the United States. To my knowledge, no one knows how antidepressant use affects people’s responses to happiness surveys: do respondents report being happy because the drugs make them feel happy or do they report being unhappy because they know they are dependent on drugs to make them feel better? Without our heavy use of antidepressants, Americans would likely rank far lower in the happiness rankings than we currently do.

When economists attempt to rank nations more objectively in terms of “well-being,” taking into account such factors as health, environmental sustainability, and the possibility of upward mobility, the United States does even more poorly than it does when only the subjective state of “happiness” is measured. The Happy Planet Index, to give just one example, locates us at 150th among the world’s nations.

* * *

But of course it takes the effort of positive thinking to imagine that America is the “best” or the “greatest.” Militarily, yes, we are the mightiest nation on earth. But on many other fronts, the American score is dismal, and was dismal even before the economic downturn that began in 2007. Our children routinely turn out to be more ignorant of basic subjects like math and geography than their counterparts in other industrialized nations. They are also more likely to die in infancy or grow up in poverty. Almost everyone acknowledges that our health care system is “broken” and our physical infrastructure crumbling. We have lost so much of our edge in science and technology that American companies have even begun to outsource their research and development efforts. Worse, some of the measures by which we do lead the world should inspire embarrassment rather than pride: We have the highest percentage of our population incarcerated, and the greatest level of inequality in wealth and income. We are plagued by gun violence and racked by personal debt.

While positive thinking has reinforced and found reinforcement in American national pride, it has also entered into a kind of symbiotic relationship with American capitalism. There is no natural, innate affinity between capitalism and positive thinking. In fact, one of the classics of sociology, Max Weber’s Protestant Ethic and the Spirit of Capitalism, makes a still impressive case for capitalism’s roots in the grim and punitive outlook of Calvinist Protestantism, which required people to defer gratification and resist all pleasurable temptations in favor of hard work and the accumulation of wealth.

But if early capitalism was inhospitable to positive thinking, “late” capitalism, or consumer capitalism, is far more congenial, depending as it does on the individual’s hunger for more and the firm’s imperative of growth. The consumer culture encourages individuals to want more — cars, larger homes, television sets, cell phones, gadgets of all kinds — and positive thinking is ready at hand to tell them they deserve more and can have it if they really want it and are willing to make the effort to get it. Meanwhile, in a competitive business world, the companies that manufacture these goods and provide the paychecks that purchase them have no alternative but to grow. If you don’t steadily increase market share and profits, you risk being driven out of business or swallowed by a larger enterprise. Perpetual growth, whether of a particular company or an entire economy, is of course an absurdity, but positive thinking makes it seem possible, if not ordained.

In addition, positive thinking has made itself useful as an apology for the crueler aspects of the market economy. If optimism is the key to material success, and if you can achieve an optimistic outlook through the discipline of positive thinking, then there is no excuse for failure. The flip side of positivity is thus a harsh insistence on personal responsibility: if your business fails or your job is eliminated, it must because you didn’t try hard enough, didn’t believe firmly enough in the inevitability of your success. As the economy has brought more layoffs and financial turbulence to the middle class, the promoters of positive thinking have increasingly emphasized this negative judgment: to be disappointed, resentful, or downcast is to be a “victim” and a “whiner.”

* * *

You can read more about the book and purchase it here. You can listen to an excellent, half-hour Talk of the Nation interview of Barbara Ehrenreich about the book here.

Marjorie Florestal recently posted her intriguing article, “Is a Burrito a Sandwich? Exploring Race, Class and Culture in Contracts” (14 Michigan Journal of Race and Law (2008)), on SSRN. Here’s the abstract.

* * *

A superior court in Worcester, Massachusetts, recently determined that a burrito is not a sandwich. Surprisingly, the decision sparked a firestorm of media attention. Worcester, Massachusetts, is hardly the pinnacle of the culinary arts – so why all the interest in the musings of one lone judge on the nature of burritos and sandwiches? Closer inspection revealed the allure of this otherwise peculiar case: Potentially thousands of dollars turned on the interpretation of a single word in a single clause of a commercial contract. Judge Locke based his decision on ‘common sense’ and a single definition of sandwich – ‘two thin pieces of bread, usually buttered, with a thin layer (as of meat, cheese, or savory mixture) spread between them.’ The only barrier to the burrito’s entry into the sacred realm of sandwiches is an additional piece of bread? What about the one-slice, open-face sandwich? Or the club sandwich, typically served as a double-decker with three pieces of bread? What about wraps? The court’s definition lacked subtlety, complexity or nuance; it was rigid, not allowing for the possibility of change and evolution. It was a decision couched in the ‘primitive formalism’ Judge Cardozo derided nearly ninety years ago when he said ‘[t]he law has outgrown its primitive stage of formalism when the precise word was a sovereign talisman, and every slip was fatal. It takes a broader view today.’ Does it? Despite the title of this piece, my goal is not to determine with any legal, scientific or culinary specificity whether a burrito is a sandwich. Rather, I explore what lies beneath the ‘primitive formalism’ or somewhat smug determination of the court that common sense answers the question for us. I suggest Judge Locke’s gut-level understanding that burritos are not sandwiches actually masks an unconscious bias. I explore this bias by examining the determination of this case and the impact of race, class and culture on contract principles.

This paper uses the theory of cultural cognition to examine the debate over rape-law reform. Cultural cognition refers to the tendency of individuals to conform their perceptions of legally consequential facts to their defining group commitments. Results of an original experimental study (N = 1,500) confirmed the impact of cultural cognition on perceptions of fact in a controversial acquaintance-rape case. The major finding was that a hierarchical worldview, as opposed to an egalitarian one, inclined individuals to perceive that the defendant reasonably understood the complainant as consenting to sex despite her repeated verbal objections. The effect of hierarchy in inclining subjects to favor acquittal was greatest among women; this finding was consistent with the hypothesis that hierarchical women have a distinctive interest in stigmatizing rape complainants whose behavior deviates from hierarchical gender norms. The study also found that cultural predispositions have a much larger impact on outcome judgments than do legal definitions, variations in which had either no or a small impact on the likelihood subjects would support or oppose conviction. The paper links date-rape reform to a class of controversies in law that reflect symbolic status competition between opposing cultural groups, and addresses the normative implications of this conclusion.