Archive for November, 2008

Tamara Piety has an interesting new article, titled “Against Freedom of Commercial Expression,” in 29 Cardozo Law Review 2583 (2008), which you can download on SSRN. Here’s the abstract.

* * *

Preservation of freedom of expression is properly understood as one of the bulwarks of our constitutional liberty. Yet the prohibition on government regulation of expression has never been considered absolute. One area of less than absolute protection is found in the commercial speech doctrine. Government may regulate commercial speech for its truth where such regulation advances a substantial governmental interest and there is a good fit between the regulation’s aims and the regulation itself. Some argue that even this intermediate level of scrutiny is too much and that commercial speech should be fully protected. Alternatively, proponents of a more protective standard for commercial speech would have the doctrine’s application limited to traditional advertising. In this article Prof. Piety argues that both of these arguments should be rejected in light of experience and of the interests the First Amendment is commonly supposed to further. Because the corporation is a fictitious entity created by law, it is not an appropriate rights holder under the First Amendment, and all speech by a for-profit corporation is, by definition, commercial. For these reasons the term commercial speech should be interpreted broadly to encompass all expression by for-profit corporations. Commercial expression is enormously influential and so when it is false it can be enormously costly. Government may be the only institution powerful enough to preserve the rights and powers of individual citizens against increasingly powerful private interests.

Daniel Libit of Politico has an interesting piece on how the rigors and demands of the 2008 campaign trail led many McCain and Obama staffers, as well as the journalists who reported on them, down a road of poor diet and lack of exercise. We excerpt the article below.

* * *

This is life after the protracted adrenaline high that is the presidential campaign: no more bag calls at 6:30 a.m. (or earlier). No more sniffling for weeks straight before a check-up at the doctor. For reporters, no more eating at “the file center,” catching cat naps on the plane and working into the early morning hours.

A few days before the election, Time’s Karen Tumulty blogged about counting calories during a day on the Obama campaign plane:

“So what are we talking about?” Tumulty wrote. “Seven full meals plus multiple snacks? 50,000 calories? And the only real exercise I got all day was unloading my bag from the plane, our weird little ritual at the end of the day.”

“You have to remind yourself that a campaign is followed by a transition,” Tumulty says, “which is essentially the same amount of work with none of the travel.”

* * *

“The problem with presidential campaigns is, win or lose, you’re a wreck afterward,” says Mark McKinnon, a former media consultant for McCain and, earlier, for President George W. Bush. “When your system hits the brakes after being on the campaign rocket sled going full tilt for a couple years, most people are a mess. I know I was. [It] just takes a couple months for your brain and body to readjust without the constant adrenaline rush.”

McKinnon says he’s taken diametrically opposed approaches to recovery: once, he biked around the South Island of New Zealand; the other time, he spent two weeks “passed out” on an island off the coast of Zanzibar.

* * *

For the rest of the article, click here. For other Situationist posts on food and drug issues, click here.

From YouTube: Jared Diamond’s journey of discovery began on the island of Papua New Guinea. There, in 1974, a local named Yali asked Diamond a deceptively simple question:

“Why is it that you white people developed so much cargo, but we black people had little cargo of our own?”

Diamond realized that Yali’s question penetrated the heart of a great mystery of human history — the roots of global inequality.

Why were Europeans the ones with all the cargo? Why had they taken over so much of the world, instead of the native people of New Guinea? How did Europeans end up with what Diamond terms the agents of conquest: guns, germs and steel? It was these agents of conquest that allowed 168 Spanish conquistadors to defeat an Imperial Inca army of 80,000 in 1532, and set a pattern of European conquest which would continue right up to the present day.

Diamond knew that the answer had little to do with ingenuity or individual skill. From his own experience in the jungles of New Guinea, he had observed that native hunter-gatherers were just as intelligent as people of European descent — and far more resourceful. Their lives were tough, and it seemed a terrible paradox of history that these extraordinary people should be the conquered, and not the conquerors.

To examine the reasons for European success, Jared realized he had to peel back the layers of history and begin his search at a time of equality — a time when all the peoples of the world lived in exactly the same way.

Two years after Exxon was hit with a $5 billion punitive damages award for the Exxon Valdez disaster, Prof. William R. Freudenburg’s phone rang. The call propelled him, the professor said the other day, into “an ethical quagmire of the bottomless pit variety.”

The caller was an Exxon engineer who wanted to pay the professor to conduct a study taking a dim view of punitive damages. The Exxon Valdez case would eventually reach the Supreme Court, the engineer said, and the study would be useful in convincing the court that punitive damages make little sense, especially if it was published in a prestigious academic journal.

Professor Freudenburg, who now teaches sociology at the University of California, Santa Barbara, took Exxon’s money and conducted preliminary research. Exxon stopped supporting the study when the early findings did not point in a direction helpful to the company. But Exxon did help pay for several studies critical of punitive damages that appeared in places like The Yale Law Journal and The Columbia Law Review.

As the engineer predicted, the case did reach the Supreme Court. In a 5-to-3 decision in June, the court said the appropriate punishment for dumping 11 million gallons of crude oil into Prince William Sound in Alaska in 1989 was no more than about $500 million, a tenth of what the jury had awarded.

But the court also addressed the aggressive effort to reshape the academic debate over punitive damages. “Because this research was funded in part by Exxon,” Justice David H. Souter wrote in a footnote that has rocked the legal academy, “we decline to rely on it.”

* * *

Science and the law have always had a tricky relationship, but it gets especially tangled in the appeals courts.

Whatever may be said about questionable scientific evidence submitted at trial, often accompanied by dueling expert witnesses, it is at least subject to the scrutiny of the adversary system, including cross-examination. Studies merely cited in footnotes to appellate briefs may represent the worst of both worlds — suspect science and untested evidence — which helps explain why Justice Souter was skeptical of them. But focusing on financing rather than quality is only a partial solution.

People who conduct empirical legal research say their work should be considered on the merits. Others accuse Justice Souter of being disingenuous, noting that the court largely adopted the approach advocated by the Exxon studies, disclaimer or no. Still others say the court mishandled the studies it cited with approval.

But Terry N. Gardner, the engineer who called Professor Freudenburg and coordinated the Exxon project, expressed satisfaction. “My feeling was that they seemed to have an obligation to say that,” Mr. Gardner said of the footnote. “Yet the arguments the justices used in part reflected the conclusions of the studies.”

* * *

“The opinion reads like a bad joke,” said Jeffrey J. Rachlinski, a law professor at Cornell. “They say they know of no study showing punitive damages are orderly in any way, and yet they cite” a study by Theodore Eisenberg, a prominent empirical legal studies scholar at Cornell, “showing punitive damages are pretty orderly.”

Professor Eisenberg struggled to stay respectful about the court’s approach to his work, saying he had been flattered to be cited at all. He finally settled on this phrase: “I believe the court went seriously astray” in concluding that his work supported a reduced award.

* * *

Before Exxon cut off his financing, Professor Freudenburg said, one of his tentative conclusions had been that corporate transparency encourages responsible corporate behavior. That did not go over well with Exxon’s legal department.

* * *

The Supreme Court’s decision in the Exxon case, Professor Freudenburg said, had caused him to come to a reluctant conclusion. “The legal system and the scientific method,” he said, “co-exist in a way that is really hard on truth.”

Thanksgiving has many associations — struggling Pilgrims, crowded airports, autumn leaves, heaping plates, drunken uncles, blowout sales, and so on. At its best, though, Thanksgiving is associated with, well, thanks giving. The holiday provides a moment when many otherwise harried individuals leading hectic lives decelerate just long enough to muster some gratitude for their harvest. Giving thanks — acknowledging that we, as individuals, are not the sole determinants of our own fortunes — seems an admirable, humble, and even situationist practice, worthy of its own holiday.

But I’m interested here in the potential downside to the particular way in which many people go about giving thanks.

Situationist contributor John Jost and his collaborators have studied a process that they call “system justification” — loosely the motive to defend and bolster existing arrangements even when doing so seems to conflict with individual and group interests. Jost, together with Situationist contributor Aaron Kay and several other co-authors, recently summarized the basic tendency to justify the status quo this way (pdf):

Whether because of discrimination on the basis of race, ethnicity, religion, social class, gender, or sexual orientation, or because of policies and programs that privilege some at the expense of others, or even because of historical accidents, genetic disparities, or the fickleness of fate, certain social systems serve the interests of some stakeholders better than others. Yet historical and social scientific evidence shows that most of the time the majority of people—regardless of their own social class or position—accept and even defend the legitimacy of their social and economic systems and manage to maintain a “belief in a just world” . . . . As Kinder and Sears (1985) put it, “the deepest puzzle here is not occasional protest but pervasive tranquility.” Knowing how easy it is for people to adapt to and rationalize the way things are makes it easier to understand why the apartheid system in South Africa lasted for 46 years, the institution of slavery survived for more than 400 years in Europe and the Americas, and the Indian Caste system has been maintained for 3000 years and counting.

Manifestations of the system-justification motive pervade many of our cognitions, ideologies, and institutions. This post reflects my worry that the Thanksgiving holiday might also manifest that powerful implicit motive. No doubt, expressing gratitude is generally a healthy and appropriate practice. Indeed, my sense is that Americans too rarely acknowledge the debt they owe to other people and other influences. There ought to be more thanks giving.

Nonetheless, the norm of Thanksgiving seems to be to encourage a particular kind of gratitude — a generic thankfulness for the status quo. Indeed, when one looks at what many describe as the true meaning of the holiday, the message is generally one of announcing that current arrangements — good and bad — are precisely as they should be.

“Now therefore I do recommend and assign Thursday the 26th day of November next to be devoted by the People of these States to the service of that great and glorious Being, who is the beneficent Author of all the good that was, that is, or that will be—That we may then all unite in rendering unto Him our sincere and humble thanks—for His kind care and protection of the People of this Country . . . for the signal and manifold mercies, and the favorable interpositions of his Providence which we experienced in the tranquility, union, and plenty, which we have since enjoyed . . . and also that we may then unite in most humbly offering our prayers and supplications to the great Lord and Ruler of Nations and beseech him to pardon our national and other transgressions . . . . To promote the knowledge and practice of true religion and virtue, and the increase of science among them and us—and generally to grant unto all Mankind such a degree of temporal prosperity as he alone knows to be best.”

Existing levels of prosperity, by this account, reflect the merciful and omniscient blessings of the “beneficent Author” of all that is good.

“In the four centuries since the founders . . . first knelt on these grounds, our nation has changed in many ways. Our people have prospered, our nation has grown, our Thanksgiving traditions have evolved — after all, they didn’t have football back then. Yet the source of all our blessings remains the same: We give thanks to the Author of Life who granted our forefathers safe passage to this land, who gives every man, woman, and child on the face of the Earth the gift of freedom, and who watches over our nation every day.”

The faith that we are being “watched over” and that our blessings and prosperity are the product of a gift-giving force is extraordinarily affirming. All that “is,” is as that “great and glorious Being” intended.

From such a perspective, giving thanks begins to look like a means of assuring ourselves that our current situation was ordained by some higher, legitimating force. To doubt the legitimacy of existing arrangements is to be ungrateful.

A cursory search of the internet for the “meaning of Thanksgiving” reveals many similar recent messages. For instance, one blogger writes, in a post entitled “Teaching Children the Meaning of Thanksgiving,” that:

“your goal should be to move the spirit of Thanksgiving from a one-day event to a basic life attitude. . . . This means being thankful no matter what our situation in life. Thankfulness means that we are aware of both our blessings and disappointments but that we focus on the blessings. . . . Are you thankful for your job even when you feel overworked and underpaid?”

Another piece, entitled “The Real Meaning of Thanksgiving,” includes this lesson regarding the main source of the Pilgrims’ success: “It was their devotion to God and His laws. And that’s what Thanksgiving is really all about. The Pilgrims recognized that everything we have is a gift from God – even our sorrows. Their Thanksgiving tradition was established to honor God and thank Him for His blessings and His grace.”

If we are supposed to be thankful for our jobs even when we are “overworked and underpaid,” should we also be thankful for unfairness or injustice? And if we are to be grateful for our sorrows, should we then be indifferent toward their earthly causes?

A third article, “The Productive Meaning of Thanksgiving” offers these “us”-affirming, guilt-reducing assurances: “The deeper meaning is that we have the capacity to produce such wealth and that we live in a country that affords us our right to exercise the virtue of productivity and to reap its rewards. So let’s celebrate wealth and the power in us to produce it; let’s welcome this most wonderful time of the year and partake without guilt of the bounty we each have earned.”

That advice seems to mollify any sense of injustice by giving something to everyone. Those with bountiful harvests get to enjoy their riches guiltlessly. Those with meager harvests can be grateful for the fact that they live in a country where they might someday enjoy richer returns from their individual efforts.

[M]aybe you are unsatisfied with your home or job? Would you be willing to trade either with someone who has no hope of getting a job or is homeless? Could you consider going to Africa or the Middle East and trade places with someone that would desperately love to have even a meager home and a low wage paying job where they could send their children to school without the worry of being bombed, raped, kidnapped or killed on a daily basis?

* * *

No matter how bad you think you have it, there are people who would love to trade places with you in an instant. You can choose to be miserable and pine for something better. You could choose to trade places with someone else for all the money they could give you. You could waste your gift of life, but that would be the worst mistake to make. Or you can rethink about what makes your life great and at least be happy for what you have then be patient about what you want to come to you in the future.

If your inclination on Thanksgiving is to give thanks, I do not mean to discourage you. My only suggestion is that you give thanks, not for the status quo, but for all of the ways in which your (our) own advantages and privileges are the consequence of situation, and not simply your individual (our national) disposition. Further, I’d encourage you to give thanks to all those who have gone before you who have doubted the status quo and who have identified injustice and impatiently fought against it.

After eight years under the same president, our country is on the verge of some major changes. This is an exciting time. The election of a new president encourages us to take a collective look in the mirror and it throws the spotlight on the distinctive characteristics of the person we’ve elected. Whom we choose as president says a great deal about us – who we are, what we want, and how we have changed in the past eight years.

It is beyond doubt that Barack Obama’s intelligence, his policy positions, and his remarkable temperament will play a crucial role in the next chapter of world history. At the same time, both the full meaning of this election and its likely impact on the next four years are more difficult to ascertain than we might like to admit.

The idea of the election as a direct choice between the policies of Obama and McCain would also fit into a clean, dispositionist narrative of American politics. But what if voters ultimately made their decisions based on other factors? For example, Douglas Schoen of the Wall Street Journal argues that the results of this election are “not a mandate for Democratic policies” because voters acted primarily out of a desire to reject Bush and the Republicans. What about other factors such as the economy or the personal attributes of the candidates?

Though the economy was hardly a centerpiece of Obama’s campaign, his first concrete lead appeared shortly after the advent of the current financial crisis. How do we read the tea leaves? Does the important role of the economy in Obama’s victory hint that his policies couldn’t gain acceptance under normal circumstances, or did the crisis simply prove that the American people trust Obama’s judgment? What about age, race, or any of the other factors that might have influenced the election? Does Obama’s charisma strengthen or weaken the rationale for electing him? What are the implications for an Obama presidency if the election represented something other than a direct up or down referendum on the president-elect’s policies?

In the context of dispositionist Enlightenment values, elections present the highest, purest forum for individuals to exercise rational choice. By choosing between various candidates and platforms, we communicate our preferences to the government, in turn providing our rulers with a mandate for the choices they make. It’s clear that voting is important and that our choice of a given candidate expresses a preference, but it’s not clear how much of that preference derives from stable views or strictly rational evaluation of qualifications and policy positions. Voters’ perceptions of issues are susceptible to the influence of emotion and identity appeals. Changes in situational factors such as political climate, economic stability, and “October surprises” affect support for candidates without necessarily altering their positions or qualifications. And it’s widely understood that politicians don’t reliably follow through on their campaign promises (for example, even before this election, the bailout made both candidates’ existing proposals unfeasible). What, then, is the nature of the connection between a vote based on proposals from the campaign season and the mandate for the action a new president actually takes?

Even to the extent that we vote based on conscious policy decisions, it is easy to overestimate the degree to which a president’s innate qualities and preferences determine how events unfold during his or her time in office. Our dispositionist assumptions emphasize a view of the chief executive primarily as an independent decision-making actor – the president as “the decider.”

But even the deepest convictions and policy positions of a president-elect are not determinative of what the country experiences in the following four years. No initial mandate can render a president immune to political forces. Preexisting conditions (such as our current economic and military challenges) can complicate or preclude efforts to enact new policy. And every president faces historic changes in global and domestic circumstances that come to define his or her term in office. Good judgment is crucial when meeting such challenges, but ultimately the president’s choices represent only one of many factors shaping the course of events.

Barack Obama’s election has inspired millions and ignited hope around the globe. Given the historic shift in power we’re experiencing, it’s tempting to jump to conclusions about what we’ve proven by electing Obama and what the world will look like with him as president of the United States. But in the end, we support candidates for many different reasons and the results of this presidential election don’t unambiguously define the country. Likewise, President Obama may go on to accomplish many things, but it’s unwise to assume – for better or for worse – that the fate of our country lies in his hands. The full meaning of Obama’s presidential victory will take time to emerge. For now, the best first step we can take into the Yes We Can era would be to remember the limitations we all have as individuals and not rely on President Obama to single-handedly change the world.

“It is now widely believed that people’s moral judgments can affect their causal judgments, but a great deal of confusion remains about precisely why this effect arises. . . . Our hypothesis draws on the idea that people’s causal judgments are based on counterfactual reasoning.” Read more . . .

“New research by the University of Warwick reveals that many credit card customers become fixated on the level of minimum payments given on credit card bills. The mere presence of a minimum payment is enough to reduce the actual amount many people choose to pay on their bills, leading to further interest payments.” Read more . . .

“How do you make sense of Barack Obama and John McCain? The odds are that you judge them mainly on two dimensions: warm/cold and (in)competence. Depending on your experience of them, you may judge one of them as both warm and competent, evoking your admiration and pride; and perhaps the other as neither warm nor competent, which triggers a sense of contempt and disgust. Or perhaps you view one as warm but not competent, which generates pity and sympathy; or finally, you could judge one of them as cold but competent, leading to feelings of resentment and even envy. All the media hoopla boils down to these two dimensions, which determine the outcomes of Presidential campaigns, but also our ordinary perceptions of other people as individuals or as group members.” Read more . . .

“Legitimacy. It’s a word much bandied about by students of the law. “Bush v. Gore was an illegitimate decision.” “The Supreme Court’s implied fundamental rights jurisprudence lacks legitimacy.” “The invasion of Iraq does not have a legitimate basis in international law.” We’ve all heard words like these uttered countless times, but what do they mean? Can we give an account of “legitimacy” that makes that concept meaningful and distinctive? Is “legitimacy” one idea or is it several different notions, united by family resemblance rather than an underlying conceptual structure?” Read more . . .

“Psychology Today journalist Matthew Hutson covers some fascinating experiments just published in this week’s Science that found that reducing participants’ control increases the tendency for magical thinking and the perception of illusory meaning in random or patternless visual scenes.” Read more . . .

* * *

For previous installments of “Situationism on the Blogosphere,” click on the “Blogroll” category in the right margin.

Recently, John Tierney, who writes a science column in The New York Times, has shown great skepticism about the concept of implicit bias, how it might be measured (through the Implicit Association Test), and whether it predicts real-world behavior. See, e.g., his “Findings” column (Nov. 17, 2008). I write to offer praise, critique, and cultural commentary.

First, praise. I praise Tierney’s skepticism, which is fundamental to critical inquiry generally and good science especially. Serious, critical inquiry is why most of us got into academics, and it’s why you the reader are reading this blog.

Second, critique. But skepticism should not be one-sided. Tierney’s columns suggest that one side is just asking for good, skeptical science, whereas the other side is recklessly pushing a politically correct agenda. That is hardly fair and balanced. To take one example, Tierney gives prominent weight to Prof. Phil Tetlock’s criticisms of the implicit bias research. But let’s probe further. In an article by Tetlock and Prof. Gregory Mitchell (UVA Law) attacking the science, the authors suggest that one of the reasons that Whites may perform worse on the Black-White IAT is because of a phenomenon called “stereotype threat”. They write that Whites “react to the identity threat posed by the IAT by choking under stress–and performing even worse on the IAT, thus confirming the researchers’ original stereotype of them.” 67 Ohio St. L.J. 1023, 1079 (2006).

For this “choke under threat” explanation, Tetlock and Mitchell cite a single study. Moreover, they do not turn their powerful skepticism against this body of work, launched by Prof. Claude Steele at Stanford, which explains why negative stereotypes can depress test performance. This body of work, if taken as seriously as Tetlock and Mitchell do in a throw-away line, challenges the use of standardized examinations in university admissions. But I doubt that that’s what Tetlock and Mitchell would call for, as a matter of policy. So why not be methodologically pure and go after the stereotype threat work with equal vigor and skepticism? Instead, they deploy “stereotype threat” science without raising an eyebrow, since it fits their arsenal of critique of the “implicit bias” science.

The general point is that it’s facile to think that one side has the scientific purists — just seeking good data and good science, and the other side has the political hacks. And self-serving reasoning no doubt infects us all, on both sides. This is why we should trust long-run scientific equilibrium and be skeptical of both aggressive claims and their backlashes.

Third, cultural commentary. The readers’ comments to the Tierney articles are fascinating because they largely give no deference to scientific expertise. From the large N of 1, those who have taken the IAT conclude that the test must be nonsense and raise myriad confounds (without bothering to read the FAQs that explain how stimuli are randomized, etc.). If geneticists were debating the meaning of some expressed sequence tags or if astrophysicists were debating new evidence of dark matter, I wonder if readers would bother to chime in aggressively with their views. “I have plenty of genes, and that view about heritability is nonsense!” “I’ve seen stars, and if I can’t see ‘em they must not exist!”

I suggest that we feel so personally connected to race and to gender (most of the comments focus on race) and are so personally invested in not being biased that we feel compelled toward such participation. Again, if some “coffee increases likelihood of ulcers” study came out, would people write in: “I drink coffee, and I don’t have an ulcer!!!” I don’t think so. What does that say about our current cultural moment? Perhaps it reveals a sort of intellectual prejudice: a proclivity to dismiss race research as nothing more than personal opinion, regardless of its scientific and statistical bona fides.

* * *

Look, science always involves conflict. And in the long run, there’s no reason to think that this controversy won’t be resolved through the traditional scientific method and reach a long-run equilibrium consensus. But getting there has already been rocky and will continue to be. Maybe the implicit bias work, which is far more extensive than just the implicit association test (IAT), will turn out to be nothing more than “intelligent design”–just ideology (in that case religious) wrapped up in pseudo-science. Or, and I think this is far more likely, it will be another inconvenient truth that is established, as global warming ultimately was: We are not as colorblind as we hope to be, and on the margins, implicit associations in our brain alter our behavior in ways that we would rather they not. Certainly the balance of peer-reviewed studies in number and quality point in that direction.

In the end, time truly will tell. The real question is which side will maintain its scientific integrity when the results come in.

Full disclosure: I’m a co-author with Mahzarin Banaji, whose work is discussed in Tierney’s pieces. You can read my implicit bias work at:

“Evolutionary psychology prides itself on being a valid, scientific account of human psychology (and behaviour) by tying itself to the scientific theory of natural evolution. But evolution is an explanation of physical, anatomical traits . . . The plausibility of evolutionary psychology rests on the question of whether psychological attributes (patriotism, altruism, romantic love, aesthetic judgments, logical reasoning, recollecting your grandmother’s birthday, and studying to get into college) are analogous to anatomical structures in their origins and in their functioning. If they are not analogous, then it is a mistake to explain them in terms of evolutionary theory which explains physical, anatomical features determined by biological mechanisms.” Read more . . .

“America can pull through the current economic crisis with a dose of political maturity and a bit of luck. Success will mean the end of the Reagan era, of an ideology that has brought the country to its knees.” Read more . . .

“Love and hate are intimately linked within the human brain, according to a study that has discovered the biological basis for the two most intense emotions.” . . . “Scientists studying the physical nature of hate have found that some of the nervous circuits in the brain responsible for it are the same as those that are used during the feeling of romantic love – although love and hate appear to be polar opposites.” Read more . . .

“U.S. presidential candidates have been stumping for nearly two years with their every move being analyzed and reported ad nauseam. Logically, voters should be able to tap into lots of information when they make their decisions come November. But it turns out there’s a lot more going on when we step behind the curtain to cast our ballot.” Read more . . .

“Republican vice presidential candidate Sarah Palin received a media lashing last week when word trickled out that her makeup artist snagged $22,800 in the first half of October. Pundits warned that such royal treatment might undermine her ‘down home’ persona, but the makeover may have been a savvy move: New research adds more weight to the idea that voters value attractiveness more than competence in the faces of female politicians.” Read more . . .

“When you cut away its many layers, our fixation on popular culture reflects an intense interest in the doings of other people; this preoccupation with the lives of others is a by-product of the psychology that evolved in prehistoric times to make our ancestors socially successful. Thus, it appears that we are hardwired to be fascinated by gossip.” Read more . . .

Imagine that you have responded to an advertisement in the New Haven newspaper seeking subjects for a study of memory. A researcher whose serious demeanor and laboratory coat convey scientific importance greets you and another applicant on your arrival at a Yale laboratory in Linsly-Chittenden Hall. You are here to help science find ways to improve people’s learning and memory through the use of punishment. The researcher tells you why this work may have important consequences. The task is straightforward: one of you will be the “teacher” who gives the “learner” a set of word pairings to memorize. During the test, the teacher will give each key word, and the learner must respond with the correct association. When the learner is right, the teacher gives a verbal reward, such as “Good” or “That’s right.” When the learner is wrong, the teacher is to press a lever on an impressive-looking apparatus that delivers an immediate shock to punish the error.

The shock generator has 30 switches, starting at a low level of 15 volts and increasing in 15-volt steps to each higher level. The experimenter tells you that every time the learner makes a mistake, you must press the next switch up. The control panel shows both the voltage of each switch and a description. The tenth level (150 volts) is “Strong Shock”; the 17th level (255 volts) is “Intense Shock”; the 25th level (375 volts) is “Danger, Severe Shock.” At the 29th and 30th levels (435 and 450 volts) the control panel is marked simply with an ominous XXX: the pornography of ultimate pain and power.

You and another volunteer draw straws to see who will play each role; you are to be the teacher, and the other volunteer will be the learner. He is a mild-mannered, middle-aged man whom you help escort to the next chamber. “Okay, now we are going to set up the learner so he can get some punishment,” the experimenter tells you both. The learner’s arms are strapped down and an electrode is attached to his right wrist. The generator in the next room will deliver the shocks. The two of you communicate over an intercom, with the experimenter standing next to you. You get a sample shock of 45 volts — the third level, a slight tingly pain — so you have a sense of what the shock levels mean. The researcher then signals you to start.

Initially, your pupil does well, but soon he begins making errors, and you start pressing the shock switches. He complains that the shocks are starting to hurt. You look at the experimenter, who nods to continue. As the shock levels increase in intensity, so do the learner’s protests; he says he does not think he wants to continue. You hesitate and question whether you should go on, but the experimenter insists that you have no choice.

In 1949, seated next to me in senior class at James Monroe High School in the Bronx, New York, was my classmate, Stanley Milgram. We were both skinny kids, full of ambition and a desire to make something of ourselves, so that we might escape life in the confines of our ghetto experience. Stanley was the little smart one, the one we went to for authoritative answers. I was the tall popular one, the smiling guy other kids would go to for social advice.

I had just returned to Monroe High from a horrible year at North Hollywood High School, where I had been shunned and friendless (because, as I later learned, there was a rumor circulating that I was from a New York Sicilian Mafia family). Back at Monroe, I would be chosen “Jimmy Monroe” — most popular boy in Monroe High School’s senior class. Stanley and I once discussed how that transformation could happen. We agreed that I had not changed; the situation was what mattered.

Situational psychology is the study of the human response to features of our social environment, the external behavioral context, above all to the other people around us. Stanley Milgram and I, budding situationists in 1949, both went on to become academic social psychologists. We met again at Yale in 1960 as beginning assistant professors — him starting out at Yale, me at NYU. Some of Milgram’s new research was conducted in a modified laboratory that I had fabricated a few years earlier as a graduate student — in the basement of Linsly-Chittenden, the building where we taught Introductory Psychology courses. That is where Milgram was to conduct his classic and controversial experiments on blind obedience to authority.

Milgram’s interest in the problem of obedience came from deep personal concerns about how readily the Nazis had obediently killed Jews during the Holocaust. His laboratory paradigm, he wrote years later, “gave scientific expression to a more general concern about authority, a concern forced upon members of my generation, in particular upon Jews such as myself, by the atrocities of World War II.”

As Milgram described it, he hit upon the concept for his experiment while musing about a study in which one of his professors, Solomon Asch, had tested how far subjects would conform to the judgment of a group. Asch had put each subject in a group of coached confederates and asked every member, one by one, to compare a set of lines in order of length. When the confederates all started giving the same obviously false answers, 70 percent of the subjects agreed with them at least some of the time.

Milgram wondered whether there was a way to craft a conformity experiment that would be “more humanly significant” than judgments about line length. He wrote later: “I wondered whether groups could pressure a person into performing an act whose human import was more readily apparent; perhaps behaving aggressively toward another person, say by administering increasingly severe shocks to him. But to study the group effect . . . you’d have to know how the subject performed without any group pressure. At that instant, my thought shifted, zeroing in on this experimental control. Just how far would a person go under the experimenter’s orders?”

How far up the scale do you predict that you would go under those orders? Put yourself back in the basement with the fake shock apparatus and the other “volunteer” — actually the experimenter’s confederate, who always plays the learner because the “drawing” is rigged — strapped down in the next room. As the shocks proceed, the learner begins complaining about his heart condition. You dissent, but the experimenter still insists that you continue. The learner makes errors galore. You plead with your pupil to concentrate; you don’t want to hurt him. But your concerns and motivational messages are to no avail. He gets the answers wrong again and again. As the shocks intensify, he shouts out, “I can’t stand the pain, let me out of here!” Then he says to the experimenter, “You have no right to keep me here!” Another level up, he screams, “I absolutely refuse to answer any more! You can’t hold me here! My heart’s bothering me!”

Obviously you want nothing more to do with this experiment. You tell the experimenter that you refuse to continue. You are not the kind of person who harms other people in this way. You want out. But the experimenter continues to insist that you go on. He reminds you of the contract, of your agreement to participate fully. Moreover, he claims responsibility for the consequences of your shocking actions. After you press the 300-volt switch, you read the next keyword, but the learner doesn’t answer. “He’s not responding,” you tell the experimenter. You want him to go into the other room and check on the learner to see if he is all right. The experimenter is impassive; he is not going to check on the learner. Instead he tells you, “If the learner doesn’t answer in a reasonable time, about five seconds, consider it wrong,” since errors of omission must be punished in the same way as errors of commission — that is a rule.

As you continue up to even more dangerous shock levels, there is no sound coming from your pupil’s shock chamber. He may be unconscious or worse. You are truly disturbed and want to quit, but nothing you say works to get your exit from this unexpectedly distressing situation. You are told to follow the rules and keep posing the test items and shocking the errors.

Now try to imagine fully what your participation as the teacher would be. If you actually go all the way to the last of the shock levels, the experimenter will insist that you repeat that XXX switch two more times. I am sure you are saying, “No way would I ever go all the way!” Obviously, you would have dissented, then disobeyed and just walked out. You would never sell out your morality. Right?

Milgram once described his shock experiment to a group of 40 psychiatrists and asked them to estimate the percentage of American citizens who would go to each of the 30 levels in the experiment. On average, they predicted that less than 1 percent would go all the way to the end, that only sadists would engage in such sadistic behavior, and that most people would drop out at the tenth level of 150 volts. They could not have been more wrong.

In Milgram’s experiment, two of every three (65 percent) of the volunteers went all the way up to the maximum shock level of 450 volts. The vast majority of people shocked the victim over and over again despite his increasingly desperate pleas to stop. Most participants dissented from time to time and said they did not want to go on, but the researcher would prod them to continue.

Over the course of a year, Milgram carried out 19 different experiments, each one a different variation of the basic paradigm. In each of these studies he varied one social psychological variable and observed its impact. In one study, he added women; in others he varied the physical proximity or remoteness of either the experimenter-teacher link or the teacher-learner link; in still others he had peers rebel or obey before the teacher had the chance to begin; and more.

In one set of experiments, Milgram wanted to show that his results were not due to the authority power of Yale University. So he transplanted his laboratory to a run-down office building in downtown Bridgeport, Connecticut, and repeated the experiment as a project ostensibly of a private research firm with no connection to Yale. It made hardly any difference; the participants fell under the same spell of this situational power.

The data clearly revealed the extreme pliability of human nature: depending on the situation, almost everyone could be totally obedient or almost everyone could resist authority pressures. Milgram was able to demonstrate that compliance rates could soar to over 90 percent of people continuing to the 450-volt maximum or be reduced to less than 10 percent — by introducing just one crucial variable into the compliance recipe.

Want maximum obedience? Make the subject a member of a “teaching team,” in which the job of pulling the shock lever to punish the victim is given to another person (a confederate), while the subject assists with other parts of the procedure. Want resistance to authority pressures? Provide social models — peers who rebel. Participants also refused to deliver the shocks if the learner said he wanted to be shocked; that’s masochistic, and they are not sadists. They were also reluctant to give high levels of shock when the experimenter filled in as the learner. They were more likely to shock when the learner was remote than in proximity.

Across these variations, with this diverse range of ordinary American citizens, of widely varying ages and occupations and of both genders, it was possible to elicit low, medium, or high levels of compliant obedience with a flick of the situational switch. Milgram’s large sample — a thousand ordinary citizens from varied backgrounds — makes the results of his obedience studies among the most generalizable in all the social sciences. His classic study has been replicated and extended by many other researchers in many countries.

Recently, Thomas Blass of the University of Maryland, Baltimore County [author of The Man Who Shocked The World and creator of the terrific website StanleyMilgram.Com] analyzed the rates of obedience in eight studies conducted in the United States and nine replications in European, African, and Asian countries. He found comparably high levels of compliance in all. The 61 percent mean obedience rate found in the U.S. was matched by the 66 percent rate found across all the other national samples. The degree of obedience was not affected by the timing of the studies, which ranged from 1963 to 1985.

Other studies based on Milgram’s have shown how powerful the obedience effect can be when legitimate authorities exercise their power within their power domains. In one study, most college students administered shocks to whimpering puppies when required to do so by a professor. In another, all but one of 22 nurses flouted their hospital’s procedure by obeying a phone order from an unknown doctor to administer an excessive amount of a drug (actually a placebo); that solitary disobedient nurse should have been given a raise and a hero’s medal. In still another, a group of 20 high school students joined a history teacher’s mock authoritarian political movement, and within a week had expelled their fellows from class and recruited nearly 200 others from around the school to the cause.

Now we ask the question that must be posed of all such research: what is its external validity? What are the real-world parallels to the laboratory demonstration of authority power?

In 1963, the social philosopher Hannah Arendt published what was to become a classic of our times, Eichmann in Jerusalem: A Report on the Banality of Evil. She provides a detailed analysis of the war crimes trial of Adolf Eichmann, the Nazi figure who personally arranged for the murder of millions of Jews. Eichmann’s defense of his actions was similar to the testimony of other Nazi leaders: “I was only following orders.” What is most striking in Arendt’s account of Eichmann is all the ways in which he seemed absolutely ordinary: half a dozen psychiatrists had certified him as “normal.” Arendt’s famous conclusion: “The trouble with Eichmann was precisely that so many were like him, and that the many were neither perverted nor sadistic, that they were, and still are, terribly and terrifyingly normal.”

Arendt’s phrase “the banality of evil” continues to resonate because genocide has been unleashed around the world and torture and terrorism continue to be common features of our global landscape. A few years ago, the sociologist and Brazil expert Martha Huggins, the Greek psychologist and torture expert Mika Haritos-Fatouros, and I interviewed several dozen torturers. These men did their daily dirty deeds for years in Brazil as policemen, sanctioned by the government to get confessions by torturing “subversive” enemies of the state.

The systematic torture by men of their fellow men and women represents one of the darkest sides of human nature. Surely, my colleagues and I reasoned, here was a place where dispositional evil would be manifest. The torturers shared a common enemy: men, women, and children who, though citizens of their state, even neighbors, were declared by “the System” to be threats to the country’s national security — as socialists and Communists. Some had to be eliminated efficiently, while others, who might hold secret information, had to be made to yield it up under torture, to confess, and then be killed.

Torture always involves a personal relationship; it is essential for the torturer to understand what kind of torture to employ, what intensity of torture to use on a certain person at a certain time. Wrong kind or too little — no confession. Too much — the victim dies before confessing. In either case, the torturer fails to deliver the goods and incurs the wrath of the senior officers. Learning to determine the right kind and degree of torture that yields up the desired information elicits abounding rewards and flowing praise from one’s superiors. It took time and emerging insights into human weaknesses for these torturers to become adept at their craft.

What kind of men could do such deeds? Did they need to rely on sadistic impulses and a history of sociopathic life experiences to rip and tear the flesh of fellow beings day in and day out for years on end?

We found that sadists are selected out of the training process by trainers because they are not controllable. They get off on the pleasure of inflicting pain, and thus do not sustain the focus on the goal of extracting confessions. From all the evidence we could muster, torturers were not unusual or deviant in any way prior to practicing their new roles, nor were there any persisting deviant tendencies or pathologies among any of them in the years following their work as torturers and executioners. Their transformation was entirely explainable as being the consequence of a number of situational and systemic factors, such as the training they were given to play this new role; their group camaraderie; acceptance of the national security ideology; and their learned belief in socialists and Communists as enemies of their state.

Amazingly, the transformation of these men into violence workers is comparable to the transformation of young Palestinians into suicide bombers intent on killing innocent Israeli civilians. In a recent study, the forensic psychiatrist Marc Sageman [at the Solomon Asch Center] found evidence of the normalcy of 400 al-Qaeda members. Three-quarters came from the upper or middle class. Ninety percent came from caring, intact families. Two-thirds had gone to college; two-thirds were married; and most had children and jobs in science and engineering. In many ways, Sageman concludes, “these are the best and brightest of their society.”

Israeli psychologist Ariel Merari, who has studied this phenomenon extensively for many years, outlines the common steps on the path to these explosive deaths. First, senior members of an extremist group identify young people who, based on their declarations at a public rally against Israel or their support of some Islamic cause or Palestinian action, appear to have an intense patriotic fervor. Next, they are invited to discuss how seriously they love their country and hate Israel. They are asked to commit to being trained. Those who do then become part of a small secret cell of three to five youths. From their elders, they learn bomb making, disguise, and selecting and timing targets. Finally, they make public their private commitment by making a videotape, declaring themselves to be “the living martyr” for Islam. The recruits are also told the Big Lie: their relatives will be entitled to a high place in Heaven, and they themselves will earn a place beside Allah. Of course, the rhetoric of dehumanization serves to deny the humanity and innocence of their victims.

The die is cast; their minds have been carefully prepared to do what is ordinarily unthinkable. In these systematic ways a host of normal, angry young men and women become transformed into true believers. The suicide, the murder, of any young person is a gash in the fabric of the human family that we elders from every nation must unite to prevent. To encourage the sacrifice of youth for the sake of advancing the ideologies of the old must be considered a form of evil that transcends local politics and expedient strategies.

Our final extension of the social psychology of evil from artificial laboratory experiments to real-world contexts comes to us from the jungles of Guyana. There, on November 28, 1978, an American religious leader persuaded more than 900 of his followers to commit mass suicide. In the ultimate test of blind obedience to authority, many of them killed their children on his command.

Jim Jones, the pastor of Peoples Temple congregations in San Francisco and Los Angeles, had set out to create a socialist utopia in Guyana. But over time Jones was transformed from the caring, spiritual “father” of a large Protestant congregation into an Angel of Death. He instituted extended forced labor, armed guards, semistarvation diets, and daily punishments amounting to torture for the slightest breach of any of his many rules. Concerned relatives convinced a congressman and media crew to inspect the compound. But Jones arranged for them to be murdered as they left. He then gathered almost all the members at the compound and gave a long speech in which he exhorted them to take their lives by drinking cyanide-laced Kool-Aid.

Jones was surely an egomaniac; he had all of his speeches and proclamations, even his torture sessions, tape-recorded — including his final suicide harangue. In it Jones distorts, lies, pleads, makes false analogies, appeals to ideology and to transcendent future life, and outright insists that his orders be followed, all while his staff is efficiently distributing deadly poison to the hundreds gathered around him. Some excerpts from that last hour convey a sense of the death-dealing tactics he used to induce total obedience to an authority gone mad:

Please get us some medication. It’s simple. It’s simple. There’s no convulsions with it. [Of course there are, especially for the children.] . . . Don’t be afraid to die. You’ll see, there’ll be a few people land[ing] out here. They’ll torture some of our children here. They’ll torture our people. They’ll torture our seniors. We cannot have this. . . . Please, can we hasten? Can we hasten with that medication? . . . We’ve lived — we’ve lived as no other people lived and loved. We’ve had as much of this world as you’re gonna get. Let’s just be done with it. (Applause.). . . . Who wants to go with their child has a right to go with their child. I think it’s humane. . . . Lay down your life with dignity. Don’t lay down with tears and agony. There’s nothing to death. . . . It’s just stepping over to another plane. Don’t be this way. Stop this hysterics. . . . Look, children, it’s just something to put you to rest. Oh, God. (Children crying.). . . . Mother, Mother, Mother, Mother, Mother, please. Mother, please, please, please. Don’t — don’t do this. Don’t do this. Lay down your life with your child.

And they did, and they died for “Dad.”

A fitting conclusion comes from psychologist Mahzarin Banaji: “What social psychology has given to an understanding of human nature is the discovery that forces larger than ourselves determine our mental life and our actions — chief among these forces [is] the power of the social situation.”

The most dramatic instances of directed behavior change and “mind control” are not the consequence of exotic forms of influence such as hypnosis, psychotropic drugs, or “brainwashing.” They are, rather, the systematic manipulation of the most mundane aspects of human nature over time in confining settings. Motives and needs that ordinarily serve us well can lead us astray when they are aroused, amplified, or manipulated by situational forces that we fail to recognize as potent. This is why evil is so pervasive. Its temptation is just a small turn away, a slight detour on the path of life, a blur in our sideview mirror, leading to disaster.

Milgram crafted his research paradigm to find out what strategies can seduce ordinary citizens to engage in apparently harmful behavior. Many of these methods have parallels to compliance strategies used by “influence professionals” in real-world settings, such as salespeople, cult and military recruiters, media advertisers, and others. Below are ten of the most effective.

1. Prearranging some form of contractual obligation, verbal or written, to control the individual’s behavior in pseudo-legal fashion. In Milgram’s obedience study, subjects publicly agreed to accept the tasks and the procedures.

3. Presenting basic rules to be followed that seem to make sense before their actual use but can then be used arbitrarily and impersonally to justify mindless compliance. The authorities will change the rules as necessary but will insist that rules are rules and must be followed (as the researcher in the lab coat did in Milgram’s experiment).

4. Altering the semantics of the act, the actor, and the action — replacing unpleasant reality with desirable rhetoric, gilding the frame so that the real picture is disguised: from “hurting victims” to “helping the experimenter.” We can see the same semantic framing at work in advertising, where, for example, bad-tasting mouthwash is framed as good for you because it kills germs and tastes like medicine.

5. Creating opportunities for the diffusion of responsibility or abdication of responsibility for negative outcomes, such that the one who acts won’t be held liable. In Milgram’s experiment, the authority figure, when questioned by a teacher, said he would take responsibility for anything that happened to the learner.

6. Starting the path toward the ultimate evil act with a small, seemingly insignificant first step, the easy “foot in the door” that opens the way to subsequent, greater compliance pressures. In the obedience study, the initial shock was only a mild 15 volts. This is also the operative principle in turning good kids into drug addicts with that first little hit or sniff.

7. Making each successive step on the pathway gradual, so that it is hardly noticeably different from one’s most recent prior action. “Just a little bit more.”

8. Gradually changing the nature of the authority figure from initially “just” and reasonable to “unjust” and demanding, even irrational. This tactic elicits initial compliance and later confusion, since we expect consistency from authorities and friends. Failing to acknowledge that this transformation has occurred leads to mindless obedience. It is part of many date rape scenarios and a reason why abused women stay with their abusive spouses.

9. Making the exit costs high and making the process of exiting difficult; allowing verbal dissent, which makes people feel better about themselves, while insisting on behavioral compliance.

10. Offering a “big lie” to justify the use of any means to achieve the seemingly desirable, essential goal. (In Milgram’s research the justification was that science will help people improve their memory by judicious use of reward and punishment.) In social psychology experiments, this is known as the “cover story”; it is a cover-up for the procedures that follow, which do not make sense on their own. The real-world equivalent is an ideology. Most nations rely on an ideology, typically “threats to national security,” before going to war or suppressing political opposition. When citizens fear that their national security is being threatened, they become willing to surrender their basic freedoms in exchange. Erich Fromm’s classic analysis in Escape from Freedom made us aware of this trade-off, which Hitler and other dictators have long used to gain and maintain power.

Procedures like these are used when those in authority know that few would engage in the endgame without being prepared psychologically to do the unthinkable. But people who understand their own impulses to join with a group and to obey an authority may be able also to withstand those impulses at times when the mandate from outside comes into conflict with their own values and conscience. In the future, when you are in a compromising position where your compliance is at issue, thinking back to these ten stepping-stones to mindless obedience may enable you to step back and not go all the way down the path — their path. A good way to avoid crimes of obedience is to assert one’s personal authority and to always take full responsibility for one’s actions. Resist going on automatic pilot, be mindful of situational demands on you, engage your critical thinking skills, and be ready to admit an error in your initial compliance and to say, “Hell, no, I won’t go your way.”

* * *

Below you can find several excellent videos of the Jonestown massacre and the circumstances leading up to it.

From PBS, here is a (fuzzy) 84-minute video, “Jonestown: The Life And Death Of Peoples Temple.”

Here is a briefer but clearer 45-minute video, “Jonestown: The Final Report.”

Kevin Carlsmith is an assistant professor of psychology at Colgate University doing great work of significant interest to our readers. Today, he published a fascinating commentary, “Torture’s Attraction Is Not Information — It’s Retribution,” in Nieman Watchdog. Here are some excerpts.

* * *

How did the United States go from a champion of human rights to a state that condones and practices torture on detainees? The present administration’s first line of defense is one of semantics: The United States has a policy against torture, ergo, actions taken in its name cannot be “torture.” Its second line of defense invokes the utilitarian argument of expediency: It was necessary to obtain mission-critical information from combatants who would only divulge secrets under extreme duress. . . .

And yet, interrogation experts make clear that torture is a terrible way to obtain information. Not merely from a moral perspective, but from a utilitarian perspective as well. Torture victims will always talk, regardless of whether they actually have any information. That is, the obtained information is generally useless, and when it does have value, it is mixed in with so much false information that there is no reliable way to separate the true from the false. We are left, then, with a puzzle: Given the near unanimity that torture is immoral, and the expert agreement that it serves no intelligence function, how do we explain the broad support for enhanced interrogation techniques within the administration and within large segments of the population?

To understand this puzzle, one must first understand something about the psychology of punishment. There are many justifications for punishment. One can punish a perpetrator to administer his “just deserts,” or punish to deter him and others from behaving similarly in the future. Punishment can rehabilitate a person, coerce him into providing information, or simply change the cost-benefit analysis so that he will be inhibited from certain behaviors. People are generally aware of these different justifications, and they like them all. Indeed, numerous surveys reveal that people endorse all of the reasons, and if forced to choose, they generally split evenly between punishing someone because they “deserve” it and because it will serve some utilitarian purpose.

Psychologists have shown that there is a sharp divide between the reasons people express for punishment and the reasons that actually determine punishment. That is, there is a discrepancy between what people say and what people do. It turns out that people are highly attuned to the factors that determine whether a person is deserving of punishment. These are things like the perpetrator’s intent to harm, his knowledge of right and wrong, and the severity of the harm. At the same time, people largely ignore factors that would affect the utility of the punishment. So people don’t increase recommended prison sentences when they learn that the perpetrator is likely to commit future crimes, or that the sentence is likely to deter other potential perpetrators. When you examine the reasons people actually punish, as revealed by their behavior, it is all about trying to give someone what they deserve.

Torture operates along a very similar set of principles. People treat the decision to torture in the same way that they treat the decision to punish. They approve increasingly harsh techniques as their perception of the target’s moral culpability increases. Put simply, if they perceive the target to be a “bad guy” who deserves to be punished, then they will approve of torture. But if they perceive him to be a good guy (or, at least, innocent of wrongdoing) then they generally won’t approve of torture. It is only at the margins that people pay attention to the potential utility of the torture. That is, the likelihood that a given target will divulge useful information under torture has far less impact on the decision to torture than does the perception of whether or not that target “deserves” to be tortured.

How do we know this? The basics are laid out in any introductory text on social psychology. The specifics, however, come from a series of experiments I conducted with my colleague Avani Sood (a fellow social psychologist who is also an attorney), some of which will be published shortly in the Journal of Experimental Social Psychology [available on SSRN, here].

* * *

[To read about the study, go to the commentary here or the underlying paper here.]

* * *

What we learned was that people did not distinguish sharply between torture and punishment. Indeed, the response to the punishment question and torture question . . . were largely interchangeable. Moreover, the decision to torture was closely related to [the recipient’s] prior bad acts and largely independent of the likelihood that he possessed any useful information.

So, what does all of this mean? First, it reveals that people use the same psychological process to form judgments about torture as they do for punishment. Second, those processes revolve largely around retribution – the desire to give someone what they deserve – rather than the potential utility of the action. Third, people are not aware of this process and have quite limited insight into the principles that guide their decision making. Hence, they will often claim to be operating on the basis of utility, when utility in fact has little to do with it.

Those who oppose torture, and the practices the U.S. engages in, commonly wonder how the practitioners and supporters can sleep at night. After all, torturing another person is an ugly business, both practically and morally. This analysis, though, can help us to understand why such practices seemingly yield so little dissonance. Those who endorse torture do so because they believe the recipient is morally culpable and thus deserving of the mistreatment. Their ostensible justification is utility, and this goal is in no way threatened by deontological concerns because only the deserving get tortured.

The greatest perversion of all comes from the “end-game” of this process. Psychologists have identified a powerful and pervasive defense mechanism called the “just world” phenomenon . . . . [which contributes to] a human proclivity to “blame the victim.” If something bad happened to this person, then surely they did something to deserve it.

Now apply this concept to torture. When we hear of a person being tortured, it is common (if wrong) to assume that the person has probably done something to merit the torture. The alternative, that the target was truly innocent in all ways, is too upsetting to contemplate. The belief that the person is deserving solves the tension, and thus is much more readily accepted.

In summary, then, we find that people support torture on the basis that it gives bad people what they deserve. When confronted with information that seemingly defies that belief – such as the torture of the wrong person, who merely shared a common name with an actual bad guy – they are unconsciously motivated to believe that the innocent person is in fact not so innocent, and thus are able to maintain their erroneous belief that only the guilty are tortured.

“Chen-Bo Zhong and Geoffrey Leonardelli told college student volunteers that they would be participating in a set of unrelated experiments. First they were asked to recall a time when they felt either socially excluded or included. Then a research assistant told them that the lab maintenance staff was working on the heating system for the room, and asked them to estimate the room’s temperature. You guessed it: the participants’ temperature estimates corresponded to whether they had been prompted to think about social exclusion or inclusion.” Read more . . .

“New financial rules in the U.K. and elsewhere mean that credit card companies have to take a monthly minimum payment from card-holders who have an outstanding balance. It’s a protective measure that’s intended to stop card-holders’ debt from spiralling out of control. However, new research by Neil Stewart shows that, thanks to a decision-making bias known as “anchoring”, printed information about compulsory minimum payments leads many card-holders to actually pay off less of their balance than they would have done, thus costing them significantly in the long-run.” Read more . . .

“Success at mental arithmetic isn’t purely a question of mathematical skill and knowledge – people’s belief in their own ability, known as “self-efficacy”, plays a key part too. Bobby Hoffman and Alexandru Spatariu, who made the new finding, say their research is the “first study that we know of to demonstrate the effect of self-efficacy on problem-solving efficiency when controlling for background knowledge.” Read more . . .

“Do you think repeat offenders should be punished more harshly than first-time offenders for the same crime? Does it make any difference if I make it clear that the specific hypothetical crime in question has caused the same harm in each case? If you still think the repeat offender should be punished more harshly, you’re not alone. In fact this approach to justice is written into law in many countries. First time offenders can expect leniency whereas repeat offenders can expect severe punishment.” Read more . . .

“It is quite likely that the same reward system provides the positive feedback necessary for us to learn and to continue wanting to learn. The pleasure of a thought is what propels us forward; imagine trying to write a novel or engage in a long-term scientific experiment without getting such rewards. Fortunately, the brain has provided us with a wide variety of subjective feelings of reward ranging from hunches, gut feelings, intuitions, suspicions that we are on the right track to a profound sense of certainty and utter conviction. And yes, these feelings are qualitatively as powerful as those involved in sex and gambling. One need only look at the self-satisfied smugness of a “know it all” to suspect that the feeling of certainty can approach the power of addiction.” Read more . . .

* * *

For previous installments of “Situationism on the Blogosphere,” click on the “Blogroll” category in the right margin.

From Ted Talks: Martin Seligman talks about psychology — as a field of study and as it works one-on-one with each patient and each practitioner. As it moves beyond a focus on disease, what can modern psychology help us to become?

Martin Seligman founded the field of positive psychology in 2000, and has devoted his career since then to furthering the study of positive emotion, positive character traits, and positive institutions. It’s a fascinating field of study that had few empirical, scientific measures — traditional clinical psychology focusing more on the repair of unhappy states than the propagation and nurturing of happy ones. In his pioneering work, Seligman directs the Positive Psychology Center at the University of Pennsylvania, developing clinical tools and training the next generation of positive psychologists.

* * *

For other Situationist posts discussing research on positive psychology, click here.

Emily Dupraz wrote a nice summary (for the front page of the Harvard Law website) of Situationist contributor Jon Hanson’s recent lecture at Harvard Law School. Here are some excerpts (as well as a link to the webcast of the lecture).

* * *

Individual free choice, an idea that permeates common sense and legal theory, assumes that actions reflect the stable preferences of individual actors. Individuals are responsible for their actions (that is, their preference-driven choices), and laws can therefore be designed on that assumption.

But if that assumption is wrong, says Harvard Law School Professor Jon Hanson, then laws built upon it may not be advancing the ends they purport to serve. And Hanson’s view, steeped in interdisciplinary study in the mind sciences, is that the assumption of individual free choice is faulty. The decisions people make, he says, are often determined by influential factors in the situation—that is, non-salient and often invisible external and internal factors that influence not only how we behave, but also how we make sense of our behavior.

This is what Hanson calls “situationism.” Using social psychology and related mind sciences to help understand law and legal theory, he suggests that people, ideologies, and laws are shaped by forces quite different from those they imagine. While varying versions of the individual-choice model have served as the basis for most laws, policies, and mainstream legal theories, Hanson says, social psychology, social cognition, cognitive neuroscience, and related social scientific fields have uncovered many ways in which that model is detrimentally incorrect.

Hanson explored some of those ideas in an October 29th lecture at HLS entitled, “The Human Animal, Ideology, the Law, and other Situational Characters.” The lecture commemorated his appointment as the Alfred Smart Professor of Law. (Watch the webcast). The chair was endowed by the Smart Family Foundation, in honor of Alfred Smart, a 20th-century publishing entrepreneur who helped launch Esquire magazine and other publications.

Among those in the audience was Hanson’s colleague, Professor Alan Stone, a psychiatrist and interdisciplinary expert in law and the mind sciences and a renowned film critic. When Stone retires from the HLS faculty, the Smart chair will become the Alan Stone Professorship. Alfred Smart was Stone’s father-in-law.

In his lecture, Hanson incorporated several movie clips to help illustrate his arguments and to recognize Alan Stone’s love of cinema. Referring to a famous dream sequence from Federico Fellini’s “8 ½,” Hanson said: “Our ideology is like a car—it’s an instrument or tool that enables us to get around and gives us the feeling that we’re in control, in the driver’s seat of our lives. [But] we are far more situationally constrained than we ever imagined. Consequently, we’re unable to get where we want to go . . . . In my view, a necessary condition for making better laws is first understanding who we are, what moves us, and what purposes and subconscious motives our ideologies are actually serving.”

Hanson said that the mind sciences behind “situationism” have complex implications for the legal system, and that the law-making process must do a better job of taking into account how individuals make decisions.

Hanson joined the HLS faculty in 1992 after receiving a law degree from Yale. Hanson is an expert in tort and corporate law with a background in law and economics. He is co-director and creator of the Project on Law and Mind Sciences at HLS and contributes regularly to a blog that discusses the legal implications of human decision-making – The Situationist. Hanson received the 1999 Sacks-Freund Award for excellence in teaching and was a finalist for the award in both 2000 and 2006.

In her introduction of Hanson, Dean Elena Kagan ’86 said: “His deep appreciation and concern for the students of this law school knows no real bounds. He is teacher, mentor, and friend to so many students – possibly to more than any other faculty member at this law school.”

In honor of Alan Stone’s interdisciplinary expertise, the chaired professorship is to be given to someone who has “distinction in scholarship related to the intersection of legal studies and one of the following: psychiatry, psychology, medicine, and literary and film studies.”

Stone is currently the Touroff-Glueck Professor of Law and Psychiatry, a joint appointment held with HLS and the Harvard Medical School. Kagan hailed him as “one of the earliest and one of the best interdisciplinary scholars at Harvard,” having forged his interdisciplinary paths among law and psychiatry and law and film studies for almost four decades.

In addition to his work exploring the intersection of law and psychiatry, Stone is a movie critic for the Boston Review, and he recently published a book entitled, “Movies and the Moral Adventure of Life.”

* * *

To watch the webcast of the lecture, click here. To read a Harvard Law Record article summarizing the lecture, click here.

Rick Montgomery and Scott Canon have an interesting piece in the Kansas City Star on how attitudes towards race may change as a result of the first African-American becoming President of the United States. We excerpt the piece below.

* * *

Barack Obama’s ascension to the White House raises anew the question of whether color lines, however faded, still exist in America.

They do — just look at housing patterns, church congregations and prison populations. But will the election of a biracial president — with the blessings of much, but not yet a majority, of white America — find the country concluding that it has finally overcome its race thing?

“One reaction might be, see, a black man can reach the top — why can’t the others?” said Christian Crandall, a social psychologist at the University of Kansas. “There’s no way to avoid that problem,” he said. “It’s the price of success.”

Obama’s election may offer evidence to some white Americans that doors of opportunity once barred by racial bias have swung open.

“I wouldn’t buy the argument because I tend to measure things more by statistics” on persistent economic disparities between races, said Roderick Harrison, a sociologist at the Joint Center for Political and Economic Studies, which focuses on black issues. “It underestimates just how exceptional Obama is and the culmination of being the right person at the right time.”

* * *

Research has shown that people of all races become more accepting of racial and ethnic minorities in positions of power through experience. Someone who has already worked for a black boss, for instance, or had important Latino clients, is less skeptical of their capabilities.

In that way, Harrison said, Obama’s mere presence could have an almost subconscious effect on how Americans view the capabilities of minorities.

For Charles McKinney of Rhodes College in Memphis, Tenn., the thought of “my children growing up seeing a brother as the leader of the free world, that’s priceless.”

But opportunities for better racial understanding can backfire.

As much as white audiences admired TV’s Huxtables of “The Cosby Show,” research in the early 1990s revealed many viewers held the family up as a model to which other black families should — but wouldn’t — aspire.

“Bill Cosby’s white audience didn’t want to be reminded of America’s racial past,” said Sut Jhally, who wrote Enlightened Racism after conducting the studies for the University of Massachusetts at Amherst. “They loved the blackness, but only a certain kind of blackness…

“The unintended message was, ‘If you work hard, you can succeed … and if you fail, the fault must be yours,’ ” said Jhally. “In terms of overall race relations, it was one step forward and two steps back.”

* * *

In a national survey, half the country agreed with the statement that “Irish, Italians, Jewish and other minorities overcame prejudice and worked their way up, blacks should do the same without special favors.” Almost two in five agreed that blacks would be as well off as whites if they “would only try harder.”

A third of whites said that blacks are responsible for most or all of the country’s racial tension. And given a set of words to describe blacks, at least one in five whites agreed with “violent,” “boastful” and “complaining.” One in 10 agreed blacks are “lazy” or “irresponsible.”

So while white America can’t assume all is forgiven, neither can those who have battled racism expect that the mere presence of a black family in the White House protects black shoppers from the glares of sales clerks or racial profiling by police.

“Obama’s victory says we have the ability to transcend race, but we still have a huge opportunity to move ahead and do all that must be done next,” said Gwen Grant, president and chief executive officer of the Urban League of Greater Kansas City. “We can’t just relax.”

This Article examines the strength of arguments concerning the causal connection between racial stigma and affirmative action. In so doing, this article reports and analyzes the results of a survey on internal stigma (feelings of dependency, inadequacy, or guilt) and external stigma (the burden of others’ resentment or doubt about one’s qualifications) for the Class of 2009 at seven public law schools, four of which employed race-based affirmative action policies when the Class of 2009 was admitted and three of which did not use such policies at that time.

Specifically, this Article examines and presents survey findings of 1) minimal, if any, internal stigma felt by minority law students, regardless of whether their schools practiced race-based affirmative action; 2) no statistically significant difference in internal stigma between minority students at affirmative action law schools and non-affirmative action law schools; and 3) no significant impact from external stigma.