For those with the time, we recommend the entire piece — for the rest of you, we have posted some of the most intriguing excerpts.

* * *

“To a neuroscientist, you are your brain; nothing causes your behavior other than the operations of your brain,” [Joshua] Greene says. “If that’s right, it radically changes the way we think about the law. The official line in the law is all that matters is whether you’re rational, but you can have someone who is totally rational but whose strings are being pulled by something beyond his control.” In other words, even someone who has the illusion of making a free and rational choice between soup and salad may be deluding himself, since the choice of salad over soup is ultimately predestined by forces hard-wired in his brain. Greene insists that this insight means that the criminal-justice system should abandon the idea of retribution — the idea that bad people should be punished because they have freely chosen to act immorally — which has been the focus of American criminal law since the 1970s, when rehabilitation went out of fashion. Instead, Greene says, the law should focus on deterring future harms. In some cases, he supposes, this might mean lighter punishments. “If it’s really true that we don’t get any prevention bang from our punishment buck when we punish that person, then it’s not worth punishing that person,” he says. (On the other hand, Carter Snead, the Notre Dame scholar, maintains that capital defendants who are not considered fully blameworthy under current rules could be executed more readily under a system that focused on preventing future harms.)

“You can have a horrendously damaged brain where someone knows the difference between right and wrong but nonetheless can’t control their behavior,” says Robert Sapolsky, a neurobiologist at Stanford. “At that point, you’re dealing with a broken machine, and concepts like punishment and evil and sin become utterly irrelevant. Does that mean the person should be dumped back on the street? Absolutely not. You have a car with the brakes not working, and it shouldn’t be allowed to be near anyone it can hurt.”

Even as these debates continue, some skeptics contend that both the hopes and fears attached to neurolaw are overblown. “There’s nothing new about the neuroscience ideas of responsibility; it’s just another material, causal explanation of human behavior,” says Stephen J. Morse, professor of law and psychiatry at the University of Pennsylvania. “How is this different than the Chicago school of sociology,” which tried to explain human behavior in terms of environment and social structures? “How is it different from genetic explanations or psychological explanations? The only thing different about neuroscience is that we have prettier pictures and it appears more scientific.”

Morse insists that “brains do not commit crimes; people commit crimes” — a conclusion he suggests has been ignored by advocates who, “infected and inflamed by stunning advances in our understanding of the brain . . . all too often make moral and legal claims that the new neuroscience . . . cannot sustain.” He calls this “brain overclaim syndrome” and cites as an example the neuroscience briefs filed in the Supreme Court case Roper v. Simmons to question the juvenile death penalty. “What did the neuroscience add?” he asks. If adolescent brains caused all adolescent behavior, “we would expect the rates of homicide to be the same for 16- and 17-year-olds everywhere in the world — their brains are alike — but in fact, the homicide rates of Danish and Finnish youths are very different than American youths.” Morse agrees that our brains bring about our behavior — “I’m a thoroughgoing materialist, who believes that all mental and behavioral activity is the causal product of physical events in the brain” — but he disagrees that the law should excuse certain kinds of criminal conduct as a result. “It’s a total non sequitur,” he says. “So what if there’s biological causation? Causation can’t be an excuse for someone who believes that responsibility is possible. Since all behavior is caused, this would mean all behavior has to be excused.”

Still, Morse concedes that there are circumstances under which new discoveries from neuroscience could challenge the legal system at its core. “Suppose neuroscience could reveal that reason actually plays no role in determining human behavior,” he suggests tantalizingly. “Suppose I could show you that your intentions and your reasons for your actions are post hoc rationalizations that somehow your brain generates to explain to you what your brain has already done” without your conscious participation. If neuroscience could reveal us to be automatons in this respect, Morse is prepared to agree with Greene and Cohen that criminal law would have to abandon its current ideas about responsibility and seek other ways of protecting society.

Some scientists are already pushing in this direction. In a series of famous experiments in the 1970s and ’80s, Benjamin Libet measured people’s brain activity while telling them to move their fingers whenever they felt like it. Libet detected brain activity suggesting a readiness to move the finger half a second before the actual movement and about 400 milliseconds before people became aware of their conscious intention to move their finger. Libet argued that this leaves 100 milliseconds for the conscious self to veto the brain’s unconscious decision, or to give way to it — suggesting, in the words of the neuroscientist Vilayanur S. Ramachandran, that we have not free will but “free won’t.”

Morse is not convinced that the Libet experiments reveal us to be helpless automatons. But he does think that the study of our decision-making powers could bear some fruit for the law. “I’m interested,” he says, “in people who suffer from drug addictions, psychopaths and people who have intermittent explosive disorder — that’s people who have no general rationality problem other than they just go off.” In other words, Morse wants to identify the neural triggers that make people go postal. “Suppose we could show that the higher deliberative centers in the brain seem to be disabled in these cases,” he says. “If these are people who cannot control episodes of gross irrationality, we’ve learned something that might be relevant to the legal ascription of responsibility.” That doesn’t mean they would be let off the hook, he emphasizes: “You could give people a prison sentence and an opportunity to get fixed.”

* * *

The experiments, conducted by Elizabeth Phelps, who teaches psychology at New York University, combine brain scans with a behavioral test known as the Implicit Association Test, or I.A.T., as well as physiological tests of the startle reflex. The I.A.T. flashes pictures of black and white faces at you and asks you to associate various adjectives with the faces. Repeated tests have shown that white subjects take longer to respond when they’re asked to associate black faces with positive adjectives and white faces with negative adjectives than vice versa, and this is said to be an implicit measure of unconscious racism. Phelps and her colleagues added neurological evidence to this insight by scanning the brains and testing the startle reflexes of white undergraduates at Yale before they took the I.A.T. She found that the subjects who showed the most unconscious bias on the I.A.T. also had the highest activation in their amygdalas — a center of threat perception — when unfamiliar black faces were flashed at them in the scanner. By contrast, when subjects were shown pictures of familiar black and white figures — like Denzel Washington, Martin Luther King Jr. and Conan O’Brien — there was no jump in amygdala activity.
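The reaction-time logic behind the I.A.T. can be sketched in a few lines of code. What follows is a hypothetical, simplified illustration with made-up latencies — not the actual scoring procedure used by Phelps or the I.A.T.'s designers (the published scoring algorithm also trims outlier responses and applies other refinements):

```python
# Simplified, hypothetical sketch of an I.A.T.-style score: compare
# reaction times (in milliseconds) on a "congruent" block (pairings the
# subject finds easy) against an "incongruent" block (pairings that are
# said to conflict with implicit associations). A positive score means
# slower responses on the incongruent block.
from statistics import mean, stdev

def iat_score(congruent_ms, incongruent_ms):
    """Mean latency difference, standardized by the pooled
    standard deviation of all trials."""
    pooled = stdev(congruent_ms + incongruent_ms)
    return (mean(incongruent_ms) - mean(congruent_ms)) / pooled

# Made-up reaction times, for illustration only.
congruent = [640, 655, 630, 660, 645]
incongruent = [720, 735, 710, 740, 725]

print(round(iat_score(congruent, incongruent), 2))
```

The standardization step matters: raw latency differences vary with how fast a subject responds overall, so dividing by the pooled variability makes scores roughly comparable across subjects.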

The legal implications of the new experiments involving bias and neuroscience are hotly disputed. Mahzarin R. Banaji, a psychology professor at Harvard [and Situationist Contributor] who helped to pioneer the I.A.T., has argued that there may be a big gap between the concept of intentional bias embedded in law and the reality of unconscious racism revealed by science. When the gap is “substantial,” she and the U.C.L.A. law professor [and Situationist Contributor] Jerry Kang have argued, “the law should be changed to comport with science” — relaxing, for example, the current focus on intentional discrimination and trying to root out unconscious bias in the workplace with “structural interventions,” which critics say may be tantamount to racial quotas. One legal scholar has cited Phelps’s work to argue for the elimination of peremptory challenges to prospective jurors — if most whites are unconsciously racist, the argument goes, then any decision to strike a black juror must be infected with racism. Much to her displeasure, Phelps’s work has been cited by a journalist to suggest that a white cop who accidentally shot a black teenager on a Brooklyn rooftop in 2004 must have been responding to a hard-wired fear of unfamiliar black faces — a version of the amygdala made me do it.

Phelps herself says it’s “crazy” to link her work to cops who shoot on the job and insists that it is too early to use her research in the courtroom. “Part of my discomfort is that we haven’t linked what we see in the amygdala or any other region of the brain with an activity outside the magnet that we would call racism,” she told me. “We have no evidence whatsoever that activity in the brain is more predictive of things we care about in the courtroom than the behaviors themselves that we correlate with brain function.” In other words, just because you have a biased reaction to a photograph doesn’t mean you’ll act on those biases in the workplace. Phelps is also concerned that jurors might be unduly influenced by attention-grabbing pictures of brain scans. “Frank Keil, a psychologist at Yale, has done research suggesting that when you have a picture of a mechanism, you have a tendency to overestimate how much you understand the mechanism,” she told me. Defense lawyers confirm this phenomenon. “Here was this nice color image we could enlarge, that the medical expert could point to,” Christopher Plourd, a San Diego criminal defense lawyer, told The Los Angeles Times in the early 1990s. “It documented that this guy had a rotten spot in his brain. The jury glommed onto that.”

Other scholars are even sharper critics of efforts to use scientific experiments about unconscious bias to transform the law. “I regard that as an extraordinary claim that you could screen potential jurors or judges for bias; it’s mind-boggling,” I was told by Philip Tetlock, professor at the Haas School of Business at the University of California at Berkeley. Tetlock has argued that split-second associations between images of African-Americans and negative adjectives may reflect “simple awareness of the social reality” that “some groups are more disadvantaged than others.” He has also written that, according to psychologists, “there is virtually no published research showing a systematic link between racist attitudes, overt or subconscious, and real-world discrimination.” (A few studies show, Tetlock acknowledges, that openly biased white people sometimes sit closer to whites than blacks in experiments that simulate job hiring and promotion.) “A light bulb going off in your brain means nothing unless it’s correlated with a particular output, and the brain-scan stuff, heaven help us, we have barely linked that with anything,” agrees Tetlock’s co-author, Amy Wax of the University of Pennsylvania Law School. “The claim that homeless people light up your amygdala more and your frontal cortex less and we can infer that you will systematically dehumanize homeless people — that’s piffle.”

* * *

As the new technologies proliferate, even the neurolaw experts themselves have only begun to think about the questions that lie ahead. Can the police get a search warrant for someone’s brain? Should the Fourth Amendment protect our minds in the same way that it protects our houses? Can courts order tests of suspects’ memories to determine whether they are gang members or police informers, or would this violate the Fifth Amendment’s ban on compulsory self-incrimination? Would punishing people for their thoughts rather than for their actions violate the Eighth Amendment’s ban on cruel and unusual punishment? However astonishing our machines may become, they cannot tell us how to answer these perplexing questions. We must instead look to our own powers of reasoning and intuition, relatively primitive as they may be. As Stephen Morse puts it, neuroscience itself can never identify the mysterious point at which people should be excused from responsibility for their actions because they are not able, in some sense, to control themselves. That question, he suggests, is “moral and ultimately legal,” and it must be answered not in laboratories but in courtrooms and legislatures. In other words, we must answer it ourselves.


In Parts I, II, and III of his recent posts on the Situational Sources of Evil, Phil Zimbardo makes the case that we too readily attribute to an evil person or group what should be, at least in part, attributed to situation. This was a key lesson of Milgram’s obedience experiments as well as Zimbardo’s Stanford Prison Experiment. And that lesson, unfortunately, seems similarly evident in far too many real-world atrocities.

There are numerous reasons, some of which those earlier posts highlighted, why the situationist lesson is an unpopular one. This post suggests another.

Think for a moment about the sort of evil that is so grotesquely apparent right now in The Sudan and Uganda, both of which are in the midst of civil wars — wars that have featured indescribably horrific acts, such as villages ravaged by soldiers who chop off the limbs of children. Perhaps most harrowingly, the “evil-doers” are often children themselves, many of whom are kidnapped and then conscripted into bands of mutilating marauders.

Joseph Kony’s Lord’s Resistance Army, for example, is made up mainly of abducted children who roam northern Uganda, where “many families have lost a child through abduction, or their village . . . [has been] attacked and destroyed, families burned out and/or killed, and harvests destroyed by . . . the Lord’s Resistance Army.”

The plight of Ochola John, pictured below, exemplifies an all-too-common story: his hands, lips, nose, and ears were cut off by members of the Lord’s Resistance Army. It is a difficult image to take in (note, we opted against many other more graphic photos).

Such atrocities have led many in Uganda to question how children could become evil incarnate:

We don’t understand how Kony could have a child soldier slash a fellow child abductee with a machete or make a group of children bite their agemate with their bare teeth till he bleeds to death.

In searching for answers, some have turned to situationist factors:

It is easy to assume that the person who commits such an atrocity is deranged or even inhuman. Sometimes it is the case. But not always. It is possible for a normal individual to commit an abnormal, sick act just because of the situation s/he finds him/herself in, and the training s/he is exposed to.

How could this happen? Zimbardo’s ten-factor list suggests some of the situationist grease that no doubt lubricates the wheels of evil. Kony’s methods and ideology are extreme, to be sure, but they are familiar: saving his country from evil by building a theocracy.

In that way, dispositionism can give way to a weak form of situationism, but only up to a point — a tendency that has elsewhere been called selective situationism or naive situationism. Kony’s evil disposition is the “situation” influencing the impressionable young boys. In the end, we place evil almost exclusively in one or a small number of actors — usually human, but sometimes supernatural. No doubt, Kony is immensely blameworthy, so much so that we, the authors, can scarcely bring ourselves even to suggest that the horrors might have multiple origins, beyond the gruesome actions of the most salient actors involved.

By locating evil ultimately in a person or group, we avoid a disconcerting possibility that there is more to the situation beyond the bad individuals. When evil comes packaged within a few human bodies, it is rendered more tractable, identifiable, and perhaps, in a way, less threatening — very “them,” and very “other.” Such a conception undermines the unsettling possibility that, because of the situation, there may be more “evil actors” behind those that we currently face. Get rid of the bad apples, we imagine, and the rest of the batch will be fine. Perhaps more important, it permits us to ignore the possibility that the barrel may be contaminating. We need not confront any apprehensions that our systems are unjust, the groups we identify with are contributing to or benefitting from that injustice, or that we individually play some causal role in it.

Joseph Kony is said to have abducted 20,000 kids in the last 20 years. But he has done so with minimal resistance from Uganda’s government, and with virtually no intervention from foreign powers.

Is there any line at which we non-salient bystanders of the world, including Americans, begin to bear some share of responsibility for suffering such as that endured by Ochola John? Maybe the answer is “no,” as most of us apparently presume. But maybe it is “yes,” and maybe that line has already been crossed.

We are not making a foreign policy recommendation here. We are simply highlighting a form of blindness that we suspect influences all policy. That is, dispositionism (and motivated attributions generally) helps us push that line of responsibility toward, if not all the way to, the vanishing point — even if it does little to reduce the atrocities themselves. Dispositionism helps us to see the apple, or perhaps the tree, and to miss the orchard and the forest and, perhaps, ourselves.

There are other examples of that tendency of allowing our attributions toward salient (and often despicable) individuals to eclipse any possibility of a more complex, far-reaching causal story. Our criminal justice system is partially built upon it. Consider, also, the widespread response to Susan Sontag’s infamous New Yorker essay, in which she described the 9/11 terrorism not as

a “cowardly” attack on “civilization” or “liberty” or “humanity” or “the free world” but an attack on the world’s self-proclaimed super-power, undertaken as a consequence of specific American alliances and actions. . . . And if the word “cowardly” is to be used, it might be more aptly applied to those who kill from beyond the range of retaliation, high in the sky, than to those willing to die themselves in order to kill others. In the matter of courage (a morally neutral virtue): whatever may be said of the perpetrators of Tuesday’s slaughter, they were not cowards.

Regardless of the veracity of Sontag’s claims, many Americans did not want to hear such a non-affirming interpretation in the wake of the terror. She not only implicated American policies but suggested that perhaps the attackers were not as “beneath us” as many had portrayed.

As one of us summarized in another article (with Situationist contributors Adam Benforado and David Yosifon), many conservative commentators responded to Sontag and her claims with predictable rage and disgust (while most moderates and liberals took cover in the safety of silence).

Charles Krauthammer called Sontag “morally obtuse,” while Andrew Sullivan labeled her “deranged.” John Podhoretz claimed that she exemplified the “hate-America crowd,” that out-group of Americans who are “dripping with contempt for the nation’s politics, its leaders, its economic system and for their foolish fellow citizens.” And Rod Dreher really drove home the point, saying that he wanted “to walk barefoot on broken glass across the Brooklyn Bridge, up to that despicable woman’s apartment, grab her by the neck, drag her down to ground zero and force her to say that to the firefighters.”

We see ourselves as “just,” and don’t like being “implicated” by clear injustice, a discomfort that is often assuaged by looking for the Evil Actor. But when evil continues, even after the evil individuals have been stopped, perhaps we glimpse one reason why, as George Santayana famously put it, “those who cannot remember the past are condemned to repeat it.”


Yesterday’s New York Times included an article by Mireya Navarro on Asian stereotypes in entertainment. The article, titled “Trying To Crack the Top 100,” contained several interesting anecdotes revealing some of the positive and negative effects of Asian stereotypes. Portions of the article are pasted below.

* * *

As a child of Detroit, Harlemm Lee says soulful music runs through his veins. Lee has sung R&B in talent shows, in musicals at Disney World and even on an album he recorded in the 1980s as he pursued a music career after high school.

Then in 2003 he won the NBC reality show “Fame,” gaining national attention and another record contract. Lee thought it was his big break, but he is about to turn 40 this year and is still working as a secretary, still waiting to make it as a singer.

Of all the factors that have shaped his career in a fickle industry, Lee said he is sure about the one that has hurt him most: looking Chinese.

“In terms of finding an advocate in the industry, the Asian thing has been the critical factor,” said Lee, who is of Chinese and Filipino descent. “You don’t fit.”

There are Asian-American stars in sports, movies, television and classical music. But the “Asian thing” is what Lee and many other aspiring Asian-American singers say largely accounts for the lack of Asian-American pop stars.

People in the music industry, including some executives, have no ready explanation, but Asian-American artists and scholars say that racial stereotypes — the image of the studious geek, the perception that someone who looks Asian must be a foreigner — clash with the coolness and born-in-the-U.S.A. authenticity required for pop stardom.

Asian-Americans may be expected to play the violin or know kung fu, some artists and scholars say, but not necessarily to sound like Kanye West or Madonna, or sell like them. The issue came to the fore most recently on “American Idol,” where a Korean-American contestant, Paul Kim, 24, said he was giving music one last shot after many disappointments.

Kim, who sang ballads for the show, was praised by the judges for his “range” and “tonal quality,” but he was among the first four contestants to be voted off by viewers after the first round. While on the show, Kim wrote on his MySpace.com page that “I was told over and over again by countless label execs that if it weren’t for me being Asian, I would’ve been signed yesterday.”

* * *

“There are very talented Asian-Americans out there,” said Michael Hong, founder and chief executive of ImaginAsian Entertainment, a multimedia company that features Asian-American artists. “The only problem is nobody is signing them.”

Some are being signed, but the roster tilts heavily toward mixed-race Asians whose looks are racially ambiguous, like Cassie, an R&B singer of Filipino and African-American descent whose song “Me & U” was one of last year’s hottest summer hits, some Asian-American artists noted.

* * *

Some artists say so much is percolating in the underground that more Asian-American talent is bound to start bubbling up soon.

Natalise, a 22-year-old pop singer of Burmese and Chinese descent whose single “Love Goes On” was a local radio hit in 2002 while she attended Stanford University, has turned her forays into YouTube [see example below], MySpace and her own Web site into bigger exposure. She has had some of her original songs featured on local commercial radio and MTV shows like “Next” and “My Super Sweet 16.”

“I feel that we’re on the brink of something huge and it’s just a matter of time and effort,” said Natalise, who lives in Los Angeles and is recording her third album on her own label.

A talent executive with a major label, who spoke on the condition of anonymity because he was not authorized to speak for his company, said he knew of no “inherent bias” against singers of Asian descent and said he was at a loss to explain why so few make it to the top.

“It’s a matter of who contacts you, who gets representation, who builds a following, who’s out there playing clubs that people hear about,” he said.

* * *

But Asian-American artists face other challenges. Making up only 4 percent of the country’s population, they are too small a market, and too fragmented in language and nationalities, to offer a solid springboard for their aspiring stars the way other ethnic groups have done, said Oliver Wang, a music journalist who teaches about race and popular culture at California State University in Long Beach.

Similarly, there are limited marketing mechanisms at their disposal.

“We don’t have BET,” said Hong of ImaginAsian, referring to the Black Entertainment Television channel. “We don’t have Telemundo, to have these artists be taken seriously.”


Today, Phil Zimbardo posted here on ten effective methods for leading a person to “engage in apparently harmful behavior” — strategies that he said “have parallels to compliance strategies used by ‘influence professionals’ in real-world settings, such as salespeople, cult and military recruiters, media advertisers, and others.” Today on NPR, there was a story on All Things Considered, in which Debbie Elliott interviewed the Vice Fund’s portfolio manager, Charles Norton.

The coincidence of those events led The Situationist Staff to wonder: Could Zimbardo’s list of strategies help explain how Mr. Norton and others justify investment strategies that would otherwise seem immoral? We leave that to our readers to decide.

According to Norton, although the Vice Fund concentrates on “the Alcoholic beverages sector, tobacco, gaming, and aerospace and defense,” “[w]e’re not advocating these activities.” In fact, as Mr. Norton explained, “I don’t smoke, I drink only on occasion, [and] I rarely gamble.”

So why invest in just those products and not others?

There are . . . five common threads that tie investments in these sectors together. One is there’s unvarying demand for their goods and services regardless of economic activity. They are global in nature – you know, people smoke and drink and gamble all over the world. They’re extremely profitable; there are high barriers to entry in these businesses. In the tobacco industry, advertising for cigarettes is illegal in most markets, so there is a powerful advantage to brands that have been around . . . . And one of the most important things that we like about these is that the government is a large beneficiary, particularly in gaming and tobacco. . . . The government has a financial incentive to make sure that these industries flourish.

Ms. Elliott returned several times to the question of why Mr. Norton felt no compunction regarding the harm caused by the products he was helping to finance. His responses included the following:

“The fact of the matter is that whether a company is selling a sneaker, a hamburger, or any other good, if it’s legally manufactured and sold, my job is just to analyze it – just to wear my analyst hat and look at the fundamentals.”

“[A]ll we try to do” “is what is in the best interests of . . . [our] shareholders.”

“When you’re a serious investor, you have to check your emotions at the door. Emotions are the enemy when it comes to making sound investment decisions. So, we don’t come at this with any personal biases. We come at this as a purely objective analyst. And, in our perspective, those types of judgments have no place in the investment process.”

Asked if there was anything that he would not invest in, Norton found no place for scruples in his work. “I wouldn’t invest in companies that don’t have strong fundamentals. . . . It’ll be based on the financials and not my moral judgment . . . .”

Asked one more time how he managed to place moral concerns so completely aside, Norton responded: “That’s what I’m trained to do. I’m a professional money manager. I have to remain objective at all times, because emotions will interfere with making a smart investment decision.”

For the last three years, the Vice Fund has earned an average return of roughly 19%, compared to the 10% return of the S&P. Although the Vice Fund does not advocate “vices,” the following images can be found on their website’s home page.

My first post summarized Stanley Milgram’s famous obedience experiments and some of the other related, more recent studies that it inspired. The second post summarized some of the real-world parallels to Milgram’s findings. This post describes ten lessons from the Milgram studies.

* * *

Milgram crafted his research paradigm to find out what strategies can seduce ordinary citizens to engage in apparently harmful behavior. Many of these methods have parallels to compliance strategies used by “influence professionals” in real-world settings, such as salespeople, cult and military recruiters, media advertisers, and others. Below are ten of the most effective.

1

Prearranging some form of contractual obligation, verbal or written, to control the individual’s behavior in pseudo-legal fashion. In Milgram’s obedience study, subjects publicly agreed to accept the tasks and the procedures.

3

Presenting basic rules to be followed that seem to make sense before their actual use but can then be used arbitrarily and impersonally to justify mindless compliance. The authorities will change the rules as necessary but will insist that rules are rules and must be followed (as the researcher in the lab coat did in Milgram’s experiment).

4

Altering the semantics of the act, the actor, and the action — replacing unpleasant reality with desirable rhetoric, gilding the frame so that the real picture is disguised: from “hurting victims” to “helping the experimenter.” We can see the same semantic framing at work in advertising, where, for example, bad-tasting mouthwash is framed as good for you because it kills germs and tastes like medicine.

5

Creating opportunities for the diffusion of responsibility or abdication of responsibility for negative outcomes, such that the one who acts won’t be held liable. In Milgram’s experiment, the authority figure, when questioned by a teacher, said he would take responsibility for anything that happened to the learner.

6

Starting the path toward the ultimate evil act with a small, seemingly insignificant first step, the easy “foot in the door” that swings open subsequent greater compliance pressures. In the obedience study, the initial shock was only a mild 15 volts. This is also the operative principle in turning good kids into drug addicts with that first little hit or sniff.

7

Having successively increasing steps on the pathway that are gradual, so that they are hardly noticeably different from one’s most recent prior action. “Just a little bit more.”

8

Gradually changing the nature of the authority figure from initially “just” and reasonable to “unjust” and demanding, even irrational. This tactic elicits initial compliance and later confusion, since we expect consistency from authorities and friends. Not acknowledging that this transformation has occurred leads to mindless obedience. And it is part of many date rape scenarios and a reason why abused women stay with their abusing spouses.

9

Making the exit costs high and making the process of exiting difficult; allowing verbal dissent, which makes people feel better about themselves, while insisting on behavioral compliance.

10

Offering a “big lie” to justify the use of any means to achieve the seemingly desirable, essential goal. (In Milgram’s research the justification was that science will help people improve their memory by judicious use of reward and punishment.) In social psychology experiments, this is known as the “cover story”; it is a cover-up for the procedures that follow, which do not make sense on their own. The real-world equivalent is an ideology. Most nations rely on an ideology, typically “threats to national security,” before going to war or suppressing political opposition. When citizens fear that their national security is being threatened, they become willing to surrender their basic freedoms in exchange. Erich Fromm’s classic analysis in Escape from Freedom made us aware of this trade-off, which Hitler and other dictators have long used to gain and maintain power.

Procedures like these are used when those in authority know that few would engage in the endgame without being prepared psychologically to do the unthinkable. But people who understand their own impulses to join with a group and to obey an authority may also be able to withstand those impulses at times when the mandate from outside comes into conflict with their own values and conscience. In the future, when you are in a compromising position where your compliance is at issue, thinking back to these ten stepping-stones to mindless obedience may enable you to step back and not go all the way down the path — their path. A good way to avoid crimes of obedience is to assert one's personal authority and always take full responsibility for one's actions. Resist going on automatic pilot, be mindful of situational demands on you, engage your critical thinking skills, and be ready to admit an error in your initial compliance and to say, "Hell, no, I won't go your way."


By now, many folks are familiar with the implicit social cognition work of Anthony Greenwald and Situationist contributors Mahzarin Banaji and Brian Nosek. The concept of "implicit bias," which can be measured by reaction-time instruments such as the implicit association test (IAT), has already had a substantial impact on the way that we think about race, racism, and race relations. The data reveal that we generally have more biases than we think we do.

Predictably, a backlash of sorts is forming against this work — after all, it’s always disturbing to think that we are not so colorblind or gender-blind after all. But certain complaints are slightly surprising, especially given where they are coming from.

Joe Biden got his presidential campaign off on the wrong foot this week when he called one of his competitors for the Democratic nomination, Barack Obama, “the first mainstream African-American who is articulate and bright and clean and a nice-looking guy.” . . .

So, is Biden a bigot, and should we care in the context of his presidential candidacy (assuming it survives)? Federal laws offer a rationale for concluding that the answer is, not much.

Civil rights law doesn’t prohibit racism by employers. It prohibits discrimination on the basis of race—tangible actions taken by employers, not their bad attitudes. . . .

Yet there are plenty of new tools for the thought police. For instance, the Implicit Association Test, developed by psychologists Mahzarin Banaji and Anthony Greenwald, seeks to identify not just hidden biases, but even unconscious biases.

There’s a lot to respond to here. But for now, I want to focus on only one point: the suggestion that implicit bias researchers are interested in thought control. That is nonsense, a strawman. It is Psychology 101 to separate mental states from action. For lawyers not familiar with the standard typology, psychologists tend to distinguish attitudes from beliefs, sometimes calling negative attitudes “prejudice” and negative beliefs “stereotypes.” More important, they distinguish both attitudes and beliefs from action based on those mental constructs, e.g., “discrimination.” So, when Ford writes that we should be focusing on behavior as most important, no one is in disagreement. It is misleading to suggest otherwise. This is precisely why so many scientific resources are being poured into predictive validity and malleability studies.

Patterson and Authenticity

Here’s another example from Orlando Patterson, sociology professor at Harvard. In an article titled “Our Overrated Inner Self,” in the New York Times, he suggests that we are too absorbed with an “authentic” self, and that implicit bias scientists are pursuing a “gotcha psychology” that contends that implicit measures are the only “authentic” measures that should matter.

Again, a strawman. No responsible implicit bias scholar contends that implicit bias measures are somehow the only real, true attitudes and beliefs. (This interest in “true” or “authentic” also sounds a bit dispositionist, which is another issue.) For example, predictive validity studies suggest that explicit self-reports better predict behavior generally, but in domains influenced by negative attitudes and beliefs, implicit bias measures outperform as predictors.

At bottom, this strawman may be an odd sort of projection. This criticism might come from those who believe that explicit self-reports alone measure “true” attitudes and beliefs, that these reports are the only “true” predictors of behavior, and that these reports are the only “true” bases for moral, social, and legal evaluation. I doubt any serious psychologist or legal scholar thinks this, and the best science suggests that our lives, minds, and behaviors are much too complicated for such a simple diagnosis. Privileging explicit self-reports, even if entirely sincere, as the only thing that matters is not warranted by the best evidence we have. We need to be more behaviorally realistic (I hope to post soon about “behavioral realism”).

A Call for Care and Self-Critical Engagement

Talking about race and justice is difficult enough without strawmen being built and then torn down. I have always been leery of blogs because they can encourage soundbites and oversimplifications. I’m trying to do just the opposite here. On matters of implicit bias, there’s lots to learn and think through on both empirical and normative fronts. Further, most of the interesting questions are genuinely difficult, even without the strawmen. My request to all scientists, legal academics, lawyers, and policy makers working in this domain is to earnestly avoid strawmen.

And if I’ve been guilty of the same in my work (see, e.g., Trojan Horses of Race, Harvard 2005; Fair Measures, California 2006 (with M. Banaji))—and given what Yale psychologist David Armor has shown us about the illusion of objectivity, I’m sure I have—point it out, and I’ll own up to it, as well as commit to avoiding it in the future.

In Part I of this series, we described how Americans pursue happiness by watching heroes like Rocky Balboa and Chris Gardner pursue theirs. We spend $10 per ticket and $8 per popcorn bucket to watch more-or-less fictional stories of the downtrodden rise up through sheer force of will and good life choices. Feels very satisfying.

We have an insatiable appetite for such stories, in part because they tell us what we want to hear: anyone in this country can go from the bottom to the top. The Horatio Alger story continues to sell and sell and sell, because, to paraphrase P.T. Barnum, there is a dispositionist situational character born every minute.

But does this sort of entertainment influence how we perceive law and policy? Absolutely.

The basic scripts for Rocky Balboa and Pursuit of Happyness—just like those for Rudy, Radio, Racing Stripes, Race the Sun, Raise Your Voice, and that’s just the “r”s—are also the foundational scripts employed by most influential policymakers and legal theorists today. Laws, we’ve been told, particularly since Ronald Reagan occupied the Oval Office, should facilitate choice – placing the individual in charge, making the consumer sovereign, and letting power and responsibility fall to the person, while minimizing the role of the collectivist, paternalistic, and intermeddling “regulator” or “social program.” When the state and its laws simply facilitate individual choice, we can be confident that those among us who are holding the long straw drew well, while those stuck with the short straw chose badly.

So how does dispositionism explain inequality, poverty, and the disappearing middle class? Easy: the less equal lack the will, the commitment, the character, the drive, and the heart of a champion. The more equal pursue their happiness with the eye of the tiger. What about the credit problems that seem increasingly to plague so many Americans? No problem: people lack the financial discipline to spend wisely. If they would stop wasting their paychecks on plasma televisions and $150 sneakers, maybe they’d have enough to pay their rent. Okay, but what about the increasing national girth and the ill-health effects associated with the obesity epidemic? Again, the answer can be found in “choice” — specifically the good choices of the thin (but not too thin) and the bad choices of the couch potatoes, video game players, and everyone else too lazy to choose healthy.

Take any inequity or social problem, ask a dispositionist to explain its existence, and you will almost certainly receive a straightforward, pleasantly simplistic, choice-based explanation that attributes most of the blame to the disadvantaged individual — or his parents. And it is this perception of the person that has propelled much of the late twentieth century’s policy scripts of more markets and less regulation, more freedom and lower taxes, more individualism and less collectivism and state.

This person-schema/law-schema connection is explicit in both Rocky and The Pursuit of Happyness. Consider that Rocky Balboa’s biggest single obstacle isn’t his age, or his willingness to train, or even the sincere doubts of his loved ones. No, it’s those pesky government bureaucrats who, at least initially, deny him his license to fight and thus his “right” to pursue his idiosyncratic version of personal happiness. In a verbal counterpunch that draws as many cheers from theater-goers as the actual (fake) fighting does, Rocky delivers these policy-oriented “hurtin’ bombs”:

Rocky Balboa: Yo, don’t I got some rights?

Boxing Commissioner: What rights do you think you’re referring to?

Rocky Balboa: Rights, like in that official piece of paper they wrote down the street there?

Boxing Commissioner: That’s the Bill of Rights.

Rocky Balboa: Yeah, yeah. Bill of Rights. Don’t it say something about going after what makes you happy?

Boxing Commissioner: No, that’s the pursuit of happiness. But what’s your point?

Rocky Balboa: My point is I’m pursuing something and nobody looks too happy about it.

Boxing Commissioner: But . . . we’re just looking out for your interests.

Rocky Balboa: I appreciate that, but maybe you’re looking out for your interests just a little bit more. . . . I mean maybe you’re doing your job but why you gotta stop me from doing mine? Cause if you’re willing to go through all the battling you got to go through to get where you want to get, who’s got the right to stop you? I mean maybe some of you guys got something you never finished, something you really want to do, something you never said to someone, something . . . and you’re told no, even after you paid your dues? Who’s got the right to tell you that, who? Nobody! It’s your right to listen to your gut, it ain’t nobody’s right to say no after you earned the right to be where you want to be and do what you want to do! . . . You know, the older I get the more things I gotta leave behind, that’s life. The only thing I’m asking you guys to leave on the table . . . is what’s right.

Booya!

Chris Gardner is even more pro-individual and anti-state. Unsurprisingly, given the movie’s title, Gardner weaves the dispositionist language of the Declaration of Independence throughout his autobiographical voice-overs. At one point he declares:

It was right then that I started thinking about Thomas Jefferson, the Declaration of Independence, and our right to life, liberty, and the pursuit of happiness, and I remember thinking: how did he know to put the pursuit part in there? That maybe happiness is something we can only pursue, and maybe actually we can never have it, no matter what. How did he know that?

And what is Gardner’s biggest challenge in his personal, private pursuit? Is it when his wife, the mother of his young son, leaves him? Nope. Is it when he has no place to sleep and spends the night on the floor of the subway men’s room with his son’s head on his lap and his meager possessions around them? Try again. Is it when he shows up to a job interview with Dean Witter disheveled and dirty, after spending a night in jail? Uh uh. When he is hit by a car? Not even close; just the opposite, actually: we watch him intrepidly bounce right back up in the middle of traffic, telling the despondent driver who hit him not to worry.

No, Chris Gardner announces that his lowest point is when the I.R.S. seizes $600 of “my money!” from “my bank account” for taxes long unpaid. “How can they do that?” he asks.

The answer to that rhetorical question can be found in the movie’s core value: don’t begrudge the wealthy for their wealth, accept that it is earned and deserved, and go pursue it for yourself at full speed; when you face obstacles, as everyone must, don’t make excuses and don’t ask the government to bail you out. In that vein, the film’s stark contrasts between extravagance and squalor, between smiling and squabbling, between “me being stupid” and “happyness” are not intended to raise questions about whether there is something wrong in the system. They are intended to assure us that the system is fine. The question is whether the individual wants something bad enough. Period.

Not convinced? Just consider how the movie’s “keep yo hands out my pockets” anti-taxation sensibility is never reconciled with the clear absence of shelters for the homeless (at least two of whom are portrayed as mighty blameless), with the public transportation system, deficient as it may be, on which Gardner and son so heavily rely for both transport and shelter, and with the absence of social welfare programs that might have saved a marriage and subsidized the budget-busting childcare.

No, The Pursuit of Happyness is about recognizing that the rich and the poor are equally deserving of their condition. That’s true even though Gardner doesn’t mind stealing a $20 fare from a desperate taxi driver but knows never ever to ask for a single penny from the wealthy businessman who actually accrued the fare. Similarly, Gardner empties his wallet to loan the senior partner in the Dean Witter office, Martin Frohm, $5. The wealth-dripping boss has no trouble asking for money, but Gardner understands that he must be silent about the fact that the loan will break him. Meanwhile, Gardner is repeatedly singled out by his immediate supervisor to fetch coffee and doughnuts, park his car, and tend to those menial tasks suitable for the only black intern in the program. Again, Gardner understands the implicit rules of success: don’t complain, don’t even flinch, in fact, don’t even notice; just work that much harder. Getting the job means getting along. Getting along means going along.

“You want something? Go get it. Period.”

* * *

The next post in this series looks at a few of the situationist lessons of Rocky and The Pursuit of Happyness.


Here are the opening paragraphs of an AP story out of Santa Rosa, California, yesterday:

When a few classmates razzed Rebeka Rice about her Mormon upbringing with questions such as, “Do you have 10 moms?” she shot back: “That’s so gay.”

Those three words landed the high school freshman in the principal’s office and resulted in a lawsuit that raises this question: When do playground insults used every day all over America cross the line into hate speech that must be stamped out?

Stereotyping, hate speech, bullying, the influence of situation (why does so much of this happen on the playground?), personal responsibility, parental responsibility, institutional responsibility: This story touches on all of them.