Archive for the ‘Conflict’ Category

Paul Bloom, Professor of Psychology and Cognitive Science at Yale University and contributing author of the 2012 Annual Review of Psychology, talks about his article “Religion, Morality, Evolution.” How did religion evolve? What effect does religion have on our moral beliefs and moral actions? These questions are related, as some scholars propose that religion has evolved to enhance altruistic behavior toward members of one’s group. But, Bloom argues, while religion has powerfully good moral effects and powerfully bad moral effects, these are due to aspects of religion that are shared by other human practices. There is surprisingly little evidence for a moral effect of specifically religious beliefs.

We know a lot about how the Germans carried out the Holocaust. We know much less about how they felt and what they thought as they did it, how they were affected by what they did, and what made it possible for them to do it. In fact, we know remarkably little about the ordinary Germans who made the Holocaust happen — not the desk murderers in Berlin, not the Eichmanns and Heydrichs, and not Hitler and Himmler, but the tens of thousands of conscripted soldiers and policemen from all walks of life, many of them middle-aged, who rounded up millions of Jews and methodically shot them, one by one, in forests, ravines and ditches, or stuffed them, one by one, into cattle cars and guarded those cars on their way to the gas chambers.

In his finely focused and stunningly powerful book, “Ordinary Men: Reserve Police Battalion 101 and the Final Solution in Poland,” Christopher R. Browning tells us about such Germans and helps us understand, better than we did before, not only what they did to make the Holocaust happen but also how they were transformed psychologically from the ordinary men of his title into active participants in the most monstrous crime in human history. In doing so he aims a penetrating searchlight on the human capacity for utmost evil and leaves us staring at his subject matter with the shock of knowledge and the lurking fear of self-recognition.

* * *

In the end, what disturbs the reader more than the policemen’s escape from punishment is their capacity — as the ordinary men they were, as men not much different from those we know or even from ourselves — to kill as they did.

Battalion 101’s killing wasn’t, as Mr. Browning points out, the kind of “battlefield frenzy” occasionally seen in all wars, when soldiers, having faced death, and having seen their friends killed, slaughter enemy prisoners or even civilians. It was, rather, the cold-blooded fulfillment of German national policy, and involved, for the policemen, a process of accommodation to orders that required them to do things they would never have dreamed they would ever do, and to justify their actions, or somehow reinterpret them, so that they would not see themselves as evil people.

Mr. Browning’s meticulous account, and his own acute reflections on the actions of the battalion members, demonstrate the important effect that the situation had on those men: the orders to kill, the pressure to conform, and the fear that if they didn’t kill they might suffer some kind of punishment or, at least, damage to their careers. In fact, the few who tried to avoid killing got away with it; but most believed, or at least could tell themselves, that they had little choice.

But Mr. Browning’s account also illustrates other factors that made it possible for the battalion’s ordinary men not only to kill but, ultimately, to kill in a routine, and in some cases sadistic, way. Each of these factors helped the policemen feel that they were not violating, or violating only because it was necessary, their personal moral codes.

One such factor was the justification for killing provided by the anti-Semitic rationales to which the policemen had been exposed since the rise of Nazism, rationales reinforced by the battalion’s officers. The Jews were presented not only as evil and dangerous but also, in some way, as responsible for the bombing deaths of German women and children. Another factor was the process of dehumanization: abetted by Nazi racial theories that were embraced by policemen who preferred not to see themselves as killers, Jews were seen as less than people, as creatures who could be killed without the qualms that would be provoked in them were they to kill fellow Germans or even Slavs. It was particularly when the German policemen came across German Jews speaking their own language, especially those from their own city, that they felt a human connection that made it harder to kill them.

The policemen were also helped by the practice of trying not to refer to their activities as killing: they were involved in “actions” and “resettlements.” Moreover, the responsibility wasn’t theirs; it belonged to the authorities — Major Trapp as well as, ultimately, the leaders of the German state — whose orders they were merely carrying out. Indeed, whatever responsibility they did have was diffused by dividing the task into parts and by sharing it with other people and processes. It was shared, first of all, by others in the battalion, some of whom provided cordons so that Jews couldn’t escape and some of whom did the shooting. It was shared by the Trawnikis, who were brought in to do the shooting whenever possible so that the battalion could focus on the roundups. And it was shared, most effectively, by the death camps, which made the men’s jobs immensely easier, since stuffing a Jew into a cattle car, though it sealed his fate almost as surely as a neck shot, left the actual killing to a machine-like process that would take place far away, one for which the battalion members didn’t need to feel personally responsible.

CLEARLY, ordinary human beings are capable of following orders of the most terrible kinds. What stands between civilization and genocide is the respect for the rights and lives of all human beings that societies must struggle to protect. Nazi Germany provided the context, ideological as well as psychological, that allowed the policemen’s actions to happen. Only political systems that recognize the worst possibilities in human nature, but that fashion societies that reward the best, can guard the lives and dignity of all their citizens.

From a very good 2011 NYTimes article by Benedict Carey, here are a few excerpts on some of the psychological dynamics behind cheating:

[P]aradoxically, it’s often an obsession with fairness that leads people to begin cutting corners in the first place.

“Cheating is especially easy to justify when you frame situations to cast yourself as a victim of some kind of unfairness,” said Dr. Anjan Chatterjee, a neurologist at the University of Pennsylvania who has studied the use of prescription drugs to improve intellectual performance. “Then it becomes a matter of evening the score; you’re not cheating, you’re restoring fairness.”

The boilerplate tale of a good soul gone wrong is well known. It begins with small infractions — illegally downloading a few songs, skimming small amounts from the register, lies of omission on taxes — and grows by increments. The experiment becomes a hobby that becomes a way of life. In a recent interview with New York magazine, Bernard Madoff said his Ponzi scheme grew slowly from an investment advisory business that he began as a sideline for certain clients.

This slippery-slope story obscures the process of moving to the dark side; namely, that people subconsciously seek shortcuts more than they realize — and make a deliberate decision when they begin to cheat in earnest.

In a series of recent studies, Dan Ariely of Duke University and his colleagues gave college students opportunities to cheat on a general knowledge test. In one, students were instructed to transfer their answers onto a form with color-in bubbles, to register their official score. Some received bubble sheets with the correct answers seemingly inadvertently shaded in gray, and changed about 20 percent of their answers. A follow-up study demonstrated that they were unaware of the magnitude of their dishonesty. They were cheating without being fully aware of it.

Yet the behavior changes once a clear rule is in place. “If you specifically tell people in these studies not to use the answer key and just sign their name,” said Zoe Chance, a doctoral student at Harvard who worked on some of the experiments, “they won’t look at it.”

David DeSteno, a psychologist at Northeastern University in Boston and co-author of the . . . book “Out of Character,” about deception and other misbehavior, said: “With all of these kinds of decisions there’s a battle between short- and long-term gains, a tension between the more virtuous choice and the less virtuous one. And of course there are outside factors that can sway that arrow to one side or another.”

That is, low-level cheating may be natural and even productive in some situations; the brain naturally seeks useful shortcuts. But most people tend to follow rules they accept as fair, even when they have the opportunity and a strong incentive to break them.

In short, the move from small infractions to a deliberate pattern of deception or fraud is less an incremental slide than a deliberate strategy. And in most people it takes shape for personal, and often very emotional, reasons, psychologists say.

One of the most obvious of these is resentment of an authority or a specific rule. The evidence of this is easy enough to see in everyday life, with people flouting laws about cellphone use, smoking, the wearing of helmets. In studies of workplace behavior, psychologists have found that in situations where bosses are abusive, many employees withhold the unpaid extras that help an organization, like being courteous to customers or helping co-workers with problems.

Yet perhaps the most powerful urge to cheat stems from a deep sense of unfairness, psychologists say. As people first begin to compete and compare themselves with others, as early as middle school, they also begin to learn of others’ hidden advantages. Private tutors. Family money. Alumni connections. A regular golf game with the boss. Against a competitor with such advantages, taking credit for other people’s work at the office is not only easier, it can seem only fair.

Once the cheating starts, it’s natural to impute it to others. “When it comes to negative characteristics, we tend to overestimate how much others have in common with us,” said David Dunning, a psychologist at Cornell University.

That is to say: A corner cutter often begins to think everyone else is cheating after he has started cheating, not before.

“And if they are subsequently rewarded for the extra productivity, they tend to internalize the feeling of pride and view their success as due to inherent ability and not something else they were using,” said Dr. DeSteno.

Finally, in the winner-take-all environment that characterizes many competitive fields, cheating feels like a hedge against that most degrading sensation: being a chump. The fear of finishing out of the money and hearing someone say, “Wait, you mean to tell me you could have and you didn’t?” Psychologists argue that the sensation of being duped — anger, self-blame, bitterness — is such a singular cocktail that it forces an uncomfortable kind of self-awareness.

How much of a fool am I? How did I not see this?

It happens every day to people who resist cheating. Nothing fair about it.

From Situationist friend and Harvard Law School 3L, Kate Epstein, an essay about Monday’s tragedy:

As I hear reactions to the bombings at the marathon on Monday, I find myself agreeing with Glenn Greenwald’s column in The Guardian, titled “The Boston bombing produces familiar and revealing reactions: As usual, the limits of selective empathy, the rush to blame Muslims, and the exploitation of fear all instantly emerge.” Particularly interesting to me are our cognitive limits, as humans, when it comes to empathy. Greenwald writes:

The widespread compassion for yesterday’s victims and the intense anger over the attacks was obviously authentic and thus good to witness. But it was really hard not to find oneself wishing that just a fraction of that compassion and anger be devoted to attacks that the US perpetrates rather than suffers. These are exactly the kinds of horrific, civilian-slaughtering attacks that the US has been bringing to countries in the Muslim world over and over and over again for the last decade, with very little attention paid.

I felt the same way in the aftermath of Monday’s events, but I can also empathize with those who do care more – or at least feel it in a more real way – when the victims of a random act of violence are white, close to home, and so obviously innocent. They, unlike the countless non-white, non-American casualties of the War on Terror, are – for me and many around me – part of our in-group, and our minds actually function in a way that makes us much more easily empathize with them.

Studies have shown that parts of our brain associated with empathy and emotion are more likely to be activated when we observe someone of our own race, as opposed to an out-group member, in pain. This makes sense given research on unconscious bias using implicit association tests, which have been shown to predict real-life behavior outside of the lab.

The good news is that our automatic attitudes are sometimes malleable. Awareness of the differences between our egalitarian values and our implicit attitudes can induce emotional reactions that can motivate behavioral changes and help us be the empathetic and altruistic people we hope to be. On the other hand, lack of awareness combined with an inundation of negative images and stereotypes from commercial media and popular culture can reinforce implicit biases, underscoring the need for education and self-awareness.

In a world with so much violence and pain, it makes sense that we simply could not feel deeply empathetic every time a human being is injured or killed. We rightly feel intense moral outrage that someone would senselessly harm innocent people gathered in Boston yesterday, and yet we do not so easily empathize with victims of drone strikes in Pakistan, most of whom see the bombings as just as random and senseless, against victims just as innocent.

We should forgive ourselves for exhibiting these cognitive limits–after all, we are only human. But we should recognize, in these moments when we do so easily feel sorrow, anger, and compassion, those events which do not normally elicit those emotions, and force ourselves to grapple with the consequences of that fact. When we read dry, mundane news reports about human suffering, when we (rarely) hear body counts of the War on Terror (such as the estimated 122,000 violent, civilian deaths in Iraq thus far), when we are made aware of the latest unnamed drone victims in North Waziristan, let’s try to channel the empathy events like this make us feel, and then let’s turn that empathy into action.

Everyone knows that politics is now so divided in our country that not only do the two sides disagree on the solutions to the country’s problems, they don’t even agree on what the problems are. It’s two versions of the world in collision. This week we hear from people who’ve seen this infect their personal lives. They’ve lost friends. They’ve become estranged from family members. A special pre-election episode of our show.

It turns out nice guys can finish first, and David Rand has the evidence to show it.

Rand, a postdoctoral fellow in Harvard’s Department of Psychology and a lecturer in human evolutionary biology, is the lead author of a new paper, which found that dynamic, complex social networks encourage their members to be friendlier and more cooperative, with the possible payoff coming in an expanded social sphere, while selfish behavior can lead to an individual being shunned from the group and left — literally — on his or her own.

As described this week in the Proceedings of the National Academy of Sciences (PNAS), the research is among the first studies to examine social interaction as a fluid, ever-changing process. Previous studies of complex social networks largely used static snapshots of groups to examine how members were or were not connected. This new approach, Rand said, is the closest scientists have yet come to describing the way the planet’s 7 billion inhabitants interact daily.

“This model is closer to real life; thus the results are closer to real life,” Rand said. “What this is showing is that a key aspect of real-world social networks is the dynamic component. The point of this paper is to say that those networks are always shifting, and they’re not shifting in random ways.

“There are many nasty things that happen between people, but for the most part we are fantastically cooperative,” Rand said. “We do an amazing job of having thousands or even millions of people living in very close quarters in cities all over the world. In a functioning society, things like trade, friendship, even democracy itself require high levels of cooperation, and when everyone does it, you get good collective outcomes.”

“Cooperation is a fascinating topic,” said Sociology Professor and Pforzheimer House Master Nicholas Christakis. “We see cooperation everywhere in the biological and sociological worlds, but it’s actually very hard to explain. Why do creatures, including ourselves, cooperate?

“What our paper shows is that there is a deep relationship between cooperation and social networks. In particular, we found that if you allow people to rewire their social networks, cooperation persists in the population. I believe this paper is the first to show, empirically, how that relationship works. As humans, we do two unique things: We re-shape the social world around us, and in so doing, we create a better place for ourselves by being nice to each other.”

At the outset, Rand said, each player begins with an equal number of points, and is randomly connected with one or more players. As the game progresses, players have the opportunity to be either generous, and pay to give points to each player they are connected with, or be selfish, and do nothing. Following each round, some players are randomly given the opportunity to update their connections, based on whether other players have been generous or selfish.

The findings, Rand said, showed that players re-wired their social networks in intriguing ways that helped both themselves and the group they were in. They were more willing to make new connections or maintain existing connections with those who acted generously, and break connections with those who behaved selfishly.

“Because people have control over who they are interacting with, people are more likely to form connections with people who are cooperative, and much more likely to break those links with people who are not,” Rand said. “Basically, what it boils down to is that you’d better be a nice guy, or else you’re going to get cut off.”

Intriguingly, the study also uncovered a correction mechanism inherent to social groups. Those who were initially noncooperative, Rand said, were found to be twice as likely to become cooperative after being shunned, suggesting that being cut off from the group acts as a sort of internal discipline, ensuring that cooperation remains high within a social network.

“As a result, when you have a network that’s dynamic, you see stable, high levels of cooperation, whereas in a static network you see a steady breakdown of cooperation,” Rand said.
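The game Rand describes has a simple loop structure: a cooperation stage, a rewiring stage, and a correction effect for shunned defectors. Here is a minimal Python sketch of that loop. To be clear, this is not the actual PNAS protocol: the parameter values, the starting network, and the exact rewiring rule are all illustrative assumptions, chosen only to show the shape of the dynamic.

```python
import random

def simulate(n_players=20, rounds=30, rewire_fraction=0.3,
             cost=50, benefit=100, seed=1):
    """Toy dynamic-network cooperation game (illustrative only).

    Each round, cooperators pay `cost` per neighbor to give each
    neighbor `benefit`; defectors do nothing. Afterwards a random
    subset of players may rewire one link: cut a tie to a defector,
    add a tie to a cooperator. Defectors left with no links at all
    switch to cooperating, mimicking the "shunning as discipline"
    effect the study reports.
    """
    rng = random.Random(seed)
    players = list(range(n_players))

    # Start with one random link per player (stored as unordered pairs).
    links = set()
    for i in players:
        j = rng.choice([p for p in players if p != i])
        links.add(frozenset((i, j)))

    # Roughly half the players start out cooperative.
    cooperator = {i: rng.random() < 0.5 for i in players}
    payoff = {i: 0 for i in players}  # tracked to show the accounting

    for _ in range(rounds):
        # Cooperation stage: generosity is costly to the giver.
        for link in links:
            i, j = tuple(link)
            for giver, receiver in ((i, j), (j, i)):
                if cooperator[giver]:
                    payoff[giver] -= cost
                    payoff[receiver] += benefit

        # Rewiring stage: some players update one connection.
        for i in rng.sample(players, int(rewire_fraction * n_players)):
            neighbors = [j for j in players if frozenset((i, j)) in links]
            strangers = [j for j in players
                         if j != i and frozenset((i, j)) not in links]
            defecting = [j for j in neighbors if not cooperator[j]]
            nice = [j for j in strangers if cooperator[j]]
            if defecting:
                links.discard(frozenset((i, rng.choice(defecting))))
            if nice:
                links.add(frozenset((i, rng.choice(nice))))

        # Correction mechanism: shunned defectors turn cooperative.
        for i in players:
            if not any(i in link for link in links) and not cooperator[i]:
                cooperator[i] = True

    return sum(cooperator.values()) / n_players

print(simulate())  # fraction of cooperators after the final round
```

Under these assumptions, the rewiring rule rewards generosity with connections and punishes selfishness with isolation, which is the qualitative pattern Rand describes; the static-network comparison would simply skip the rewiring stage.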

Psychologist Jonathan Haidt asks a simple, but difficult question: why do we search for self-transcendence? Why do we attempt to lose ourselves? In a tour through the science of evolution by group selection, he proposes a provocative answer.

Jonathan Haidt studies how — and why — we evolved to be moral. By understanding more about our moral roots, his hope is that we can learn to be civil and open-minded.

Abstract: Americans are becoming ever more aware of our huge social-class divides, for example in income inequality. Even outside socio-economic status, other forms of status divide us (Fiske, 2011). Status-comparison compels people, even as it stresses, depresses, and divides us. Comparison is only natural, but the collateral damage reveals envy upward and scorn downward, which arguably poison people and their relationships. Based on one of the Stereotype Content Model’s two primary dimensions, status/competence, several experiments – using questionnaire, psychometric, response-time, electromyographic, and neuroimaging data – illustrate the dynamics of envy up and scorn down. All is not lost, however, as other experiments show how to mitigate the effects of envy and scorn.

Earlier this month James Surowiecki wrote an excellent piece, titled “The Fairness Trap,” for the New Yorker. Surowiecki considers some of the ways that the widespread preference for fairness may be contributing to some of the global and local economic woes. Here are a few excerpts from the article.

With Europe’s economic woes dominating the headlines once more, it’s hard not to think of Yogi Berra’s dictum “It’s déjà vu all over again.” As usual, the turmoil centers on Greece, which is in its fifth year of recession and struggling beneath a colossal debt load. This year, in exchange for drastic austerity measures, Greece’s government agreed to an aid package (its second) with the European Union and the International Monetary Fund, totalling $174 billion. But three weeks ago furious Greek voters tossed the ruling parties out of office; attempts to form a coalition government failed, and new elections are scheduled for next month. Now Greek politicians are talking tough about renegotiating, but the E.U., led by Germany, which is the largest contributor to the bailout, says that there will be no more money for Greece if it doesn’t live up to its promises. So policymakers are seriously discussing a so-called Grexit—in which Greece would default on its debts and abandon the euro.

This isn’t an outcome that anyone wants. Even though a devalued currency would make Greece’s exports cheaper and attract tourists, it would do so at a terrible price, destroying huge amounts of wealth and seriously harming the country’s G.D.P. It would be costly for the rest of Europe, too. Greece owes almost half a trillion euros, and containing the damage would likely require the recapitalization of banks, continent-wide deposit insurance (to prevent bank runs), and more aid to Portugal, Spain, and Italy, which seem to be the next countries in line to default. That’s a very high price to pay for getting rid of Greece, and much more expensive than letting it stay.

Rationally, then, this standoff should end with a compromise—relaxing some austerity measures, and giving Greece a little more aid and time to reform. And we may still end up there. But the catch is that Europe isn’t arguing just about what the most sensible economic policy is. It’s arguing about what is fair. German voters and politicians think it’s unfair to ask Germany to continue to foot the bill for countries that lived beyond their means and piled up huge debts they can’t repay. They think it’s unfair to expect Germany to make an open-ended commitment to support these countries in the absence of meaningful reform. But Greek voters are equally certain that it’s unfair for them to suffer years of slim government budgets and high unemployment in order to repay foreign banks and richer northern neighbors, which have reaped outsized benefits from closer European integration. The grievances aren’t unreasonable, on either side, but the focus on fairness, by making it harder to reach any kind of agreement at all, could prove disastrous.

The basic problem is that we care so much about fairness that we are often willing to sacrifice economic well-being to enforce it. Behavioral economists have shown that a sizable percentage of people are willing to pay real money to punish people who are taking from a common pot but not contributing to it. Just to insure that shirkers get what they deserve, we are prepared to make ourselves poorer. Similarly, a famous experiment known as the ultimatum game—one person offers another a cut of a sum of money and the second person decides whether or not to accept—shows that people will walk away from free money if they feel that an offer is unfair. Thus, even when there’s a solution that would leave everyone better off, a fixation on fairness can make agreement impossible.
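The ultimatum game's logic is compact enough to state in a few lines of code. The sketch below is illustrative: the 30-percent rejection threshold is an assumption standing in for the responder's sense of fairness, not a figure from the experimental literature.

```python
def ultimatum(offer, pot=100, fairness_threshold=0.3):
    """One round of the ultimatum game.

    The proposer keeps `pot - offer` and offers `offer` to the
    responder. A purely rational responder would accept any positive
    offer; here the responder rejects offers below
    `fairness_threshold` of the pot, so both players walk away with
    nothing -- literally refusing free money to punish unfairness.
    Returns (proposer_payout, responder_payout).
    """
    if offer >= fairness_threshold * pot:
        return pot - offer, offer   # accepted: both get a share
    return 0, 0                     # rejected: free money refused

# A lowball offer is turned down; a fair split is accepted.
print(ultimatum(10))   # -> (0, 0): 10% of the pot reads as unfair
print(ultimatum(40))   # -> (60, 40): accepted
```

The point Surowiecki draws from the experiment lives in that `return 0, 0` line: a solution exists that leaves both players better off, but a fixation on fairness makes agreement impossible.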

You can see this in the way the U.S. has dealt with the foreclosure crisis. . . . * * *

The fairness problem is exacerbated by the fact that our definition of what counts as fair typically reflects what the economists Linda Babcock and George Loewenstein call a “self-serving bias” * * * [which] leads us to define fairness in ways that redound to our benefit, and to discount information that might conflict with our perspective. This effect is even more pronounced when bargainers don’t feel that they are part of the same community—a phenomenon that psychologists call “social distance.” * * *

Read the entire article, including a discussion of the U.S. mortgage crisis here.

Why have American politics become so polarized? Maybe they haven’t – maybe it’s just you? New research reveals that partisans, especially those on the extremes, overestimate the amount of polarization that actually exists. The phenomenon, called polarization projection, helps us to understand how it is that people on both ends of the political spectrum mistakenly assume that there is a much wider gap between the two sides than there actually is.

Making the problem worse, people at the political extremes – those who have exaggerated views of how polarized the country is – are also the ones who are most politically active. This can end up translating extreme partisans’ mistaken views into the election of politicians who are more extreme than the people they represent, particularly in the context of intra-party primaries (Nate Silver recently documented this effect among Senate Republicans).

When the gap between the two parties appears to be enormous, compromise becomes difficult. We become less likely to see our political adversaries as having the same basic goals as us (like improving the country and the lives of its citizens) while having different opinions of how to achieve those goals. Instead, they become the enemy. And compromising with the enemy is not pragmatic, it’s disloyal.

“I recognize there are times when our country is incredibly polarized in that political sense. Right now is one of those times. The leadership of the Republican Party and the leadership of the Democratic Party are not going to be able to reach compromise on big issues because they are so far apart in principle. My idea of bipartisanship going forward is to make sure that we have such a Republican majority in the U.S. House and U.S. Senate and in the White House, that if there’s going to be bipartisanship, it’s going to be Democrats coming our way, instead of them trying to pull Republicans their way.”

Dick Lugar’s biggest sin, it seems, is that he was occasionally willing to side with Obama and the Democrats. He worked with then-Senator Obama on a bill to secure nuclear material abroad, and voted to confirm President Obama’s Supreme Court nominees, Sonia Sotomayor and Elena Kagan. As Obama himself said in a statement released after Lugar’s defeat, “While Dick and I didn’t always agree on everything, I found during my time in the Senate that he was often willing to reach across the aisle and get things done.” A willingness to compromise meant the end of Senator Lugar, or, as Tea Partiers in Indiana liked to refer to him, “Obama’s favorite Republican.” Another moderate Republican Senator, Maine’s Olympia Snowe, also decided not to seek re-election, saying that she does “not realistically expect the partisanship of recent years in the Senate to change over the short term.”

But let’s get back to the research – what’s the evidence that suggests that it’s the extremists that overestimate the amount of political polarization? . . . [continued]

Situationist friend Dave Nussbaum continues to write terrific posts over at Random Assignments. Below, we have re-blogged portions of his recent post about how President Obama’s support of gay marriage led Republicans to become more opposed to it.

Yesterday, Andrew Sullivan posted a new Washington Post/ABC News poll tracking changes in approval for legalizing same sex marriage. Sullivan noted that following Obama’s announcement this month that his support of equal rights for same sex couples has “evolved” into support for marriage, there has been a rise in support for legalizing gay marriage among Democrats and Independents. Meanwhile, among Republicans the reverse is true:

“As the country as a whole grows more supportive of gay equality, the GOP is headed in the other direction. Republican support for marriage equality has declined a full ten points just this year – a pretty stunning result. Have they changed their mind simply because Obama supports something? In today’s polarized, partisan climate, I wouldn’t be surprised.”

I wouldn’t be surprised either. This is how partisans often react to anything coming from the other side: whatever it is, they don’t like it. Partisans will argue that they are opposed to whatever it is the other side is proposing purely on its merits. We all like to believe that when we evaluate a policy we are responding to the policy’s content, but very often we’re far more influenced by who is proposing it.

For example, in a pair of studies published in 2002, Lee Ross and his colleagues asked Israeli participants to evaluate a peace proposal that was an actual proposal submitted by either the Israeli or the Palestinian side. The trick they played was that, for some participants, they showed them the Israeli proposal and told them it was the Palestinian one, or they showed them the Palestinian proposal and told them it came from the Israeli side (the other half of participants saw a correctly attributed proposal). What they found was that the actual content of the plan didn’t matter nearly as much as whose plan they thought it was. In fact, Israeli participants felt more positively toward the Palestinian plan when they thought it came from the Israeli side than they did toward the Israeli plan when they thought it came from the Palestinians. Let me repeat that: when the plans’ authorship was switched, Israelis liked the Palestinian proposal better than the Israeli one.

The same is true when it comes to Democrats and Republicans. In a series of studies published by Geoffrey Cohen in 2003 (PDF), he asked liberals and conservatives to evaluate both a generous and a stringent proposed welfare policy. Although liberals tend to prefer a generous welfare policy and conservatives tend to prefer a more stringent one, the actual content of the policy mattered far less than who proposed it. Not only were liberal participants perfectly happy to support a stringent policy when it was proposed by their own party (while the reverse was true for conservative participants), neither side was aware of the influence of the source of the policy proposal. So even though their partisan affiliations were more important than the content of the policy, both liberal and conservative participants claimed that they were basing their evaluations of the welfare policy strictly on its content. New research by Colin Tucker Smith and colleagues, published in the current issue of the journal Social Cognition, suggests that the influence of the policy’s source on our evaluation of the policy’s content happens at an automatic level and can happen without our awareness.

So perhaps it should not be terribly surprising that President Obama’s support for marriage equality has led to increased support among Democrats and more opposition from Republicans. . . . [continued]

When it comes to affirmative action, the argument usually focuses on diversity. Promoting diversity, the Supreme Court ruled in 2003, can justify taking race into account.

But some people say this leads to the admission of less qualified candidates over better ones and creates a devil’s choice between diversity and merit.

Not so, says Stanford psychologist Greg Walton. Diversity and meritocracy are not always at odds.

In fact, sometimes it is only by taking race and gender into account that schools and employers can admit and hire the best candidates, Walton argues in a paper slated for publication in the journal Social Issues and Policy Review with co-authors Steven J. Spencer of the University of Waterloo and Sam Erman of Harvard University.

Walton, an assistant professor of psychology, and Spencer plan to present their findings to the Supreme Court in an amicus brief in Fisher v. University of Texas, a case the justices are scheduled to hear next fall and that many court watchers believe threatens to upend affirmative action. (Supreme Court rules bar Erman, who was a recent Supreme Court clerk, from participating in the brief.)

“People have argued that affirmative action is consistent or is not consistent with meritocracy,” Walton said. “Our argument is not that it’s consistent or inconsistent. Our argument is that you need affirmative action to make meritocratic decisions – to get the best candidates.”

The researchers say that people often assume that measures of merit like grades and test scores are unbiased – that they reflect the same level of ability and potential for all students.

Under this assumption, when an ethnic-minority student and a non-minority student have the same high school grades, they probably have the same level of ability and are likely to do equally well in college. When a woman and a man have the same score on a math test, it’s assumed they have the same level of math ability.

The problem is that common school and testing environments create a different psychological experience for different students. This systematically disadvantages negatively stereotyped ethnic minority students like African Americans and Hispanic Americans, as well as girls and women in math and science.

“When people perform in standard school settings, they are often aware of negative stereotypes about their group,” Walton says. “Those stereotypes act like a psychological headwind – they cause people to perform worse. If you base your evaluation of candidates just on performance in settings that are biased, you end up discriminating.”

The conclusion comes out of research on what is called stereotype threat – the worry people have when they risk confirming a negative stereotype about their group. That worry prevents people from performing as well as they can, hundreds of studies have found.

As a consequence, Walton says, “Grades and test scores assessed in standard school settings underestimate the intellectual ability of students from negatively stereotyped groups and their potential to perform well in future settings.”

Walton gives an example of how stereotype threat relates to preferences in admissions or hiring.

A woman and a man each apply to an elite engineering program, he says. The man has slightly better SAT math scores than the woman. He gets accepted to the program, but she does not.

“If stereotype threat on the SAT undermined the woman’s performance and as a consequence caused her SAT score to underestimate her potential, then by not taking that bias into account, you have effectively discriminated against the woman,” Walton says.

Walton and his colleagues argue that schools need to take affirmative steps to level the playing field and to make meritocratic decisions. If the SAT underestimates women’s math ability or the ability of African American students, taking this into account will help schools both admit better candidates and more diverse ones.

While courts have ruled that diversity justifies taking race into account in admissions decisions, justices have not considered meritocracy as a reason for sorting by race.

“Our argument is that it is only by considering race that you can make meritocratic decisions,” Walton says. “It’s a separate argument from the diversity argument.”

Walton’s research provides the justices with another reason for upholding affirmative action.

But confronting legal questions is only part of the issue.

Walton says remedies need to be found in policy, as well. Environments need to be created that are fair and allow people to do well.

“The first step is for organizations to fix their own houses,” he says.

Testing officials should look at how they administer tests and ask what they can do to mitigate the psychological threats that are present in their settings that cause people to do poorly, Walton says.

Schools and employers, he continues, should look into their own internal environments and ask how they can make those environments safe and secure so everyone can do well and stereotypes are off the table.

But if stereotype threat was present in a prior environment, hiring and admissions decisions need to take that into account.

From WKU Public Radio:

WKU Psychology professor Sam McFarland has long been fascinated by individuals who put their lives–and the lives of loved ones–at risk in order to save people of a different race, ethnicity, or religious group. Dr. McFarland has an article that’s set to be published in a social psychology journal called “All Humanity is My Ingroup: A Measure and Studies of ‘Identification with All Humanity.'”

In his paper, Dr. McFarland describes the idea of “identification with all humanity” as the ability to view all peoples of the world as part of a sort of extended family, and to value the lives of people from different backgrounds as much as the lives of those from one’s own.

Dr. McFarland recently sat down with WKU Public Radio to talk about his research. Here are some excerpts from our interview:

WKU Public Radio: How did you become interested in the subject of having empathy for people who are different from yourself?

Sam McFarland: “First of all, I became familiar with a number of very heroic examples of people who, during the Holocaust, went out of their way to save Jews from the Nazis. When my wife and I were on an anniversary trip, I read a very interesting book by Kristen Monroe called The Heart of Altruism, in which she was trying to identify what the critical characteristics were of those who risked their own lives, and sometimes the lives of their family members, to save Jews who were in danger of being killed.

“When she did interviews with those people and interviews with others, she discovered that the critical characteristic seemed to be a sense that all humanity is one family. The feeling transcended nationality, religion, ethnic group, and every other distinction we make about human beings.

“Then I became aware that there were psychologists who had talked about that, such as Alfred Adler and Abraham Maslow. They thought that fully mature human beings transcend the ethnocentrisms that are around them. They care about all humanity–past, present, and future.

“But then I realized that psychology had never really studied this; it had never been measured. So I wanted to see if we could build a rational measure of it, and see if that measure predicts the kinds of things we think it ought to predict.”

Your research paper makes reference to a set of interviews that had been previously done with “Holocaust rescuers”–people who had risked their lives to save Jews during WWII. The interviews found a quality present in the rescuers that became known as “extensivity.” What is that, and how does it factor into what you were researching?

“The particular study you’re referring to was done by a man named Sam Oliner, and his wife, Pearl Oliner. They had been longtime professors at Humboldt State University in California.

“Sam was a 10-year-old Jewish child in Poland during the war. And when the Nazis came to his town, his step-mother told him to run away from the Germans. And Sam ran away, and he was taken in by a Christian family who taught him how to act as a Christian, and say the catechism, for example.

“He survived the war. When he tried to find his parents afterwards, he found out they both had died.

“A number of years later, Sam and his wife decided to go back to Poland and interview a large number of people who had been rescuers like the woman who had saved him. They tried to do comparison studies with those who, you might say, just stood by and watched things happen without doing anything.

“He found that those who had rescued Jews felt a sense of responsibility towards all people, and a sense of empathy towards all people. The feelings transcended whether they were Catholic, or atheist, or communist, or any other thing. It was just part of a sense of who they were.”

Are the characteristics you’re talking about, the empathy towards all people—are those innate characteristics? In other words, is this a matter of something you’re either born with, or you’re not? Can a grown person be taught these sorts of things?

“Those are great questions that we do not have answers to. That’s the sort of thing we need to explore and understand.

“Why is it that some people develop a sense of “all humanity is my ingroup”, whereas perhaps the majority of people do not? We don’t really know.

“I think we can point to certain kinds of precursors. For example, excessive punitiveness and excessive parental neglect are things we know can make a person what Alfred Adler called “self-bound”, making it much more difficult for that person to care about other people.

“So there are certain things that happen in early childhood that can facilitate (empathy), things like parental affection.

“You raised the question if part of it is possibly just genetics. That’s certainly possible. We just simply don’t know at this point.”

Sam McFarland’s article is to be published in a forthcoming edition of The Journal of Personality and Social Psychology.


From Wired:

Jonathan Haidt is a professor of social psychology and author of The Righteous Mind, an examination of the intuitive foundations of morality and its consequences. He has some disgusting stories for you.

Imagine, if you will, a man going to a supermarket, buying a ready-to-cook chicken, taking it home, and having sexual intercourse with it. He then cooks it and eats it.

Or imagine a brother and sister who go on holiday, and end up sleeping together. They feel that it brings them closer, and are very careful with birth control so there’s absolutely no chance of pregnancy.

Don’t worry if you found these stories sick and wrong — most people do. But trying to pin down what exactly is wrong with these stories can be tricky. No one is harmed, the food isn’t wasted, the siblings are happy, yet it’s somehow still wrong. This is “moral dumbfounding”, the strong feeling that something is wrong without clear reasons as to why that is. According to Haidt, this offers a deep insight into human morality, and has profound implications for politics and religion.

Haidt’s studies bear out his message: for every one of us, however rational we think we are, intuition comes first and strategic reasoning second. That is, we rationalise our gut instincts, rather than using reason to reach the best conclusion. So, with the chicken story, you’re left scrabbling around for reasons to explain why something is wrong when you just know that it is. For Haidt, this is something that modern thinking has failed to recognise. “In America there was a long period where we were trying to teach kids critical thinking, and you never hear about it anymore because it didn’t work,” says Haidt.

Haidt sees our reasoning mind and intuition as a rider on top of an elephant, with the rider (reason) serving the elephant (intuition). But he doesn’t necessarily see this as a flaw. “You need to learn how to get the rider and elephant to work together properly. Each has its separate skill, and if you think that the rider is both in charge and deserves to rule, you’re going to find yourself screwing up, and wondering why you keep screwing up. I think maturity and wisdom occur when someone gets good integration between the rider and the elephant — and I picked an elephant rather than a horse because elephants are really big and really smart. If you see a trainer and an elephant working together it’s a beautiful sight.”

Not only do we start with a conclusion and work backwards when making moral judgements, but the moral tenets we use define where we lie on the political spectrum. Broadly, the left makes moral judgements mostly based on harm and fairness, while the right has a broader palette — harm, fairness, loyalty, authority, and sanctity.

So when, for example, David Cameron suggests children should be more deferential, Haidt sees this as textbook: “That’s the authority foundation right there. Respect for authority is an offensive idea to people on the left, but it is quite sensible to social conservatives. It’s speaking directly to the elephant. Did he suggest this because he has really long been upset about the decline of authority, or is he maybe doing this to appeal to the more working-class traditionalist voters, those who vote Labour but are socially conservative at heart?”

But isn’t this simply another typical liberal college professor finding yet another way to attack the right? Haidt says that his work into morality has changed his politics, making him less of a liberal, and more of a centrist: “I’ve really become less enamoured of liberalism and more enamoured of conservatism. I think both are important. It’s a yin-yang thing, you need both and if you let either side run things they’re going to screw it up in very predictable ways.”

Our flawed post-hoc reasoning, our cherry-picking of evidence to suit our instincts, makes us poor policy makers, and creates politics that is tribal, confrontational and ill-suited to solving the world’s problems. “Our reasoning is very good as a press agent and lawyer,” says Haidt, “But we’re so biased, no individual can design social policy just using reason. But once you can accept what reasoning is and what it is designed to do, you can start to design groups and institutions that can do a pretty good job of it. When you put people together, you can think of each person as being like a neuron, and if you put us together in the right way then you can get some very good reasoning coming out of it.”

Haidt’s plea is for us to avoid the demonisation of those we see as morally suspect by understanding the way we reach these moral judgements. Like any evolved mechanism, our brain is a hotchpotch of compromises rather than a perfectly designed machine. Our understanding of others starts with understanding ourselves.

“It’s easy to see how flawed and biased and post-hoc everyone else is. If you realise that it’s true about you too, at least you’ll be a little more modest, and if you’re a little more modest then you’ll at least be a little bit more open to the possibility that you might be wrong. There is some wisdom to be found on all sides, because nobody can see the whole problem.”

Five years ago Jon Hanson and Michael McCann wrote and published the following post about Joseph Kony as part of a series on the situational sources of evil. In light of the attention Kony is now getting (see the YouTube video, “Kony 2012,” here or at the bottom of this post), we thought it might be worth posting again.

* * *

In Parts I, II, and III of his recent posts on the Situational Sources of Evil, Phil Zimbardo makes the case that we too readily attribute to an evil person or group what should be, at least in part, attributed to situation. This was a key lesson of Milgram’s obedience experiments as well as Zimbardo’s Stanford Prison Experiment. And that lesson, unfortunately, seems similarly evident in far too many real-world atrocities.

There are numerous reasons, some of which those earlier posts highlighted, why the situationist lesson is an unpopular one. This post suggests another.

Think for a moment about the sort of evil that is so grotesquely apparent right now in The Sudan and Uganda, both of which are in the midst of civil wars–wars that have featured indescribably horrific acts, such as villages ravaged by soldiers who chop off limbs of children. Perhaps most harrowingly, the “evil-doers” are often children themselves, many of whom are kidnapped and then conscripted into bands of mutilating marauders.

Joseph Kony’s Lord’s Resistance Army, for example, is composed mainly of abducted children who roam northern Uganda, where “many families have lost a child through abduction, or their village . . . [has been] attacked and destroyed, families burned out and/or killed, and harvests destroyed by . . . the Lord’s Resistance Army.”

The plight of Ochola John, pictured below, exemplifies an all-too-common story: his hands, lips, nose, and ears were cut off by members of the Lord’s Resistance Army. It is a difficult image to take in (note, we opted against many other more graphic photos).

Such atrocities have led many in Uganda to question how children could become evil incarnate:

We don’t understand how Kony could have a child soldier slash a fellow child abductee with a machete or make a group of children bite their agemate with their bare teeth till he bleeds to death.

In searching for answers, some have turned to situationist factors:

It is easy to assume that the person who commits such an atrocity is deranged or even inhuman. Sometimes it is the case. But not always. It is possible for a normal individual to commit an abnormal, sick act just because of the situation s/he finds him/herself in, and the training s/he is exposed to.

How could this happen? Zimbardo’s ten-factor list suggests some of the situationist grease that no doubt lubricates the wheels of evil. Kony’s methods and ideology are extreme, to be sure, but they are familiar: saving his country from evil by building a theocracy.

In that way, dispositionism can give way to a weak form of situationism, but only up to a point — a tendency that has elsewhere been called selective situationism or naive situationism. Kony’s evil disposition is the “situation” influencing the impressionable young boys. In the end, we place evil almost exclusively in one or a small number of actors — usually human, but sometimes supernatural. No doubt, Kony is immensely blameworthy, so much so that we, the authors, can scarcely bring ourselves even to suggest that the horrors might have multiple origins, beyond the gruesome actions of the most salient actors involved.

By locating evil ultimately in a person or group, we avoid a disconcerting possibility that there is more to the situation beyond the bad individuals. When evil comes packaged within a few human bodies, it is rendered more tractable, identifiable, and perhaps, in a way, less threatening — very “them,” and very “other.” Such a conception undermines the unsettling possibility that, because of the situation, there may be more “evil actors” behind those that we currently face. Get rid of the bad apples, we imagine, and the rest of the batch will be fine. Perhaps more important, it permits us to ignore the possibility that the barrel may be contaminating. We need not confront any apprehensions that our systems are unjust, the groups we identify with are contributing to or benefitting from that injustice, or that we individually play some causal role in it.

Joseph Kony is said to have abducted 20,000 kids in the last 20 years. But he has done so with minimal resistance from Uganda’s government, and with virtually no intervention from foreign powers.

Is there any line at which we non-salient bystanders of the world, including Americans, begin to bear some share of responsibility for suffering such as that endured by Ochola John? Maybe the answer is “no,” as most of us apparently presume. But maybe it is “yes,” and maybe that line has already been crossed.

We are not making a foreign policy recommendation here. We are simply highlighting a form of blindness that we suspect influences all policy. That is, dispositionism (and motivated attributions generally) helps us push that line of responsibility toward, if not all the way to, the vanishing point — even if it does little to reduce the atrocities themselves. Dispositionism helps us to see the apple, or perhaps the tree, and to miss the orchard and the forest and, perhaps, ourselves.

There are other examples of that tendency of allowing our attributions toward salient (and often despicable) individuals to eclipse any possibility of a more complex, far-reaching causal story. Our criminal justice system is partially built upon it. Consider, also, the widespread response to Susan Sontag’s infamous New Yorker essay, in which she described the 9/11 terrorism not as

a “cowardly” attack on “civilization” or “liberty” or “humanity” or “the free world” but an attack on the world’s self-proclaimed super-power, undertaken as a consequence of specific American alliances and actions. . . . And if the word “cowardly” is to be used, it might be more aptly applied to those who kill from beyond the range of retaliation, high in the sky, than to those willing to die themselves in order to kill others. In the matter of courage (a morally neutral virtue): whatever may be said of the perpetrators of Tuesday’s slaughter, they were not cowards.

Regardless of the veracity of Sontag’s claims, many Americans did not want to hear such a non-affirming interpretation in the wake of the terror. She not only implicated American policies but suggested that perhaps the attackers were not as “beneath us” as many had portrayed.

As one of us summarized in another article (with Situationist contributors Adam Benforado and David Yosifon), many conservative commentators responded to Sontag and her claims with predictable rage and disgust (while most moderates and liberals took cover in the safety of silence).

Charles Krauthammer called Sontag “morally obtuse,” while Andrew Sullivan labeled her “deranged.” John Podhoretz claimed that she exemplified the “hate-America crowd,” that out-group of Americans who are “dripping with contempt for the nation’s politics, its leaders, its economic system and for their foolish fellow citizens.” And Rod Dreher really drove home the point, saying that he wanted “to walk barefoot on broken glass across the Brooklyn Bridge, up to that despicable woman’s apartment, grab her by the neck, drag her down to ground zero and force her to say that to the firefighters.”

We see ourselves as “just,” and don’t like being “implicated” by clear injustice, a discomfort that is often assuaged by looking for the Evil Actor. But when evil continues, even after the evil individuals have been stopped, perhaps we glimpse one reason why, as George Santayana famously put it, “those who cannot remember the past are condemned to repeat it.”


In trying to prevent discrimination and prejudice, many companies adopt a strategy of “colorblindness”—actively trying to ignore racial differences when enacting policies and making organizational decisions. The logic is simple: if we don’t even notice race, then we can’t act in a racist manner.

The problem is that most of us naturally do notice each other’s racial differences, regardless of our employer’s policy.

“It’s so appealing on the surface to think that the best way to approach race is to pretend that it doesn’t exist,” says behavioral psychologist Michael I. Norton, an associate professor at Harvard Business School. “But research shows that it simply doesn’t work. We do notice race, and there’s no way of getting around this fact.”

Several studies by Norton and his colleagues show that attempting to overcome prejudice by ignoring race is an ineffective strategy that—in many cases—only serves to perpetuate bias. In short, bending over backward to ignore race can exacerbate rather than solve issues of race in the workplace.

“Umm, he has pants”

In efforts to be politically correct, people often avoid mentioning race when describing a person, even if that person’s race is the most obvious descriptor. (Comedian Stephen Colbert often pokes fun at this tendency on his TV show, The Colbert Report, claiming that he doesn’t “see color.”) If a manager, for example, is asked which guy Fred is, he or she may be loath to say, “Fred’s Asian,” even if Fred is the only Asian person in the company.

“Instead, it’s, ‘He’s that nice man who works in operations, and, umm, he has hair, and, umm, he has pants,’ ” Norton says. “And it keeps going on until finally someone comes out and asks, ‘Oh, is he Asian?'”

Norton and several colleagues documented this phenomenon in a study that they described in an article for the journal Psychological Science, Color Blindness and Interracial Interaction. The researchers conducted an experiment in which white participants engaged in a two-person guessing game designed—unbeknownst to them—to measure their tendencies toward attempted racial colorblindness.

Each participant was given a stack of photographs, which included 32 different faces. A partner sat across from the participant, looking at one picture that matched a picture from the participant’s stack. The participants were told that the goal of the game was to determine which photo the partner was holding by asking as few yes/no questions as possible—for example, “Is the person bald?”

Half the faces on the cards were black, and the other half white, so asking a yes/no question about skin color was a very efficient way to narrow down the identity of the photo on the partner’s card. But the researchers found that many of the participants completely avoided asking their partners about the skin color of the person in the photograph—especially when paired with a black partner. Some 93 percent of participants with white partners mentioned race during the guessing game, as opposed to just 64 percent who were playing the game with black partners.
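As a rough illustration of why race was such an efficient question here (this sketch is ours, not part of the study): with 16 black and 16 white faces in the stack of 32, a race question halves the candidate set no matter what the answer is, while a question true of only a few faces barely narrows the field on average.

```python
def expected_remaining(total, yes_count):
    """Expected number of candidate photos left after one yes/no question,
    assuming the target photo is equally likely to be any candidate."""
    no_count = total - yes_count
    p_yes = yes_count / total
    # With probability p_yes the answer is "yes" (yes_count remain),
    # otherwise "no" (no_count remain).
    return p_yes * yes_count + (1 - p_yes) * no_count

# A race question splits the 32 faces evenly: 16 candidates remain.
race_split = expected_remaining(32, 16)   # 16.0

# A question true of only 2 faces leaves ~28 candidates on average.
rare_split = expected_remaining(32, 2)    # 2*(2/32) + 30*(30/32) = 28.25

print(race_split, rare_split)
```

Avoiding the race question therefore carried a real cost in the game, which is presumably why the colorblind third graders outscored the adults.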

Backfiring results

Two independent coders were hired to watch videos of the sessions on mute, rating the perceived friendliness of the white participants based on nonverbal cues. Alas, the participants who attempted colorblindness came across as especially unfriendly, often avoiding eye contact with their black partners. And when interviewed after the experiment, black partners reported perceiving the most racial bias among those participants who avoided mentioning race.

“The impression was that if you’re being so weird about not mentioning race, you probably have something to hide,” Norton says.

The researchers repeated the experiment on a group of elementary school children. The third graders often scored higher on the guessing game than grown-ups because, Norton says, they weren’t afraid to ask if the person in the photo was black or white. But many of the fourth and fifth graders avoided mentioning race during the game. As it turns out, racial colorblindness is a social convention that many Americans start to internalize by as young as age 10. “Very early on kids get the message that they are not supposed to acknowledge that they notice people’s race—often the result of a horrified reaction from a parent when they do,” Norton says.

A zero-sum game?

In addition to being an ineffective strategy for managing interracial interactions, racial colorblindness has evolved into an argument against affirmative action policies, an issue Norton addresses in a recent working paper, Racial Colorblindness: Emergence, Practice, and Implications, cowritten with Evan P. Apfelbaum of MIT and Samuel R. Sommers of Tufts University.

“Though once emblematic of the fight for equal opportunity among racial minorities marginalized by openly discriminatory practices, contemporary legal arguments for colorblindness have become increasingly geared toward combating race-conscious policies,” they write. “If racial minority status confers an advantage in hiring and school admissions and in the selection of voting districts and government subcontractors—the argument goes—then Whites’ right for equal protection may be violated.”

In a related article, Whites See Racism as a Zero-Sum Game That They Are Now Losing, Norton and Sommers surveyed 100 white and 100 black respondents about their perceptions of racial bias in recent American history. They found that black respondents reported a large decrease in antiblack bias between the 1950s and the 2000s, but perceived virtually no antiwhite bias in that same period—ever. White respondents, on the other hand, perceived a large decrease in antiblack bias over time, but also a huge increase in antiwhite bias. In fact, on average, white respondents perceive more antiwhite bias than antiblack bias in the twenty-first century.

“It’s very hard to find a metric that suggests that white people actually have a worse time of it than black people,” Norton says. “But this perception is driving the current cultural discourse in race and affirmative action. It’s not just that whites think blacks are getting some unfair breaks, it’s that whites are thinking, ‘I’m actually the victim of discrimination now.'”

Researchers have found a way to study how our brains assess the behavior – and likely future actions – of others during competitive social interactions. Their study, described in a paper in the Proceedings of the National Academy of Sciences, is the first to use a computational approach to tease out differing patterns of brain activity during these interactions, the researchers report.

“When players compete against each other in a game, they try to make a mental model of the other person’s intentions, what they’re going to do and how they’re going to play, so they can play strategically against them,” said University of Illinois postdoctoral researcher Kyle Mathewson, who conducted the study as a doctoral student in the Beckman Institute with graduate student Lusha Zhu and economics professor and Beckman affiliate Ming Hsu, who now is at the University of California, Berkeley. “We were interested in how this process happens in the brain.”

Previous studies have tended to consider only how one learns from the consequences of one’s own actions, called reinforcement learning, Mathewson said. These studies have found heightened activity in the basal ganglia, a set of brain structures known to be involved in the control of muscle movements, goals and learning. Many of these structures signal via the neurotransmitter dopamine.

“That’s been pretty well studied and it’s been figured out that dopamine seems to carry the signal for learning about the outcome of our own actions,” Mathewson said. “But how we learn from the actions of other people wasn’t very well characterized.”

Researchers call this type of learning “belief learning.”

To better understand how the brain processes information in a competitive setting, the researchers used functional magnetic resonance imaging (fMRI) to track activity in the brains of participants while they played a competitive game, called a Patent Race, against other players. The goal of the game was to invest more than one’s opponent in each round to win a prize (a patent worth considerably more than the amount wagered), while minimizing one’s own losses (the amount wagered in each trial was lost). The fMRI tracked activity at the moment the player learned the outcome of the trial and how much his or her opponent had wagered.

A computational model evaluated the players’ strategies and the outcomes of the trials to map the brain regions involved in each type of learning.
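The paper’s actual model isn’t reproduced here, but the two kinds of learning it distinguishes can be sketched with textbook update rules: reinforcement learning adjusts the value of one’s own action based on the payoff it produced, while belief learning adjusts a belief about what the opponent will do and best-responds to it. Everything in the following sketch — the function names, the “low”/“high” wager labels, the learning rate — is a hypothetical illustration, not the study’s model:

```python
# Hypothetical sketch contrasting reinforcement learning with belief
# learning; an illustration of the general distinction, not the model
# used in the study.

def reinforcement_update(q, action, reward, alpha=0.1):
    """Move the value of one's own chosen action toward the reward received."""
    q = dict(q)
    q[action] += alpha * (reward - q[action])
    return q

def belief_update(beliefs, opponent_action, alpha=0.1):
    """Shift one's beliefs about the opponent toward their observed action."""
    beliefs = {a: (1 - alpha) * p for a, p in beliefs.items()}
    beliefs[opponent_action] += alpha
    return beliefs

# Reinforcement learner: only its own payoff matters.
q = {"low": 0.0, "high": 0.0}
q = reinforcement_update(q, "high", reward=1.0)

# Belief learner: the opponent's observed wager matters, even on rounds
# where the learner's own payoff carries little information.
beliefs = {"low": 0.5, "high": 0.5}
beliefs = belief_update(beliefs, "high")

print(q["high"], beliefs["high"])
```

The point of the contrast is that the two learners use different signals — own reward versus opponent behavior — which is what lets a computational model separate their neural signatures in the fMRI data.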

“Both types of learning were tracked by activity in the ventral striatum, which is part of the basal ganglia,” Mathewson said. “That’s traditionally known to be involved in reinforcement learning, so we were a little bit surprised to see that belief learning also was represented in that area.”

Belief learning also spurred activity in the rostral anterior cingulate, a structure deep in the front of the brain. This region is known to be involved in error processing, regret and “learning with a more social and emotional flavor,” Mathewson said.

The findings offer new insight into the workings of the brain as it is engaged in strategic thinking, Hsu said, and may aid the understanding of neuropsychiatric illnesses that undermine those processes.

“There are a number of mental disorders that affect the brain circuits implicated in our study,” Hsu said. “These include schizophrenia, depression and Parkinson’s disease. They all affect these dopaminergic regions in the frontal and striatal brain areas. So to the degree that we can better understand these ubiquitous social functions in strategic settings, it may help us understand how to characterize and, eventually, treat the social deficits that are symptoms of these diseases.”

Cruelty, violence, badness… This episode of Radiolab, we wrestle with the dark side of human nature, and ask whether it’s something we can ever really understand, or fully escape.

We begin with a chilling statistic: 91% of men and 84% of women have fantasized about killing someone. We take a look at one particular fantasy lurking behind these numbers, and wonder what this shadow world might tell us about ourselves and our neighbors. Then, we reconsider what Stanley Milgram's famous experiment really revealed about human nature (it's both better and worse than we thought). Next, we meet a man who scrambles our notions of good and evil: chemist Fritz Haber, who won a Nobel Prize in 1918…around the same time officials in the US were calling him a war criminal. And we end with the story of a man who chased one of the most prolific serial killers in US history, then got a chance to ask him the question that had haunted him for years: why?

From The Daily Princetonian:

Failure in the part of the brain that controls social functions could explain why regular people might commit acts of ruthless violence, according to a new study by a University research team.

A particular network in the brain is normally activated when we meet someone, empathize with him and think about his experiences.

However, MRI technology showed that when a person encounters someone he deems a drug addict, homeless person or anyone he finds repulsive, parts of this network may fail to activate, creating a pathway to “dehumanized perception” — a failure to acknowledge others’ thoughts and experiences.

According to the study, this process of dehumanizing victims could explain how propaganda portraying Jews in Nazi Germany as vermin and Tutsis in Rwanda as cockroaches led to genocide.

“We all dehumanize other people to some extent,” psychology professor [and Situationist Contributor] Susan Fiske said in an email, noting that it is impossible to delve into the mind of every person we pass.

“That being said, we have shown that people can rehumanize a group they might normally ignore, just by thinking about their preferences, as when a soup kitchen worker thinks about a homeless person’s food preferences.”

Earlier work from the team dealt with social cognition (how individuals perceive the thoughts of others) in a study that had individuals think about a day in the life of another person.

The new research built on this idea by also examining the brain network involved in disgust, attention and cognitive control.

To collect their data, the scientists had 119 undergraduates at the University complete judgment and decision-making surveys as they looked at images of individuals such as a firefighter, female college student, elderly man, disabled woman, homeless woman and male drug addict.

This exercise sought to study how the network in the brain involved in social cognition reacted to common emotions shared by participants about the people in the images.

The researchers found that parts of the network in the brain did not activate when participants viewed the images of drug addicts, homeless people and immigrants.

“We all have the capacity to engage in dehumanized perception; it’s not just reserved for serial killers,” study co-author Lasana Harris said in an email. “There are many routes to dehumanization, and different people may use different routes.”

One such route, according to Harris, may be to avoid thinking about the suffering of others — people who dehumanize homeless people may do this.

Another route could be to view someone as a means to an end. Sports fans may engage in this when they think about trading a favorite player to another team.

Fiske and Harris plan to replicate the study on imprisoned psychopaths, and are continuing to explore the different routes to dehumanized perception.

Why do people commit atrocities? What is responsible for brutality and the cold-blooded murder of innocents carried out by Nazis, the Hutu in Rwanda, or the United States against the Vietnamese people and, more recently, much of the civilian population of Iraq? Some scientists believe they have found the answer.

ScienceDaily reports (“Brain’s Failure to Appreciate Others May Permit Human Atrocities,” 12-14-2011) that the part of the brain responsible for social interaction with others may malfunction, resulting in a callousness that leads to inhumane actions toward others. Scientists at Duke and Princeton have hypothesized, in a recent study, that this brain area can “disengage” when people encounter others they think are “disgusting,” and that the resulting violence perpetrated against them is due to thinking these objectified others have no “thoughts and feelings.”

The study, according to ScienceDaily, considers this a “shortcoming” which could account for the genocide and torture of other peoples. Examples of this kind of objectification can be seen in the calling of Jews “vermin” by the Nazis, the Tutsi “cockroaches” by the Hutu, and the American habit of calling others “gooks” (as well as other unflattering terms).

Lasana Harris (Duke) says, “When we encounter a person, we usually infer something about their minds [do they have more than one?]. Sometimes, we fail to do this, opening up the possibility that we do not perceive the person as fully human.” I wonder about this. What is meant by fully human? Surely the Hutu, for example, who had lived with the Tutsi for centuries, did not really fail to infer that they had “minds.”

The scientists conducting this study practice something called “social neuroscience,” which seems to consist of showing different people pictures while they are undergoing an MRI and then drawing conclusions from which areas of the brain do or do not “light up” when they are asked questions about those pictures. They discovered that an area of the brain dealing with “social cognition” (i.e., feelings, thoughts, empathy, etc.) “failed to engage” when pictures of homeless people, drug addicts, and others “low on the social ladder” were shown.

Susan Fiske (Princeton) remarked, “We need to think about other people’s experience. It’s what makes them fully human to us.” ScienceDaily adds that the researchers were struck by the fact that “people will easily ascribe social cognition (a belief in an internal life, such as emotions) to animals and cars, but will avoid making eye contact with the homeless panhandler in the subway.”