Janice Nadler and Mary-Hunter McDonnell recently posted their paper, “Moral Character, Motive, and the Psychology of Blame” (forthcoming Cornell Law Review) on SSRN. Here’s the abstract.

Blameworthiness, in the criminal law context, is conceived as the carefully calculated end product of discrete judgments about a transgressor’s intentionality, causal proximity to harm, and the harm’s foreseeability. Research in social psychology, on the other hand, suggests that blaming is often intuitive and automatic, driven by a natural impulsive desire to express and defend social values and expectations. The motivational processes that underlie psychological blame suggest that judgments of legal blame are influenced by factors the law does not always explicitly recognize or encourage. In this Article we focus on two highly related motivational processes – the desire to blame bad people and the desire to blame people whose motive for acting was bad. We report three original experiments that suggest that an actor’s bad motive and bad moral character can increase not only perceived blame and responsibility, but also perceived causal influence and intentionality. We show that people are motivated to think of an action as blameworthy, causal, and intentional when they are confronted with a person who they think has a bad character, even when the character information is totally unrelated to the action under scrutiny. We discuss implications for doctrines of mens rea definitions, felony murder, inchoate crimes, rules of evidence, and proximate cause.

Joe Keohane wrote an outstanding article, “How Facts Backfire: Researchers discover a surprising threat to democracy: our brains,” for the Boston Globe last week. Here are some excerpts.

* * *

It’s one of the great assumptions underlying modern democracy that an informed citizenry is preferable to an uninformed one. “Whenever the people are well-informed, they can be trusted with their own government,” Thomas Jefferson wrote in 1789. . . . Mankind may be crooked timber, as Kant put it, uniquely susceptible to ignorance and misinformation, but it’s an article of faith that knowledge is the best remedy. If people are furnished with the facts, they will be clearer thinkers and better citizens. If they are ignorant, facts will enlighten them. If they are mistaken, facts will set them straight.

In the end, truth will out. Won’t it?

Maybe not. Recently, a few political scientists have begun to discover a human tendency deeply discouraging to anyone with faith in the power of information. It’s this: Facts don’t necessarily have the power to change our minds. In fact, quite the opposite. In a series of studies in 2005 and 2006, researchers at the University of Michigan found that when misinformed people, particularly political partisans, were exposed to corrected facts in news stories, they rarely changed their minds. In fact, they often became even more strongly set in their beliefs. Facts, they found, were not curing misinformation. Like an underpowered antibiotic, facts could actually make misinformation even stronger.

This bodes ill for a democracy, because most voters — the people making decisions about how the country runs — aren’t blank slates. They already have beliefs, and a set of facts lodged in their minds. The problem is that sometimes the things they think they know are objectively, provably false. And in the presence of the correct information, such people react very, very differently than the merely uninformed. Instead of changing their minds to reflect the correct information, they can entrench themselves even deeper.

“The general idea is that it’s absolutely threatening to admit you’re wrong,” says political scientist Brendan Nyhan, the lead researcher on the Michigan study. The phenomenon — known as “backfire” — is “a natural defense mechanism to avoid that cognitive dissonance.”

These findings open a long-running argument about the political ignorance of American citizens to broader questions about the interplay between the nature of human intelligence and our democratic ideals. Most of us like to believe that our opinions have been formed over time by careful, rational consideration of facts and ideas, and that the decisions based on those opinions, therefore, have the ring of soundness and intelligence. In reality, we often base our opinions on our beliefs, which can have an uneasy relationship with facts. And rather than facts driving beliefs, our beliefs can dictate the facts we choose to accept. They can cause us to twist facts so they fit better with our preconceived notions. Worst of all, they can lead us to uncritically accept bad information just because it reinforces our beliefs. This reinforcement makes us more confident we’re right, and even less likely to listen to any new information. And then we vote.

This effect is only heightened by the information glut, which offers — alongside an unprecedented amount of good information — endless rumors, misinformation, and questionable variations on the truth. In other words, it’s never been easier for people to be wrong, and at the same time feel more certain that they’re right.

“Area Man Passionate Defender Of What He Imagines Constitution To Be,” read a recent Onion headline. Like the best satire, this nasty little gem elicits a laugh, which is then promptly muffled by the queasy feeling of recognition. The last five decades of political science have definitively established that most modern-day Americans lack even a basic understanding of how their country works. In 1996, Princeton University’s Larry M. Bartels argued, “the political ignorance of the American voter is one of the best documented data in political science.”

On its own, this might not be a problem: People ignorant of the facts could simply choose not to vote. But instead, it appears that misinformed people often have some of the strongest political opinions. A striking recent example is a study conducted in 2000 by James Kuklinski of the University of Illinois at Urbana-Champaign, an influential experiment in which more than 1,000 Illinois residents were asked questions about welfare — the percentage of the federal budget spent on welfare, the number of people enrolled in the program, the percentage of enrollees who are black, and the average payout. More than half indicated that they were confident that their answers were correct — but in fact only 3 percent of the people got more than half of the questions right. Perhaps more disturbingly, the ones who were the most confident they were right were by and large the ones who knew the least about the topic. (Most of these participants expressed views that suggested a strong antiwelfare bias.)

Studies by other researchers have observed similar phenomena when addressing education, health care reform, immigration, affirmative action, gun control, and other issues that tend to attract strong partisan opinion. Kuklinski calls this sort of response the “I know I’m right” syndrome, and considers it a “potentially formidable problem” in a democratic system. “It implies not only that most people will resist correcting their factual beliefs,” he wrote, “but also that the very people who most need to correct them will be least likely to do so.”

What’s going on? How can we have things so wrong, and be so sure that we’re right?

* * *

To read the rest of the article, including Keohane’s answers to those questions, click here.

Are judges’ decisions more likely to be based on personal inclinations or legal authority? The answer, Eileen Braman argues, is both. Law, Politics, and Perception brings cognitive psychology to bear on the question of the relative importance of norms of legal reasoning versus decision makers’ policy preferences in legal decision-making. While Braman acknowledges that decision makers’ attitudes—or, more precisely, their preferences for policy outcomes—can play a significant role in judicial decisions, she also believes that decision makers’ conviction that they must abide by accepted rules of legal analysis significantly limits the role of preferences in their judgments. To reconcile these competing factors, Braman posits that judges engage in “motivated reasoning,” a biased process in which decision makers are unconsciously predisposed to find legal authority that is consistent with their own preferences more convincing than authority that cuts against them. But Braman also provides evidence that the scope of motivated reasoning is limited. Objective case facts and accepted norms of legal reasoning can often inhibit decision makers’ ability to reach conclusions consistent with their preferences.

Although a substantial empirical literature has found associations between judges’ political orientation and their judicial decisions, the nature of the relationship between policy preferences and constitutional reasoning remains unclear. In this experimental study, law students were asked to determine the constitutionality of a hypothetical law, where the policy implications of the law were manipulated while holding all legal evidence constant. The data indicate that, even with an incentive to select the ruling best supported by the legal evidence, liberal participants were more likely to overturn laws that decreased taxes than laws that increased taxes. The opposite pattern held for conservatives. The experimental manipulation significantly affected even those participants who believed their policy preferences had no influence on their constitutional decisions.

U.S. Swimmer Michael Phelps just won his 8th gold medal of the Beijing Olympics tonight, the 14th gold of his career. These are feats that have never been accomplished before, and it’s hard to argue with the conclusion that his is the greatest Olympic performance of all time. Some in the sporting world (and beyond) are also calling Phelps the greatest athlete of all time. But not so fast—a number of psychological considerations suggest that the pundits (and public) are likely getting a bit carried away.

Before I go any further, let me make one thing clear for the record. What Phelps has done is extraordinary and unprecedented. . . .

* * *

But why would I suggest that Phelps might not truly be the “greatest athlete” ever? . . . I can think of at least three relevant psychological issues:

First, there’s good reason to believe that a variation of the availability heuristic is at play here. This just happened. . . .

So if I ask you to name great athletes, whose name is readily available to you at the moment? Phelps, of course. More generally, even beyond the domain of sports, I’d argue that people are typically lousy at judging “the greatest ever” in any area, due to the availability heuristic among other factors. . . .

Second, in addition to availability, there’s also a self-motivated reason for us to see Phelps deemed the greatest ever. Because we were able to watch Phelps’ triumph and because we’ll have stories to tell about what we saw in these Olympics, we’re able to perceive a personal connection to what he’s done that goes so far as to make us feel good about ourselves.

* * *

Finally, I think there’s also a compelling argument to be made that those who would call Phelps the greatest ever are doing what we humans often do in perceiving the world, namely not giving sufficient weight to the situational factors at play. . . .

[T]his debate is being pitched in largely dispositional terms (i.e., is he the greatest *athlete* ever, as opposed to is this the greatest athletic *performance* ever). And what I really mean to suggest is along the lines of the argument I made in a previous post, namely that important aspects of situations in daily life often escape our attention. In the case of Phelps, he has certainly had a terrific Olympics (now, that might be the greatest understatement of the century). But he also competes in a sport that presents its elite competitors with the opportunity to rack up multiple medals. Swimmers can compete in races of varying distances. There are races in 4 different strokes, as well as individual medleys combining strokes. Then there are relays as well. Is Mark Spitz the second-greatest athlete of all time?

The greatest of basketball and water polo players have a chance at 1 medal in an Olympics. Same with boxers and wrestlers. Track and field stars have more, but still not as many as swimmers. Consider Carl Lewis’ 1984 performance, when he won gold in the 100m, 200m, 4 x 100m relay, and long jump. Was Phelps’ 2008 demonstrably better than that? It’s hard to say. I’m quite sure this last argument will annoy the swimming fans out there, but what if Lewis had been afforded the same opportunities as Phelps to cover different distances in different ways? Swimmers have races in backstroke, breaststroke, butterfly, and freestyle; how many medals could Lewis have won if he could’ve entered the 100m gallop, the 100m skip, and the 100m crabwalk?

OK, so you might resist that last analogy. But the crabwalk would be pretty fun to watch, wouldn’t it? And the bigger point is that Phelps’ historical milestone was attributable to a number of factors: his phenomenal training regimen, his unsurpassed drive to win, his genetic gifts, and more. But he also owes at least part of his title as greatest Olympian ever to the current set-up of the Games, which affords swimmers more opportunities to medal than most other athletes. To ignore this fact and crown Phelps greater than Lewis, Jesse Owens, Eric Heiden, Sonja Henie, Al Oerter, and others seems impulsive. Not to mention, of course, all the non-Olympic athletes who certainly merit consideration for the title of greatest ever.

* * *

To read the entire post, which may well be the greatest post ever, click here.

We recently published a post called the “Moral Psychology Primer,” which briefly highlighted the emerging work of several prominent moral psychologists, including Professor Jonathan Haidt from UVA. Haidt’s important work is relevant to law, morality, and positive psychology – all topics of interest to The Situationist. We thought it made sense, therefore, to follow up the primer with some choice excerpts from Jon Haidt’s terrific book, The Happiness Hypothesis. (We are grateful to Professor Haidt for his assistance in selecting some of these excerpts.)

* * *

I first rode a horse in 1991, in Great Smoky Mountains National Park, North Carolina. I’d been on rides as a child where some teenager led the horse by a short rope, but this was the first time it was just me and a horse, no rope. I wasn’t alone—there were eight other people on eight other horses, and one of the people was a park ranger—so the ride didn’t ask much of me. There was, however, one difficult moment. We were riding along a path on a steep hillside, two by two, and my horse was on the outside, walking about three feet from the edge. Then the path turned sharply to the left, and my horse was heading straight for the edge. I froze. I knew I had to steer left, but there was another horse to my left and I didn’t want to crash into it. I might have called out for help, or screamed, “Look out!”; but some part of me preferred the risk of going over the edge to the certainty of looking stupid. So I just froze. I did nothing at all during the critical five seconds in which my horse and the horse to my left calmly turned to the left by themselves.

As my panic subsided, I laughed at my ridiculous fear. The horse knew exactly what she was doing. She’d walked this path a hundred times, and she had no more interest in tumbling to her death than I had. She didn’t need me to tell her what to do, and, in fact, the few times I tried to tell her what to do she didn’t much seem to care. I had gotten it all so wrong because I had spent the previous ten years driving cars, not horses. Cars go over edges unless you tell them not to.

Human thinking depends on metaphor. We understand new or complex things in relation to things we already know. For example, it’s hard to think about life in general, but once you apply the metaphor “life is a journey,” the metaphor guides you to some conclusions: You should learn the terrain, pick a direction, find some good traveling companions, and enjoy the trip, because there may be nothing at the end of the road. It’s also hard to think about the mind, but once you pick a metaphor it will guide your thinking.

* * *

Modern theories about rational choice and information processing don’t adequately explain weakness of the will. The older metaphors about controlling animals work beautifully. The image that I came up with for myself, as I marveled at my weakness, was that I was a rider on the back of an elephant. I’m holding the reins in my hands, and by pulling one way or the other I can tell the elephant to turn, to stop, or to go. I can direct things, but only when the elephant doesn’t have desires of his own. When the elephant really wants to do something, I’m no match for him.

* * *

The point of these studies is that moral judgment is like aesthetic judgment. When you see a painting, you usually know instantly and automatically whether you like it. If someone asks you to explain your judgment, you confabulate. You don’t really know why you think something is beautiful, but your interpreter module (the rider) is skilled at making up reasons, as Gazzaniga found in his split-brain studies. You search for a plausible reason for liking the painting, and you latch on to the first reason that makes sense (maybe something vague about color, or light, or the reflection of the painter in the clown’s shiny nose). Moral arguments are much the same: Two people feel strongly about an issue, their feelings come first, and their reasons are invented on the fly, to throw at each other. When you refute a person’s argument, does she generally change her mind and agree with you? Of course not, because the argument you defeated was not the cause of her position; it was made up after the judgment was already made. If you listen closely to moral arguments, you can sometimes hear something surprising: that it is really the elephant holding the reins, guiding the rider. It is the elephant who decides what is good or bad, beautiful or ugly. Gut feelings, intuitions, and snap judgments happen constantly and automatically . . . , but only the rider can string sentences together and create arguments to give to other people. In moral arguments, the rider goes beyond being just an advisor to the elephant; he becomes a lawyer, fighting in the court of public opinion to persuade others of the elephant’s point of view.

* * *

In my studies of moral judgment, I have found that people are skilled at finding reasons to support their gut feelings: The rider acts like a lawyer whom the elephant has hired to represent it in the court of public opinion.

One of the reasons people are often contemptuous of lawyers is that they fight for a client’s interests, not for the truth. To be a good lawyer, it often helps to be a good liar. Although many lawyers won’t tell a direct lie, most will do what they can to hide inconvenient facts while weaving a plausible alternative story for the judge and jury, a story that they sometimes know is not true. Our inner lawyer works in the same way, but, somehow, we actually believe the stories he makes up. To understand his ways we must catch him in action; we must observe him carrying out low-pressure as well as high-pressure assignments.

* * *

Studies of everyday reasoning show that the elephant is not an inquisitive client. When people are given difficult questions to think about—for example, whether the minimum wage should be raised—they generally lean one way or the other right away, and then put a call in to reasoning to see whether support for that position is forthcoming. . . . Most people gave no real evidence for their positions, and most made no effort to look for evidence opposing their initial positions. David Perkins, a Harvard psychologist who has devoted his career to improving reasoning, has found the same thing. He says that thinking generally uses the “makes-sense” stopping rule. We take a position, look for evidence that supports it, and if we find some evidence—enough so that our position “makes sense”—we stop thinking. But at least in a low-pressure situation such as this, if someone else brings up reasons and evidence on the other side, people can be induced to change their minds; they just don’t make an effort to do such thinking for themselves.

* * *

Studies of “motivated reasoning” show that people who are motivated to reach a particular conclusion are even worse reasoners than those in Kuhn’s and Perkins’s studies, but the mechanism is basically the same: a one-sided search for supporting evidence only. . . . Over and over again, studies show that people set out on a cognitive mission to bring back reasons to support their preferred belief or action. And because we are usually successful in this mission, we end up with the illusion of objectivity. We really believe that our position is rationally and objectively justified.

Ben Franklin, as usual, was wise to our tricks. But he showed unusual insight in catching himself in the act. Though he had been a vegetarian on principle, on one long sea crossing the men were grilling fish, and his mouth started watering:

I balanc’d some time between principle and inclination, till I recollected that, when the fish were opened, I saw smaller fish taken out of their stomachs; then thought I, “If you eat one another, I don’t see why we mayn’t eat you.” So I din’d upon cod very heartily, and continued to eat with other people, returning only now and then occasionally to a vegetable diet.

Franklin concluded: “So convenient a thing it is to be a reasonable creature, since it enables one to find or make a reason for every thing one has a mind to do.”