Archive for the ‘Choice Myth’ Category

Undergraduates packed Science Center E on Monday to hear two of Harvard’s leading social scientists discuss the way that humans make decisions, and whether having more choices really makes us happier.

The event, “What is Your N? A Personality Test for 4 AM Philosophers,” featured a conversation between social psychologist Dan Gilbert and economist N. Gregory Mankiw, and was sponsored by the Harvard University Initiative on the Foundations of Human Behavior. The discussion was moderated by professors Nicholas Christakis of Harvard Medical School and the Department of Sociology and David Laibson of the Department of Economics.

Laibson began the debate with the following thought experiment:

“We have pre-selected 100 different bottles of alcohol, covering all popular categories — beer, wine, rum, gin, vodka, whiskey, etc. Another person (who remains anonymous) is going to take one (regular-sized) drink, poured from one of the 100 bottles. Call him/her the recipient.

You will pick the number of bottles that the recipient will be able to choose among. To give the recipient complete choice, you would pick N = 100. To simplify the recipient’s decision, you would pick N < 100. You can pick any N value from 1 to 100.

If you pick N < 100, a robot will randomly determine which of the original 100 bottles the recipient will receive (with no repeats). You don’t get to pre-select the specific bottles the robot will choose. The N bottles will be presented to the recipient in categories (like whiskeys or vodkas), so the recipient can easily sort through them.

Your job is to pick N so as to maximize the happiness of the recipient.”

Next, Laibson asked the group to choose the number of bottles that they would send to the recipient under two different scenarios. In the first, the recipient would never know that there were 100 bottles to begin with. In the second scenario, he or she would.

As the students tapped on their laptops to submit their responses to the question online, Mankiw and Gilbert had at it. Mankiw kicked off the discussion by saying that the answer was easy for him and, he hoped, for anyone who had taken his introductory economics class. He would send the anonymous stranger all 100 bottles. Without any knowledge of the recipient’s tastes, it made sense to send as many bottles as possible in order to increase the chance that the stranger would get a drink that they would like.

“My wife and I [recently] went to a bar and had a drink and dinner,” he explained. “The bar had a big selection. I had no trouble at all. I said ‘I want a Tanqueray martini on the rocks with a twist.’ If the bartender had said ‘We randomly reduced the number of selections, so we don’t have Tanqueray tonight. We have Bombay Sapphire,’ I would have been a little disappointed. If they had said ‘We only have Gordon’s gin tonight,’ I would have been really upset. And if they had said ‘All we have is Kahlua and crème de menthe,’ I would have walked out. So it was very clear to me that more selection is good.”

Gilbert said that Mankiw’s answer was not surprising. Americans like choices; the more the better. We want to choose what we want, even if the options are so great that our decision becomes essentially random. But Gilbert said there is more to choice than simply matching selection with preferences, and that there are costs associated with decision making, particularly when the options are too great. To illustrate his point, Gilbert described a study by Princeton psychologist Eldar Shafir.

Shafir presented doctors with a pink pill that was said to treat osteoarthritis. The physicians learned about the drug, and then were asked whether or not they would be likely to prescribe it. Most said that they would.

Shafir then went to another group of physicians, this time with a pink and blue pill. He told the group that both would treat osteoarthritis and that the drugs were similar in their effects, aside from their color. He asked this group of doctors whether they would prescribe the pink pill, the blue pill, or neither. Fewer doctors said that they would give patients a pill — either pink or blue — than the group that had been presented with only one pill.

“You should get at least the same number prescribing one of the pills,” said Gilbert. “Or even more, because some will only like blue pills. However, the actual number goes down. Why? Because the physicians say, ‘Well, I could do nothing, or choose between one of these two similar pills, and I really can’t decide between them, so I’ll do nothing, because doing nothing doesn’t look very different from the pill.’”

In terms of Laibson’s thought exercise, Gilbert noted that more bottles and more types of liquor could make the decision more difficult for the recipient. If you offer the drinker wine or beer, and the drinker likes wine, the choice is easy. But if the drinker likes wine and gets four different bottles to choose from versus one type of beer, they might actually choose the beer, even though they prefer wine.

“Because I’ve given you extra choices, you have now gone to the thing you like less, because you can’t think of a good reason to pick among the wines that are so similar,” Gilbert said.

After some waffling, Gilbert, half seriously, gave the number of bottles he would send to the stranger: two.

“Then you have only Kahlua and crème de menthe!” laughed Mankiw.

After Gilbert and Mankiw held forth, Laibson revealed the results of the online poll. Under conditions where the recipient would not be informed if their choices were narrowed, there was a barbell-shaped distribution. A large group of the 220 student respondents said that they would send between zero and 30 bottles to the drinker, with another group up at 100 bottles. In the second scenario, however, where the recipient would know if the selection had been pared, undergraduates overwhelmingly voted to send all 100 bottles to the drinker.

The results were fascinating to Laibson, who has studied employee participation in retirement plans and discovered that enrollment increases dramatically when workers are automatically enrolled and must voluntarily opt out. Because automatic enrollment is often criticized as paternalistic, the survey results shed light on when people are OK with “Big Brother” and when they are not.

“The message here seems to be ‘Be a paternalist, but keep it a secret,’” Laibson said, eliciting laughter from the students. “The minute the recipient knows [his or her choices have been narrowed], this community gives a different answer [to the thought experiment]. Paternalism is bad when the recipient understands that paternalistic motives organize what happens to them.”

Homophobia is more pronounced in individuals who have an unacknowledged attraction to the same sex and who grew up with authoritarian parents who forbade such desires, a series of psychology studies demonstrates.

The study is the first to document the role that both parenting and sexual orientation play in the formation of intense and visceral fear of homosexuals, including self-reported homophobic attitudes, discriminatory bias, implicit hostility towards gays, and endorsement of anti-gay policies. Conducted by a team from the University of Rochester, the University of Essex in England, and the University of California, Santa Barbara, the research will be published in the April issue of the Journal of Personality and Social Psychology.

“Individuals who identify as straight but in psychological tests show a strong attraction to the same sex may be threatened by gays and lesbians because homosexuals remind them of similar tendencies within themselves,” explains Netta Weinstein, a lecturer at the University of Essex and the study’s lead author.

“In many cases these are people who are at war with themselves and they are turning this internal conflict outward,” adds co-author Richard Ryan, professor of psychology at the University of Rochester who helped direct the research.

The paper includes four separate experiments, conducted in the United States and Germany, with each study involving an average of 160 college students. The findings provide new empirical evidence to support the psychoanalytic theory that the fear, anxiety, and aversion that some seemingly heterosexual people hold toward gays and lesbians can grow out of their own repressed same-sex desires, Ryan says. The results also support the more modern self-determination theory, developed by Ryan and Edward Deci at the University of Rochester, which links controlling parenting to poorer self-acceptance and difficulty valuing oneself unconditionally.

The findings may help to explain the personal dynamics behind some bullying and hate crimes directed at gays and lesbians, the authors argue. Media coverage of gay-related hate crimes suggests that attackers often perceive some level of threat from homosexuals. People in denial about their sexual orientation may lash out because gay targets threaten and bring this internal conflict to the forefront, the authors write.

The research also sheds light on high profile cases in which anti-gay public figures are caught engaging in same-sex sexual acts. The authors write that this dynamic of inner conflict may be reflected in such examples as Ted Haggard, the evangelical preacher who opposed gay marriage but was exposed in a gay sex scandal in 2006, and Glenn Murphy, Jr., former chairman of the Young Republican National Federation and vocal opponent of gay marriage, who was accused of sexually assaulting a 22-year-old man in 2007.

“We laugh at or make fun of such blatant hypocrisy, but in a real way, these people may often themselves be victims of repression and experience exaggerated feelings of threat,” says Ryan. “Homophobia is not a laughing matter. It can sometimes have tragic consequences,” Ryan says, pointing to cases such as the 1998 murder of Matthew Shepard or the 2011 shooting of Larry King.

To explore participants’ explicit and implicit sexual attraction, the researchers measured the discrepancies between what people say about their sexual orientation and how they react during a split-second timed task. Students were shown words and pictures on a computer screen and asked to put these in “gay” or “straight” categories. Before each of the 50 trials, participants were subliminally primed with either the word “me” or “others” flashed on the screen for 35 milliseconds. They were then shown the words “gay,” “straight,” “homosexual,” and “heterosexual” as well as pictures of straight and gay couples, and the computer tracked precisely their response times. A faster association of “me” with “gay” and a slower association of “me” with “straight” indicated an implicit gay orientation.
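The scoring logic behind a reaction-time task like this can be sketched in a few lines. The sketch below is a deliberately simplified illustration, not the authors' actual scoring procedure: the trial format, function name, and numbers are all invented for the example.

```python
# Illustrative sketch of an implicit-association score from reaction times.
# Each trial records the subliminal prime ("me" or "others"), the target
# category ("gay" or "straight"), and the response time in milliseconds.

def implicit_gay_association(trials):
    """Return mean RT for me+straight minus mean RT for me+gay.

    A positive score means "me" was associated faster with "gay" than
    with "straight", which under the simplified logic described above
    would indicate an implicit gay orientation.
    """
    me_gay = [t["rt_ms"] for t in trials
              if t["prime"] == "me" and t["target"] == "gay"]
    me_straight = [t["rt_ms"] for t in trials
                   if t["prime"] == "me" and t["target"] == "straight"]
    if not me_gay or not me_straight:
        raise ValueError("need trials in both 'me' conditions")
    return sum(me_straight) / len(me_straight) - sum(me_gay) / len(me_gay)

# Hypothetical data: faster "me"+"gay" pairings yield a positive score.
trials = [
    {"prime": "me", "target": "gay", "rt_ms": 480},
    {"prime": "me", "target": "gay", "rt_ms": 500},
    {"prime": "me", "target": "straight", "rt_ms": 620},
    {"prime": "me", "target": "straight", "rt_ms": 600},
    {"prime": "others", "target": "gay", "rt_ms": 550},
]
score = implicit_gay_association(trials)
```

Real implicit-association scoring is more involved (error trials, latency trimming, standardization), but the core quantity is the same kind of condition-wise response-time difference.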

A second experiment, in which subjects were free to browse same-sex or opposite-sex photos, provided an additional measure of implicit sexual attraction.

Through a series of questionnaires, participants also reported on the type of parenting they experienced growing up, from authoritarian to democratic. Students were asked to agree or disagree with statements like: “I felt controlled and pressured in certain ways,” and “I felt free to be who I am.” For gauging the level of homophobia in a household, subjects responded to items like: “It would be upsetting for my mom to find out she was alone with a lesbian” or “My dad avoids gay men whenever possible.”

Finally, the researchers measured participants’ level of homophobia – both overt, as expressed in questionnaires on social policy and beliefs, and implicit, as revealed in word-completion tasks. In the latter, students wrote down the first three words that came to mind, for example for the prompt “k i _ _”. The study tracked the increase in the number of aggressive words elicited after subliminally priming subjects with the word “gay” for 35 milliseconds.

Across all the studies, participants with supportive and accepting parents were more in touch with their implicit sexual orientation, while participants from authoritarian homes revealed the most discrepancy between explicit and implicit attraction.

“In a predominately heterosexual society, ‘know thyself’ can be a challenge for many gay individuals. But in controlling and homophobic homes, embracing a minority sexual orientation can be terrifying,” explains Weinstein. These individuals risk losing the love and approval of their parents if they admit to same-sex attractions, so many people deny or repress that part of themselves, she said.

In addition, participants who reported themselves to be more heterosexual than their performance on the reaction time task indicated were most likely to react with hostility to gay others, the studies showed. That incongruence between implicit and explicit measures of sexual orientation predicted a variety of homophobic behaviors, including self-reported anti-gay attitudes, implicit hostility towards gays, endorsement of anti-gay policies, and discriminatory bias such as the assignment of harsher punishments for homosexuals, the authors conclude.

“This study shows that if you are feeling that kind of visceral reaction to an out-group, ask yourself, ‘Why?'” says Ryan. “Those intense emotions should serve as a call to self-reflection.”

The study had several limitations, the authors write. All participants were college students, so it may be helpful in future research to test these effects in younger adolescents still living at home and in older adults who have had more time to establish lives independent of their parents and to look at attitudes as they change over time.

Other contributors to the paper include Cody DeHaan and Nicole Legate from the University of Rochester, Andrew Przybylski from the University of Essex, and William Ryan from the University of California, Santa Barbara.

What would you say influenced your voting decisions in the most recent local or national election? Political preferences? A candidate’s stance on a particular issue? The repercussions of a proposition on your economic well-being? All these “rational” factors influence voting, and people’s ability to vote based on what is best for them is a hallmark of the democratic process.

But Stanford Graduate School of Business researchers, doctoral graduates Jonah Berger and Marc Meredith, and S. Christian Wheeler, associate professor of marketing, conclude that a much more subtle and arbitrary factor may also play a role—the particular type of polling location in which you happen to vote.

It’s hard to imagine that something as innocuous as polling location (e.g., school, church, or fire station) might actually influence voting behavior, but the Stanford researchers have discovered just that. In fact, Wheeler says “the influence of polling location on voting found in our research would be more than enough to change the outcome of a close election.” And, as seen in the neck-and-neck 2000 presidential election where Al Gore ultimately lost to George W. Bush after months of vote counting in Florida, election biases such as polling location could play a significant role in the 2008 presidential election. Even at the proposition level, “Voting at a school could increase support for school spending or voting at a church could decrease support for stem cell initiatives,” says Wheeler.

Why might something like polling location influence voting behavior? “Environmental cues, such as objects or places, can activate related constructs within individuals and influence the way they behave,” says Berger, now an assistant professor of marketing at the Wharton School. “Voting in a school, for example, could activate the part of a person’s identity that cares about kids, or norms about taking care of the community. Similarly, voting in a church could activate norms of following church doctrine. Such effects may even occur outside an individual’s awareness.”

Using data from Arizona’s 2000 general election, Berger, Meredith, a visiting lecturer at MIT, and Wheeler discovered that people who voted in schools were more likely to support raising the state sales tax to fund education. The researchers focused on Proposition 301, which proposed raising the state sales tax from 5.0 percent to 5.6 percent to increase education spending. What they found was that voters were more likely to support this initiative if they voted in a school versus other types of polling locations (55.0 percent versus 53.09 percent).
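To get a feel for whether a gap like 55.0 percent versus 53.09 percent could be statistically meaningful, one can run a standard two-proportion z-test. The cell counts below are hypothetical, chosen only to match the reported percentages; they are not the study's actual sample sizes, and the real analysis controlled for many covariates beyond this simple comparison.

```python
from math import sqrt

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-sample z statistic for a difference in proportions,
    using the pooled standard error."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical counts matching the reported support rates:
# 55.0% of school voters vs. 53.09% of non-school voters.
z = two_proportion_z(5500, 10000, 5309, 10000)
```

With samples of this assumed size, a roughly two-point gap yields a z statistic above the conventional 1.96 threshold, which is why even a small location effect can matter in a close election.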

This effect persisted even when the researchers controlled for—or removed the possibility of—other factors such as:

- Where voters lived: people who have kids may be pro-education and more likely to live near, and hence vote at, schools;
- Political views: whether they voted for Gore or positively on other propositions; and
- Demographics, including age and sex.

Regarding the first control, for example, people were still more likely to support Proposition 301 if they had voted in schools than if they had voted in places that were not schools but had schools nearby. No matter how they sliced the data, the researchers found that voters in schools were more likely to support Proposition 301.

“We want factors like political views—whether someone thinks a candidate is going to make our country a better place—to sway elections,” said Berger. “But in forming election policy, we also want to make sure that arbitrary factors such as polling location don’t ultimately influence voting behaviors.”

To further test their hypothesis, the researchers even conducted the same analysis for the other 13 propositions on the Arizona ballot. They reasoned that if voters who cast their ballots in schools were more likely to vote positively for other unrelated propositions on wildlife or property taxes, for example, then the researchers would know that their model was not adequately accounting for some other factor beyond polling location, and that something such as voting preferences was having an effect. But such additional testing only supported the researchers’ hypotheses further.

The researchers also followed up with a lab experiment that allowed for random assignment of voters to pictures of different voting environments that the researchers thought might influence voting behavior. Participants were shown 10 images from well-maintained schools (e.g., lockers, classrooms) or churches (e.g., pews, altars), plus five additional filler images of generic buildings. A control group was shown only images of generic buildings.

The participants then voted on a number of initiatives including California’s 2004 stem cell funding initiative, Arizona’s education initiative, and several others. Initiative wording was taken right from each state’s legislative council documents. As predicted by Berger, Meredith, and Wheeler: Environmental cues contained in the photos influenced voting.

Results from the second study showed that participants were less likely to support the stem cell initiative if they were shown church images than if they were shown school images or a generic photo of a building. The subjects also were more likely to support the education initiative if they were shown school images versus church or generic building images. The results further demonstrated that environmental cues present in different polling locations can influence voting outcomes, even when voters are randomly assigned to different environmental cue conditions.

“What our research suggests is that it might be useful to further investigate influences such as polling location to better understand how such factors affect different types of voting situations. From a policy perspective, the hope is that voting location assignments could be less arbitrary and more deliberate, in order to avoid undue biases in the future,” says Wheeler.

University of Maine psychology professor Jordan LaBouff and co-author Wade Rowatt, Ph.D., associate professor of psychology and neuroscience at Baylor, have a new paper out in the International Journal for the Psychology of Religion, finding that people expressed “cold” rather than “warm” attitudes toward gay men and women if they were asked their views while they were within sight of a church.

The research, conducted in England and the Netherlands with participants of 20 different nationalities, found that the unmentioned but evident visual cue of a church prompted people to express more conservative views on a range of issues — foreign aid, immigration, protection of the environment, separation of church and state, and more.

LaBouff said Thursday, “The effect is not specific to Christianity, but the sight of a church highlights our internal boundaries — who is like us and who is not like us. And we are more negative toward people who are not like us, whether we are religious or not.”

Why can’t some people stop themselves from doing things that are bad for them? Why can’t some people stop themselves from doing things that hurt others? These questions have puzzled philosophers, economists, and psychologists for centuries. Professor Joshua Buckholtz will discuss these issues in the context of his work at Harvard’s Systems Neuroscience of Psychopathology Lab, where he seeks to understand how genes and environments affect brain chemistry and function to influence variability in human self-control.

Harvard Law School just published an interview with Jon Hanson. We’ve posted it in full below.

Director of the Project on Law and Mind Sciences at Harvard Law School (PLMS), Professor Jon Hanson has long combined social psychology, economics, history, and law in his scholarship. After PLMS hosted several conferences featuring leading mind scientists and legal scholars, Hanson collected the work of many of the contributors in a book he edited, “Ideology, Psychology, and Law” (Oxford University Press). [Introductory chapter available here.]

In the following Q&A, he speaks about the new book, the connection between law and mind sciences, and his own work in a field that has grown rapidly over the past 20 years.

What sparked your interest in the study of mind sciences and the law?

My interest has evolved through several stages. Although I studied economics in college, I did so with special interest in health care policy, where life-and-death decisions have little in common with the consumption choices imagined in neoclassical economics. Purchasing an appendectomy through insurance is nothing like buying fruit at a market.

After college, I spent a year studying the provision of neonatal intensive care in Britain’s National Health Service, attending weekly rounds with neonatologists at London hospitals, meeting with pediatricians in rural English hospitals, interviewing nurses who were providing daily care for the infants, some of whom were not viable, and speaking with parents about the profound challenges they were confronting. Those experiences strengthened my doubts regarding the real-world relevance of basic economic models for certain types of decisions.

In law school, I studied law and economics, but tended to focus on informational problems and externalities that had been given short shrift by some legal economists at the time. After attending a talk by, and then meeting with, the late Amos Tversky, I became an early fan of the nascent behavioral economics movement.

It wasn’t, however, until I spent a couple of years immersed in cigarette-industry documents in the early and mid-1990s that I felt the need to make a clean break from the law’s implied psychological models and to turn to the mind sciences for a more realistic alternative.

What was it about the cigarette documents that had that effect?

Well, they made clear that the tobacco industry articulated two views of their consumers – an inaccurate public portrayal, and a more accurate private view.

The first, which the industry conveyed to their consumers and to lawmakers, was of smokers who are independent, rational, and deliberate. Smokers smoke cigarettes because they choose to, because smoking makes them happier, even considering the risks. The industry thus gave consumers a flattering view of themselves as autonomous, liberated actors while assuring would-be regulators that there was no need to be concerned about the harmful consequences of smoking. Smokers were, after all, just getting what they wanted.

The second view of the consumer, which was evident in the industry’s internal documents, was of consumers as irrational, malleable, and manipulable. The industry’s confidential marketing strategy documents, for instance, made clear that the manufacturers theorized and experimented to discover how to target, persuade, lure, and chemically hook young consumers to take up and maintain the smoking habit. That internal understanding of consumers had nothing in common with the industry’s external portrayals.

I came to the realization that, unfortunately, the latter view of the human animal is far more accurate and, furthermore, that failure to understand the actual forces behind human behavior may be contributing to injustice.

How did that realization influence your research?

In the late 1990s, I put my writing down and devoted a couple of years to learning what I could about the mind sciences – social psychology, social cognition, cognitive neuroscience, and the like. Those fields, coincidentally, were blossoming with new theories, new methodologies, and new findings and insights, most of which created challenges to the fundamental assumptions in law and legal theory.

What were some of those insights?

To keep things simple, I’ll boil them down to two big ones.

First, mind scientists had learned that most people in western cultures operate with a naïve and commonsensical model of human psychology that presumes that an individual’s actions reflect a stable personality or disposition and little else. From that perspective, people are presumed to be in control of, and responsible for, their behavior and its consequences.

By the way, that’s the same model of human behavior that is employed in law and conventional legal theory. And it’s the same model that the tobacco industry actively promoted.

The second big insight was that that model of human behavior is fundamentally wrong. People are moved less by a stable disposition and more by internal and external forces that generally go unnoticed in our causal stories. The errors go beyond our causal assessments of other people’s behavior; we confuse and deceive even ourselves, believing our own reasons even as social science reveals that those reasons often turn out to be mere confabulations.

What does that mean for the law?

Exactly. That’s the big question. My briefest answer is: a lot. The book is one place where the contributors and I begin to sketch some of the answers.

Given the large gap between what the law assumes and what the mind sciences have shown to be true, my initial goal has been to understand the breadth and contours of that gap and to develop a better understanding of the psychological and contextual forces behind human behavior. I have resisted the strong urge to focus on only those psychological tendencies that can lead to straightforward but narrow implications for law.

Having said that, abandoning the familiar, if wrong, conception of human behavior is daunting and unsettling; it calls for establishing new knowledge structures and being open to some humbling truths about ourselves and some uncomfortable truths about our justice system.

I expect that several generations of lawmakers, legal academics, and lawyers will be grappling with the implications of what mind scientists are discovering about human behavior. Indeed, they will have to do so, if we are ever going to find meaningful solutions to many of our thorniest policy challenges.

Is this entirely new terrain?

I shouldn’t give the impression that I am alone in the wilderness. The approach I’ve taken has its origins in the legal realism movement, and there is actually significant overlap with parts of more recent legal theoretic schools of thought, from law and economics to critical legal studies.

Furthermore, there are other scholars around the country exploring this terrain, and I have been extraordinarily lucky to work with a number of remarkable students over the years, including Melissa Hart, Doug Kysar, David Yosifon, Adam Benforado, Michael McCann, and Mark Yeboah. Most of those students have gone on to make their own path-breaking contributions to law and mind sciences.

Can you say more about how the field has evolved and your involvement in it over the last 20 years?

Well, 20 years ago, only a small but important corner of psychology known as “decision theory” or “behavioral economics” was getting much attention among legal theorists. Roughly, the research and evidence in that field disputed the “rationality” assumption of the “rational actor” model. I co-authored several articles arguing that those insights suggested that market actors could, would, and do manipulate the risk perceptions of consumers.

A decade ago, I co-wrote a pair of law-review articles (“The Situation” and “The Situational Character”) introducing some of the broader insights of mind sciences and speculating on some of their implications for law. The articles were among the first of their kind, and contested even the “actor” portion of the “rational actor” model. At the time, many readers from legal academia found the research we reviewed to be foreign and hard to fathom.

Five years ago, I began the Project on Law and Mind Sciences. With then-Dean Kagan’s support, some technical know-how from Michael McCann, and the aid of many outstanding students, I set up a website and blog and began holding annual conferences intended to help bridge the gap between the law and the mind sciences. In the meantime, numerous books have popularized the mind sciences, and several new law school programs and projects have been established around the country reflecting and reinforcing this burgeoning interdisciplinary approach.

As of today, the mind sciences are, well, hot. There is now almost too much scholarship for me to keep up with, judges are beginning to cite such research in their opinions, and student groups are springing up in law schools, including the vibrant Student Association for Law and Mind Sciences (or “SALMS”) at Harvard Law School. Every year, I hear from more 1Ls who tell me they chose Harvard Law School because of the exciting work that we’ve been doing.

Are other members of the HLS faculty now employing mind sciences in their work?

Absolutely. Alan Stone has been writing and teaching about the law and psychiatry since the 1960s. Cass Sunstein and Christine Jolls, when here, were prominent leaders of the economic behavioralism movement. Several other members of the faculty employ mind sciences in elements of their scholarship and teaching. Lani Guinier, Bob Bordone, Martha Minow, Duncan Kennedy, Charles Ogletree, Bob Mnookin, Larry Lessig, Diana Feldman, Bruce Hay, Yochai Benkler, Glenn Cohen, and David Cope come to mind, and I’m surely forgetting some. Among our visitors this year, Dan Kahan and Martha Chamallas are prominent leaders in this interdisciplinary approach.

Many of us are interacting more often and more collaboratively with mind scientists in other departments of this University and beyond, and I would be surprised if we didn’t add a social psychologist to our faculty in the next decade, as other law schools have.

Your book has more than 20 contributors representing different disciplines. Does their work share a common theme?

First, let me emphasize that the book reflects the work of many students and my assistant, Carol Igoe, who helped organize the conferences on which much of the book is based and who helped in the initial editing stages as part of a seminar that I taught.

To your question, I need to be quite abstract to locate one common theme. If there is a single thread running throughout the book, it is that “how we think” affects “what we think” about law. Many of the contributors – social psychologists, political scientists, legal scholars among them – also consider the effects of “what we want to believe” on “how we think.”

More concretely, some authors examine the implications of the dispositionist conception of the person for the law. Others scrutinize and challenge the ideological premises of prominent legal goals, including utilitarianism and instrumentalism. Some consider the harmful effects of the “free market” ideology. Others look at the implicit motives underlying political ideologies – that is, left and right – while a few summarize evidence regarding the effects of political ideology on judicial decision-making. That’s a sample.

You write that the legal system is built on a dubious ideological framework. How so?

There are several ways in which that is true. Construing “ideology” broadly to refer to shared understandings of human behavior, I’ll answer by echoing what I’ve already highlighted. The legal system presumes that a person’s behavior is the manifestation of little more than a stable set of preferences, combined with a given supply of information, activated by the person’s will. Such perceived truths about what makes people behave as they do shape beliefs about why some groups are advantaged or disadvantaged or about how well certain systems or institutions operate. Unfortunately, those shared understandings are often incorrect.

How do ideology and psychology influence judicial decision making?

That’s another great question, which calls for a bigger answer than I can muster here. What I can say is that there seems to be little disagreement among observers of the legal system that judicial decision making is influenced by ideology. Some point to Roe v. Wade while others point to Citizens United as their exemplar; the disagreement is over when and how judges are swayed by ideology.

Social psychology and social cognition help us see that there is no escaping the influence of ideology, any more than a person can speak without an accent. Although we tend to hear the accents and perceive the ideologies of those who don’t share our own, we all have both. So ideology is inescapable; pretending that we operate outside of ideology probably makes us more, not less, subject to its biasing influence.

More important, mind scientists have discovered some of the implicit motives and situational factors that push us toward one ideology or another, including political ideologies or legal-theoretic ideologies.

Will an awareness of mind sciences help an attorney in practicing the law?

I hope so.

Having an awareness of the power and effects of psychology and ideology on the law, a lawyer can better predict the outcomes of cases and more ably persuade jurors or judges to see a case their way.

An imperfect analogy is to a doctor who understands the underlying causes of a disease and not simply its symptoms. A lawyer who understands what is moving the law is like the doctor who understands the disease and its processes. Such a lawyer can be effective in taking on the tough, novel cases on the frontiers of the law.

Understanding the remarkable insights being generated by mind scientists similarly can help lawyers to understand and work with their clients or even to recognize and articulate injustices that might otherwise be missed.

My own teaching reflects my strong belief that law students will make better lawyers if they learn some psychology. At the very least, they will learn something about themselves.


There is a simple method for making decisions, from trivial to life changing, that most people find easy to understand but impossible to follow. In a talk entitled “How To Do Precisely the Right Thing At All Possible Times,” Daniel Gilbert, Professor of Psychology at Harvard University, author of “Stumbling on Happiness,” and host of the PBS television series “This Emotional Life,” discussed research in psychology, neuroscience and behavioral economics that explains why it is indeed possible, yet incredibly difficult, to do the right thing at all possible times.

Gilbert’s talk was sponsored by the Living Well in the Law program at Harvard Law School, which endeavors to complement the teaching of the skills and substance of the law with attention to and development of each student’s sense of purpose as both a professional and a person.

Gilbert explained that our own minds thwart our attempts to make good decisions because our brains evolved to function in a world very different from the one we live in today, one in which decisions were limited to finding a mate and living in small communities, not purchasing long-term care insurance or making other complex decisions.

“We’re on an ancient vessel and can’t evolve quickly enough, but we’re not stupid,” Gilbert said. “The way we got to the moon wasn’t through intuition—we used science and disciplined rational thinking. We can use the same approach to make any kind of personal decision. The question isn’t whether we know how to do precisely the right thing at all the right times. The question is whether we will actually use what we know.”

He said that it should be simple to make a decision—all we need to do is multiply the odds of getting what we want by the value of getting it. But people make two classes of errors when trying to make decisions: errors in odds and errors in value.
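Gilbert’s “simple” decision rule can be sketched as a toy expected-value calculation. The sketch below is purely illustrative — the probabilities and payoffs are hypothetical numbers, not anything from Gilbert’s research:

```python
def expected_value(odds: float, value: float) -> float:
    """Gilbert's 'simple' rule: multiply the odds of getting
    what you want by the value of getting it."""
    return odds * value

# Hypothetical example: comparing two options.
# (All numbers are made up for illustration.)
safe_job = expected_value(odds=0.90, value=50)    # likely, modest payoff -> 45.0
dream_job = expected_value(odds=0.20, value=300)  # unlikely, big payoff  -> 60.0

best = max([("safe job", safe_job), ("dream job", dream_job)],
           key=lambda option: option[1])
print(best)  # ('dream job', 60.0)
```

The arithmetic is trivial; Gilbert’s point is that our estimates of both inputs — the odds and the value — are systematically distorted, which is where the two classes of errors come in.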

Gilbert discussed the psychological phenomena leading to errors in odds, including the imaginability error and the optimism bias. We miscalculate the odds of a particular outcome because the imaginability error causes us to calculate odds based on how easy it is to bring something to mind. For example, people overestimate the odds of dying in a tornado or from using fireworks because those deaths make headlines, while they underestimate the odds of dying by drowning or from asthma, which are in reality far more common. The optimism bias, on the other hand, reflects the fact that we’re wildly optimistic about the odds of getting what we want, he said. Together, the imaginability error and the optimism bias distort our ability to anticipate the odds of a particular outcome.

“The optimism bias occurs because, when you practice doing things, they become easier to do,” Gilbert said. “Motivational speakers tell you to practice thinking about success and not even let thoughts of failure cross your mind. If you just keep thinking about how your plans will work without being willing to entertain equally how they’re not going to work, success becomes easier and easier for you to imagine, and thus the imaginability error is at play. We practice thinking about success so much that it’s inevitable that we’ll overestimate the likelihood that it’s going to happen.”

Calculating how happy we’ll be if we actually achieve the outcome we want—the value of that outcome—is even more difficult. Anticipating value is so difficult because every form of judgment works by comparison, Gilbert explained. To illustrate that point, he used a decision very familiar to law students—deciding between two job offers. Job 1 offers a salary of $100,000, but everyone else at that firm will earn $105,000. The salary for Job 2 is $90,000, but everyone else will earn $85,000. Gilbert said that most study participants respond that Job 2 will make them happier because, although they’ll make less money, they won’t feel underpaid. But for that to be the right decision, one who chooses Job 2 must then walk around all day in that new job thinking about how wonderful that extra $5,000 is. In reality, people will not spend time making that comparison once they dive in and start the job.

“You forget about the setup. The comparison you make when determining the value of getting what you want is no longer the comparison you make once you get it, so it bedevils your attempt to make a good decision,” Gilbert said.

He warned that there is really nothing we can do to ensure that we make the right decisions—there’s no pill we can swallow, class we can take, or book we can read that will prevent us from making these errors in odds and value because they’re simply so natural to us.

“How can you do the right thing at all possible times? You probably can’t,” he said. “The best thing you can do is to catch yourself making these errors and know to watch out for them. Ask if yesterday’s price really matters today, or if today’s comparison will really matter tomorrow. We can stop ourselves not from making errors, but from completing errors.”

To take a gratifying, low-paying job or a well-paid corporate position, to get married or play the field, to move across the country or stay put: The fact that most people face such choices at some point in their lives doesn’t make them any easier. No one knows the dilemma better than law students, who are poised to enter a competitive job market after staking years of study on their chosen field.

When faced with a tough choice, we already have the cognitive tools we need to make the right decision, Daniel Gilbert, professor of psychology at Harvard and host of the PBS series “This Emotional Life,” told a Harvard Law School (HLS) audience on Feb. 16. The hard part is overcoming the tricks our minds play on us that render rational decision-making nearly impossible.

Gilbert’s talk, titled “How To Do Precisely the Right Thing at All Possible Times,” was part of Living Well in the Law, a new program sponsored by the HLS Dean of Students Office that aims to help law students consider their personal and professional development beyond the fast track of summer associate positions and big-law job offers.

There is a relatively simple equation for figuring out the best course of action in any situation, Gilbert explained: What are the odds of a particular action getting you what you want, and how much do you value getting what you want? If you really want something, and you identify an action that will make it likely, then taking that action is a good move.

Unfortunately, Gilbert said, “these are also the two ways human beings screw up.”

First, he said, humans have a hard time estimating how likely we are to get what we want. “We know how to calculate odds [mathematically], but it’s not how we actually calculate odds,” he said.

We buy lottery tickets because we “never see interviews with lottery losers.” If every one of the 170 million losing ticket holders were interviewed on television for 10 seconds apiece, we’d be having the image of losing drilled into our brains for 65 straight years, he said.

“When something’s easy to imagine, you think it’s more likely to happen,” he said.

For example, if asked to guess the number of annual deaths in the United States by firework accidents and storms versus asthma and drowning, most people will vastly overestimate the former and underestimate the latter. That’s because we don’t see headlines when someone dies of an asthma attack or drowns, Gilbert said. “It’s less available in your memory, but it is in fact more frequent.”

Then there’s the fact that we’re prone to irrational levels of optimism, a pattern that has been documented across all areas of life. Sports fans in every city believe their team has better-than-average odds of winning; the vast majority of people believe they’ll live to be 100.

A study of Harvard seniors, Gilbert gleefully reported, showed that, on average, they believed they’d finish their theses within 28 to 48 days, most likely within 33 — “a number virtually indistinguishable from their best-case scenario.” In reality, they completed their theses in 56 days on average.

Still, he said, calculating our odds of success is actually the easy part. “What’s really hard in life is knowing how much you’re going to value the thing you’re striving so hard to get,” he said.

When we consider buying a $2 cup of coffee at Starbucks, for example, we don’t compare the satisfaction of a morning caffeine jolt against the millions of other things we could purchase for $2. Rather, we compare the value of that cup of coffee against our own past experiences. If the same coffee only cost $1.50 yesterday, we might balk at paying $2 for it today.

“One of the problems with this bias, this tendency to pay attention to change, is that it’s hard to know if things really did change,” he said. “Whether things changed is often in the eye of the beholder.

“It turns out that every form of judgment works by comparison,” he said. “People shop by comparison.” Unfortunately, our comparisons are easily manipulated, and comparing one option with all other possible options is an impossible task.

Real estate companies, for example, show potential buyers “set-up properties,” rundown fixer-uppers that they actually own, to lower their clients’ expectations for houses that are actually for sale.

In his own lab, Gilbert’s research team had two groups of college students predict how much they would enjoy eating a bag of potato chips. The group that sat in a room with chocolates on display predicted they’d enjoy the chips less, while the second group — stuck in a room with the chips and a variety of canned meats — predicted much higher enjoyment of the salty snack.

But when the students rated their enjoyment of the chips while they were eating them, those differences disappeared. While their previous visual judgment was tainted by comparison, their judgment of the actual taste was not.

“The comparisons you make when you’re shopping are not the ones you’ll make after you’ve bought,” Gilbert said.

The human mind evolved to deal with different dilemmas than the ones we face today, Gilbert explained. Our ancestors weighed short-term consequences to ensure their survival, evolving a snap-judgment process that often serves us poorly when making long-term decisions such as buying a home, investing in the stock market, or making a cross-country move.

The brain “thinks like the old machine it is,” Gilbert said. “We are in some sense on a very ancient vessel, and we are sailing a very ancient sea.”

Still, he told his audience, we have the ability to overcome these evolutionary roadblocks to self-aware, smart decision-making, as long as we acknowledge our biases.

“We’ve been given that gift,” Gilbert said. “The question is, will we use it?”

Pierre Chandon and Brian Wansink recently posted their paper “Is Food Marketing Making Us Fat? A Multi-Disciplinary Review” on SSRN. Here’s the abstract.

Whereas everyone recognizes that increasing obesity rates worldwide are driven by a complex set of interrelated factors, the marketing actions of the food industry are often singled out as one of the main culprits. But how exactly is food marketing making us fat? To answer this question, we review evidence provided by studies in marketing, nutrition, psychology, economics, food science, and related disciplines that have examined the links between food marketing and energy intake but have remained largely disconnected. Starting with the most obtrusive and most studied marketing actions, we explain the multiple ways in which food prices (including temporary price promotions) and marketing communication (including branding and nutrition and health claims) influence consumption volume. We then study the effects of less conspicuous marketing actions which can have powerful effects on eating behavior without being noticed by consumers. We examine the effects on consumption of changes in the food’s quality (including its composition, nutritional and sensory properties) and quantity (including the range, size and shape of the packages and portions in which it is available). Finally, we review the effects of the eating environment, including the availability, salience and convenience of food, the type, size and shape of serving containers, and the atmospherics of the purchase and consumption environment. We conclude with research and policy implications.

For more on the situation of eating, see Situationist contributors Adam Benforado, Jon Hanson, and David Yosifon’s law review article Broken Scales: Obesity and Justice in America. For a listing of numerous Situationist posts on the situational sources of obesity, click here.

From the APS Monitor (excerpts from a terrific primer on “The Mechanics of Choice”):

* * *

The prediction of social behavior significantly involves the way people make decisions about resources and wealth, so the science of decision making historically was the province of economists. And the basic assumption of economists was always that, when it comes to money, people are essentially rational. It was largely inconceivable that people would make decisions that go against their own interests. Although successive refinements of expected-utility theory made room for individual differences in how probabilities were estimated, the on-the-surface irrational economic behavior of groups and individuals could always be forced to fit some rigid, rational calculation.

The problem is — and everything from fluctuations in the stock market to decisions between saving for retirement or purchasing a lottery ticket or a shirt on the sale rack shows it — people just aren’t rational. They systematically make choices that go against what an economist would predict or advocate.

Enter a pair of psychological scientists — Daniel Kahneman (currently a professor emeritus at Princeton) and Amos Tversky — who in the 1970s turned the economists’ rational theories on their heads. Kahneman and Tversky’s research on heuristics and biases and their Nobel Prize-winning contribution, prospect theory, poured real, irrational, only-human behavior into the calculations, enabling much more powerful prediction of how individuals really choose between risky options.

* * *

Univ. of Toronto psychologist Keith E. Stanovich and James Madison Univ. psychologist Richard F. West refer to these experiential and analytical modes as “System 1” and “System 2,” respectively. Both systems may be involved in making any particular choice — the second system may monitor the quality of the snap System-1 judgment and adjust a decision accordingly.7 But System 1 will win out when the decider is under time pressure or when his or her System-2 processes are already taxed.

This is not to entirely disparage System-1 thinking, however. Rules of thumb are handy, after all, and for experts in high-stakes domains, it may be the quicker form of risk processing that leads to better real-world choices. In a study by Cornell University psychologist Valerie Reyna and Mayo Clinic physician Farrell J. Lloyd, expert cardiologists took less relevant information into account than younger doctors and medical students did when making decisions to admit or not admit patients with chest pain to the hospital. Experts also tended to process that information in an all-or-none fashion (a patient was either at risk of a heart attack or not) rather than expending time and effort dealing with shades of gray. In other words, the more expertise a doctor has, the more that his or her intuitive sense of the gist of a situation was used as a guide.8

In Reyna’s variant of the dual-system account, fuzzy-trace theory, the quick-decision system focuses on the gist or overall meaning of a problem instead of rationally deliberating on facts and odds of alternative outcomes.9 Because it relies on the late-developing ventromedial and dorsolateral parts of the frontal lobe, this intuitive (but informed) system is the more mature of the two systems used to make decisions involving risks.

A 2004 study by Vassar biopsychologist Abigail A. Baird and Univ. of Waterloo cognitive psychologist Jonathan A. Fugelsang showed that this gist-based system matures later than do other systems. People of different ages were asked to respond quickly to easy, risk-related questions such as “Is it a good idea to set your hair on fire?”, “Is it a good idea to drink Drano?”, and “Is it a good idea to swim with sharks?” The researchers found that young people took about a sixth of a second longer than adults to arrive at the obvious answers (it’s “no” in all three cases, in case you were having trouble deciding).10 The fact that our gist-processing centers don’t fully mature until the 20s in most people may help explain the poor, risky choices younger, less experienced decision makers commonly make.

Adolescents decide to drive fast, have unprotected sex, use drugs, drink, or smoke not simply on impulse but also because their young brains get bogged down in calculating odds. Youth are bombarded by warning statistics intended to set them straight, yet risks of undesirable outcomes from risky activities remain objectively small — smaller than teens may have initially estimated, even — and this may actually encourage young people to take those risks rather than avoid them. Adults, in contrast, make their choices more like expert doctors: going with their guts and making an immediate black/white judgment. They just say no to risky activities because, however objectively unlikely the risks are, there’s too much at stake to warrant even considering them.11

Making Better Choices

The gist of the matter is, though, that none of us, no matter how grown up our frontal lobes, make optimal decisions; if we did, the world would be a better place. So the future of decision science is to take what we’ve learned about heuristics, biases, and System-1 versus System-2 thinking and apply it to the problem of actually improving people’s real-world choices.

One obvious approach is to get people to increase their use of System 2 to temper their emotional, snap judgments. Giving people more time to make decisions and reducing taxing demands on deliberative processing are obvious ways of bringing System 2 more into the act. Katherine L. Milkman (U. Penn.), Dolly Chugh (NYU), and Max H. Bazerman (Harvard) identify several other ways of facilitating System-2 thinking.12 One example is encouraging decision makers to replace their intuitions with formal analysis — taking into account data on all known variables, providing weights to variables, and quantifying the different choices. This method has been shown to significantly improve decisions in contexts like school admissions and hiring.

Having decision makers take an outsider’s perspective on a decision can reduce overconfidence in their knowledge, in their odds of success, and in their time to complete tasks. Encouraging decision makers to consider the opposite of their preferred choice can reduce judgment errors and biases, as can training them in statistical reasoning. Considering multiple options simultaneously rather than separately can optimize outcomes and increase an individual’s willpower in carrying out a choice. Analogical reasoning can reduce System-1 errors by highlighting how a particular task shares underlying principles with another unrelated one, thereby helping people to see past distracting surface details to more fully understand a problem. And decision making by committee rather than individually can improve decisions in group contexts, as can making individuals more accountable for their decisions.13

In some domains, however, a better approach may be to work with, rather than against, our tendency to make decisions based on visceral reactions. In the health arena, this may involve appealing to people’s gist-based thinking. Doctors and the media bombard health consumers with numerical facts and data, yet according to Reyna, patients — like teenagers — tend initially to overestimate their risks; when they learn their risk for a particular disease is actually objectively lower than they thought, they become more complacent — for instance by forgoing screening. Instead, communicating the gist, “You’re at (some) risk, you should get screened because it detects disease early” may be a more powerful motivator to make the right decision than the raw numbers. And when statistics are presented, doing so in easy-to-grasp graphic formats rather than numerically can help patients (as well as physicians, who can be as statistically challenged as most laypeople) extract their own gists from the facts.14

Complacency is a problem when decisions involve issues that feel more remote from our daily lives — problems like global warming. The biggest obstacle to changing people’s individual behavior and collectively changing environmental policy, according to Columbia University decision scientist Elke Weber, is that people just aren’t scared of climate change. Being bombarded by facts and data about perils to come is not the same as having it affect us directly and immediately; in the absence of direct personal experience, our visceral decision system does not kick in to spur us to make better environmental choices such as buying more fuel-efficient vehicles.15

How should scientists and policymakers make climate change more immediate to people? Partly, it involves shifting from facts and data to experiential button-pressing. Powerful images of global warming and its effects can help. Unfortunately, according to research conducted by Yale environmental scientist Anthony A. Leiserowitz, the dominant images of global warming in Americans’ current consciousness are of melting ice and effects on nonhuman nature, not consequences that hit closer to home; as a result, people still think of global warming as only a moderate concern.16

Reframing options in terms that connect tangibly with people’s more immediate priorities, such as the social rules and norms they want to follow, is a way to encourage environmentally sound choices even in the absence of fear.17 For example, a study by Noah J. Goldstein (Univ. of Chicago), Robert B. Cialdini (Arizona State), and Vladas Griskevicius (Univ. of Minnesota) compared the effectiveness of different types of messages in getting hotel guests to reuse their towels rather than send them to the laundry. Messages framed in terms of social norms — “the majority of guests in this room reuse their towels” — were more effective than messages simply emphasizing the environmental benefits of reuse.18

Yet another approach to getting us to make the most beneficial decisions is to appeal to our natural laziness. If there is a default option, most people will accept it because it is easiest to do so — and because they may assume that the default is the best. University of Chicago economist Richard H. Thaler suggests using policy changes to shift default choices in areas like retirement planning. Because the early-to-mid-60s eligibility age is framed as “normal,” most people begin claiming their Social Security benefits as soon as they are eligible — a symbolic retirement age but not the age at which most people these days are actually retiring. Moving up the “normal” retirement age to 70 — a higher anchor — would encourage people to let their money grow longer untouched.19

* * *

Making Decisions About the Environment

APS Fellow Elke Weber recently had the opportunity to discuss her research with others who share her concern about climate change, including scientists, activists, and the Dalai Lama. Weber . . . shared her research on why people fail to act on environmental problems. According to her, both cognitive and emotional barriers prevent us from acting on environmental problems. Cognitively, for example, a person’s attention is naturally focused on the present to allow for their immediate survival in dangerous surroundings. This present-focused attitude can discourage someone from taking action on long-term challenges such as climate change. Similarly, emotions such as fear can motivate people to act, but fear is more effective for responding to immediate threats. In spite of these challenges, Weber said that there are ways to encourage people to change their behavior. Because people often fail to act when they feel powerless, it’s important to share good as well as bad environmental news and to set measurable goals for the public to pursue. Also, said Weber, simply portraying reduced consumption as a gain rather than a loss in pleasure could inspire people to act.

This is the third in our series of posts intended to help our readers with their New Year’s resolutions. From The Sun Herald, here is a brief description of recent research on the benefits of retraining your brain.

What does it really take to change a habit? It may have less to do with willpower and more to do with consistency and a person’s environment, researchers have found.

A 2009 study in the European Journal of Social Psychology had 96 people adopt a new healthful habit over 12 weeks – things like running for 15 minutes at the same time each day or eating a piece of fruit with lunch. The average number of days it took for participants to pick up the habit was 66, but the range was huge, from 18 to 254 days.

Those who chose simple habits, such as drinking a glass of water, did better overall than those who had more involved tasks, such as running.

Skipping a day here and there didn’t seem to derail things, but greater levels of inconsistency did. Erratic performers tended not to form habits.

The same study also found that having a cue for when or where you performed the habit acted as a reminder and helped to make the habit stick. By always exercising in the morning you’re reminded that when you get up, it’s time to head to the gym. Consistently eating meals at the dining table takes away the urge to eat while sitting on the sofa with the television on.

Contrary to popular belief, adopting more healthful routines may have little to do with how much resolve someone has, says Wendy Wood, provost professor of psychology and business at the University of Southern California.

“We tell ourselves that if only we had willpower we’d be able to exercise every day and avoid eating bags of chips,” she says. “But those behaviors are difficult to control because we have patterns that are cued by the environment” – patterns that we’ve learned from past bad habits.

We’ve learned to associate being in the car with eating from fast-food restaurant drive-throughs, so that when we’re out running errands we find ourselves wanting a burger and fries, perhaps when we’re not even hungry.

We’ve learned to associate arriving home with collapsing in front of the TV, and arriving at work with taking the elevator.

We go to the movies and automatically purchase a giant drum of buttery popcorn – and once the habit is formed, we’ll eat the popcorn even if it tastes bad, Wood has found.

In a study she coauthored that was published in 2011 in the journal Personality and Social Psychology Bulletin, moviegoers were given fresh or stale popcorn to snack on while watching trailers.

People who were avid popcorn-eaters ate the same amount of stale popcorn as fresh: They evidently were snacking mindlessly. In contrast, those who didn’t have a movie-popcorn habit ate less stale popcorn than fresh.

“Once these habits become cued by the environment,” Wood says, “they tend to continue whether people are enjoying them or not.”

At the movie theater, instead of getting a large popcorn, get a small one or drink water instead. Soon you’ll associate movies with those new choices. Take the stairs the minute you walk into the building where you work – soon you’ll associate arriving at work with stair-climbing.

Instead of succumbing to the habit of snacking while sitting on the sofa and watching TV, use the time instead to do some simple exercises. After a while … you get the idea.

It takes some thought in the beginning, Wood says, “but once you’ve figured it out, it runs on its own. You’ve outsourced your behavior to the environment.”

For more on the situation of eating, see Situationist contributors Adam Benforado, Jon Hanson, and David Yosifon’s law review article Broken Scales: Obesity and Justice in America. For a listing of numerous Situationist posts on the situational sources of obesity, click here.

Since the early 2000s, much of Situationist Contributors’ research, writing, teaching, and speaking has focused on the role of “choice,” “the choice myth,” and “choicism” in rationalizing injustice and inequality, particularly in the U.S. (e.g., The Blame Frame: Justifying (Racial) Injustice in America). That work has, among other factors, helped to inspire a growing body of fascinating experimental research (and, unfortunately, one derivative book) on the topic. Over the next couple of months, we will highlight some of that intriguing new research on The Situationist. (First installment, “Choice and Inequality,” is here.)

Here is a summary of research co-authored by Situationist friend Nicole Stephens.

For the first time in history, the majority of Americans believe that women’s job opportunities are equal to men’s. For example, a 2005 Gallup poll indicated that 53 percent of Americans endorse the view that opportunities are equal, despite the fact that women still earn less than men, are underrepresented at the highest levels of many fields, and face other gender barriers such as bias against working mothers and inflexible workplaces.

New research from the Kellogg School of Management at Northwestern University helps to explain why many Americans fail to see these persistent gender barriers. The research demonstrates that the common American assumption that behavior is a product of personal choice fosters the belief that opportunities are equal and that gender barriers no longer exist in today’s workplace.

The study, “Opting Out or Denying Discrimination? How the Framework of Free Choice in American Society Influences Perceptions of Gender Inequality,” suggests that the assumption that women “opt out” of the workforce, or have the choice between career or family, promotes the belief that individuals are in control of their fates and are unconstrained by the environment.

The study was co-authored by Nicole M. Stephens, assistant professor of management and organizations at the Kellogg School of Management, and Cynthia S. Levine, a doctoral student in the psychology department at Stanford University. It will be published in a forthcoming issue of Psychological Science, a journal of the Association for Psychological Science.

“Although we’ve made great strides toward gender equality in American society, significant obstacles still do, in fact, hold many women back from reaching the upper levels of their organizations,” said Stephens. “In our research, we sought to determine how the very idea of ‘opting out,’ or making a choice to leave the workplace, may be maintaining these social and structural barriers by making it more difficult to recognize gender discrimination.”

In one study, a group of stay-at-home mothers answered survey questions about how much choice they had in taking time off from their career and about their feelings of empowerment in making life plans and controlling their environment.

The participants then reviewed a set of real statistics about gender inequality in four fields – business, politics, law and science/engineering – and were asked to evaluate whether these barriers were due to bias against women or societal and workplace factors that make it difficult for women to hold these positions.

As predicted, most women explained their workplace departure as a matter of personal choice – which is reflective of the cultural understanding of choice in American society and underscores how the prevalence of choice influences behavior. These same women experienced a greater sense of personal well-being, but less often recognized the examples of discrimination and structural barriers presented in the statistics.

In a follow-up experiment, the researchers examined the consequences of the common cultural representation of women’s workplace departure as a choice. Specifically, they examined how exposure to a choice message influenced Americans’ beliefs about equality and the existence of discrimination. First, undergraduate students were subtly exposed to one of two posters on a wall about women leaving the workforce: either a poster with a choice message (“Choosing to Leave: Women’s Experiences Away from the Workforce”) or one in a control condition that simply said “Women at Home: Experiences Away from the Workforce.”

Then, the participants were asked to take a survey about social issues. The participants exposed to the first poster with the choice message more strongly endorsed the belief that opportunities are equal and that gender discrimination is nonexistent than did the control group, who more readily recognized discrimination. Interestingly, those participants who considered themselves to be feminists were more likely than other participants to identify discrimination.

“This second experiment demonstrates that even subtle exposure to the choice framework promotes the belief that discrimination no longer exists,” said Levine. “One single brief encounter – such as a message in a poster – influenced the ability to recognize discrimination. Regular exposure to such messages could intensify over time, creating a vicious cycle that keeps women from reaching the top of high-status fields.”

Overall, Stephens and Levine noted that while choice may be central to women’s explanations of their own workplace departure, this framework is a double-edged sword.

“Choice has short-term personal benefits on well-being, but perhaps long-term detriments for women’s advancement in the workplace collectively,” said Stephens. “In general, as a society we need to raise awareness and increase attention for the gender barriers that still exist. By taking these barriers into account, the discussion about women’s workplace departure could be reframed to recognize that many women do not freely choose to leave the workplace, but instead are pushed out by persistent workplace barriers such as limited workplace flexibility, unaffordable childcare, and negative stereotypes about working mothers.”

Since the early 2000s, much of Jon Hanson’s (and other Situationist Contributors’) research, writing, teaching, and speaking has focused on the role of “choice,” “the choice myth,” and “choicism” in rationalizing injustice and inequality, particularly in the U.S. (e.g., The Blame Frame: Justifying (Racial) Injustice in America). That work has helped to inspire a significant amount of fascinating experimental research (and, unfortunately, one derivative book) on the topic. Over the next couple of months, we will highlight some of that intriguing new research on The Situationist.

Abstract: Wealth inequality has significant psychological, physiological, societal, and economic costs. We investigate how seemingly innocuous, culturally pervasive ideas can help maintain and further wealth inequality. Specifically, we test whether the concept of choice, which is deeply valued in American society, leads people to act in ways that maintain and perpetuate wealth inequality. Choice, we argue, activates the belief that life outcomes stem from personal agency, not from societal factors, leading people to justify wealth inequality. Six experiments show that when choice is highlighted, people are less disturbed by facts about the existing wealth inequality in the U.S., more likely to underestimate the role of societal factors in individuals’ successes, less likely to support the redistribution of educational resources, and less likely to tax the rich even to resolve a government budget deficit crisis. The findings indicate that the culturally valued concept of choice contributes to the maintenance of wealth inequality.

* * *

Wealth inequality has substantial negative consequences for societies, including reduced well-being (Napier & Jost, 2008), fewer public goods (Frank, 2011; Kluegel & Smith, 1986), and even lower economic growth (Alesina & Rodrik, 1994). Despite these well-known negative consequences, high levels of wealth inequality persist in many nations. For example, the U.S. has the greatest degree of wealth inequality among all the industrialized countries in terms of the Gini Coefficient (93rd out of 134 countries; CIA Factbook, 2010). Moreover, wealth inequality in the U.S. substantially worsened in the first decade of the 21st century, with median household income in 2010 equal to that in 1997 (U.S. Census Bureau, 2011), although per-capita GDP increased by 33% over the same period (Bureau of Economic Analysis, 2011), indicating that all of the gain in wealth was concentrated at the top end of the wealth distribution.
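The Gini coefficient cited above summarizes a wealth distribution on a 0-to-1 scale, where 0 means everyone holds an equal share and values approaching 1 mean one person holds nearly everything. As a minimal sketch (the function name and sample values here are illustrative, not from the study), one standard way to compute it from a list of individual holdings is:

```python
def gini(values):
    """Gini coefficient via the sorted-rank formula.

    0 = perfect equality; values near 1 = extreme concentration.
    """
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    # Sum of rank * value over the sorted list (ranks are 1-indexed)
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

# Everyone holds the same wealth -> perfect equality
print(gini([10, 10, 10, 10]))   # 0.0
# One person holds everything (approaches 1 as n grows)
print(gini([0, 0, 0, 100]))     # 0.75
```

For small groups the maximum attainable value is (n − 1)/n rather than exactly 1, which is why the four-person winner-take-all example yields 0.75.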

A large majority of Americans disapprove of a high degree of wealth inequality (Norton & Ariely, 2011), for example, when the top 1% of people on the wealth distribution possess 35% of the nation’s wealth, as was the case in the U.S. in 2007 (Wolff, 2010). Instead, people prefer a more equal distribution of wealth that includes a strong middle class, such as when the middle 60% of people own approximately 60% of the nation’s wealth, rather than only the 15% that they owned in the U.S. in 2007. If people are unhappy with wealth inequality, then policies that reduce this inequality should be widely supported, particularly in times of increasing wealth inequality. However, Americans often oppose specific policies that would remedy wealth inequality (Bartels, 2005). For example, taxation and redistribution—taxing the rich and using the proceeds to provide public goods, public insurance, and a minimum standard of living for the poor—is probably the most effective means for reducing wealth inequality from an economic perspective (Frank, 2011; Korpi & Palme, 1998). However, most Americans, including working class and middle class citizens, have supported tax cuts even for the very rich and oppose government spending on social services that would mitigate inequality (Bartels, 2005; Fong, 2001). What factors explain this inconsistency between a general preference for greater wealth equality and opposition to specific policies that would produce it? We investigate whether people’s attitudes toward wealth inequality and support for policies that reduce wealth inequality are influenced by the concept of choice.

Choice is a core concept in U.S. American culture . . . .

Recent research suggests that the concept of choice decreases support for societally beneficial policies (e.g., a tax on highly polluting cars) but increases support for policies furthering individual rights (e.g., legalizing drugs; Savani, Stephens, & Markus, 2011). Historical analyses also suggest that Americans often use the concept of choice to justify inequality, arguing that the poor are poor because they made bad choices (Hanson & Hanson, 2006; see also Stephens & Levine, 2011). Building upon this work, we theorized that the assumption that people make free choices, when combined with the fact that some people turned out rich and others poor, leads people to believe that inequality in life outcomes is justified and reasonable. Therefore, when people think in terms of choice, we hypothesized that they would be less disturbed by wealth inequality and less supportive of policies aimed at reducing this inequality. . . .

The web is also extremely useful in allowing dinner party debates and argument to achieve some semblance of closure.

Take, for example, a recent dinner party exchange about whether people who are virulently homophobic are more likely to be homosexual. In the past, this discussion would have turned largely on personal stories and well-known cases, but, with smart phones at the ready, it was possible to quickly check whether any psychological studies had been done on the matter.

The authors investigated the role of homosexual arousal in exclusively heterosexual men who admitted negative affect toward homosexual individuals. Participants consisted of a group of homophobic men (n = 35) and a group of nonhomophobic men (n = 29); they were assigned to groups on the basis of their scores on the Index of Homophobia (W. W. Hudson & W. A. Ricketts, 1980). The men were exposed to sexually explicit erotic stimuli consisting of heterosexual, male homosexual, and lesbian videotapes, and changes in penile circumference were monitored. They also completed an Aggression Questionnaire (A. H. Buss & M. Perry, 1992). Both groups exhibited increases in penile circumference to the heterosexual and female homosexual videos. Only the homophobic men showed an increase in penile erection to male homosexual stimuli. The groups did not differ in aggression. Homophobia is apparently associated with homosexual arousal that the homophobic individual is either unaware of or denies.

Interesting stuff! Now, if you wouldn’t mind passing the mashed potatoes, that would be great . . .

NPLAN filed a complaint with the FTC today alleging that Frito-Lay has engaged in deceptive marketing to teens by disguising Doritos ads as entertainment; by collecting and using kids’ personal information in violation of its own privacy policy and without adequate disclosure about the extent and purpose of the data collection; and by engaging in viral marketing in violation of the FTC’s endorsement guidelines. Learn more about the complaint here.

These videos, which detail the advertising strategies and goals, speak for themselves.

For more on the situation of eating, see Situationist contributors Adam Benforado, Jon Hanson, and David Yosifon’s law review article Broken Scales: Obesity and Justice in America. For a listing of numerous Situationist posts on the situational sources of obesity, click here.

Over the past 50 years, we’ve seen a number of gigantic policies produce disappointing results — policies to reduce poverty, homelessness, dropout rates, single-parenting and drug addiction. Many of these policies failed because they were based on an overly simplistic view of human nature. They assumed that people responded in straightforward ways to incentives. Often, they assumed that money could cure behavior problems.

Fortunately, today we are in the middle of a golden age of behavioral research. Thousands of researchers are studying the way actual behavior differs from the way we assume people behave. They are coming up with more accurate theories of who we are, and scores of real-world applications.

* * *

Yet in the middle of this golden age of behavioral research, there is a bill working through Congress that would eliminate the National Science Foundation’s Directorate for Social, Behavioral and Economic Sciences. This is exactly how budgets should not be balanced — by cutting cheap things that produce enormous future benefits.

Let’s say you want to reduce poverty. We have two traditional understandings of poverty. The first presumes people are rational. They are pursuing their goals effectively and don’t need much help in changing their behavior. The second presumes that the poor are afflicted by cultural or psychological dysfunctions that sometimes lead them to behave in shortsighted ways. Neither of these theories has produced much in the way of effective policies.

Eldar Shafir of Princeton and Sendhil Mullainathan of Harvard have recently, with federal help, been exploring a third theory, that scarcity produces its own cognitive traits.

A quick question: What is the starting taxi fare in your city? If you are like most upper-middle-class people, you don’t know. If you are like many struggling people, you do know. Poorer people have to think hard about a million things that affluent people don’t. They have to make complicated trade-offs when buying a carton of milk: If I buy milk, I can’t afford orange juice. They have to decide which utility not to pay.

These questions impose enormous cognitive demands. The brain has limited capacities. If you increase demands on one sort of question, it performs less well on other sorts of questions.

Shafir and Mullainathan gave batteries of tests to Indian sugar farmers. After they sell their harvest, they live in relative prosperity. During this season, the farmers do well on the I.Q. and other tests. But before the harvest, they live amid scarcity and have to think hard about a thousand daily decisions. During these seasons, these same farmers do much worse on the tests. They appear to have lower I.Q.’s. They have more trouble controlling their attention. They are more shortsighted. Scarcity creates its own psychology.

Princeton students don’t usually face extreme financial scarcity, but they do face time scarcity. In one game, they had to answer questions in a series of timed rounds, but they could borrow time from future rounds. When they were scrambling amid time scarcity, they were quick to borrow time, and they were nearly oblivious to the usurious interest rates the game organizers were charging. These brilliant Princeton kids were rushing to the equivalent of payday lenders, to their own long-term detriment.

Shafir and Mullainathan have a book coming out next year, exploring how scarcity — whether of time, money or calories (while dieting) — affects your psychology. They are also studying how poor people’s self-perceptions shape behavior. Many people don’t sign up for welfare benefits because they are intimidated by the forms. Shafir and Mullainathan asked some people at a Trenton soup kitchen to relive a moment when they felt competent and others to recount a neutral experience. Nearly half of the self-affirming group picked up an available benefits package afterward. Only 16 percent of the neutral group did.

People are complicated. We each have multiple selves, which emerge or don’t depending on context. If we’re going to address problems, we need to understand the contexts and how these tendencies emerge or don’t emerge. We need to design policies around that knowledge. Cutting off financing for this sort of research now is like cutting off navigation financing just as Christopher Columbus hit the shoreline of the New World.