Integrity, Socio-Economic Class and Moral Licensing

To see whether dishonesty varies with social class, psychologist Paul Piff of the University of California, Berkeley, and colleagues devised a series of tests, working with groups of 100 to 200 Berkeley undergraduates or adults recruited online. Subjects completed a standard gauge of their social status, placing an X on one of 10 rungs of a ladder representing their income, education, and how much respect their jobs might command compared with other Americans.

The team’s findings suggest that privilege promotes dishonesty. For example, upper-class subjects were more likely to cheat. After five apparently random rolls of a computerized die for a chance to win an online gift certificate, three times as many upper-class players reported totals higher than 12—even though, unbeknownst to them, the game was rigged so that 12 was the highest possible score.
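The logic of the rigged-die paradigm can be sketched in a few lines. The details of the actual program are not public, so the rejection-sampling approach below is only an assumed way of capping the total; the point it illustrates is simply that any self-reported total above 12 must be an over-report.

```python
import random

def rigged_rolls(n_rolls=5, max_total=12):
    """Generate n_rolls six-sided die rolls whose sum never exceeds
    max_total, mimicking the capped computerized die described in the
    study (the real implementation is an assumption here)."""
    while True:
        rolls = [random.randint(1, 6) for _ in range(n_rolls)]
        if sum(rolls) <= max_total:
            return rolls

rolls = rigged_rolls()
# The game guarantees sum(rolls) <= 12, so a reported total of 13 or
# more can only come from dishonest reporting.
```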

The Gemara (as explained by the commentators) suggests the opposite, declaring that typically, a borrower will trust a lender, “for if he were not a trustworthy and upright person, he would not have been made rich by Heaven”, whereas a lender will not trust a borrower, “for if he were not a betrayer and a deceiver, he would not have been required by Heaven to become a borrower”.

In a final experiment, the researchers took their hypothesis to the streets. At a busy intersection in the San Francisco Bay area, the team stationed “pedestrians” at crosswalks, with instructions to approach the crossing at a point when oncoming drivers would have a chance to stop. Observers coded the status of the cars’ drivers based on the vehicles’ age, make, and appearance. Drivers of shiny, expensive cars were three times more likely than those of old clunkers to plow through a crosswalk, failing to yield to pedestrians as required by California state law. High-status motorists were also four times more likely than those with cheaper, older cars to cut off other drivers at a four-way stop.

In an interesting twist, about one-third of Prius drivers broke crosswalk laws, putting the hybrid among the highest “unethical driving” car brands. “This is a good demonstration of the ‘moral licensing’ phenomenon, in which hybrid-car drivers who believe they’re saving the Earth may feel entitled to behave unethically in other ways,” Piff says. (The Prius results were observed but not analyzed for statistical significance in the study.)

Our moral thermostat – why being good can give people license to misbehave

What happens when you remember a good deed, or think of yourself as an upstanding citizen? You might think that your shining self-image would reinforce the value of selflessness and make you more likely to behave morally in the future. But a new study disagrees.

Through three psychological experiments, Sonya Sachdeva from Northwestern University found that people who are primed to think well of themselves behave less altruistically than those whose moral identity is threatened. They donate less to charity and they become less likely to make decisions for the good of the environment.

Sachdeva suggests that the choice to behave morally is a balancing act between the desire to do good and the costs of doing so – be they time, effort or (in the case of giving to charities) actual financial costs. The point at which these scales balance is set by our own sense of self-worth. Tip the scales by threatening our saintly personas and we become more likely to behave selflessly, to cleanse our tarnished self-image. Do the opposite, and our bolstered moral identity slackens our commitment, giving us a license to act immorally. Having established our persona as a do-gooder, we feel less impetus to bear the costs of future moral actions.

It’s a fascinating idea. It implies both that we have a sort of moral thermostat, and that it’s possible for us to feel “too moral”. Rather than a black-and-white world of heroes and villains, Sachdeva paints a picture of a world full of “saintly sinners and sinning saints”.

In her first experiment, Sachdeva asked 46 students to copy a list of nine words that were either positive (“caring”, “generous” or “kind”), negative (“disloyal”, “greedy” or “selfish”) or neutral (“book”, “keys” or “house”). The recruits were told that they had signed up for a study on the psychology of handwriting, and they had to write a story about themselves that included all of the words they saw. They then completed a filler task, after which they were asked if they wanted to make a small donation to a charity of their choice.

Sachdeva found that the students who described themselves with positive words gave the least to charity – a measly $1.07. That was less than the $2.71 average donation of the group that used the neutral words, and about a fifth of the $5.30 given by the negative-word group.

Of course, the volunteers’ essays may not actually have affected their moral identity. Indeed, they had a tendency to use the positive words to describe themselves, but the negative ones to portray someone else in their lives. To control for that, Sachdeva repeated the experiment with another group of 39 students but this time, she randomly told them to write specifically about either themselves or someone they knew.

Among those who described other people, the nature of the words they used had no significant bearing on the amount of money they donated. But among the group who wrote about themselves, those who described themselves positively gave less to charity ($1.11) than those whose choice of words was negative ($5.56). It seems that a person’s propensity for selflessness changes when their self-image shifts.

A third experiment supported that idea. After completing the same task as before, 46 students were led to what they believed was a second unrelated study. They were role-playing as the manager of a manufacturing plant, which was facing pressure from environmental lobbyists to reduce the pollutants from its smokestacks using expensive air filters. Other managers had agreed to run them for 60% of the time.

Amid a smokescreen of general questions, Sachdeva asked the volunteers what proportion of the time they themselves would run the filters. Their answers showed the same trend as the first experiment.

Those who saw the negative words were extra-cooperative, running the filters for 73% of the time. The neutral group ran the filters 67% of the time. And the positive-word group were the least cooperative, running them just 56% of the time. They, in particular, were more likely to think that the plant’s profits outweighed environmental concerns. However, when Sachdeva asked them to predict what proportion of the other managers would stick to the 60% agreement, the three groups gave similar answers. Again, it was their own self-image that mattered.

In all three studies, Sachdeva believes that her story-telling task psychologically primed the volunteers with positive or negative traits. They either wanted to cleanse themselves morally, or felt they had license to kick back a bit and let their wicked side out.

Other groups have found similar results before. In 1969, Merrill Carlsmith and Alan Gross found that people are more compliant with a researcher’s requests if they had previously been forced to deliver painful (and fake) electric shocks to a (pretend) victim (but not if they just watched this happening). Their motive was to alleviate their own personal guilt, for they behaved in the same way even if the researcher was apparently unaware of their wrongdoing and even if their act of restitution had no impact on the shocked victim. I’ve also blogged before about situations where people will prefer cleaning products, and will physically clean themselves, if they remember a past misdeed.

Sachdeva also cites several studies which have found that ethical behaviour provides a license for laxer morality. People who can establish their identity as a non-prejudiced person, by contradicting sexist statements or hiring someone from an ethnic minority, become more likely to make prejudiced choices later.

There are many potentially fascinating ways of expanding on this study. For example, it would be interesting to see if asking people to remember many instances where they behaved ethically would produce a stronger license to misbehave than recalling just a single good deed.

Even better, you could see if changing a person’s self-image would affect their tendency to cheat in psychological games. That would tell us whether moral licensing gives people an excuse to avoid actively doing good deeds, or whether it actually increases the chances of immoral behaviour, perhaps by lowering the bar for what is deemed acceptable. Do people just avoid being good or would they actively be bad?

Sachdeva is also interested in the types of situations where people seem to break free of this self-regulating loop of morality, and where good behaviour clearly begets more good behaviour. For example, many social or political activists drop out of their causes after some cursory participation, while others seem to draw even greater fervour from their involvement. Why?

Sachdeva has two explanations. The first deals with habits – many selfless actions become more routine with time (recycling, for one). As this happens, the effort involved lessens, the “costs” seem smaller, and the potential for moral licensing fades. The second explanation relates to the standards that people set for themselves. Those who satisfy their moral goals award themselves a license to disengage more easily, but those who hold themselves to loftier standards are more likely to stay the course.

The moral thermostat and the problem of cultivating ethical scientists

The general attitude that emerges from these studies seems to be that being good is a chore (since it requires effort and sometimes expenditure), but that it’s a chore that stays done longer than dishes, laundry, or those other grinding but necessary labors that we understand need attention on a regular basis.

As someone who thinks a lot about the place of ethics in the everyday interactions of scientists, I have, as you can imagine, some thoughts about this attitude.

Sadly, a track record of being ethical isn’t sufficient in a world where your fellow scientist is relying on you to honestly report the results of your current study, to refrain from screwing over the author of the manuscript you are currently reviewing, and to make decisions that are not swayed by prejudice on the hiring committee on which you currently serve. But Sachdeva’s experiments raise the possibility that your awareness of your past awesomeness, ethically speaking, could undercut your future ethical performance.

How on earth can people maintain the ethical behaviors we hope they will exercise?

As Ed notes, the research does not rule out the possibility that mere mortals could stay the ethical course. It’s just a question of how consistently ethical people are setting their moral thermostats: …

Within the community of science, there are plenty of habits scientists cultivate, some conscious and some unconscious. From the point of view of fostering more ethical behavior, it seems reasonable to say that cultivating a habit of honesty is a good thing — giving fair and accurate reports ought to be routine, rather than something that requires a great deal of conscious effort. Cultivating a habit of fairness (in evaluating the ideas and findings of others, in distributing the goods needed to do science, etc.) might also be worthwhile. The point is not to get scientists to display extraordinarily saintly behavior, but to make honesty and fairness a standard part of how scientists roll.

Then there’s the strategy of setting lofty goals. The scientific community shares a commitment to objectivity, something that involves both individual effort and coordination of the community of scientists. Objectivity is something that is never achieved perfectly, only by degrees. This sets the bar high enough that scientists’ frailties are always pretty evident, which may reduce the potential for backsliding.

At the same time, objectivity is so tightly linked with the scientific goal of building a reliable body of knowledge about the world that it’s unlikely that this lofty goal will be jettisoned simply because it’s hard to achieve.

I don’t think we can overlook the danger in latching onto goals that reveal themselves to be impossible or nearly so. Such goals won’t motivate action — or if they do, they will motivate actions like cheating as the most rational response to a rigged game. Indeed, situations in which the individual feels like her own success might require going against the interests of the community make me think that it’s vitally important for individual interests and community interests to be aligned with each other. If what is good for the community of scientists is also good for the individual scientist trying to participate in the scientific discourse and to make a contribution to the shared body of knowledge, then being good feels a lot less like altruism. And, tuning up the institutional contexts in which science is practiced to reduce the real career costs of honesty and cooperation might be the kind of thing that would lead to better behavior, too.

Our own ethical-spiritual tradition certainly acknowledges the pitfalls attendant upon the self-perception of moral accomplishment, but in my experience, the concern is generally with the potential for feelings of conceit, complacency and smugness, and for contempt and condescension toward those perceived to be of lesser spiritual stature – not with the broader concept of moral licensing: