
Sunday, 9 March 2014

The ultimatum game, a key experiment showing intrinsic fairness and altruism among strangers

Knowledge will come only if economics can be reoriented to the study of man as he is and the economic system as it actually exists. — Ronald Coase

There is a line of economic research on altruism that gets little attention in the media, which is why I want to report on it here. There is by now solid evidence that humans can behave altruistically towards strangers. This is surprising because a naive version of evolutionary theory would expect altruism to be possible only among kin. It also goes against the basic assumptions of economics, game theory and public choice theory, which all assume that humans only have an eye for their self-interest. These assumptions are often defended by appealing to evolution: you supposedly need to be an unapologetic egoist for optimal reproduction. I mention this because non-academics may think it is natural to assume humans can be altruistic and wonder why one would research something that trivial.

It is not controversial science. There are hundreds of scientists working on it; almost every month Nature or Science publishes an article on altruism. And at the University of Bonn, where I work, Reinhard Selten is emeritus professor and received his Nobel Prize for experimental research on the ultimatum game. That is also how I heard about it: our university magazine had an article on Selten and this beautiful experiment.

In a social setting where there is an opportunity to build up a reputation, altruism can be explained. In this case, altruism may lead to future benefits, and one may even argue that it is thus not real altruism. However, this type of altruism is expected to break down when a group is under pressure and may soon dissolve, yet humans also collaborate under such difficult circumstances. Furthermore, reputation-building naturally does not work under conditions of anonymity, while experiments show that humans also collaborate with strangers they will never see again. Our ability to collaborate with non-kin is an important innovation that contributes much to our success as the dominant animal.

Ultimatum game

A very simple and pure economic game, which thus shows the problem very clearly, is the ultimatum game (Güth et al., 1982). In the ultimatum game, two players must divide a sum of money. The first player (proposer) has to propose a certain division. The second player (responder) can accept this division or reject it; in the latter case neither player receives any money. In its purest form, the experiment is played only once and anonymously, with players that do not know each other.

When the article described the experiment, I wondered what was interesting about that. Naturally people offer 50% and the responder accepts. Right? That only showed my lack of economic training. Economic and game theory predict that the responder will accept any non-zero amount, because for a rational person obtaining something is better than nothing. Knowing this, the proposer is expected to give the responder only the smallest amount possible.
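The contrast between the game-theoretic prediction and fairness-driven behaviour can be sketched in a few lines of code. The 30% rejection cutoff below is my own illustrative number, not a figure from the experiments:

```python
# Ultimatum game sketch. A purely "rational" responder accepts any positive
# offer, so the proposer's best move is the smallest unit. A fairness-minded
# responder rejects low offers, which pushes the optimal offer upward.

POT = 100  # amount of money to divide

def rational_accepts(offer):
    # Something is better than nothing: accept any positive amount.
    return offer > 0

def fair_accepts(offer):
    # Illustrative fairness rule: reject anything below 30% of the pot.
    return offer >= 0.3 * POT

def best_offer(accepts):
    # The proposer keeps POT - offer; find the offer maximizing his payoff.
    payoffs = {offer: (POT - offer if accepts(offer) else 0)
               for offer in range(POT + 1)}
    return max(payoffs, key=payoffs.get)

print(best_offer(rational_accepts))  # 1: offer the smallest positive amount
print(best_offer(fair_accepts))      # 30: just enough to avoid rejection
```

Against a fairness-minded responder, the proposer's money-maximizing offer jumps from the minimum to whatever the responder's threshold happens to be.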

The experiment, however, shows that fair splits are common and that low offers are regularly rejected (Güth et al., 1982). This result is found across many different cultural groups (Henrich et al., 2005), and although the fraction offered varies, giving the responder a fair share seems to be universal.

Many variations of the ultimatum game and experiments with similar economic games have led to the conclusion that a sense of fairness, or the willingness to costly (altruistically) punish unfair behavior, is innate to humans (Fehr et al., 2002). Neurological studies show that people feel disgust when treated unfairly (Sanfey et al., 2003) and, conversely, feel good when able to punish unfair behavior (Singer et al., 2006). One should not confuse altruism with always playing nice: without people who punish egoistic behavior, bad behavior will spread.

Maybe it sounds like a rather artificial laboratory experiment to you. Many economists use that argument to dismiss the experiment. However, similar situations happen regularly.

If you take a taxi (in another town), you typically pay for the ride. Why not run away? Economic theory would predict that the driver will not run after you: that is risky, and all the hassle with the police would take hours, time in which the driver could have earned money. Going after a non-paying customer is pure altruism. There is no way to build up a reputation and it is completely anonymous; at best the chase would help a fellow taxi driver get paid next time. Irrespective of economic theory, you can be sure that the taxi driver will try to catch you.

Basically, fairness is central to almost every economic interaction with a stranger. Once you have an eye for it, you notice how important altruism, fairness, and trust are in a modern society. Had we really been the ruthless egoists the elites preach us to be, we would not have accomplished much.

Explanations

It has been argued that altruism is evolutionarily impossible (Fehr and Fischbacher, 2003). A "rational person" willing to accept any offer, a free rider in the ultimatum game, would be richer and able to produce more offspring than an altruistic individual. Consequently, altruism would be expected to die out in such a naive evolutionary framework.

One way out is a multilevel selection framework (similar to what used to be called "group selection", which has become a derogatory term; Wilson and Wilson, 2007). The idea is that if a group benefits strongly, this might compensate for the individual disadvantages of altruistic behavior. In such a framework altruism can develop if the competitive disadvantage of altruistic behavior within the group is compensated by strong advantages for the group. This framework thus needs to assume very little intra-group competition, very strong inter-group competition, and group stability. Most scientists find the required numbers unrealistic (Williams, 1966).

Another explanation for collaboration among non-kin could be joint investments, especially joint investments in building a nest and defenses for it. Like reputation, this does not work in an anonymous setting, however.

An explanation I could imagine working, but for which I have not yet seen any study, would be microbes. If we live in a group with non-kin, we may not share genes, but we do share microbes. If altruism is beneficial for the group, and thus for the size of the microbe's niche, it could be a strategy of the microbe to make us more altruistic.

Microbes changing human behavior is less far-fetched than one may think. The parasitic protozoan Toxoplasma gondii can influence the behavior of rodents to increase the likelihood that they are eaten by cats, which the protozoan needs for sexual reproduction. Infected rodents are less repelled by cat odors; they become more curious and less anxious, and they move around more. There is even some indication that Toxoplasma gondii also influences humans: it can increase dopamine production and may change personality in men and make women more promiscuous.

Whatever the mechanism behind human altruism towards strangers will turn out to be, we know that evolution somehow found a trick and in this way improved the reproductive chances of people demanding fairness in general, and specifically of the responders in the ultimatum game. It would be important to understand how it works, however. That would make it much harder for economists to ignore these findings. And only then can we understand how prevalent altruism and fairness are and where we have to look for deviations from the simplistic Homo economicus assumption.

Related reading

The Guardian published a review of the book "I Spend, Therefore I Am" by Philip Roscoe. One sentence made very clear why it is important to improve economics and to make sure everyone knows its view of humanity is flawed: "Not only does economics embody a false image of man...it remakes him according to that false image."

Go ahead and gossip. It’s good for society.
Article in the Washington Post on research showing that when people know others may talk about their reputation, and when it is possible to exclude people from participation, they tend to behave more generously.

8 comments:

Anonymous said...

I also recommend Daniel Kahneman's "Thinking, Fast and Slow". Quite readable, with some nice jabs at a considerable group of economic scientists for their common erroneous assumption of rationality in consumers.

Marco, thanks for the recommendation. I expect that you are right. The book is on my reading pile. I even have the hardcover, because at the time I did not want to wait.

Many of the examples in the book are, however, examples of what people call "bounded rationality": people not behaving rationally due to limited insight or capabilities. What I like about the ultimatum game is that the responder is better off behaving "irrationally", by having a sense of fairness. Thus this is a case where rationality is inferior.

If you are boundedly rational, you can often be exploited. Fairness on the other hand provides protection against being exploited.

And as a physicist, I also like the reduced nature of the game: it is very simple. In physics you also try to find an experiment that illustrates the effect you are interested in while being as simple as possible. That allows you to go into depth.

Hopefully Kahneman's book will also provide some more experiments of such a nature.

I enjoyed reading this post, Victor. I studied a bit of economics at university and learnt about game theory, which at first made me feel a bit angry. How could they assume complete self-interest? Humans are altruistic and I think there is an evolutionary basis for this.

Peter Singer has quite a good article about it online at The Biological Basis of Ethics and he mentions game theory and the prisoner's dilemma. I'll copy and paste a bit:

"The Prisoner's Dilemma shows that, paradoxical as it may seem, we will sometimes be better off if we are not self-interested. Two or more people motivated by self-interest alone may not be able to promote their interests as well as they could if they were more altruistic or more conscientious.

"The Prisoner's Dilemma explains why there could be an evolutionary advantage in being genuinely altruistic instead of making reciprocal exchanges on the basis of calculated self-interest. Prisons and confessions may not have played a substantial role in early human evolution, but other forms of cooperation surely did. Suppose two early humans are attacked by a sabertooth cat. If both flee, one will be picked off by the cat; if both stand their ground, there is a very good chance that they can fight the cat off; if one flees and the other stands and fights, the fugitive will escape and the fighter will be killed. Here the odds are sufficiently like those in the Prisoner's Dilemma to produce a similar result. From a self­interested point of view, if your partner flees your chances of survival are better if you flee too (you have a 50 percent chance rather than none at all) and if your partner stands and fights you still do better to run (you are sure of escape if you flee, whereas it is only probable, not certain, that together you and your partner can overcome the cat). So two purely self-interested early humans would flee, and one of them would die. Two early humans who cared for each other, however, would stand and fight, and most likely neither would die. Let us say, just to be able to put a figure on it, that two humans cooperating can defeat a sabertooth cat on nine out of every ten occasions and on the tenth occasion the cat kills one of them. Let us also say that when a sabertooth cat pursues two fleeing humans it always catches one of them, and which one it catches is entirely random, since differences in human running speed are negligible in comparison to the speed of the cat. Then one of a pair of purely self-interested humans would not, on average, last more than a single encounter with a sabertooth cat; but one of a pair of altruistic humans would on average survive ten such encounters."

Maybe I have underestimated my audience. High-quality comments! Rachel, thanks for that link to Singer, a long read, but well worth it. And that text nicely complements my post, as it describes altruism among kin and via building up a reputation (indirect reciprocity). All these together with altruism towards strangers likely reinforce each other.

The example with the sabertooth cat clearly shows that altruism and collaboration have always been important. Similarly, we also see collaboration among animals, even among single-celled slime molds. They even sacrifice themselves so that others can be transported by the wind to more fertile grounds. They are likely not intelligent enough for reputation to be important. Thus for such examples we need other mechanisms.

You've mostly talked about the altruism bit - ie, the "fair split". But the "low offers are rejected" bit is important; without that half, the other half doesn't work (or so I assert, and you have no experimental evidence to show otherwise).

> In its purest form, the experiment is played only once and anonymously with players that do not know each other

Once in that round, or once for that person? I'm a bit dubious that people get pulled in to play just once and are then let back onto the street. More likely you play several rounds against unknowns, in which case there is an incentive to reject: you hope to get a better offer next time. Also, if you're splitting $100 and are offered $1, then the penalty for rejecting, versus the satisfaction of denying the other guy $99, is small. Who cares about $1 anyway?

But more importantly, I think there is a lesson for society in all this, which we're not very good at learning, which is that punishing those who "cheat" is a necessary part of maintaining civilisation.

"You've mostly talked about the altruism bit - ie, the "fair split". But the "low offers are rejected" bit is important; without that half, the other half doesn't work (or so I assert, and you have no experimental evidence to show otherwise)."

The rejection of low offers is altruism. It may help others by enforcing the social norm of fairness, but it will normally not help you personally, and you do get less money by rejecting a low offer. I would say the main conclusion of the research on the ultimatum game is that fairness is only possible if people altruistically punish unfair behaviour. So, yes, I agree fully with you that that is paramount to maintaining civilisation. Maybe I should have stressed that even more above the line.

The anonymity is typically created by playing in large groups by computer, or in the past by handing slips of paper with offers and rejections to the experimenter.

If you have two people that play multiple times with each other (by computer, without knowing each other), you see that in the beginning higher offers are rejected than at the end. This shows that people are aware that they can build up a reputation and that this can help them get higher offers later. But also in the last round, and in single-shot games, people reject offers that are too low.
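This declining rejection threshold can be illustrated with a toy calculation. The per-round reputation gain of 15 is a hypothetical parameter of mine, not a number from the experiments:

```python
# Toy model (my own assumptions): in a repeated ultimatum game, a responder
# who rejects a low offer forgoes money now but can raise the proposer's
# future offers. The value of that reputation shrinks as fewer rounds
# remain, so the rejection threshold falls towards the end of the game.

POT = 100  # amount to divide each round

def rejection_threshold(rounds_left, offer_gain=15):
    """Lowest offer worth accepting, if each rejection raises all
    remaining offers by `offer_gain` (hypothetical parameter)."""
    # Rejecting an offer o costs o now and gains offer_gain per future
    # round, so reject whenever o < rounds_left * offer_gain, capped at
    # the fair split.
    return min(POT // 2, rounds_left * offer_gain)

for rounds_left in (5, 3, 1, 0):
    print(rounds_left, rejection_threshold(rounds_left))
# 5 50, 3 45, 1 15, 0 0: early rounds demand fairness, the last round none
```

Note that this toy responder is purely money-maximizing; the experimental finding is that real responders still reject low offers even at `rounds_left = 0`, which is where the fairness norm shows.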

There is also research on the ultimatum game in poor countries, where it is possible to play for amounts that are high for the players, comparable to or even higher than a monthly salary. Even in such a scenario low offers are rejected. The fractions proposed and rejected do go down with the amount, but not very steeply.

Paying a seamstress in Asia ten times more would make your sweater less than $1 more expensive. Yet the managers of clothing companies do their best to avoid a label coming on the market that would guarantee decent labour conditions.

Thus I would personally guess that the amount is actually not that important. As with many social norms, you either violate it or not. If you violate it, you must count on retaliation, no matter how small the violation.

I think that there is something missing when trying to look for evolutionary explanations here. Situations like this - of interaction with a complete stranger, or more generally one-off no-consequence interactions, were probably rare in the past for the vast majority. So we may be looking at a spandrel; people default to fairness because a default behavior of unfairness would turn you into an outcast pretty quickly 'in the wild'.

You could equally argue that a mouse triggering a mousetrap on itself is being altruistic towards other mice in the vicinity. Of course, it's really a self-destructive behavior caused by a lack of evolutionary exposure to mousetraps. Hence the people playing the pure form of this game are engaging in behavior harmful to themselves (rejecting small offers), because their evolution has not been shaped by such encounters.

Andrew, yes, I think that is the strongest argument. You can never be sure that your anonymity is really guaranteed, especially in the past.

And behaving consistently when you do not expect to be observed is a very strong signal in case it is observed.

On the other hand, the experiments do show that people are quite aware of the possibility to build up a reputation and respond appropriately in the repeated games mentioned above. In earlier rounds people demand more fairness than in later rounds.

And also in the past, you were not just dealing with your own little tribe, but also with all your neighbours. With a high birth and death rate, that is a large number of people, and it would be hard to keep track of all those reputations.

In the end, this might be a good example for my recent post on falsification. What for one person is a fine solution to an apparent discrepancy is, to another, an ugly ad-hoc fix and a sign to look for a better theory. I am in the second camp, but it is well possible that I am wrong. It is not like the climate blog "debate": both positions are reasonable, and we will have to see how it resolves.