The Ethics of Tit-for-Tat

Massimo Pigliucci on game theory, rational egoism and the evolution of fairness.

Is it rational to be ethical? Many philosophers have wrestled with this most fundamental of questions, attempting to clarify whether humans are well served by ethical rules or whether they weigh us down. Would we really be better off if we all gave in to the desire to just watch out for our own interests and take the greatest advantage to ourselves whenever we can? The Objectivist Ayn Rand, for one, thought that the only rational behavior is egoism, and books aiming at increasing personal wealth (presumably at the expense of someone else’s wealth) regularly make the bestsellers list.

Plato, Kant, and John Stuart Mill, to mention a few, have tried to show that there is more to life than selfishness. In the Republic, Plato has Socrates defending his philosophy against the claim that justice and fairness are only whatever rich and powerful people decide they are. Despite a long and beautiful defense, the arguments of his opponents – that we can see plenty of examples of unjust people who lead great lives and of just ones who suffer just as greatly – seem more convincing than the high-mindedness of the father of philosophy.

Immanuel Kant attempted to reject what he saw as the cynical attitude of Christianity, where you are good now because you will get an infinite payoff later, and to establish independent rational foundations for morality. He therefore suggested that in order to decide whether something is ethical, one has to ask what would happen if everybody adopted the same behavior. However, Kant never explained why his version of rational ethics is indeed rational. Rand would object that establishing double standards, one for yourself and one for the rest of the universe, makes perfect sense. Besides, Kant was so stern that he thought an ethical action had dubious moral worth if the agent got any pleasure whatsoever out of it – even simple self-congratulation for a job well done. Since few human beings could withstand Kant’s test of the truly moral individual, how rational can his views be?

Mill also tried to establish ethics on firm rational foundations, in his case by improving on Jeremy Bentham’s idea of utilitarianism. In chapter two of his book Utilitarianism, Mill writes: “Actions are right in proportion as they tend to promote happiness; wrong as they tend to produce the reverse of happiness.” Leaving aside the thorny question of what happiness is and the difficulty of actually making such calculations (Bentham’s ‘hedonic calculus’ soon ran into deep practical trouble), one still has to answer the fundamental question of why one should care about increasing the average degree of happiness instead of just one’s own.

Things got worse with the advent of modern evolutionary biology. It seemed for a long time that Darwin’s theory would provide the naturalistic basis for the ultimate selfish universe: nature red in tooth and claw evokes images of ‘every man for himself,’ in pure Randian style. In fact, Herbert Spencer popularized the infamous doctrine of ‘Social Darwinism’ (which Darwin himself never espoused) well before Rand wrote Atlas Shrugged. Most people still cling to the idea that evolution may be true, but it certainly is no way to raise your children. Indeed, it is precisely because of the perception of the moral consequences of evolutionary theory that a large percentage of Americans fiercely opposes the teaching of Darwin’s ideas in public schools.

Recently, however, several scientists and philosophers have been taking a second look at evolutionary theory and its relationship with ethics, and are finding new ways of realizing the project of Plato, Kant, and Mill of deriving a fundamentally rational way of being ethical. Elliott Sober and David Sloan Wilson, in their Unto Others: The Evolution and Psychology of Unselfish Behavior, as well as Peter Singer in A Darwinian Left: Politics, Evolution and Cooperation, argue that human beings evolved as social animals, not as lone, self-reliant brutes. In a society, those who exhibit cooperative behavior (or at least a balance between cooperation and selfishness) will be favored by selection, while those looking out exclusively for number one will be ostracized, because unchecked selfishness reduces the fitness of most individuals and of the group as a whole.

All of this sounds good, but does it actually work? A recent study published in Science by Martin Nowak, Karen Page and Karl Sigmund provides a splendid example of how mathematical evolutionary theory can be applied to ethics, and how in fact social evolution favors fair and cooperative behavior. Nowak and his co-workers tackled the problem posed by the so-called ‘ultimatum game.’ In it, two players are offered the possibility of winning a pot of money, but they have to agree on how to divide it. One of the players, the proposer, makes an offer of a split ($90 for me, $10 for you, for example) to the other player; she, the responder, has the option of accepting or rejecting the offer. If she rejects it, the game is over and neither of them gets any money.

It is easy to demonstrate that the rational strategy is for the proposer to behave egotistically and to suggest a highly uneven split in which she takes most of the money, and for the responder to accept. The alternative is that neither of them gets anything. However, when real human beings from a variety of cultures and using a panoply of rewards play the game, the outcome is invariably a fair share of the prize. This would seem prima facie evidence that the human sense of fair play overwhelms mere rationality and thwarts the rationalistic prediction. On the other hand, it would also provide Ayn Rand with an argument that most humans are simply stupid, because they don’t appreciate the math behind the game.
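The backward-induction argument can be made concrete in a few lines of code. This is a minimal sketch, assuming a $100 pot and a $1 smallest unit (both illustrative numbers, not from the study):

```python
# One-shot ultimatum game: why the 'rational' split is so lopsided.
# POT and the $1 minimum unit are illustrative assumptions.
POT = 100

def responder_payoff(offer, accepts):
    """The responder keeps the offer if she accepts, nothing if she rejects."""
    return offer if accepts else 0

# Any positive offer pays the responder more than rejecting it does...
for offer in (50, 10, 1):
    assert responder_payoff(offer, accepts=True) > responder_payoff(offer, accepts=False)

# ...so a purely self-interested responder accepts whatever she is given,
# and the proposer's best reply is the smallest positive offer.
rational_offer = 1
print(f"proposer keeps ${POT - rational_offer}, responder gets ${rational_offer}")
```

The asymmetry comes entirely from the responder having no way to make her threat of rejection matter: in a single, anonymous encounter, rejecting costs her money and changes nothing else.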

Nowak and colleagues, however, simulated the evolution of the game in a situation in which several players get to interact repeatedly. That is, they considered a social situation rather than isolated encounters. If the players have memory of previous encounters (i.e. each player builds a ‘reputation’ in the group), then the winning strategy is to be fair because people are willing to punish dishonest proposers, which increases their own reputation for fairness and damages the proposer’s reputation for the next round. This means that – given the social environment – it is rational to be less selfish toward your neighbors.
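A toy illustration of the reputation effect (not Nowak, Page and Sigmund's actual model, and with made-up numbers) is to give each responder a minimum acceptable offer that proposers can learn about from previous rounds:

```python
# Toy sketch of reputation in the repeated ultimatum game.
# All figures are illustrative; this is not the published model.
POT = 100

def play(proposer_offer, responder_min):
    """One round: returns (proposer payoff, responder payoff)."""
    if proposer_offer >= responder_min:
        return POT - proposer_offer, proposer_offer
    return 0, 0  # rejection: nobody gets anything

# Without reputation, the proposer offers the minimum and is rejected
# by a responder known (to us, not to her) to refuse stingy splits:
no_reputation = play(1, responder_min=40)      # both walk away with nothing

# With reputation, the proposer knows this responder rejects low offers,
# so it pays to offer enough to be accepted:
with_reputation = play(40, responder_min=40)   # both earn something

print(no_reputation, with_reputation)
```

Once offers track reputations, responders who are known to punish stingy proposers end up receiving better offers, which is exactly why fairness becomes the rational policy in a social setting.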

A classic example of the same idea is offered by the now famous solution to the Prisoner’s Dilemma popularized by American political scientist Robert Axelrod in the early 1980s. The game exists in several variants, whose common feature is that two individuals each have a choice of cooperating with the other (at a small cost) or of defecting. Defection pays off if only one of the two players defects, but there is a large disadvantage if they both do. Axelrod demonstrated that a simple strategy, called tit-for-tat, wins these kinds of games as long as the players interact with each other several times, again building a ‘reputation’ for their behavior. Tit-for-tat is simply based on the idea that you cooperate on every first encounter, but after that you adjust your behavior to that of your partner: if he cooperates, you keep cooperating; if he defects, you retaliate. Axelrod actually organized a tournament in which tit-for-tat, implemented as a computer program, played against a variety of other strategies. In every case, being conditionally nice was the winner, and no completely selfish strategy was able to overcome it. I agree with Peter Singer when he says (in How Are We to Live?) that the full significance of Axelrod’s result is still not properly appreciated outside a narrow circle of specialists, but that it has the potential to change not only our personal lives but the world of international politics as well.
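An Axelrod-style round-robin can be sketched compactly. The payoffs below are the standard ones (temptation 5, reward 3, punishment 1, sucker 0); the field of four strategies is our own small, illustrative selection rather than Axelrod's actual entrants. In this tiny field tit-for-tat ties for first place with the equally ‘nice’ grudger, while unconditional defection comes last:

```python
# Iterated Prisoner's Dilemma round-robin with standard payoffs.
# Each strategy is a function of the opponent's past moves.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opp_history):
    """Cooperate first; afterwards copy the opponent's last move."""
    return 'C' if not opp_history else opp_history[-1]

def grudger(opp_history):
    """Cooperate until the opponent defects once, then defect forever."""
    return 'D' if 'D' in opp_history else 'C'

def always_cooperate(opp_history):
    return 'C'

def always_defect(opp_history):
    return 'D'

def match(strat_a, strat_b, rounds=200):
    """Play an iterated match; each side sees the other's past moves."""
    seen_by_a, seen_by_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strat_a(seen_by_a), strat_b(seen_by_b)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

# Round-robin: every strategy plays every other, plus a copy of itself.
field = [tit_for_tat, grudger, always_cooperate, always_defect]
totals = {s.__name__: 0 for s in field}
for i, a in enumerate(field):
    for b in field[i:]:
        sa, sb = match(a, b)
        totals[a.__name__] += sa
        if b is not a:
            totals[b.__name__] += sb

for name, score in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(name, score)
```

Note the characteristic pattern: head-to-head, tit-for-tat actually scores slightly less than an unconditional defector (it loses only the first round), yet over the whole tournament the defector's inability to sustain cooperation with anyone leaves it at the bottom of the table.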

While we are certainly far from a satisfying mathematical and evolutionary theory of morality, it seems that science does have something to say about optimal ethical rules after all. And the emerging picture is one of fairness and cooperation – not egotism – as the smart choice to make.

Massimo Pigliucci is Associate Professor of Ecology and Evolutionary Biology at the University of Tennessee. In his spare time, he is a graduate student in Philosophy at the same university. His ramblings can be found at http://www.rationallyspeaking.org.
