We conceive of a player in dynamic games as a set of agents, which are assigned the distinct tasks of reasoning and node-specific choices. The notion of agent connectedness, measuring the sequential stability of a player over time, is then modeled in an extended type-based epistemic framework. Moreover, we provide an epistemic foundation for backward induction in terms of agent connectedness. Finally, it is argued that the epistemic independence assumption underlying backward induction is stronger than usually presumed.

The iterated prisoner’s dilemma (IPD) has been widely used in the biological and social sciences to model dyadic cooperation. While most of this work has focused on the discrete prisoner’s dilemma, in which actors choose between cooperation and defection, there has been some analysis of the continuous IPD, in which actors can choose any level of cooperation from zero to one. Here, we analyse a model of the continuous IPD with a limited strategy set, and show that a generous strategy achieves the maximum possible payoff against its own type. While this strategy is stable in a neighborhood of the equilibrium point, the equilibrium point itself is always vulnerable to invasion by uncooperative strategies, and hence subject to eventual destabilization. The presence of noise or errors has no effect on this result. Instead, generosity is favored because of its role in increasing contributions to the most efficient level, rather than in counteracting the corrosiveness of noise. Computer simulations using a single-locus, infinite-alleles Gaussian mutation model suggest that outcomes ranging from a stable cooperative polymorphism to complete collapse of cooperation are possible, depending on the magnitude of the mutational variance. Also, making the cost of helping a convex function of the amount of help provided makes it more difficult for cooperative strategies to invade a non-cooperative equilibrium, and for the cooperative equilibrium to resist destabilization by noncooperative strategies.
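
The payoff logic of a continuous prisoner's dilemma can be sketched with a toy model; the linear benefit and cost functions and the values b = 2, c = 1 below are illustrative assumptions, not the paper's actual model.

```python
# Toy continuous Prisoner's Dilemma: each player picks a cooperation
# level in [0, 1]; helping benefits the partner (slope b) at a cost to
# the helper (slope c). Linear forms and b=2, c=1 are assumptions made
# purely for illustration.

def payoff(my_level, their_level, b=2.0, c=1.0):
    """My one-shot payoff: benefit from the partner's help minus my own cost."""
    return b * their_level - c * my_level

levels = [i / 100 for i in range(101)]

# Whatever the partner does, my unilateral best response is zero help...
assert max(levels, key=lambda x: payoff(x, 0.5)) == 0.0
# ...yet mutual full cooperation beats mutual defection whenever b > c.
assert payoff(1.0, 1.0) > payoff(0.0, 0.0)
```

Replacing the linear cost term with a convex one (for example, c times the square of the help level) is the kind of variation the abstract's final sentence examines.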

This article argues that various deviations from the basic principles of the scientific ethos – primarily the appearance of pseudoscience in scientific communities – can be formulated and explained using specific models of game theory, such as the prisoner’s dilemma and the iterated prisoner’s dilemma. The article indirectly tackles the deontology of scientific work as well, in which it is assumed that there is no room for moral skepticism, let alone moral anti-realism, in the ethics of scientific communities. Namely, on the basis of the generally accepted dictum of scientific endeavor as the pursuit of knowledge exclusively for knowledge’s sake, scientifically »right« behavior is seen to be clearly defined and distinguishable from scientifically »wrong« behavior. After elucidating the basic principles of game theory, the article illustrates – by using imaginary and real cases, as well as some views from the philosophy of biology (the units of selection debate) – how this sort of reasoning could be applied in an analysis of the functioning of science.

Several philosophers have discussed informal versions of a "symmetry argument" that seems to show that two rational maximizers will cooperate when they are in a prisoner's dilemma. I present a more precise version of that argument and I argue that it is valid only if some crucial statements are misinterpreted as material conditionals instead of being interpreted correctly as subjunctive conditionals.

The backward induction argument purports to show that rational and suitably informed players will defect throughout a finite sequence of prisoner's dilemmas. It is supposed to be a useful argument for predicting how rational players will behave in a variety of interesting decision situations. Here, I lay out a set of assumptions defining a class of finite sequences of prisoner's dilemmas. Given these assumptions, I suggest how it might appear that backward induction succeeds and why it is actually fallacious. Then, I go on to consider the consequences of adopting a stronger set of assumptions. Focusing my attention on stronger sets that, like the original, obey the informedness condition, I show that any supplementation of the original set that preserves informedness does so at the expense of forcing rational participants in prisoner's dilemma situations to have unexpected beliefs, ones that threaten the usefulness of backward induction.

This is the introductory essay to the Italian translation of Matt Ridley's "The origins of virtue", surveying the game-theoretic and evolutionary approaches to the emergence and evolution of cooperation and altruism.

Existing models of strategic decision making typically assume that only the attributes of the currently played game need be considered when reaching a decision. The results presented in this article demonstrate that the so-called “cooperativeness” of the previously played prisoner’s dilemma games influences choices and predictions in the current prisoner’s dilemma game, which suggests that games are not considered independently. These effects involved reinforcement-based assimilation to the previous choices and also a perceptual contrast of the present game with preceding games, depending on the range and the rank of their cooperativeness. A. Parducci’s (1965) range-frequency theory and H. Helson’s (1964) adaptation-level theory are plausible theories of relative judgment of magnitude information, which could provide an account of these context effects.

Using a simple learning agent, we show that learning self-control in the primrose path experiment does parallel learning cooperation in the prisoner's dilemma. But Rachlin's claim that “there is no essential difference between self-control and altruism” is too strong. Only iterated prisoner's dilemmas played against reciprocators are reduced to self-control problems. There is more to cooperation than self-control and even altruism in a strong sense.

This collection focuses on questions that arise when morality is considered from the perspective of recent work on rational choice and evolution. Linking questions like "Is it rational to be moral?" to the evolution of cooperation in the Prisoner's Dilemma, the book brings together new work using models from game theory, evolutionary biology, and cognitive science, as well as from philosophical analysis. Among the contributors are leading figures in these fields, including David Gauthier, Paul M. Churchland, Brian Skyrms, Ronald de Sousa, and Elliott Sober.

In Morals by Agreement, David Gauthier (1986) argues that it is rational to intend to cooperate, even in single-play Prisoner's Dilemma games, provided (1) your co-player has a similar intention; (2) both intentions can be revealed to the other player. To this thesis four objections are made. (a) In a strategic decision the parameters on which the argument relies cannot be supposed to be given. (b) Of each pair of asymmetric intentions at least one is not rational. But it is impossible to form symmetric intentions to cooperate conditionally. For the condition on which the decision depends cannot be fulfilled without deciding. (c) If one's intention has to be ascertained on the basis of information about one's past performance, it is straightforwardly rational to intend to cooperate, but there is no reason to do so in a single-play PD. (d) The argument cannot be extended to n-person games, which are Gauthier's principal concern.

The aim of this paper is to critically review the game-theoretic discussion of Hobbes and to develop a game-theoretic interpretation that gives due attention both to Hobbes's distinction between “moderates” and “dominators” and to what actually initiates conflict in the state of nature, namely, the competition for vital goods. As can be shown, Hobbes's state of nature contains differently structured situations of choice, the game-theoretic representation of which requires the prisoner's dilemma, the assurance game, and the so-called assurance dilemma. However, the “state of war” ultimately emerges from situations that cannot be described by any of these games, because they represent zero-sum games in which the outcome of mutual cooperation does not exist.

The "Prisoner's Dilemma" game has been extensively discussed in both the public and academic press. Thousands of articles and many books have been written about this disturbing game and its apparent representation of many problems of society. The origin of the game is attributed to Merrill Flood and Melvin Dresher. I quote from the Stanford Encyclopedia of Philosophy: Puzzles with this structure were devised and discussed by Merrill Flood and Melvin Dresher in 1950, as part of the Rand Corporation's investigations into game theory (which Rand pursued because of possible applications to global nuclear strategy). The title "prisoner's dilemma" and the version with prison sentences as payoffs are due to Albert Tucker, who wanted to make Flood and Dresher's ideas more accessible to an audience of Stanford psychologists. The Prisoner's Dilemma is a short parable about two prisoners who are individually offered a chance to rat on each other, for which the "ratter" would receive a lighter sentence and the "rattee" would receive a harsher sentence. The problem results from the fact that both can play this game -- that is, defect -- and if both do, then both do worse than they would had they both kept silent. This peculiar parable serves as a model of cooperation between two or more individuals (or corporations or countries) in ordinary life, in that in many cases each individual would be personally better off defecting on (not cooperating with) the other.
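
The parable's incentive structure can be made concrete with a minimal payoff table; the particular sentence lengths below are illustrative assumptions (years in prison, negated so that larger is better), not figures from the text.

```python
# Minimal sketch of the Prisoner's Dilemma described in the parable.
# payoffs[(my_move, their_move)] gives my payoff; the numbers are
# assumed for illustration only.
payoffs = {
    ("silent", "silent"): -1,   # both keep quiet: light sentences
    ("silent", "rat"):    -10,  # I'm the "rattee": harsh sentence
    ("rat",    "silent"):  0,   # I'm the "ratter": go free
    ("rat",    "rat"):    -5,   # mutual defection: both worse off
}

def best_reply(their_move):
    """Return the move that maximizes my payoff against a fixed opponent move."""
    return max(("silent", "rat"), key=lambda m: payoffs[(m, their_move)])

# Ratting is the best reply whatever the other prisoner does...
assert best_reply("silent") == "rat"
assert best_reply("rat") == "rat"
# ...yet mutual defection leaves both worse off than mutual silence.
assert payoffs[("rat", "rat")] < payoffs[("silent", "silent")]
```

Any payoff numbers with the same ordering (free > light > mutual-defection > harsh) produce the same dilemma.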

Gauthier's argument for constrained maximization, presented in Morals by Agreement, is perfected by taking into account the possibility of accidental exploitation and discussing the limitations on the values of the parameters which measure the translucency of the actors. Gauthier's argument is nevertheless shown to be defective concerning the rationality of constrained maximization as a strategic choice. It can be argued that it applies only to a single actor entering a population of individuals who are themselves not rational actors but simple rule-followers. A proper analysis of the strategic choice situation involving two rational actors who confront each other shows that constrained maximization as the choice of both actors can only result under very demanding assumptions.

The so-called "Prisoner's Dilemma" is often referred to in business ethics, but probably not well understood. This article has three parts: (1) I claim that models derived from game theory are significant in the field for discussions of prudential ethics and the practical decisions managers make; (2) I discuss using them as a practical pedagogical exercise and some of the lessons generated; (3) more speculatively, I suggest that they are useful in discussions of corporate personhood.

The Prisoner’s Dilemma is a popular device used by researchers to analyze such institutions as business and the modern corporation. This popularity is not deserved under a certain condition that is widespread in college education. If we, as management educators, take seriously our parts in preparing our students to participate in the institutions of a democratic society, then the Prisoner’s Dilemma, as clever a rhetorical device as it is, is an unacceptable means to that end. By posing certain questions about the prisoners in the Prisoner’s Dilemma, I show that management educators have created a Prisoner’s Dilemma, whereby they intellectually imprison themselves and their students by continuing to appeal to the Prisoner’s Dilemma. These questions are not encouraged by the advocates of the Prisoner’s Dilemma.

Collective action is interpreted as a matter of people doing something together, and it is assumed that this involves their having a collective intention to do that thing together. The account of collective intention for which the author has argued elsewhere is presented. In terms that are explained, the parties are jointly committed to intend as a body that such-and-such. Collective action problems in the sense of rational choice theory (such as the various forms of coordination problem and the prisoner's dilemma) are then considered. An explanation is given of how, when such a problem is interpreted in terms of the parties' inclinations, a suitable collective intention resolves the problem for agents who are rational in a broad sense other than the technical sense of game theory. Key Words: rationality, collective action, collective intention, joint commitment.

Existing economic models of prosociality have been rather silent in terms of proximate psychological mechanisms. We nevertheless identify the psychologically most informed accounts and offer a critical discussion of their hypotheses for the proximate psychological explanations. Based on convergent evidence from several fields of research, we argue that there nevertheless is a more plausible alternative proximate account available: the social motivation hypothesis. The hypothesis represents a more basic explanation of the appeal of prosocial behavior, which is in terms of anticipated social rewards. We also argue in favor of our own social motivation hypothesis over Robert Sugden’s fellow-feeling account (due originally to Adam Smith). We suggest that social motivation not only stands as a proximate account in its own right but also provides a plausible scaffold for other more sophisticated motivations (e.g., fellow-feelings). We conclude by discussing some possible implications of the social motivation hypothesis for existing modeling practice.

The primrose path and prisoner's dilemma paradigms may require cognitive (executive) control: The active maintenance of context representations in lateral prefrontal cortex to provide top-down support for specific behaviors in the face of short delays or stronger response tendencies. This perspective suggests further tests of whether altruism is a type of self-control, including brain imaging, induced affect, and dual-task studies.

A version of this paper was presented at the IEEE International Conference on Computational Intelligence, combined meeting of ICNN, FUZZ-IEEE, and ICEC, Orlando, June-July, 1994, and an earlier form of the result is to appear as "The Undecidability of the Spatialized Prisoner's Dilemma" in Theory and Decision. An interactive form of the paper, in which figures are called up as evolving arrays of cellular automata, is available on DOS disk as Research Report #94-04i. An expanded version appears as chapter 6 of The Philosophical Computer.

We extend previous work on cooperation to some related questions regarding the evolution of simple forms of communication. The evolution of cooperation within the iterated Prisoner's Dilemma has been shown to follow different patterns, with significantly different outcomes, depending on whether the features of the model are classically perfect or stochastically imperfect (Axelrod 1980a, 1980b, 1984, 1985; Axelrod and Hamilton, 1981; Nowak and Sigmund, 1990, 1992; Sigmund 1993). Our results here show that the same holds for communication. Within a simple model, the evolution of communication seems to require a stochastically imperfect world.

In the spatialized Prisoner's Dilemma, players compete against their immediate neighbors and adopt a neighbor's strategy should it prove locally superior. Fields of strategies evolve in the manner of cellular automata (Nowak and May, 1993; Mar and St. Denis, 1993a,b; Grim 1995, 1996). Often a question arises as to what the eventual outcome of an initial spatial configuration of strategies will be: Will a single strategy prove triumphant in the sense of progressively conquering more and more territory without opposition, or will an equilibrium of some small number of strategies emerge? Here it is shown, for finite configurations of Prisoner's Dilemma strategies embedded in a given infinite background, that such questions are formally undecidable: there is no algorithm or effective procedure which, given a specification of a finite configuration, will in all cases tell us whether that configuration will or will not result in progressive conquest by a single strategy when embedded in the given field. The proof introduces undecidability into decision theory in three steps: by (1) outlining a class of abstract machines with familiar undecidability results, by (2) modelling these machines within a particular family of cellular automata, carrying over undecidability results for these, and finally by (3) showing that spatial configurations of Prisoner's Dilemma strategies will take the form of such cellular automata.
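
One update step of such a spatialized dynamic can be sketched as follows; the torus topology, the von Neumann neighborhood, and the standard payoff values T=5, R=3, P=1, S=0 are assumptions chosen for illustration, not the paper's exact construction.

```python
# Toy spatialized Prisoner's Dilemma step, in the style of the
# Nowak-May dynamics cited above: each cell plays a one-shot PD with
# its four neighbors, then imitates the highest-scoring cell in its
# neighborhood (itself included). All modeling details are assumptions.

PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def neighbors(i, j, n):
    """Von Neumann neighborhood on an n-by-n torus (an assumed topology)."""
    return [((i - 1) % n, j), ((i + 1) % n, j), (i, (j - 1) % n), (i, (j + 1) % n)]

def step(grid):
    n = len(grid)
    # Score of each cell: summed payoffs against its four neighbors.
    score = [[sum(PAYOFF[(grid[i][j], grid[x][y])] for x, y in neighbors(i, j, n))
              for j in range(n)] for i in range(n)]
    # Each cell adopts the strategy of the best scorer among itself and its neighbors.
    new = [[grid[i][j] for j in range(n)] for i in range(n)]
    for i in range(n):
        for j in range(n):
            cands = [(i, j)] + neighbors(i, j, n)
            bi, bj = max(cands, key=lambda c: score[c[0]][c[1]])
            new[i][j] = grid[bi][bj]
    return new

grid = [["C", "C", "C"],
        ["C", "D", "C"],
        ["C", "C", "C"]]
# A lone defector outscores its cooperating neighbors and converts
# them: the defecting region grows from 1 cell to 5 in one step.
assert sum(row.count("D") for row in step(grid)) == 5
```

Questions of the eventual fate of such a configuration are exactly the ones the paper proves undecidable for finite configurations in an infinite background.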

We generalize the concept of Nash equilibrium in mixed strategies for strategic form games to allow for ambiguity in the players' expectations. In contrast to other contributions, we model ambiguity by means of so-called lower probability measures or belief functions, which makes it possible to distinguish between a player's assessment of ambiguity and his attitude towards ambiguity. We also generalize the concept of trembling hand perfect equilibrium. Finally, we demonstrate that for certain attitudes towards ambiguity it is possible to explain cooperation in the one-shot Prisoner's Dilemma in a way that is in accordance with some recent experimental findings.

I first argue against Peter Singer's exciting thesis that the Prisoner's Dilemma explains why there could be an evolutionary advantage in making reciprocal exchanges that are ultimately motivated by genuine altruism over making such exchanges on the basis of enlightened long-term self-interest. I then show that an alternative to Singer's thesis — one that is also meant to corroborate the view that natural selection favors genuine altruism, recently defended by Gregory Kavka, fails as well. Finally, I show that even granting Singer's and Kavka's claim about the selective advantage of altruism proper, it is doubtful whether that type of claim can be used in a particular sort of sociobiological argument against psychological egoism.

Many recent studies of norm emergence employ the "prisoner's dilemma" (PD) paradigm, which focuses on the free-rider problem that can block the cooperation required for the emergence of social norms. This paper proposes an expansion of the PD paradigm to include a closely related game termed the "altruist's dilemma" (AD). Whereas egoistic behavior in the PD leads to collectively irrational outcomes, the opposite is the case in the AD: altruistic behavior (e.g., following the Golden Rule) leads to collectively irrational outcomes, whereas egoistic behavior leads to Pareto-optimal outcomes. The analysis shows that PDs can be converted into ADs either by increasing cooperation costs or by diminishing marginal gains from cooperation; therefore ADs are as empirically abundant as PDs. In addition, the analysis shows that altruists are not the only type of actors who fall prey to the AD; egoists can fall into this trap as well if they possess a capacity for interpersonal control. Where group solidarity is defined analytically in terms of the extent of cooperation in both PDs and ADs, this paper presents a model based on rational choice to account for variations in solidarity. According to the proposed analysis, levels of group solidarity depend on the balance in the group between compliant control, which increases cooperation, and oppositional control, which reduces it. That balance, in turn, depends on the allocation of power within the group.

The results of a series of computer simulations demonstrate how the introduction of separate spatial dimensions for agent interaction and learning respectively affects the possibility of cooperation evolving in the repeated prisoner's dilemma played by populations of boundedly-rational agents. In particular, the localisation of learning promotes the emergence of cooperative behaviour, while the localisation of interaction has an ambiguous effect on it.

Individualism fixes the unit of rational agency at the individual, creating problems exemplified in Hi-Lo and Prisoner's Dilemma (PD) games. But instrumental evaluation of consequences does not require a fixed individual unit. Units of agency can overlap, and the question of which unit should operate arises. Assuming a fixed individual unit is hard to justify: It is natural, and can be rational, to act as part of a group rather than as an individual. More attention should be paid to how units of agency are formed and selected: Are these processes local or nonlocal? Do they presuppose the ability to understand other minds?

Teaching economics has been shown to encourage students to defect in a prisoner's dilemma game. However, can ethics training reverse that effect and promote cooperation? We conducted an experiment to answer this question. We found that students who had the ethics module had higher rates of cooperation than students without the ethics module, even after controlling for communication and other factors expected to affect cooperation. We conclude that the teaching of ethics can mitigate the possible adverse incentives of the prisoner's dilemma, and, by implication, the adverse effects of economics and business training.

According to the so-called “Folk Theorem” for repeated games, stable cooperative relations can be sustained in a Prisoner’s Dilemma if the game is repeated an indefinite number of times. This result depends on the possibility of applying strategies that are based on reciprocity, i.e., strategies that reward cooperation with subsequent cooperation and punish defection with subsequent defection. If future interactions are sufficiently important, i.e., if the discount rate is relatively small, each agent may be motivated to cooperate by fear of retaliation in the future. For finite games, however, where the number of plays is known beforehand, there is a backward induction argument showing that rational agents will not be able to achieve cooperation. On behalf of the Hobbesian “Foole”, who cannot see any advantage in cooperation, Gregory Kavka (1983, 1986) has presented an argument that significantly extends the range of the backward induction argument. He shows that, for the backward induction argument to be effective, it is not necessary that the precise number of future interactions be known. It is sufficient that there is a known definite upper bound on the number of interactions. A similar argument is developed by John W. Carroll (1987). We will here question the assumption of a known upper bound. When the assumption is made precise in the way needed for the argument to go through, its apparent plausibility evaporates. We then offer a reformulation of the argument, based on weaker, and more plausible, assumptions.
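
The backward induction argument rehearsed above can be sketched as a recursion on the number of remaining rounds; the payoff values and the framing below are standard textbook assumptions, not Kavka's or Carroll's formal apparatus.

```python
# Backward induction in a finitely repeated Prisoner's Dilemma with a
# commonly known final round. T > R > P > S is the standard payoff
# ordering (values assumed for illustration).
T, R, P, S = 5, 3, 1, 0
assert T > R > P > S

def rational_move(rounds_left):
    """Move of a rational player who knows the opponent reasons identically.

    Last round: a one-shot PD, so defect (T > R and P > S).
    Earlier rounds: both players will defect in every later round no
    matter what happens now, so cooperating cannot purchase future
    cooperation and the current round is strategically one-shot too.
    """
    if rounds_left == 1:
        return "D"  # last round: defection strictly dominates
    assert rational_move(rounds_left - 1) == "D"  # inductive step
    return "D"

# Defection is prescribed at every horizon, however long the game.
assert all(rational_move(n) == "D" for n in range(1, 30))
```

The reformulation the paper proposes targets precisely the premise encoded in the recursion's base case: that some last round (or known upper bound on rounds) is common knowledge.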