Search results for 'backward induction in indefinitely long games'

According to the so-called “Folk Theorem” for repeated games, stable cooperative relations can be sustained in a Prisoner’s Dilemma if the game is repeated an indefinite number of times. This result depends on the possibility of applying strategies that are based on reciprocity, i.e., strategies that reward cooperation with subsequent cooperation and punish defection with subsequent defection. If future interactions are sufficiently important, i.e., if the discount rate is relatively small, each agent may be motivated to cooperate by fear of retaliation in the future. For finite games, however, where the number of plays is known beforehand, there is a backward induction argument showing that rational agents will not be able to achieve cooperation. On behalf of the Hobbesian “Foole”, who cannot see any advantage in cooperation, Gregory Kavka (1983, 1986) has presented an argument that significantly extends the range of the backward induction argument. He shows that, for the backward induction argument to be effective, it is not necessary that the precise number of future interactions be known. It is sufficient that there is a known definite upper bound on the number of interactions. A similar argument is developed by John W. Carroll (1987). We will here question the assumption of a known upper bound. When the assumption is made precise in the way needed for the argument to go through, its apparent plausibility evaporates. We then offer a reformulation of the argument, based on weaker, and more plausible, assumptions.
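The threshold implicit in this folk-theorem reasoning is easy to make concrete. Below is a minimal sketch (the payoff names T, R, P, S and the numbers are the standard textbook illustration, not taken from the abstract): under grim trigger, a one-shot deviation gains T - R immediately but costs R - P in every later round, so cooperation survives exactly when the discount factor delta satisfies delta >= (T - R) / (T - P).

```python
# Illustrative sketch (not from the abstract): the grim-trigger
# sustainability condition for an indefinitely repeated Prisoner's Dilemma.
# With the standard ordering T > R > P > S, deviating once yields
# T + delta*P/(1-delta), while cooperating forever yields R/(1-delta),
# which rearranges to the condition delta >= (T - R) / (T - P).

def grim_trigger_sustainable(T, R, P, S, delta):
    """Return True if mutual cooperation survives under grim trigger."""
    assert T > R > P > S, "standard Prisoner's Dilemma ordering"
    return delta >= (T - R) / (T - P)

# With the classic payoffs T=5, R=3, P=1, S=0 the threshold is 0.5:
print(grim_trigger_sustainable(5, 3, 1, 0, 0.9))   # patient players: True
print(grim_trigger_sustainable(5, 3, 1, 0, 0.3))   # impatient players: False
```

With these payoffs the condition collapses to delta >= 0.5, which matches the abstract's point that cooperation requires future interactions to matter enough.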

We provide eductive foundations for the concept of forward induction, in the class of games with an outside option. The formulation presented tries to capture in a static notion the rest point of an introspective process, achievable from some restricted preliminary beliefs. The former requisite is met by requiring the rest point to be a Nash equilibrium that yields a higher payoff than the outside option. With respect to the beliefs, we propose the Incentive Dominance Criterion: players should consider one action more likely than another whenever the former is better than getting the outside option for more conjectures over their rival's actions. We apply this model to the case where the subgame is a coordination game with a conflict between payoff dominance and risk dominance. Our results provide support for dominance solvability, but not for Van Damme's notion of forward induction. We show how the forward induction logic helps to select the Pareto dominant equilibrium. This is the case whenever player 1's act of giving up the outside option reverses the incentive dominance relations among 1's pure actions in the subgame.

Backward induction has been the standard method of solving finite extensive-form games with perfect information, notwithstanding the fact that this procedure leads to counter-intuitive results in various games (iterated prisoner's dilemma, centipede, chain store, etc.). However, beginning in the late eighties, the method of backward induction became an object of criticism. It is claimed (most notably, by Reny 1988, 1989, Binmore 1987, Bicchieri 1989, and Pettit & Sugden 1989) that the assumptions needed for its defence are quite implausible, if not incoherent. It is therefore natural to ask for the justification of backward induction: Can one show that rational players who know the structure of the game, have trust in each other's practical rationality and reason correctly, will act in accordance with backward induction? Several researchers have tried a justification of this kind, but the argument presented in Robert Aumann's paper from 1995 is perhaps the most well-known and influential attempt to provide such a justification. Clausing (1999) provides a sustained discussion of the justification problem for backward induction. It is an excellent work and the criticism I will present below does not detract from this evaluation: the issues discussed by the author are complex and it is difficult to get everything right. Furthermore, I hope that the criticism to be presented may be instructive: even though it has not been Clausing's intention, his logical reconstruction of Aumann's defence of backward induction allows us to see very clearly what is wrong with that argument. It also provides us with
* This paper was presented at workshops in Lund and in Uppsala, in the Fall of 1999. I am indebted to the participants for their useful comments. The work on the paper was supported by a generous research grant from The Bank of Sweden Tercentenary Foundation.

The standard backward-induction reasoning in a game like the centipede assumes that the players maintain a common belief in rationality throughout the game. But that is a dubious assumption. Suppose the first player X didn't terminate the game in the first round; what would the second player Y think then? Since the backward-induction argument says X should terminate the game, and it is supposed to be a sound argument, Y might be entitled to doubt X's rationality. Alternatively, Y might doubt that X believes Y is rational, or that X believes Y believes X is rational, or Y might have some higher-order doubt. X's deviant first move might therefore cause a breakdown in common belief in rationality. Once that goes, the entire argument fails. The argument also assumes that the players act rationally at each stage of the game, even if this stage could not be reached by rational play. But it is also dubious to assume that past irrationality never exerts a corrupting influence on present play. However, the backward-induction argument can be reconstructed for the centipede game on a more secure basis. It may be implausible to assume a common belief in rationality throughout the game, however the game might go, but the argument requires less than this. The standard idealisations in game theory certainly allow us to assume a common belief in rationality at the beginning of the game. They also allow us to assume this common belief persists so long as no one makes an irrational move. That is enough for the argument to go through.
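The backward-induction computation that drives this argument can be sketched directly. The recursive solver and the growing-pot payoff scheme below are illustrative assumptions, not taken from the abstract; under them, backward induction recommends that the first mover terminate ("take") immediately, as the standard argument says.

```python
# A minimal backward-induction solver, applied to a short centipede game.
# The node layout and payoffs are illustrative, not from the abstract.

def solve(node):
    """Return (payoff_vector, chosen_action) by backward induction."""
    if "payoffs" in node:                          # leaf node
        return node["payoffs"], None
    mover = node["mover"]
    best = None
    for action, child in node["children"].items():
        value, _ = solve(child)
        if best is None or value[mover] > best[0][mover]:
            best = (value, action)
    return best

def centipede(stage, length):
    """Growing-pot centipede: taking at stage k gives the mover k+2, the other k."""
    mover = stage % 2
    take = [0, 0]
    take[mover], take[1 - mover] = stage + 2, stage
    if stage == length:                            # pot runs out: forced end
        return {"payoffs": take}
    return {"mover": mover,
            "children": {"take": {"payoffs": take},
                         "pass": centipede(stage + 1, length)}}

value, action = solve(centipede(0, 6))
print(action, value)   # → take [2, 0]: the game unravels to immediate termination
```

The unraveling is visible in the recursion: at every node the mover prefers taking now to the value that backward induction assigns to passing.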

We develop a logical system that captures two different interpretations of what extensive games model, and we apply this to a long-standing debate in game theory between those who defend the claim that common knowledge of rationality leads to backward induction or subgame perfect (Nash) equilibria and those who reject this claim. We show that a defense of the claim à la Aumann (1995) rests on a conception of extensive game playing as a one-shot event in combination with a principle of rationality that is incompatible with it, while a rejection of the claim à la Reny (1988) assumes a temporally extended, many-moment interpretation of extensive games in combination with implausible belief revision policies. In addition, the logical system provides an original inductive and implicit axiomatization of rationality in extensive games based on relations of dominance rather than the usual direct axiomatization of rationality as maximization of expected utility.

According to a standard objection to the use of backward induction in extensive-form games with perfect information, backward induction (BI) can only work if the players are confident that each player is resiliently rational - disposed to act rationally at each possible node that the game can reach, even at the nodes that will certainly never be reached in actual play - and also confident that these beliefs in the players’ future resilient rationality are robust, i.e. that they would be kept come what may, whatever evidence of irrationality would by then transpire concerning past performance of the players. Since both resiliency and robustness assumptions are extremely strong and their appropriateness as idealizations is quite problematic, it has been argued (by Binmore, Reny, Bicchieri, Pettit and Sugden, among others) that BI is an indefensible procedure. Therefore, we need not be worried that BI can be used to justify seemingly counter-intuitive game solutions. I show, however, that there is a restricted class of extensive-form games in which BI solutions can be defended without assuming resiliency or robustness. For these “BI-terminating games” (= games in which BI moves always terminate the play, at each choice node), to defend BI solutions, it is enough to make confidence-in-rationality assumptions concerning actual play; stipulations about various counterfactual developments are unnecessary. For this class of games, then, the standard objection to BI is inapplicable. At the same time, however, it will transpire that the class in question contains some well-known games, such as the Centipede in its different versions, in which BI recommends a seemingly unreasonable behaviour.

This paper contributes to the understanding of economic strategic behaviors in inter-temporal settings. Comparing the MPE and the OLNE of a widely used class of differential games it is shown: (i) what qualifications on behaviors a Markov (dynamic) information structure brings about compared with an open-loop (static) information structure, (ii) what is the reason leading to intensified or reduced competition between the agents in the long run. It depends on whether agents’ interactions are characterized by Markov substitutability or Markov complementarity, which can be seen as dynamic translations of the ideas of strategic substitutability and strategic complementarity (Bulow et al. 1985, Journal of Political Economy 93:488–511). In addition, an important practical contribution of the paper for modelers is to show that these results can be directly deduced from the payoff structure, with no need to compute equilibria first.

This paper proposes a revised Theory of Moves (TOM) to analyze matrix games between two players when payoffs are given as ordinals. The games are analyzed when a given player i must make the first move, when there is a finite limit n on the total number of moves, and when the game starts at a given initial state S. Games end when either both players pass in succession or else a total of n moves have been made. Studies are made of the influence of i, n, and S on the outcomes. It is proved that these outcomes ultimately become periodic in n and therefore exhibit long-term predictable behavior. Efficient algorithms are given to calculate these ultimate outcomes by computer. Frequently the ultimate outcomes are independent of i, S, and n when n is sufficiently large; in this situation this common ultimate outcome is proved to be Pareto-optimal. The use of ultimate outcomes gives rise to a concept of stable points, which are analogous to Nash equilibria but consider long-term effects. If the initial state is a stable point, then no player has an incentive to move from that state, under the assumption that any initial move could be followed by a long series of moves and countermoves. The concept may be broadened to that of a stable set. It is proved that every game has a minimal stable set, and any two distinct minimal stable sets are disjoint. Comparisons are made with the results of standard TOM.
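A rough sketch of how such outcomes can be computed by backward induction over the move budget. The 2x2 ordinal payoffs below are invented for illustration, and two simplifying assumptions are made that go beyond the abstract: a pass does not count toward the move limit n, and ties are broken in favour of passing.

```python
from functools import lru_cache

# Ordinal payoffs for a 2x2 game (4 = best, 1 = worst); values are
# illustrative, not taken from the paper. PAYOFFS[player][(row, col)].
PAYOFFS = [
    {(0, 0): 3, (0, 1): 1, (1, 0): 4, (1, 1): 2},   # player 0 (controls the row)
    {(0, 0): 3, (0, 1): 4, (1, 0): 1, (1, 1): 2},   # player 1 (controls the column)
]

@lru_cache(maxsize=None)
def final_cell(cell, mover, moves_left, passes):
    """TOM-style backward induction: the mover may pass (two passes in a
    row end the game) or switch their own strategy, consuming one move."""
    options = []
    # Passing: the game ends if the other player just passed too.
    if passes == 1:
        options.append(cell)
    else:
        options.append(final_cell(cell, 1 - mover, moves_left, passes + 1))
    # Moving: flip the mover's own coordinate.
    if moves_left > 0:
        row, col = cell
        moved = (1 - row, col) if mover == 0 else (row, 1 - col)
        options.append(final_cell(moved, 1 - mover, moves_left - 1, 0))
    # Pick the option the mover ranks highest (ties favour passing,
    # because the pass option is listed first).
    return max(options, key=lambda c: PAYOFFS[mover][c])

# Outcomes from state (0,0), player 0 first, as the move limit n grows.
outcomes = [final_cell((0, 0), 0, n, 0) for n in range(12)]
print(outcomes)
```

In this example the outcome depends on n only for the smallest move budgets, illustrating the long-run regularity in n that the paper proves in general.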

Rehabilitation of sensorimotor impairment resulting from cerebral lesion (CL) utilizes task-specific training and massed practice to drive reorganization and sensorimotor improvement due to induction of neuroplasticity mechanisms. Loss of sensory abilities often complicates recovery, and thus the individual’s ability to use the affected body part for functional tasks. Therefore, the development of additional and alternative approaches that supplement, enhance, or even replace conventional training procedures would be advantageous. Repetitive sensory stimulation protocols (rSS) have been shown to evoke sensorimotor improvements of the affected limb in patients with chronic stroke. However, the possible impact of long-term rSS on sensorimotor performance of patients with CL, where the incident dated back many years, remains unclear. The particular advantage of rSS is its passive nature, which does not require active participation of the subjects. Therefore, rSS can be applied in parallel with other occupations, making the intervention easier to implement and more acceptable to the individual. Here we report the effects of applying rSS for 8, 36 and 76 weeks on the paretic hand of 3 long-term patients with different types of CL. Different behavioral tests were used to assess sensory and/or sensorimotor performance of the upper extremities prior to, during, and after the intervention. In one patient, the impact of long-term rSS on restoration of cortical activation was investigated by recording somatosensory evoked potentials. After long-term rSS all three patients showed considerable improvements of their sensory and motor abilities. In addition, almost normal evoked potentials could be recorded after rSS in one patient. Our data show that long-term rSS applied to patients with chronic CL can improve tactile and sensorimotor functions, which, however, developed in some cases only after many weeks of stimulation, and continued to further improve on a time scale of months.

Robert Aumann argues that common knowledge of rationality implies backward induction in finite games of perfect information. I have argued that it does not. A literature now exists in which various formal arguments are offered in support of both positions. This paper argues that Aumann's claim can be justified if knowledge is suitably reinterpreted.

Game-theoretic solution concepts describe sets of strategy profiles that are optimal for all players in some plausible sense. Such sets are often found by recursive algorithms like iterated removal of strictly dominated strategies in strategic games, or backward induction in extensive games. Standard logical analyses of solution sets use assumptions about players in fixed epistemic models for a given game, such as mutual knowledge of rationality. In this paper, we propose a different perspective, analyzing solution algorithms as processes of learning which change game models. Thus, strategic equilibrium gets linked to fixed-points of operations of repeated announcement of suitable epistemic statements. This dynamic stance provides a new look at the current interface of games, logic, and computation.
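Iterated removal of strictly dominated strategies, mentioned above as a prototypical solution algorithm, can be sketched in a few lines. The payoff matrices below are an illustrative Prisoner's Dilemma, not an example from the paper.

```python
# Sketch of iterated elimination of strictly dominated strategies for a
# two-player strategic game; payoffs are illustrative, not from the paper.

def dominated_strategies(payoff, own, other, row_player):
    """Strategies in `own` strictly dominated by another surviving strategy,
    given the opponent's surviving set `other`. payoff[r][c] is this
    player's payoff at (row r, column c)."""
    def u(mine, theirs):
        return payoff[mine][theirs] if row_player else payoff[theirs][mine]
    return {s for s in own
            if any(all(u(t, o) > u(s, o) for o in other)
                   for t in own if t != s)}

def iterated_elimination(pay_row, pay_col):
    """Repeatedly delete strictly dominated rows and columns to a fixed point."""
    rows = set(range(len(pay_row)))
    cols = set(range(len(pay_row[0])))
    changed = True
    while changed:
        dr = dominated_strategies(pay_row, rows, cols, row_player=True)
        dc = dominated_strategies(pay_col, cols, rows, row_player=False)
        rows -= dr
        cols -= dc
        changed = bool(dr or dc)
    return rows, cols

# Prisoner's Dilemma (strategy 0 = cooperate, 1 = defect):
print(iterated_elimination([[3, 0], [5, 1]], [[3, 5], [0, 1]]))
# → ({1}, {1}): only mutual defection survives
```

The fixed point reached by this loop is exactly the kind of "rest point of repeated announcements" that the abstract's dynamic-epistemic reading describes.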

A syntactic formalism for the modeling of belief revision in perfect information games is presented that allows one to define the rationality of a player's choice of moves relative to the beliefs he holds as his respective decision nodes have been reached. In this setting, true common belief in the structure of the game and rationality held before the start of the game does not imply that backward induction will be played. To derive backward induction, a “forward belief” condition is formulated in terms of revised rather than initial beliefs. Alternative notions of rationality as well as the use of knowledge instead of belief are also studied within this framework.
Footnote: I would like to thank Wlodek Rabinowicz and three anonymous referees for very helpful comments.

Understanding human behaviour involves “why”s as well as “how”s. Rational people have good reasons for acting, but it can be hard to find out what these were and how they worked. In this Note, we discuss a few ways in which actions, preferences, and expectations are intermingled. This mixture is especially clear with the well-known solution procedure for extensive games called 'Backward Induction'. In particular, we discuss three scenarios for analyzing behaviour in a game. One can rationalize given moves as revealing agents' preferences, one can also rationalize them as revealing agents' beliefs about others, but one can also change a predicted pattern of behaviour by making promises. All three scenarios transform given games to new ones, and we prove some results about their scope. A more general view of relevant game transformations would involve dynamic and epistemic game logics. Finally, our analysis describes and disentangles matters: but it will not tell you what to do!

That past patterns may continue in many different ways has long been identified as a problem for accounts of induction. The novelty of Goodman’s “new riddle of induction” lies in a meta-argument that purports to show that no account of induction can discriminate between incompatible continuations. That meta-argument depends on the perfect symmetry of the definitions of grue/bleen and green/blue, so that any evidence that favors the ordinary continuation must equally favor the grue-ified continuation. I argue that this very dependence on the perfect symmetry defeats the novelty of the new riddle. The symmetry can be obtained in contrived circumstances, such as when we grue-ify our total science. However, in all such cases, we cannot preclude the possibility that the original and grue-ified descriptions are merely notationally variant descriptions of the same physical facts; or if there are facts that separate them, these facts are ineffable, so that no account of induction should be expected to pick between them. In ordinary circumstances, there are facts that distinguish the regular and grue-ified descriptions. Since accounts of induction can and do call upon these facts, Goodman’s meta-argument cannot provide principled grounds for the failure of all accounts of induction. It assures us only of the failure of accounts of induction, such as unaugmented enumerative induction, that cannot exploit these symmetry-breaking facts.

Despite the widespread use of key concepts of the Neyman–Pearson (N–P) statistical paradigm—type I and II errors, significance levels, power, confidence levels—they have been the subject of philosophical controversy and debate for over 60 years. Both current and long-standing problems of N–P tests stem from unclarity and confusion, even among N–P adherents, as to how a test's (pre-data) error probabilities are to be used for (post-data) inductive inference as opposed to inductive behavior. We argue that the relevance of error probabilities is to ensure that only statistical hypotheses that have passed severe or probative tests are inferred from the data. The severity criterion supplies a meta-statistical principle for evaluating proposed statistical inferences, avoiding classic fallacies from tests that are overly sensitive, as well as those not sensitive enough to particular errors and discrepancies.
Contents:
Introduction and overview
 1.1 Behavioristic and inferential rationales for Neyman–Pearson (N–P) tests
 1.2 Severity rationale: induction as severe testing
 1.3 Severity as a meta-statistical concept: three required restrictions on the N–P paradigm
Error statistical tests from the severity perspective
 2.1 N–P test T(): type I, II error probabilities and power
 2.2 Specifying test T() using p-values
Neyman's post-data use of power
 3.1 Neyman: does failure to reject H warrant confirming H?
Severe testing as a basic concept for an adequate post-data inference
 4.1 The severity interpretation of acceptance (SIA) for test T()
 4.2 The fallacy of acceptance (i.e., an insignificant difference): Ms Rosy
 4.3 Severity and power
Fallacy of rejection: statistical vs. substantive significance
 5.1 Taking a rejection of H0 as evidence for a substantive claim or theory
 5.2 A statistically significant difference from H0 may fail to indicate a substantively important magnitude
 5.3 Principle for the severity interpretation of a rejection (SIR)
 5.4 Comparing significant results with different sample sizes in T(): large n problem
 5.5 General testing rules for T(), using the severe testing concept
The severe testing concept and confidence intervals
 6.1 Dualities between one- and two-sided intervals and tests
 6.2 Avoiding shortcomings of confidence intervals
Beyond the N–P paradigm: pure significance, and misspecification tests
Concluding comments: have we shown severity to be a basic concept in a N–P philosophy of induction?
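The severity idea admits a simple numerical sketch. The one-sided Normal test and the particular numbers below are illustrative assumptions, not taken from the paper: after a non-significant result, the severity with which the claim "mu <= mu1" passes is the probability of observing a sample mean larger than the one actually observed, were mu equal to mu1.

```python
from math import erf, sqrt

def norm_cdf(z):
    """Standard Normal CDF via the error function (stdlib only)."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def severity_mu_le(mu1, xbar, sigma, n):
    """Severity of the post-data claim mu <= mu1 given observed mean xbar:
    P(observing a mean larger than xbar; mu = mu1), Normal model assumed."""
    return 1 - norm_cdf((xbar - mu1) / (sigma / sqrt(n)))

# Illustrative numbers: xbar = 0.1 with standard error 0.1 is not
# significant against mu <= 0. The claim 'mu <= 0.2' passes with high
# severity, but the tighter claim 'mu <= 0.11' does not.
print(round(severity_mu_le(0.20, 0.1, 1.0, 100), 3))   # → 0.841
print(round(severity_mu_le(0.11, 0.1, 1.0, 100), 3))   # → 0.54
```

This is the pattern behind the "fallacy of acceptance" discussed above: a non-significant result licenses ruling out large discrepancies severely, but not small ones.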

It is widely agreed that humans have specific abilities for cooperation and culture that evolved since their split with their last common ancestor with chimpanzees. Many uncertainties remain, however, about the exact moment in the human lineage when these abilities evolved. This article argues that cooperation and culture did not evolve in one step in the human lineage and that the capacity to stick to long-term and risky cooperative arrangements evolved before properly modern culture. I present evidence that Homo heidelbergensis became increasingly able to secure contributions from others in two demanding Paleolithic public good games (PPGGs): cooperative feeding and cooperative breeding. I argue that the temptation to defect is high in these PPGGs and that the evolution of human cooperation in Homo heidelbergensis is best explained by the emergence of modern-like abilities for inhibitory control and goal maintenance. These executive functions are localized in the prefrontal cortex and allow humans to stick to social norms in the face of competing motivations. This scenario is consistent with data on brain evolution that indicate that the largest growth of the prefrontal cortex in human evolution occurred in Homo heidelbergensis and was followed by relative stasis in this part of the brain. One implication of this argument is that subsequent behavioral innovations, including the evolution of symbolism, art, and properly cumulative culture in modern Homo sapiens, are unlikely to be related to a reorganization of the prefrontal cortex, despite frequent claims to the contrary in the literature on the evolution of human culture and cognition.

The justification of induction is of central significance for cross-cultural social epistemology. Different ‘epistemological cultures’ do not only differ in their beliefs, but also in their belief-forming methods and evaluation standards. For an objective comparison of different methods and standards, one needs (meta-)induction over past successes. A notorious obstacle to the problem of justifying induction lies in the fact that the success of object-inductive prediction methods (i.e., methods applied at the level of events) can neither be shown to be universally reliable (Hume's insight) nor to be universally optimal. My proposal towards a solution of the problem of induction is meta-induction. The meta-inductivist applies the principle of induction to all competing prediction methods that are accessible to her. By means of mathematical analysis and computer simulations of prediction games I show that there exist meta-inductive prediction strategies whose success is universally optimal among all accessible prediction strategies, modulo a small short-run loss. The proposed justification of meta-induction is mathematically analytical. It implies, however, an a posteriori justification of object-induction based on the experiences in our world. In the final section I draw conclusions about the significance of meta-induction for the social spread of knowledge and the cultural evolution of cognition, and I relate my results to other simulation results which utilize meta-inductive learning mechanisms.
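A toy prediction game makes the simplest, "imitate the best", form of meta-induction concrete. The event stream and the two object-level methods below are invented for illustration, and the strategies analyzed in the paper are more sophisticated; still, the toy run shows the characteristic pattern of a meta-inductivist ending up nearly as successful as the best accessible method, modulo a short-run loss.

```python
# Toy prediction game (illustrative, not from the paper): two object-level
# methods predict a binary event stream; the meta-inductivist imitates
# whichever method has the best track record so far (ties favour the
# first-listed method).

events = [t % 3 != 0 for t in range(300)]   # a regular, mostly-True stream

methods = {
    "always_true": lambda: True,
    "always_false": lambda: False,
}

scores = {name: 0 for name in methods}      # correct predictions so far
meta_score = 0
for outcome in events:
    best = max(scores, key=scores.get)      # imitate the current leader
    meta_score += (methods[best]() == outcome)
    for name, predict in methods.items():
        scores[name] += (predict() == outcome)

print(scores["always_true"], meta_score)    # → 200 199
```

Here the best method is right 200 times out of 300 and the meta-inductivist 199 times: it loses exactly one round early on, before the track records reveal which method to follow.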

The paper provides a framework for representing belief-contravening hypotheses in games of perfect information. The resulting t-extended information structures are used to encode the notion that a player has the disposition to behave rationally at a node. We show that there are models where the condition of all players possessing this disposition at all nodes (under their control) is both necessary and sufficient for them to play the backward induction solution in centipede games. To obtain this result, we do not need to assume that rationality is commonly known (as is done in [Aumann (1995)]) or commonly hypothesized by the players (as is done in [Samet (1996)]). The proposed model is compared with the account of hypothetical knowledge presented by Samet in [Samet (1996)] and with other possible strategies for extending information structures with conditional propositions.

Preference is a basic notion in human behaviour, underlying such varied phenomena as individual rationality in the philosophy of action and game theory, obligations in deontic logic (we should aim for the best of all possible worlds), or collective decisions in social choice theory. Also, in a more abstract sense, preference orderings are used in conditional logic or non-monotonic reasoning as a way of arranging worlds into more or less plausible ones. The field of preference logic (cf. Hansson [10]) studies formal systems that can express and analyze notions of preference between various sorts of entities: worlds, actions, or propositions. The art is of course to design a language that combines perspicuity and low complexity with reasonable expressive power. In this paper, we take a particularly simple approach. As preferences are binary relations between worlds, they naturally support standard unary modalities. In particular, our key modality ♦ϕ will just say that ϕ is true in some world which is at least as good as the current one. Of course, this notion can also be indexed to separate agents. The essence of this language is already in [4], but our semantics is more general, and so are our applications and later language extensions. Our modal language can express a variety of preference notions between propositions. Moreover, as already suggested in [9], it can “deconstruct” standard conditionals, providing an embedding of conditional logic into more standard modal logics. Next, we take the language to the analysis of games, where some sort of preference logic is evidently needed ([23] has a binary modal analysis different from ours). We show how a qualitative unary preference modality suffices for defining Nash Equilibrium in strategic games, and also the Backward Induction solution for finite extensive games. Finally, from a technical perspective, our treatment adds a new twist: each application considered in this paper suggests the need for some additional access to worlds before the preference modality can unfold its true power.

The migration of elder-care workers appears to be a zero-sum game. This naturally offends our sense of justice, especially when the host populations are richer. In this article, I argue that we ought to look beyond the short run. Once we look at the long run, we will see possibilities of non-zero-sum games that are mutually beneficial.

In certain finite extensive games with perfect information, Cristina Bicchieri (1989) derives a logical contradiction from the assumptions that players are rational and that they have common knowledge of the theory of the game. She argues that this may account for play outside the Nash equilibrium. She also claims that no inconsistency arises if the players have the minimal beliefs necessary to perform backward induction. We here show that another contradiction can be derived even with minimal beliefs, so there is no paradox of common knowledge specifically. These inconsistencies do not make play outside Nash equilibrium plausible, but rather indicate that the epistemic specification must incorporate a system for belief revision. Whether rationality is common knowledge is not the issue.

How do humans discover causal relations when the effect is not immediately observable? Previous experiments have uniformly demonstrated detrimental effects of outcome delays on causal induction. These findings seem to conflict with everyday causal cognition, where humans can apparently identify long-term causal relations with relative ease. Three experiments investigated whether the influence of delay on adult human causal judgements is mediated by experimentally induced assumptions about the timeframe of the causal relation in question, as suggested by Einhorn and Hogarth (1986). Causal judgements generally decreased when a delay separated cause and effect. This decrease was less pronounced when the thematic context of the causal relation induced participants to expect a delay. Experiment 3 ruled out an alternative explanation of the effect based on variations of cue and outcome saliencies, and showed that detrimental effects of delay are reduced even more when instructions explicitly mentioned the timeframe of the causal relation in question. Knowledge thus mediates the impact of delay on human causal judgement. Implications for contemporary theories of human causal induction are discussed.

We consider a class of dynamic tournaments in which two contestants are faced with a choice between two courses of action. The first is a riskless option (“hold”) of maintaining the resources the contestant already has accumulated in her turn and ceding the initiative to her rival. The second is the bolder option (“roll”) of taking the initiative of accumulating additional resources, and thereby moving ahead of her rival, while at the same time sustaining a risk of temporary setback. We study this tournament in the context of a jeopardy race game (JRG), extend the JRG to N > 2 contestants, and construct its equilibrium solution. Compared to the equilibrium solution, the results of three experiments reveal a dysfunctional bias in favor of the riskless option. This bias is substantially mitigated when the contestants are required to commit in advance how long to pursue the risky course of action.

The logical foundations of game-theoretic solution concepts have so far been explored within the confines of epistemic logic. In this paper we turn to a different branch of modal logic, namely temporal logic, and propose to view the solution of a game as a complete prediction about future play. The branching-time framework is extended by adding agents and by defining the notion of prediction. A syntactic characterization of backward induction in terms of the property of internal consistency of prediction is given.

The theories of cerebellar function presented in this BBS special issue cannot be reconciled with the established induction properties of cerebellar LTD. At the same time, the authors presenting their research on cerebellar LTD do not appear very interested in function.

In Projecting a Camera, film theorist Edward Branigan offers a groundbreaking approach to understanding film theory. Why, for example, does a camera move? What does a camera "know"? (And when does it know it?) What is the camera's relation to the subject during long static shots? What happens when the screen is blank? Through a wide-ranging engagement with Wittgenstein and theorists of film, he offers one of the most fully developed understandings of the ways in which the camera operates in film. With its thorough grounding in the philosophy of spectatorship and narrative, Projecting a Camera takes the study of film to a new level. With the care and precision that he brought to Narrative Comprehension and Film, Edward Branigan maps the ways in which we must understand the role of the camera, the meaning of the frame, the role of the spectator, and other key components of film-viewing. By analyzing how we think, discuss, and marvel about the films we see, Projecting a Camera offers insights rich in implications for our understanding of film and film studies.

To the extent that acting fairly is in an individual's long-term interest, short-term impulses to cheat present a self-control problem. The only effective solution is to interpret the problem as a variant of repeated prisoner's dilemma, with each choice as a test case predicting future choices. Moral choice appears to be the product of a contract because it comes from self-enforcing intertemporal cooperation.

Rational choice theory enjoys unprecedented popularity and influence in the behavioral and social sciences, but it generates intractable problems when applied to socially interactive decisions. In individual decisions, instrumental rationality is defined in terms of expected utility maximization. This becomes problematic in interactive decisions, when individuals have only partial control over the outcomes, because expected utility maximization is undefined in the absence of assumptions about how the other participants will behave. Game theory therefore incorporates not only rationality but also common knowledge assumptions, enabling players to anticipate their co-players' strategies. Under these assumptions, disparate anomalies emerge. Instrumental rationality, conventionally interpreted, fails to explain intuitively obvious features of human interaction, yields predictions starkly at variance with experimental findings, and breaks down completely in certain cases. In particular, focal point selection in pure coordination games is inexplicable, though it is easily achieved in practice; the intuitively compelling payoff-dominance principle lacks rational justification; rationality in social dilemmas is self-defeating; a key solution concept for cooperative coalition games is frequently inapplicable; and rational choice in certain sequential games generates contradictions. In experiments, human players behave more cooperatively and receive higher payoffs than strict rationality would permit. Orthodox conceptions of rationality are evidently internally deficient and inadequate for explaining human interaction. Psychological game theory, based on nonstandard assumptions, is required to solve these problems, and some suggestions along these lines have already been put forward.
Key Words: backward induction; Centipede game; common knowledge; cooperation; epistemic reasoning; game theory; payoff dominance; pure coordination game; rational choice theory; social dilemma.
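The backward-induction contradiction mentioned in the keywords is easiest to see in the Centipede game, where induction unravels cooperation all the way to the first node. A minimal sketch with hypothetical payoffs (not drawn from the article):

```python
# Backward induction in a four-node Centipede game (hypothetical payoffs).
# At node i, the mover can "take" (ending the game at take_payoffs[i])
# or "pass", handing the move to the other player.

def solve_centipede(take_payoffs, pass_payoffs):
    """Return the subgame-perfect action at each node and the outcome.

    take_payoffs[i]: (p1, p2) if the mover takes at node i.
    pass_payoffs:    (p1, p2) if every player passes at every node.
    Player 1 moves at even-indexed nodes, player 2 at odd ones.
    """
    n = len(take_payoffs)
    value = pass_payoffs            # payoff pair if play continues past node i
    actions = [None] * n
    for i in reversed(range(n)):    # solve from the last node backward
        mover = i % 2               # 0 -> player 1, 1 -> player 2
        if take_payoffs[i][mover] >= value[mover]:
            actions[i] = "take"
            value = take_payoffs[i]
        else:
            actions[i] = "pass"
    return actions, value

# Growing-pot payoffs: passing raises the joint total but shifts the shares.
takes = [(1, 0), (0, 2), (3, 1), (2, 4)]
acts, outcome = solve_centipede(takes, pass_payoffs=(5, 3))
print(acts, outcome)   # induction unravels: player 1 takes at the first node
```

Although both players would do better if play reached the end, each node's mover prefers taking given that the next mover will take, so the equilibrium outcome is the meager first-node payoff; experimental subjects, as the abstract notes, typically do better than this.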

Evidence presented in Salmon (2001; Econometrica 69(6) 1597) indicates that typical tests to identify learning behavior in experiments involving normal form games possess little power to reject incorrect models. This paper begins by presenting results from an experiment designed to gather alternative data to overcome this problem. The results from these experiments indicate support for a learning-to-learn or rule learning hypothesis in which subjects change their decision rule over time. These results are then used to construct an adaptive learning model which is intended to mimic more accurately the behavior observed. The final section of the paper presents results from a simple simulation based analysis comparing the performance of this adaptive learning model with that of several standard decision rules in reproducing the choice patterns observed in the experiment.

C. S. Peirce argued that inductive reasoning and probability judgments are adequately secure only in the indefinitely long run, and that therefore it is illogical to employ these modes of inference unless one's chief devotion is to the interests of an ideal community of all rational beings, past, present, and future. He thought of this devotion as a "social sentiment", involving self-sacrifice. An examination of his argument shows that the attitude presupposed by his conceptions of induction and probability is in fact not self-sacrificial and is social only in a very special sense. Furthermore, it seems doubtful that this attitude is characteristic of practicing scientists; and this is a reason for questioning Peirce's analysis of induction and probability.

When a person performs an action, he causes a certain event to occur; the action thereby conveys a certain true sentence. Playing a game is a mutual activity: the listener and the speaker undertake an exchange through linguistic dialogue or communicate through action. Because of this peculiar nature of action, the actions in games belong to an activity in which the speaker speaks true words and the listener hears true words. A static game is a process through which the participants are simultaneously speaking and listening; a dynamic game is a process where speaking and listening take place in turn. Each step of a dynamic game is a speaking-listening exchange. Through listening and speaking, changes in the epistemic states of the participants occur. Of course, the degree of change depends on the type of game being played. In a dynamic game, each participant proceeds through a process of induction, and thus forms new epistemic states.

Alice encounters at least three distinct problems in her struggles to understand and navigate Wonderland. The first arises when she attempts to predict what will happen in Wonderland based on what she has experienced outside of Wonderland. In many cases, this proves difficult -- she fails to predict that babies might turn into pigs, that a grin could survive without a cat or that playing cards could hold criminal trials. Alice's second problem involves her efforts to figure out the basic nature of Wonderland. So, for example, there is nothing Alice could observe that would allow her to prove whether Wonderland is simply a dream. The final problem is manifested by Alice's attempts to understand what the various residents of Wonderland mean when they speak to her. In Wonderland, "mock turtles" are real creatures and people go places with a "porpoise" (and not a purpose). All three of these problems concern Alice's attempts to infer information about unobserved events or objects from those she has observed. In philosophical terms, they all involve *induction*.
In this essay, I will show how Alice's experiences can be used to clarify the relation between three more general problems related to induction. The first problem, which concerns our justification for beliefs about the future, is an instance of David Hume's classic *problem of induction*. Most of us believe that rabbits will not start talking tomorrow -- the problem of induction challenges us to justify this belief. Even if we manage to solve Hume's puzzle, however, we are left with what W.V.O. Quine calls the problems of *underdetermination* and *indeterminacy*. The former problem asks us to explain how we can determine *what the world is really like* based on *everything that could be observed about the world*. So, for example, it seems plausible that nothing that Alice could observe would allow her to determine whether eating mushrooms causes her to grow or the rest of the world to shrink.
The latter problem, which might remain even if we resolve the first two, casts doubt on our capacity to determine *what a certain person means* based on *which words that person uses*. This problem is epitomized in the Queen's interpretation of the Knave's letter. The obstacles that Alice faces in getting around Wonderland are thus, in an important sense, the same types of obstacles we face in our own attempts to understand the world. Her successes and failures should therefore be of real interest.

We consider the desirability, or otherwise, of various forms of induction in the light of certain principles and inductive methods within predicate uncertain reasoning. Our general conclusion is that there remain conflicts within the area whose resolution will require a deeper understanding of the fundamental relationship between individuals and properties.

Godfrey-Smith advocates for linking deception in sender-receiver games to the existence of undermining signals. I present games in which deceptive signals can be arbitrarily frequent, without this undermining information transfer between sender and receiver.

The main objective of the paper is to propose a frequentist interpretation of probability in the context of model-based induction, anchored on the Strong Law of Large Numbers (SLLN) and justifiable on empirical grounds. It is argued that the prevailing views in philosophy of science concerning induction and the frequentist interpretation of probability are unduly influenced by enumerative induction, and the von Mises rendering, both of which are at odds with frequentist model-based induction that dominates current practice. The differences between the two perspectives are brought out with a view to defend the model-based frequentist interpretation of probability against certain well-known charges, including [i] the circularity of its definition, [ii] its inability to assign ‘single event’ probabilities, and [iii] its reliance on ‘random samples’. It is argued that charges [i]–[ii] stem from misidentifying the frequentist ‘long-run’ with the von Mises collective. In contrast, the defining characteristic of the long-run metaphor associated with model-based induction is neither its temporal nor its physical dimension, but its repeatability (in principle); an attribute that renders it operational in practice. It is also argued that the notion of a statistical model can easily accommodate non-IID samples, rendering charge [iii] simply misinformed.
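The repeatability reading of the long run can be made concrete with a minimal simulation (model and parameter values here are illustrative, not from the paper): under an IID Bernoulli model, the observed relative frequency settles on the model-based probability, which is what the SLLN guarantees for repeatable trials.

```python
# Relative frequency converging to a model probability (SLLN illustration).
# Bernoulli(p) trials; the running relative frequency approaches p.
import random

random.seed(0)                   # fixed seed for a reproducible run
p = 0.3
n = 200_000
successes = sum(random.random() < p for _ in range(n))
freq = successes / n
print(round(freq, 3))            # close to the model probability 0.3
```

Note that the convergence is a property of the probabilistic model (in-principle repeatability), not of any physically completed infinite sequence of the von Mises kind.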

We calculate the Lebesgue measures of the stability sets of Nash equilibria in pure coordination games. The results allow us to observe that the ordering induced by the Lebesgue measure of stability sets upon strict Nash equilibria does not necessarily agree with the ordering induced by risk dominance. Accordingly, an equilibrium selection theory based on the Lebesgue measure of stability sets would be necessarily different from one which uses the Nash property as a point of orientation.
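To make the notion concrete, the Lebesgue measure of a stability set can be estimated by sampling beliefs uniformly; the 2x2 payoffs below are hypothetical, chosen only for illustration, and are not taken from the paper.

```python
# Monte Carlo estimate of the Lebesgue measure of a stability set in a
# hypothetical symmetric 2x2 coordination game (illustrative payoffs):
#        A      B
#   A  (3,3)  (0,0)
#   B  (0,0)  (1,1)
# A belief is a probability p that the opponent plays A; strategy A is a
# best reply when 3*p >= 1*(1-p), i.e. when p >= 1/4.
import random

random.seed(1)
n = 100_000
count_A = 0
for _ in range(n):
    p = random.random()          # uniformly sampled belief
    if 3 * p >= 1 * (1 - p):     # A is a best reply against belief p
        count_A += 1
measure_A = count_A / n
print(round(measure_A, 2))       # exact measure of A's stability set is 3/4
```

In this particular game the equilibrium with the larger stability set (A,A) is also risk dominant, so the two orderings agree; the abstract's point is that this agreement is not guaranteed in general.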