Games represent an idealized model of strategic situations: they usually assume common priors, common knowledge of the model, common knowledge of every other player’s payoff function, and completely optimal play given those assumptions. In many cases, games also make many predictions, in the sense that there are many Nash equilibria, or more generally, many rationalizable action profiles. Going back at least to Selten’s trembling-hand perfection, economists have tried to “refine” away some of these predictions, essentially by relaxing one or more of the strong assumptions listed above about how games are played. Particularly in the 1970s and 1980s, a huge amount of effort was spent trying to derive the “right” refinements, ones which would give unique predictions. The broadest of these refinements, like risk dominance in Carlsson and van Damme’s global games, even “refine away” some strict Nash equilibria.

The present paper, a 2007 Econometrica by Weinstein and Yildiz, is a pretty good argument that the whole refinement literature has been a waste of time and talent; this is my interpretation, not necessarily theirs, but I think true nonetheless.

Here is the basic philosophical point. Most refinements refine beliefs about payoffs or strategies directly. For example, players may know the payoff matrix only with some noise that is revealed through private signals, as in global games. But before the signal is received, priors are common, and knowledge of the model is common. Following Mertens and Zamir (1985), though, we can think of players in a game as types in a universal type space, where my type represents not only my own payoffs from each strategy profile, but also my beliefs about the other players’ types, my beliefs about their beliefs about my beliefs, and so on to infinity. These beliefs are called higher-order beliefs, and the kth step of this logic is called a kth-order belief.
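To make the hierarchy concrete, here is a toy sketch (my own construction, not anything from the paper) of the first two orders of belief for a player in a two-player game, with the payoff-relevant state θ restricted to {0, 1} so that every belief is a finite-support distribution:

```python
# First-order belief: player 1's distribution over the payoff state theta.
first_order = {0: 0.7, 1: 0.3}

# Second-order belief: player 1's distribution over pairs of
# (theta, player 2's first-order belief about theta). Since dicts are not
# hashable, player 2's belief is encoded as the tuple (P(theta=0), P(theta=1)).
second_order = {
    (0, (0.9, 0.1)): 0.7,  # theta = 0, and player 2 thinks theta = 0 is likely
    (1, (0.2, 0.8)): 0.3,  # theta = 1, and player 2 thinks theta = 1 is likely
}

# A type in the universal type space is the full infinite sequence
# (first_order, second_order, third_order, ...), subject to coherency:
# each order must marginalize down to the order below it.
marginal = {}
for (theta, _), prob in second_order.items():
    marginal[theta] = marginal.get(theta, 0.0) + prob

print(marginal == first_order)  # True: this pair of beliefs is coherent
```

The Mertens–Zamir construction is exactly the statement that, under coherency, such an infinite sequence is a legitimate “type” and every type can be written this way.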

Consider an analyst who doesn’t believe in common knowledge of the model among players. Rather, she can only ask each player directly about their higher-order beliefs up to level k. For all orders between 1 and k, the analyst knows the beliefs of each player – this is just a probability distribution at each level – to arbitrary precision according to some appropriate metric (the Lévy–Prokhorov metric, for instance, metrizes the weak topology on probability distributions). For orders above k, the analyst knows nothing. A game of cards where we each get a known signal that the deck may be fixed doesn’t really fit this model, but situations where “incomplete information” also means “neither the analyst nor the players have any intuitive reason to be able to infer 1934th-order beliefs” do fit, and the second situation seems much more relevant when discussing robustness.
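For intuition on what “arbitrary precision according to some appropriate metric” means at a single order, here is a minimal sketch (mine, not the paper’s) using total variation distance on finite-support distributions as a computable stand-in for the Lévy–Prokhorov metric; on a fixed finite state space the two generate the same topology:

```python
def tv_distance(p, q):
    """Total variation distance between two finite-support distributions,
    each given as a dict mapping outcomes to probabilities."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

# Two first-order beliefs the analyst cannot tell apart at precision 0.05:
belief_a = {0: 0.70, 1: 0.30}
belief_b = {0: 0.68, 1: 0.32}
print(tv_distance(belief_a, belief_b) <= 0.05)  # True: epsilon-close
```

The perturbations in the theorem are exactly of this kind: an epsilon move at each observed order, plus total freedom above order k.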

Define a game in the normal way, but keep in mind that the analyst may be wrong about the payoff matrix because of the above information restrictions. List the set of rationalizable outcomes. Weinstein and Yildiz show that if there is a unique rationalizable strategy profile, then that profile is robust to minor changes in the above information structure. However, if there are multiple rationalizable strategies, then arbitrarily small changes to the analyst’s information – that is, an epsilon shift in beliefs below kth order, or a mistake in beliefs above the kth order – can make each and every one of those rationalizable strategies uniquely rationalizable.
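The multiplicity half of the setup is easy to see in small examples. Here is a short sketch (my own illustration, not code from the paper) that computes the rationalizable actions of a two-player finite game by iterated elimination of strictly dominated strategies, restricted for simplicity to domination by pure strategies, which suffices in these examples:

```python
def rationalizable(payoffs1, payoffs2):
    """payoffs1[r][c], payoffs2[r][c]: payoffs to players 1 and 2 when
    player 1 plays row r and player 2 plays column c. Returns the actions
    surviving iterated elimination of (pure-strategy) strict dominance."""
    rows = list(range(len(payoffs1)))
    cols = list(range(len(payoffs1[0])))
    changed = True
    while changed:
        changed = False
        # Eliminate rows strictly dominated against the surviving columns.
        for r in rows[:]:
            if any(all(payoffs1[r2][c] > payoffs1[r][c] for c in cols)
                   for r2 in rows if r2 != r):
                rows.remove(r)
                changed = True
        # Eliminate columns strictly dominated against the surviving rows.
        for c in cols[:]:
            if any(all(payoffs2[r][c2] > payoffs2[r][c] for r in rows)
                   for c2 in cols if c2 != c):
                cols.remove(c)
                changed = True
    return rows, cols

# Coordination game: both actions survive for both players, so there are
# multiple rationalizable profiles and refinements have room to bite.
coord = [[2, 0], [0, 1]]
print(rationalizable(coord, coord))  # ([0, 1], [0, 1])

# Prisoner's dilemma: defection (action 1) is uniquely rationalizable.
pd1 = [[3, 0], [4, 1]]
pd2 = [[3, 4], [0, 1]]
print(rationalizable(pd1, pd2))      # ([1], [1])
```

In the coordination game, the theorem says each of the surviving actions is uniquely rationalizable in some nearby game that the analyst cannot distinguish from this one; in the prisoner’s dilemma, the unique rationalizable profile is robust.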

That is, if we “refine away” some equilibrium or rationalizable strategy because we find it “implausible”, then there is another game with slightly different higher-order beliefs where that “implausible” strategy is the unique outcome of the game. (You may be wondering now, as a technical point, what rationalizability means in the context of incomplete information games, but the definition used in the present paper is the weakest such definition.) This is a massive blow to the refinement/robustness in games literature as far as I’m concerned: every refinement strategy, no matter how intuitive, is throwing away strategy profiles that are absolutely plausible given the best possible information about players an analyst could know in the real world.

(The above discussion is the result of conversations we’ve had in a student theory reading group – of course the errors are all mine and the insights all theirs. If you’re interested in game robustness, you might also want to check out the following. Rubinstein (1989) showed in his famous email/army coordination game that small changes in higher-order beliefs can kill off even strictly Pareto-dominant equilibria. Kreps, Milgrom, Roberts & Wilson (1982, on how higher-order beliefs can justify cooperation in the prisoner’s dilemma) and Fudenberg, Kreps & Levine (1988) are in a similar vein. Monderer & Samet (1989) introduce p-belief, replacing common knowledge’s “I know that you know that I know that…” with “I believe with probability p that you believe with probability p that I believe…”. Morris, Rob and Shin (1995), Kajii and Morris (1997, 1998) and Ui (2001) then use the idea of p-belief to consider which equilibria are robust to this weakening of common knowledge. Unlike the present paper, that line of the literature still considers only “common” p-belief, meaning p-belief to infinite order. Lipman (2003) and Oyama and Tercieux (2010) loosen the common prior assumption. As far as direct extensions of Weinstein and Yildiz’s paper go, Weinstein and Yildiz (2010) lets the action space be infinite (in particular, a continuum), Chen (2008) extends the proof to finite dynamic games under a slightly stronger assumption, Weinstein and Yildiz (2011) let the horizon go to infinity and still get a similar result, and Ely and Peski (2011) discuss in a more general sense when assumptions about beliefs in a type-space model are “important”; the large robust mechanisms literature (principally from Bergemann and Morris) is along similar lines.)