Nash Equilibrium is without doubt the most widely used solution concept in Game Theory, and also in its applications to Economics, Political Science, International Relations, Law, Business, Computer Science, Evolutionary Biology, and many other disciplines. Yet its conceptual foundations are murky. The most common justification - offered already by von Neumann and Morgenstern - is that if Game Theory is to recommend strategies to the players in a game, then the resulting strategy profile must be known; for example, because it could be read off from a game theory text. It follows that each strategy must be a best reply to the others, which means that the strategies constitute a Nash equilibrium.

The difficulty with this is that Game Theory need not recommend any particular strategy. Rather, it need only recommend a *procedure* for arriving at a strategy. For example, such a procedure could be, "maximize your expected payoff given how you think the others will play." A player's resulting strategy would then not be known to the others, so a Nash equilibrium need not result.

Other serious criticisms of Nash equilibrium have been raised, inter alia by Bernheim and by Pearce in their 1984 Econometrica articles introducing the concept of rationalizability. Yet whereas that concept became widely known and applied, their criticism of Nash equilibrium was largely ignored. Perhaps that is because rationalizability is a fairly "loose" concept; in many games it is very far from providing precise solutions. In particular, in the benchmark case of two-person zero-sum games, it can yield payoffs that are very far from the value.
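To make this "looseness" concrete, consider Matching Pennies, a two-person zero-sum game with value 0. The sketch below (a hypothetical illustration, not an example taken from the lecture) uses the standard fact that in two-player games the rationalizable strategies are those surviving iterated elimination of strictly dominated strategies; the code checks only dominance by pure strategies, which suffices here because nothing is eliminated at all. Every strategy is rationalizable, so rationalizable play can produce any payoff in {-1, +1}, far from the value of 0.

```python
# Matching Pennies: row player's payoffs; the column player gets the negation.
A = [[1, -1],
     [-1, 1]]

def undominated(payoffs, own, other):
    """Indices in `own` not strictly dominated (against `other`) by
    another pure strategy in `own`. A pure-dominance check only --
    mixed dominance happens not to matter in this game."""
    keep = []
    for r in own:
        dominated = any(
            all(payoffs[s][c] > payoffs[r][c] for c in other)
            for s in own if s != r)
        if not dominated:
            keep.append(r)
    return keep

# Iterated elimination of strictly dominated strategies; in two-player
# games the survivors are exactly the rationalizable strategies.
rows, cols = [0, 1], [0, 1]
while True:
    new_rows = undominated(A, rows, cols)
    # Column player's payoff matrix, indexed [col][row]: the negation of A.
    B = [[-A[r][c] for r in range(2)] for c in range(2)]
    new_cols = undominated(B, cols, new_rows)
    if new_rows == rows and new_cols == cols:
        break
    rows, cols = new_rows, new_cols

print("rationalizable rows:", rows)        # -> [0, 1]: nothing eliminated
print("rationalizable cols:", cols)        # -> [0, 1]: nothing eliminated
payoffs = sorted({A[r][c] for r in rows for c in cols})
print("possible payoffs:", payoffs)        # -> [-1, 1], yet the value is 0
```

Nash equilibrium, by contrast, pins down the unique equilibrium (mix 50-50) and hence the value exactly; rationalizability alone constrains nothing in this game.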

The lecture will explore these ideas. We will see that they lead to a solution notion that, while somewhat "looser" than Nash equilibrium, is still fairly "tight"; in particular, in two-person zero-sum games, it does yield precisely the value. Auxiliary use will be made of the concept of correlated equilibrium.