I am a bit unsure whether the following vague question has enough mathematical content to be suitable here. If not, please feel free to close it.

In many competitive situations, a particular kind of partial information occurs, usually described as "I know that you know that I know... something". We may distinguish a whole hierarchy of increasingly complicated situations, closer and closer to complete information. For example:

$I_0$: I know $X$, but you don't know that I know.

$I_1$: I know $X$, you know that I know, but I don't know that you know that I know.

$I_2$: I know $X$, you know that I know, I know that you know that I know, but you don't know that I know that you know that I know.

.... &c.

For small values of $k$, I can imagine simple situations where passing from $I_k$ to $I_{k+1}$ really makes a difference (for instance: you are Grandma Duck, and $X$ is: "you left a cherry pie to cool on the window ledge". Clearly, $I_0$ is a quite agreeable position; $I_1$ may lead to an unpleasant end (for me); $I_2$ leaves me some hope, if I behave well; and so on). But I can't imagine how passing from $I_6$ to $I_7$ may affect my strategy, or Grandma's.

Are there situations, real or factitious, concrete or abstract, where $I_k$ implies a different strategy than $I_{k+1}$ for the competitors? What about $I_{\omega}$ and, more generally, $I_\alpha$ for an ordinal $\alpha$ (suitably defined by induction)? How are these situations modeled mathematically?

There are some papers on "reflexive game theory" by Alexander Chkhartishvili, see e.g. link.springer.com/article/10.1134%2FS0005117910060214, where this hierarchy of reflexive levels is formalized and he shows how this affects the Nash equilibrium etc. Chkhartishvili apparently did not put any of his work on arXiv, so unfortunately one has to resort to English translations of some Russian journals where this is published.
– ansobol, Nov 23 '12 at 19:49

6 Answers

My wife and I have a standing agreement where I pick up our son Horatio from school and she picks up our daughter Hypatia.

One day, because I knew I would be near Hypatia's school, it was convenient to swap duties. I emailed her a message, "I'll pick up Hypatia today, and you get Horatio. Please confirm; otherwise it is as usual." She texted me back, "Let's do it. Let me know if you get this message, so that I know we're really on." I left her a voicemail, "OK, we're definitely on for the swap! ....as long as I know you get this message." She emailed me back, "Got the message. We're on! But let me know that you get this message so I can count on you." You see, without confirmation she couldn't be sure that I knew she had gotten my earlier confirmation of her acknowledgement of my first message, and she may have worried that the plan to swap was consequently off.

And so on ad infinitum......

How truly frustrating it was for us that at no stage of our conversation could we seem to know for certain that the other person had all the necessary information to ensure that the plan would be implemented! The result, of course, since we had time to exchange only at most a finite number of messages, was that the only rational course of action was for us each to abandon the plan to swap: we both independently decided just to pick up the usual child.

To see that this was rational, observe that clearly the first message needed to have been confirmed in order for the plan to be implemented properly. Furthermore, if the $n$-th message need not have been confirmed, then it wasn't important to know that it had been received and the algorithm should have worked whether or not it was received, meaning that it didn't actually need to have been sent. So by induction, no number of confirmations suffices to implement the common knowledge that we both needed, namely, that we had each agreed to make the swap.
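The induction above can be made concrete with a small model. The following is a minimal sketch; the conventions are invented for illustration (party 1 sends the odd-numbered messages, party 0 the even-numbered ones, and each message confirms the previous one, so if a message is lost nothing later is ever sent). The point it demonstrates: the sender of the last message has exactly the same observations whether or not that message arrived, so no decision rule can condition on its delivery, at any stage.

```python
def view(party, delivered):
    """Everything `party` (0 or 1) has observed in a run where messages
    1..delivered arrived and the next message, if this party was due to
    send it, was sent but lost. Hypothetical convention: party 1 sends
    odd-numbered messages, party 0 even-numbered ones; each message
    confirms the previous one, so after a loss nothing later is sent."""
    sent = frozenset(i for i in range(1, delivered + 2) if i % 2 == party)
    received = frozenset(i for i in range(1, delivered + 1) if i % 2 != party)
    return sent, received

# The sender of message n cannot distinguish the run where message n
# arrived from the run where it was lost; only the receiver can.
for n in range(1, 8):
    sender, receiver = n % 2, 1 - (n % 2)
    assert view(sender, n) == view(sender, n - 1)
    assert view(receiver, n) != view(receiver, n - 1)
```

Since the two runs are indistinguishable to the sender at every $n$, the induction goes through: no finite exchange of confirmations produces common knowledge of the swap.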

Incidentally, it seems to me that the same problem still arises even if we were to speak in person, because one would have to make sure that the previous comment was received and understood, before its acknowledgement would rise to the level of common knowledge. I am left to wonder how it is possible ever for us to attain common knowledge...
– Joel David Hamkins, Nov 23 '12 at 21:25

Tweetie-bird, of course my intention here was to provide a comparatively concrete example exhibiting the principal issues of common knowledge. I am actually fascinated by the logic and mathematics of this kind of situation, which I view as a genuine, sophisticated mathematical topic.
– Joel David Hamkins, Nov 23 '12 at 22:50

I deleted my previous comment so as not to confuse others. This logic puzzle is already confusing enough.
– tweetie-bird, Nov 23 '12 at 23:18

Here is another puzzle I like which is in the spirit of the question. 100 people play a game as follows. Each person secretly writes a number between 1 and 1000000. The numbers are then all revealed and the person who is closest to 2/3 of the average wins a prize. If there is a tie, the prize is shared between the winners.

What number should one write down?

Now, it is clearly foolish to write down any number greater than 666667, since 2/3 of the average cannot be more than 666667. But now we can view the game as being played on the interval from 1 to 666667 instead of from 1 to 1000000. Now we can iterate again and conclude that it is foolish to choose any number greater than 444444. Ultimately (but this requires many iterations of knowledge), the only rational choice is for all players to choose 1 and to split the prize.
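The iterated elimination described above is easy to carry out mechanically. This sketch tracks the successive upper bounds on a rational guess, one per level of mutual knowledge of rationality (the exact cutoffs differ by one from the figures above depending on the rounding convention; here the bound is the largest integer not exceeding 2/3 of the previous bound):

```python
import math

def elimination_levels(upper=1_000_000):
    """Successive upper bounds on a rational guess in the 2/3-of-the-average
    game, one bound per level of mutual knowledge of rationality."""
    bounds = [upper]
    while bounds[-1] > 1:
        # No guess above 2/3 of the current upper bound can ever be
        # closest to 2/3 of the average, so it is eliminated next level.
        bounds.append(math.floor(2 * bounds[-1] / 3))
    return bounds

levels = elimination_levels()
print(levels[:4])   # [1000000, 666666, 444444, 296296]
print(f"{len(levels) - 1} rounds of elimination reach {levels[-1]}")
```

Roughly $\log_{3/2} 10^6 \approx 34$ rounds of "I know that you know that... everyone is rational" are needed before the bound collapses to 1, which is why the full argument requires such deep iterated knowledge.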

I agree that choosing $1$ is the only rational choice, but still I would like to actually do this experiment with a bunch of non-mathematicians and see what $2/3$ of the average turns out to be. I suspect it changes after the first go, or if you give them much time to think, but such empirical information could have been useful in a pub quiz I used to go to.
– Paul Reynolds, Nov 23 '12 at 19:43


IIRC, I’ve read that in human experiments the 2/3 of the average tends to be around 20% of the maximum.
– Emil Jeřábek, Nov 23 '12 at 19:58

In practice when actually playing this game it is important to model everyone else you're playing with (especially if they don't know how to compute Nash equilibria). I played a version of this game once and I intentionally chose the highest number to throw off everyone else's strategies (and I wasn't alone in doing this).
– Qiaochu Yuan, Nov 24 '12 at 2:33


"Now, it is clearly foolish to write down any number greater than 666667, since 2/3 of the average cannot be more than 666667." — Not if you're playing with a partner. Of course, you won't take the prize yourself, but you can raise your partner's chances quite a bit by writing a large number, if you agree on a decent strategy in advance. Now, how do I know that there are no coalitions out there? And if I don't, the second step of the induction fails...
– fedja, Nov 26 '12 at 0:38

Check out the Wikipedia page on common knowledge, which includes some mathematical formalizations of the concept as well as the famous blue-eyed islander problem mentioned by Steven Gubkin. This problem was also discussed by Terence Tao on his blog.

The classic model of these things is due to Robert Aumann and was introduced in his 1976 paper "Agreeing to Disagree". There is a famous example due to Ariel Rubinstein, the electronic mail game, in which the behavior under $I_\omega$ differs radically from that under $I_n$ for any $n$.

Here is a way to show that one might have to apply this reasoning up to an arbitrarily large ordinal. Let $\alpha$ be a successor ordinal. Ann and Bob play the game of picking ordinals in $\alpha$ simultaneously. The one who chooses the highest number wins. One can never win by choosing $0$, so rationality rules out choosing $0$. It is however possible to win by choosing $1$ if the other player plays $0$. But if both Ann and Bob know that they are both rational, they have to choose at least $2$. It is clear that one has to iterate this reasoning up to the predecessor of $\alpha$.
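A finite analogue of this game makes the level-by-level elimination explicit. In the sketch below (my own illustration, not from the answer), each player picks a number in $\{0, 1, \dots, n-1\}$ and the higher number wins; $k$ levels of mutual knowledge of rationality eliminate exactly $\{0, \dots, k-1\}$, so the full iteration up to the "predecessor" $n-1$ is needed:

```python
def surviving(n, levels):
    """Choices in {0, ..., n-1} that survive `levels` rounds of
    eliminating every choice that is not a best response to some
    surviving opponent choice (pick-the-higher-number game)."""
    alive = set(range(n))
    for _ in range(levels):
        top = max(alive)
        # Best responses to an opponent's b: anything beating b (c > b);
        # against the top surviving choice, only the top itself (a draw).
        alive = {c for c in alive
                 if c == top or any(b < c for b in alive)}
    return alive

for k in range(5):
    print(k, "levels:", sorted(surviving(5, k)))
```

Each round removes only the current minimum, which mirrors the ordinal argument: no single level of knowledge of rationality suffices, and the reasoning must be iterated all the way up.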

I don’t get the example. Whatever the other player knows, thinks, or plays, I’m always better off playing the predecessor of $\alpha$ than any other ordinal, hence rationality dictates I do just that, without any iteration.
– Emil Jeřábek, Nov 23 '12 at 16:03

@Emil It is correct that playing the predecessor is always compatible with common knowledge of rationality up to some ordinal $\beta<\alpha$. But there are many more choices that are compatible with common rationality up to a lower level. If you want to rule them out, you have to go all the way. The notion of rationality being used is: do not use a choice you know is not a best response.
– Michael Greinecker, Nov 23 '12 at 16:15


Playing an ordinal $\beta$ smaller than the predecessor of $\alpha$ is never rational, because there is the possibility that the opponent may play something larger than $\beta$, which would make me lose, whereas if I play the predecessor, it is either a win for me or a draw. I know this a priori, without considering what the opponent knows.
– Emil Jeřábek, Nov 23 '12 at 16:43

@Emil: Not using weakly dominated strategies is a stronger requirement than rationality. And having common knowledge of not playing weakly dominated strategies might be impossible. There is a well-known paper by L. Samuelson in which he shows that there are games in which common knowledge of not playing weakly dominated strategies is inconsistent (alturl.com/d2s5m). The notion I used coincides with point-rationalizability in the sense of Bernheim (alturl.com/fc9oe).
– Michael Greinecker, Nov 23 '12 at 17:55

But I am not assuming any common knowledge, I am in fact not assuming anything whatsoever about the opponent. You defined rationality as “not using a choice you know is not a best response”, and I know choosing an ordinal smaller than the predecessor is not the best response, because choosing the predecessor is a better response. I have no idea whether this conforms to one formal definition or another, but if not, then IMHO the example merely illustrates that the definition is contrived rather than the phenomena mentioned by the OP.
– Emil Jeřábek, Nov 23 '12 at 18:39

The question of common knowledge, and the algorithmics of epistemic modal logics, provide a lot of good material on this sort of problem. (It is good fun to try the mathematical side of this in popularisation activities. I have used the Muddy Children puzzle with various groups, with a lot of laughter and appreciation of the mathematical models.)
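The Muddy Children puzzle can be simulated directly with a small possible-worlds (Kripke) model. This is a sketch of the standard protocol: the father announces "at least one of you is muddy", then repeatedly asks whether anyone knows their own state, and each "nobody knows yet" is itself a public announcement that prunes the set of possible worlds. With $k$ muddy children, all $k$ step forward in round $k$.

```python
from itertools import product

def muddy_children(actual):
    """Possible-worlds simulation of the Muddy Children puzzle.
    `actual` is a tuple of booleans (True = muddy). Returns the round in
    which the muddy children first know their own state."""
    n = len(actual)
    # Worlds compatible with the announcement "at least one is muddy".
    worlds = {w for w in product([False, True], repeat=n) if any(w)}

    def knows(child, world, worlds):
        # A child knows their state if every world they consider possible
        # (agreeing with `world` everywhere except possibly at their own
        # index) assigns them the same state.
        candidates = [w for w in worlds
                      if all(w[j] == world[j] for j in range(n) if j != child)]
        return len({w[child] for w in candidates}) == 1

    round_no = 0
    while True:
        round_no += 1
        if any(knows(i, actual, worlds) for i in range(n) if actual[i]):
            return round_no
        # The public announcement "nobody knows yet" removes every world
        # in which some child would know -- this drives the induction.
        worlds = {w for w in worlds
                  if not any(knows(i, w, worlds) for i in range(n))}

for k in range(1, 4):
    actual = tuple([True] * k + [False] * (3 - k))
    print(k, "muddy ->", "know in round", muddy_children(actual))
```

The successive prunings are exactly the layers of the "I know that you know that..." hierarchy from the question: round $k$ uses depth-$k$ mutual knowledge, and only the father's initial announcement, which creates common knowledge, gets the process started at all.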