A Countable Hat Game and the Kolmogorov 0-1 Law

Last term, I took a lovely class in probability theory with Professor Omer Tamuz. On the first problem set, he posed an interesting bonus problem about a certain game played by a countable sequence of players wearing colored hats. The game goes like this:

Countably many cooperating players stand in a line indexed by the natural numbers. A neutral game master walks down the row of players, placing black and white hats on the players’ heads independently and uniformly at random. (In other words, the color of each player’s hat is decided by a fresh fair coin flip.) Afterwards, the game master goes through again, asking each player in turn to announce a guess for their own hat color. Each player hears the guesses of the players that came before them and sees all of the hats on the heads of the people ahead of them. The players win if everyone except for the first person guesses correctly. (The first person, of course, is doomed to guess their own hat correctly only 50% of the time, regardless of their strategy. However, the rest of the players have a chance to come out ahead by using the information provided by those behind them.)

The players conspire beforehand to come up with a strategy. Is it possible for them to cook up a strategy with which they always win? A bit more conservatively, how about one that wins with probability 1?

It is somewhat well-known that in this game and similar ones, the players can always win if they make use of the Axiom of Choice. (It seems to me that this was first discovered by Yuval Gabay and Michael O’Connor in 2004 but not formally published until Hardin and Taylor’s 2008 paper. Despite this, I’ve usually heard it presented as folklore.) Choice is awfully nonconstructive and inelegant, so one might ask whether the use of the Axiom of Choice is necessary. Hardin and Taylor answer this in the affirmative, showing that some nontrivial form of Choice is required for this result. They do this by showing that if all sets of real numbers have the property of Baire, then there are no strategies that always win. The existence of sets of reals without the property of Baire is a consequence of the Axiom of Choice, and it is known that even Dependent Choice is not enough to prove it. Therefore, the amount of Choice required for this result lies somewhere between Dependent Choice and full Choice. (See Section 3.4 of Hardin and Taylor’s book for a more technical discussion of the role of the Axiom of Choice in hat games.)

Such matters are a bit out of place in an intro probability class. Professor Tamuz posed a slightly different question which gets at the same idea: Demonstrate a measurable strategy which wins with probability 1, or else show that none exists. (If you’re not familiar with measurability, an informal definition is that a player’s strategy is measurable if it always makes sense to ask “what is the probability that this player guesses that their own hat is black?”) The existence of non-measurable sets of reals is also a consequence of the Axiom of Choice, and it plays a crucial role in the Banach–Tarski Theorem.
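To make “measurable strategy” concrete, consider a toy strategy of my own invention: each player simply guesses the color of the hat directly ahead of them. This is certainly measurable, and a quick simulation (function name and parameters are mine) suggests why it fails badly: each guess is correct with probability 1/2, so the chance that players 1 through n all guess correctly decays like 2^-n.

```python
import random

def players_correct_through(n, rng=random):
    """'Copy the hat ahead' strategy: player i guesses the color of
    player (i + 1)'s hat. Returns True if players 1 through n (everyone
    we check except the first player, player 0) all guess correctly."""
    hats = [rng.randint(0, 1) for _ in range(n + 2)]  # 0 = white, 1 = black
    # Player i's guess is hats[i + 1]; it is correct iff hats[i] == hats[i + 1].
    return all(hats[i] == hats[i + 1] for i in range(1, n + 1))

random.seed(0)
trials = 50_000
for n in (1, 3, 6, 10):
    wins = sum(players_correct_through(n) for _ in range(trials))
    print(n, wins / trials)  # should hover near 2**-n
```

Winning the actual game requires all infinitely many players past the first to be correct, so this particular measurable strategy wins with probability 0.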

In my notes, I first show how the players can win this game with a strategy that naturally extends the winning strategy for finite versions of the game. Next, I answer Professor Tamuz’s question in the negative by proving that all measurable strategies must fail with positive probability. The proof is slightly more technical than I’d like to include in a blog post, but I will show you something else as a nice warm-up.
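For comparison, here is a sketch of the classic winning strategy for the finite game with n players, in which the first player sacrifices their own guess to announce the parity of the hats they can see (the function name is mine):

```python
import random

def play_finite_hat_game(n_players, rng=random):
    """Parity strategy for the finite hat game: player 0 announces the
    parity of the hats they can see; every later player deduces their
    own hat color from that parity and the guesses already heard."""
    hats = [rng.randint(0, 1) for _ in range(n_players)]  # 0 = white, 1 = black
    guesses = [sum(hats[1:]) % 2]  # player 0 encodes the parity ahead of them
    for i in range(1, n_players):
        seen = sum(hats[i + 1:]) % 2   # hats ahead of player i
        heard = sum(guesses[1:i]) % 2  # earlier guesses, all correct by induction
        guesses.append((guesses[0] - seen - heard) % 2)
    # Everyone except (possibly) player 0 is always correct.
    return all(guesses[i] == hats[i] for i in range(1, n_players))

assert all(play_finite_hat_game(10) for _ in range(1_000))
```

Each player past the first subtracts what they see and what they have heard from the announced parity; what remains is their own hat.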

The main ingredient of the proof of non-measurability is the Kolmogorov zero-one law, which we’ll informally state a special case of now. Suppose we have an infinite sequence of independent random variables. (For example, consider the sequence of hat colors assigned by the game master in the countable hat game.) Consider an event which is a function of the sequence but is also independent of any finite subsequence. (For example: are infinitely many hats black?) Then Kolmogorov says this event either almost surely occurs or almost surely does not occur. That is, its probability is 0 or 1. (It is easy to see that our example has probability 1, even by using very basic tools.)
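To spell out the “very basic tools” for the example event: if only finitely many hats were black, then every hat past some position N would be white, and a union bound over N rules that out (a sketch, with hats indexed from 1):

```latex
\Pr[\text{every hat after position } N \text{ is white}]
  \le 2^{-k} \text{ for every } k, \text{ so it equals } 0,
\quad\text{hence}\quad
\Pr[\text{only finitely many black hats}]
  \le \sum_{N=1}^{\infty} \Pr[\text{every hat after } N \text{ is white}] = 0.
```

So “infinitely many hats are black” has probability 1 without ever invoking Kolmogorov.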

I’ll illustrate the Kolmogorov zero-one law with an application I learned from a probability theory class with Professor Leonard Schulman. Suppose a drunk person is wandering on the integer number line, trying to find their way home. Each minute, they flip a weighted coin and travel one unit left if heads but one unit right if tails. In other words, let their coin flip at time t be a random variable X_t which is -1 with probability p or +1 with probability 1 - p. Then their position at time t is the random variable S_t = X_1 + X_2 + ⋯ + X_t.

Consider the window [-n, n]. What is the probability that the drunk stays inside this prison for an infinitely long time? Intuitively, this shouldn’t happen: it would equate to the existence of an invisible cage guiding the drunk’s coin flips. In order to escape the cage, the drunk need only get 2n + 1 consecutive flips of the same side. The chance that any particular 2n + 1 consecutive flips come up the same is only p^(2n+1) + (1 - p)^(2n+1), but if we try infinitely many times, it’ll surely happen at some point. (To formalize this, apply Borel–Cantelli, a very simple and very useful fact about probability.)
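To get a feel for the escape argument, here is a quick Monte Carlo check (the function name and parameters are mine; I use the window [-3, 3] and a fair coin): every sampled walk leaves the window, and quickly.

```python
import random

def steps_to_escape(n, p, max_steps=10**6, rng=random):
    """Run the walk (one step left with probability p, right with
    probability 1 - p) from 0 until it first leaves the window [-n, n];
    return the number of steps taken, or None if the budget runs out."""
    pos = 0
    for t in range(1, max_steps + 1):
        pos += -1 if rng.random() < p else 1
        if abs(pos) > n:
            return t
    return None

random.seed(0)
escape_times = [steps_to_escape(n=3, p=0.5) for _ in range(10_000)]
print(all(t is not None for t in escape_times))  # True: every sampled walk escaped
print(max(escape_times))  # even the slowest sampled escape is fast
```

The invisible cage never materializes: the typical escape takes on the order of n² steps, far less than the simulation budget.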

Rephrasing, we have that with probability 1, the drunk will be to the right of n or to the left of -n infinitely often. (Otherwise, there would be a largest time at which they were outside of [-n, n], and they’d stay inside for the rest of time!) Call these events R and L. Each of these events is independent of any finite number of steps of the walk, so we can apply Kolmogorov to say that they each have probability 0 or 1. Since their union has probability 1, at least one of them must have probability 1.

When p < 1/2, we think of the drunk as having a rightward drift, so “obviously” the probability that they’re to the right of n infinitely many times should be bigger than the probability that they’re to the left of -n infinitely many times. This “obvious” fact can be proven precisely with a technique known as coupling. This requires a bit more detail about what a probability space is than we’re using in this post; see lecture 25 of Prof. Tamuz’s lecture notes for a demonstration of the idea. This shows that the drunk is to the right of n infinitely often with probability 1 when p < 1/2, and to the left of -n infinitely often with probability 1 when p > 1/2.

When p = 1/2, the events R (right of n infinitely often) and L (left of -n infinitely often) must have the same probability by symmetry. (Formally, consider the measure-preserving transformation on the space of coin flips which reverses every coin.) Since at least one of them has probability 1, both must be equal to 1. This is enough to show that the random walk is recurrent, which is to say that no matter where their home is, with probability 1, the drunk will find it infinitely many times.
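Here is a small empirical illustration of recurrence (the function name and parameters are mine): count how often a simulated fair walk sits at a candidate “home.” Recurrence predicts the counts keep growing as the runs get longer, and indeed they grow on the order of the square root of the number of steps.

```python
import random

def visits(home, n_steps, rng=random):
    """Count how often a fair (p = 1/2) walk started at 0 sits at
    position `home` during its first n_steps steps."""
    pos, count = 0, 0
    for _ in range(n_steps):
        pos += rng.choice((-1, 1))
        if pos == home:
            count += 1
    return count

random.seed(0)
for n_steps in (10_000, 100_000, 1_000_000):
    print(n_steps, visits(5, n_steps))
```

No finite simulation proves recurrence, of course; the zero-one argument above is what turns “visits keep accumulating” into a theorem.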

In the proof of the nonmeasurability of winning strategies in the countable hat game, we use Kolmogorov in the same way as in the preceding paragraph, but with one minor change. As above, we consider two events (that the first player guesses black and that the first player guesses white) which have the same probability and to which we can apply Kolmogorov. However, these events are mutually exclusive and exhaustive, so their probabilities sum to 1; since they are equal and each is 0 or 1, the inequalities 0 + 0 < 1 < 1 + 1 give a contradiction. We conclude that the first player’s strategy is non-measurable. To see this argument in full detail, check out my notes. There are two follow-up questions which I’ve yet to resolve:

We show that all measurable strategies win with probability strictly less than 1. Might it be true that all measurable strategies win with probability 0?

Consider the game where we don’t allow the players to hear the guesses of others. Do the same results hold there?

If you want to learn much more about hat games, do check out Hardin and Taylor’s book, The Mathematics of Coordinated Inference. The first three chapters are written in a fairly accessible manner and should be an enjoyable read for undergrads with a touch of set theory or topology under their belt.