Determinacy of Borel games IV

To recap briefly, we have defined a notion of lifting from one game to another and embarked on a proof that every Borel game can be lifted to a clopen game. The stages of that proof are to show that every closed game can be lifted to a clopen game, and that the set of games that can be lifted is closed under countable unions. It is straightforward to show that it is also closed under taking complements, so that will prove that all Borel games can be lifted to clopen games. As we saw earlier, if a game can be lifted to a determined game, then it is itself determined. Since clopen games are determined, this will be enough.

We are in the middle of the proof that closed games can be lifted to clopen games. We have defined the auxiliary game to which we will lift, and shown how to map Player I strategies for the auxiliary game to Player I strategies for the original game. So the first thing to do in this post is to show how to map Player II strategies for the auxiliary game to Player II strategies for the original game.

Recall that I defined the auxiliary game as follows. We start with a closed game: that is, a game played on an infinite tree, in which the set of infinite paths for which Player I wins is closed. In the auxiliary game, Player I begins by playing a pair consisting of her first point together with a quasistrategy for the original game. Then Player II either plays his first point together with a finite sequence that is consistent with that quasistrategy, all of whose continuations result in wins for Player II, or he plays his first point together with a quasistrategy of his own for the original game restricted to the tree that corresponds to Player I's quasistrategy (which is the tree of all possible finite sequences that can result if Player I plays consistently with it), such that all runs consistent with both quasistrategies result in wins for Player I. In the first case, both players play along the declared finite sequence and then Player I continues playing consistently with her quasistrategy. In the second case, Player I continues playing consistently with her quasistrategy and Player II continues playing consistently with his. Player I wins if and only if the eventual sequence belongs to the closed payoff set.

Mapping Player II strategies for the auxiliary game to Player II strategies for the original game

Let us take a Player II strategy for the auxiliary game. We need to find from it a Player II strategy for the original game, defined in such a way that what it does to sequences of length at most n depends only on what the auxiliary strategy does to sequences of length at most n, and such that every possible run of the original game, if Player II plays using the new strategy, is a sequence that can arise from a run of the auxiliary game in which the auxiliary strategy is used.

In the previous post we ended up with a useful way of thinking about this. Consider the set of all sequences that could possibly arise from a run of the auxiliary game in which Player II uses his strategy. Then we need to prove that Player I does not have a winning strategy for obtaining a sequence outside that set.

At first this looks odd, since it doesn't mention the strategy we are trying to build, but the point is that the auxiliary strategy isn't necessarily a winning strategy. The definition of lifting is set up in such a way that if we have a winning strategy in the auxiliary game, then it must map to a winning strategy in the original game. Therefore, we don't have to think about winning when verifying the lifting property.

Why is this necessary for what we need to prove? Well, if Player I has a strategy that is guaranteed to produce a sequence outside the set just described, then, by definition, whatever strategy Player II plays, the resulting run of the game will not be one that results from a play of the auxiliary game in which Player II uses the auxiliary strategy.

An obvious way to go about proving that Player I does not have such a strategy is to prove that Player II has a winning strategy for producing a sequence in the set, and that is what we shall do.

What is this set of sequences? Well, a Player II strategy for the auxiliary game works as follows. If Player I plays a quasistrategy, then Player II must decide, based on that (and the initial point played with it) whether to reply with a finite sequence or with a quasistrategy of his own, and having done that he must decide how to play consistently with the choice he has made. So our set consists of all sequences for which there exists a quasistrategy that makes a valid opening move for Player I and for which one of the following two conditions holds. Either (i) Player II's strategy dictates that his first move includes a finite sequence that is an initial segment of our sequence, and the rest of the sequence is played consistently with Player I's quasistrategy and with what Player II's strategy dictates, or (ii) Player II's strategy dictates that his first move includes a quasistrategy of his own, and the rest of the sequence is consistent with both quasistrategies and with what Player II's strategy dictates.

We run into a serious difficulty here. A natural approach is to consider the set of all finite sequences that can result from case (i): that is, the set of all finite sequences u such that there exists an opening move in the auxiliary game that would cause Player II's strategy to reply with a move whose second term is u. Then we can say what happens if Player II has a winning strategy for reaching a position in this set, and what happens if he doesn't have one.

But suppose he does have one. The idea would be to play the original game in a way that guarantees that eventually such a sequence will be reached, after which he is guaranteed to win, and then do pretty much anything. He can then deem Player I to have played an opening move whose quasistrategy causes the auxiliary strategy to respond with the finite sequence that was reached. But the "doing pretty much anything" part is not an option: Player II needs the rest of the sequence to be consistent with the quasistrategy that Player I is deemed to have played, and there is no need for the actual Player I to continue to play consistently with this quasistrategy.

On spotting this difficulty, I realized that I had not properly understood the definition of the auxiliary game. I have deliberately not gone back and changed the definition to the correct one, since these mistakes are useful for my purposes: they help to explain why the definition is as it is. The right definition of the auxiliary game is in fact this.

As I've already said, Player I's first move consists of her first point together with a quasistrategy, and Player II's first move consists of his first point together with either a finite sequence or a quasistrategy of his own, where all continuations of the finite sequence are wins for Player II, and all sequences consistent with the two quasistrategies are wins for Player I. However, if Player II plays a move of the first kind, then after helping to create the declared finite sequence, Player I is not obliged to play consistently with her quasistrategy. A move of the first kind releases Player I from that obligation. If, however, Player II plays a move of the second kind, then both players must play consistently with the quasistrategies that they have declared.

This adjustment to the definition deals with the difficulty above: if Player II has a winning strategy for reaching one of the good finite sequences, then he can deem Player I to have played an opening move that would have provoked that sequence as a response (given that Player II is using his auxiliary strategy), and once the finite sequence has finished, as long as Player II does what the auxiliary strategy tells him to do in the auxiliary game (given the moves so far, and assuming that the first two moves were the deemed opening move and the response to it), the resulting sequence is indeed one that comes from a run of the auxiliary game with the auxiliary strategy being employed. The run is precisely the one that continues after the finite sequence with Player I playing in the auxiliary game whatever she actually chooses to play in the original game.

What if Player II does not have a winning strategy for reaching such a finite sequence? In that case, he must again imagine a game of the auxiliary kind going on on the side. In this imagined game, he will want to deem Player I to have played an opening move that will be followed (if he plays his auxiliary strategy) by a reply of the second kind, that is, one that declares a quasistrategy for Player II. Then he will want Player I's subsequent moves to be consistent with the quasistrategy she is deemed to have declared. Of course, they may not be, so if they aren't, then he will want to get maximum advantage from this somehow.

There is a natural candidate for the deemed quasistrategy. If Player II has not got a winning strategy for reaching one of the good finite sequences, then (by the Gale-Stewart theorem — the result that open games are determined) Player I has a winning strategy for avoiding them. Better still, she has a canonical quasistrategy for this game, namely the quasistrategy "don't move to a winning position for Player II". This quasistrategy has a very useful maximality property that we shall exploit, which is that if Player I ever departs from it, then Player II immediately has a winning strategy. Roughly speaking, this means that Player II can deem Player I to be playing according to the canonical quasistrategy, and if she ever makes a move that is inconsistent with that quasistrategy, then Player II can change his mind and deem Player I to have played a quasistrategy that provokes one of the good finite sequences.
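On a finite tree one can see both the canonical quasistrategy and its maximality property concretely. Here is a small sketch in Python (the tree, the labels and the function names are all my own invention for illustration, not anything from the proof): we compute by backward induction the set of positions from which Player II can force the play into a given target set, and Player I's canonical quasistrategy then allows exactly the moves that stay outside that set.

```python
# Finite illustration (all names hypothetical): compute Player II's winning
# region by backward induction, then read off Player I's canonical
# quasistrategy "don't move to a winning position for Player II".

def winning_region(children, to_move, target):
    """Positions from which Player II can force the play into `target`.
    children[v]: successor positions; to_move[v]: 1 or 2."""
    win = set(target)
    changed = True
    while changed:
        changed = False
        for v, succs in children.items():
            if v in win or not succs:
                continue
            if to_move[v] == 2 and any(s in win for s in succs):
                win.add(v)          # Player II can move into the region
                changed = True
            elif to_move[v] == 1 and all(s in win for s in succs):
                win.add(v)          # Player I cannot avoid the region
                changed = True
    return win

def canonical_moves(v, children, win):
    """Player I's canonical quasistrategy: every move that avoids `win`."""
    return [s for s in children[v] if s not in win]

# A two-move game: Player I picks a or b, Player II then picks a child.
children = {"r": ["a", "b"], "a": ["a0", "a1"], "b": ["b0", "b1"],
            "a0": [], "a1": [], "b0": [], "b1": []}
to_move = {"r": 1, "a": 2, "b": 2, "a0": 1, "a1": 1, "b0": 1, "b1": 1}
win = winning_region(children, to_move, target={"a1"})
assert win == {"a1", "a"}           # from a, Player II just moves to a1
assert canonical_moves("r", children, win) == ["b"]
```

The maximality property is visible in this toy example: the only move the quasistrategy forbids at the root is a, and if Player I plays it anyway she is immediately in Player II's winning region.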

Let me say that more precisely. If Player II does not have a winning strategy for reaching one of the good finite sequences, then consider the canonical quasistrategy for Player I in the game they define (by which I mean the closed game in which Player I aims to create a sequence with no initial segment among the good finite sequences). For as long as Player I plays consistently with that canonical quasistrategy, Player II will play as follows: run the auxiliary game on the side as if Player I had begun with her actual first point together with the canonical quasistrategy, and play in the original game the corresponding points. Note that this deemed opening move will not permit Player II to reply with a move of the first kind, since the finite sequence in such a reply has to be consistent with the canonical quasistrategy, which explicitly avoids all the good finite sequences.

If at any stage Player I plays a move that is not consistent with the canonical quasistrategy (and this could even be her very first move), then Player II is in a winning position for reaching one of the good finite sequences. But that's great! Player II can revert to the original plan: get to such a sequence, deem Player I to have played an opening move that would have provoked it as a response, and so on.

The point here is this. At some point, Player II has to settle on a deemed opening move for Player I in the auxiliary game. There is nothing to stop him making a decision about this and later changing his mind. However, he mustn't do this infinitely many times, since then it is no longer clear that there is a quasistrategy that can be deemed to be part of Player I's first move. But in the argument above, Player II changed his mind at most once, and was therefore OK.

I've more or less done so already, but let me spell out what happens if Player I does play consistently with the canonical quasistrategy. In that case, Player II plays a game of the auxiliary kind on the side, responding to the deemed opening move and to all subsequent moves that Player I makes in the original game (which can be interpreted as moves in the auxiliary game). Then he plays the corresponding moves in the original game (where "corresponding" means "the same", except in the case of the first move, which involves forgetting the quasistrategy and just playing the point).

k-liftings

At the end of the previous post, I mentioned that we would need a generalization of the result that we have just finished proving, one that is amenable to diagonalization. We will find that we want to build a single lifting out of an infinite sequence of liftings, and to do that we need those liftings to "settle down", just as, if we want a sequence of 01-sequences to converge in the product topology, we need to be able to say, for every n, that all the sequences from some point on agree in their first n terms.
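The "settling down" condition can be made concrete with a toy Python sketch (entirely my own framing, not part of the proof): if the m-th sequence in a list already agrees with all later ones in its first m terms, the limit can be read off along the diagonal.

```python
# Toy version of convergence in the product topology: seqs[m] has its
# first m terms already "settled", so the diagonal gives the limit.

def settled_limit(seqs):
    """Limit of a list of equal-length 0-1 sequences, assuming seqs[m]
    agrees with every later sequence in its first m terms."""
    return [seqs[m + 1][m] for m in range(len(seqs) - 1)]

# Example: the m-th sequence agrees with the all-ones sequence in its
# first m terms and is 0 afterwards.
seqs = [[1] * m + [0] * (10 - m) for m in range(11)]
assert settled_limit(seqs) == [1] * 10
```

This is exactly the mechanism that will reappear when we take limits of liftings: nothing converges pointwise in any strong sense, but every finite initial portion eventually stops changing.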

A k-lifting of one game to another is a lifting with the following additional properties, which basically say that the two games are the same for the first k moves.

1. The first k levels of the tree on which the lifted game is played are identical to the first k levels of the tree on which the original game is played.

2. For every position at distance at most k from the root, the map from the lifted tree to the original tree sends that position to itself.

3. If we are given a strategy for the lifted game, then for the first k moves the strategy it maps to must do the same in the original game as the given strategy does in the lifted game.
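Property 1 is the sort of thing that can be checked mechanically on finite approximations. A throwaway Python sketch (the trees here are made up purely for illustration):

```python
# Trees as prefix-closed sets of tuples; a k-lifting requires the first
# k levels of the two trees to coincide (property 1 above).

def first_levels(tree, k):
    return {t for t in tree if len(t) <= k}

T_orig = {(), (0,), (1,), (0, 0), (0, 1)}
T_lift = {(), (0,), (1,), (0, 0), (1, 0), (0, 0, 0)}  # extra information deeper down
assert first_levels(T_orig, 1) == first_levels(T_lift, 1)   # agreement up to level 1
assert first_levels(T_orig, 2) != first_levels(T_lift, 2)   # but not up to level 2
```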

We have shown that every closed game has a k-lifting to a clopen game for one small value of k. We actually need it to have a k-lifting for every k. This is a straightforward adaptation of what we have already proved, so I won't say very much about it.

But I will say a little. The auxiliary game is defined as follows. As before, the tree on which it is played is almost the same as the original tree, but contains a little more information. However, this time the extra information is provided by the two players not with their first moves, but with the pair of moves that comes just after the first k. (The case discussed earlier was the smallest case.) Thus, after the two players have played their first k moves, Player I plays her next point together with a quasistrategy, and Player II plays his next point together with either a finite sequence that begins with the position reached so far and has all its continuations winning for Player II, or a quasistrategy of his own that will guarantee that Player II loses, assuming that Player I plays consistently with her quasistrategy. In the first case, the two players must play along the declared finite sequence and thereafter can play how they like, and in the second case they must play consistently with their quasistrategies.

The construction of the map taking strategies for the auxiliary game to strategies for the original game is basically the same as before, except that we play the first k moves of the game before getting started.

Limits of k-liftings

The rough idea of how to prove that a countable union of "liftable" games is liftable is this. Let a tree be given, together with the payoff sets of countably many games on that tree, each belonging to a Borel class for which we have already proved determinacy. We then lift the tree to one that makes the first of these sets clopen. The inverse images of the remaining sets under the continuous map that takes the new tree to the old one belong to the same Borel classes as before (since these classes are preserved by inverse images under continuous maps). We now repeat the process for the second set, the third set, and so on.

The idea is to create a lifting for which all the games are simultaneously clopen. If we just had a finite union, that would be straightforward: we would just repeat the process finitely many times. The resulting lifting would give us a finite union of clopen sets, which would be clopen. (There are little details to check, such as that a composition of liftings is a lifting.) However, we have a countable union, so we get an infinite sequence of games, each of which can be lifted to the next. Therefore, we need to construct some kind of inverse limit of all these lifted games. That's where k-liftings come in.

In this section I’ll show how this inverse limit works. After that, the proof will be almost finished.

Suppose then that we have a sequence of games, where for each n the (n+1)-st game is an n-lifting of the n-th. (Later we'll need to add a constant to all of these indices, but again I prefer to prove the initial case to keep the notation simpler and wave my hands when it comes to the general case.) We now want to build a game that is simultaneously a lift of all the games in the sequence.

Incidentally, the small fact I mentioned earlier — that compositions of liftings are liftings — is indeed easy to prove. Suppose, for example, that one game lifts to a second, which lifts to a third, and that we are given a strategy for the third game. Then that strategy maps first to a strategy for the second game and then to a strategy for the first. Given a run of the first game in which the resulting strategy is played, there exists a run of the second game, consistent with the intermediate strategy, that maps to it, and then a run of the third game, consistent with the original strategy, that maps to that.

By the definition of k-liftings, we have a sequence of trees (the trees on which the successive games are played), where for each n the first n levels of the (n+1)-st tree are the same as the first n levels of the n-th tree. We therefore have an obvious definition of a limiting tree: its first n levels are given by the first n levels of the n-th tree. Then the first n levels of every tree from the n-th onwards will agree with the first n levels of the limiting tree.

How do we map the limiting tree to one of the trees in the sequence, say the n-th? Well, consider the restriction of a tree to its first m levels, for some m at least n. Then we need a way of mapping the first m levels of the limiting tree to the n-th tree. This we do by mapping them to the m-th tree using the "identity" mapping (i.e., using the fact that the first m levels of the limiting tree are the same as the first m levels of the m-th tree) and then composing that with the maps that take the m-th tree to the (m-1)-st, the (m-1)-st to the (m-2)-nd, and so on, all the way down to the n-th. (I could summarize what I've just said by saying "take the inverse limit in the obvious way". I probably don't even need the words "in the obvious way" but I'm trying to get away with a slightly imprecise understanding of the notion of inverse limits.)

Now suppose we are given a strategy for the limiting tree. (Note that I don't need to say anything about payoff sets — a strategy is defined just in terms of the tree that the game is played on, regardless of which outcomes are judged to be wins for which player.) We need to map it to a strategy for the first game in the sequence.

Consider, for each n, the part of the given strategy that says what it does for the first n moves of the game (and thus the first moves of the player whose strategy it is). It is straightforward to map this partial strategy to a strategy for the first n moves of the m-th game whenever m is at least n: it just maps to the "same" partial strategy, since the trees agree on those levels. And what about when m is less than n? In that case we have a map that takes strategies for the (m+1)-st game to strategies for the m-th game. Moreover, what the image strategy does for the first n moves depends only on what the original strategy does for the first n moves (at last we see this property coming in in an absolutely critical way, though it is a pretty natural property anyway), so it makes sense to apply this map to partial strategies as well.

So the natural definition of a strategy for the first game built out of the given strategy for the limiting tree is as follows. If you want to know what the new strategy does for the first n moves, you take what the given strategy does for the first n moves, consider it as a partial strategy for the first n moves of the n-th game, and then map it down to a partial strategy for the first n moves of the first game using the map that lifts the first game to the n-th (which is a composition of the maps that lift each game to the next).

In a similar way we can define a strategy for each game in the sequence, and not just the first (just by regarding that game as the beginning of the sequence and doing exactly what we did above — an n-lifting is in particular a k-lifting whenever k is at most n).

Of course, there is something to check if we define it this way: we need to know that what we claim the strategy does for the first n moves is consistent with what we claim it does for the first m moves whenever m is at least n. In other words, we need to check that the partial strategy for the first n moves is the restriction of the partial strategy for the first m moves. But this is indeed the case. To work out the partial strategy for the first n moves, we take what the given strategy does for the first n moves, consider it as a partial strategy for the n-th game, and map it down to the first game. Instead, we could take what the given strategy does for the first m moves, consider it as a partial strategy for the m-th game, map it down to a partial strategy for the first m moves of the n-th game, restrict it to the first n moves, and then map it down to the first game. The result is the same as both of the partial strategies that we want to show are equal.

The same argument shows that all the strategies are well-defined.

How about the lifting property? Let's suppose we run the first game according to the strategy we have just constructed, producing an infinite path in its tree. We would now like to find a path in the limiting tree, consistent with the given strategy, that maps to it.

Recall that the first n levels of the limiting tree are the same as the first n levels of the n-th tree, and that for each n the lifting from the n-th game to the (n+1)-st is an n-lifting. Therefore, to find the path we seek, we simply lift our path to a path in the second tree that is consistent with the strategy defined there, lift that to a path in the third tree that is consistent with the strategy defined there, and so on. The first n points in the path do not change after we have reached the n-th tree, so we end up with a well-defined limiting sequence. For every n, the first n terms of this sequence are consistent with the given strategy, which tells us precisely that the sequence as a whole is consistent with it.
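The construction of the limiting path can be mimicked with finite lists. In this toy Python sketch (the maps and numbers are invented purely for illustration), the n-th lifting map fixes the first n points of a path, so iterating the maps and reading off the diagonal gives a well-defined limit:

```python
# lift_maps[n] sends a path in the n-th tree to a path in the (n+1)-st
# tree while fixing its first n points; the limit is read off the diagonal.

def limit_path(path0, lift_maps):
    paths = [path0]
    for f in lift_maps:
        paths.append(f(paths[-1]))
    # after stage n, the first n points never change again
    return [paths[n + 1][n] for n in range(len(lift_maps))]

# Toy maps: the n-th map fixes the first n entries and bumps the rest.
lift_maps = [
    (lambda n: lambda p: p[:n] + [x + 1 for x in p[n:]])(n)
    for n in range(4)
]
assert limit_path([0, 0, 0, 0], lift_maps) == [1, 2, 3, 4]
```

(The inner `(lambda n: ...)(n)` is just the standard trick for capturing the loop variable; the point is that the n-th entry of the limit is settled once the n-th map has been applied.)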

The above argument may look slightly complicated, but it’s really the definitions that are slightly complicated: there isn’t any point where the proof takes a surprising turn.

An additional remark we need to make is that because we need k-liftings in our inductive hypothesis, we need them in the conclusion as well. This can be achieved as follows. What we have shown is that if, for each n, the n-th game is n-lifted to the next, then every game in the sequence is lifted to the limiting game. A simple modification of the argument shows that if, for each n, the n-th game is (n+k)-lifted to the next, then every game in the sequence is k-lifted to the limiting game.

Dealing with countable unions

The rest of the argument is now quite easy. Suppose we have a tree together with a countable collection of sets, all of which give rise to games that can be lifted to clopen games. Suppose also that we know that any inverse image under a continuous map of one of these sets has the same property. (This is true in particular if each set belongs to some Borel class for which we have proved that all sets in the class can be lifted appropriately.) We then construct a sequence of games as follows. First we 1-lift the original game in such a way that the first set becomes clopen. All the other sets are replaced by the continuous inverse images of themselves in the new tree. We then 2-lift the resulting game in a way that makes the second set (or rather, its inverse image) clopen. Then we 3-lift to make the third set clopen, and so on. Finally, we take the inverse limit discussed in the previous section.

What we have now is a lifting of the original game to a game in which all the inverse images of the given sets are clopen. It follows that the union of these inverse images is at least open. We can therefore do one final lift to a game that makes the union clopen. We have therefore lifted the original union to a clopen set, which is what we were trying to do.

I didn't say that last sentence very well, and in particular was mixing up my types: do we lift games, or trees, or sets, or what? But I hope the meaning is clear: as I remarked earlier, liftings don't depend on payoff sets, so we have found a lifting of the original tree to a new tree — where lifting includes the fact that strategies for the new tree map to strategies for the original tree — such that the inverse image in the new tree of the union we started with is a clopen set.

Again, we need something a bit stronger: for every k we need to be able to k-lift the game in question to a suitable clopen game. But that is straightforward, given the remarks above.

I haven’t actually given the final tiny step of the argument, which is to show that the lifting property is preserved when you take complements. But that follows trivially from the fact that inverse images preserve complements and the fact that complements of clopen sets are clopen.
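In the finite toy setting one can even "check" the complement step. The following Python fragment (sets and map chosen arbitrarily) verifies that inverse images commute with complements, which is the only fact this final step uses beyond the observation that complements of clopen sets are clopen:

```python
# Inverse images commute with complements: the preimage of (B minus A)
# equals the domain minus the preimage of A.

def preimage(f, domain, target):
    return {x for x in domain if f(x) in target}

X = set(range(10))          # domain
B = set(range(5))           # codomain
f = lambda x: x % 5
A = {0, 1}

assert preimage(f, X, B - A) == X - preimage(f, X, A)
```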

I was planning to end this series of posts here, but have subsequently had some further thoughts about liftings that shed a little light — for me at least — on the proof. I have collected those into a fifth post, which I will put up in a few days.


This entry was posted on September 10, 2013 at 9:31 am and is filed under complexity.