Computation, Economics, and Game Theory

Archive for September, 2009

In a previous post I discussed the communication complexity of reaching a pure Nash equilibrium. The communication complexity model aims to capture the basic information-transfer bottleneck between the different players in a game, abstracting away the question of incentives and focusing on the need to communicate the different preferences (utilities) of the players, which are assumed to initially be privately known. The previous post introduced the model of communication complexity and applied it to the question of finding a pure Nash equilibrium (if one exists). The bottom line was that, in the general case, essentially all information about the utilities must be transferred and no "shortcuts" are possible. In a multi-player game this amount of information is exponential in the number of players, implying that convergence to equilibrium is, in general, impractical, and special properties of the game must be used in order to reach equilibrium in reasonable time (one such property was demonstrated: dominance-solvable games).

This post discusses the issue of convergence to a mixed Nash equilibrium, as studied by Sergiu Hart and Yishay Mansour in How Long to Equilibrium? The Communication Complexity of Uncoupled Equilibrium Procedures. The setting, as in my previous post, has each one of the n players holding his utility function u_i : S_1 x ... x S_n -> Q, where each S_j is the set of strategies of player j, and for ease of notation let's have |S_j| = m for all j. We will assume that all utilities are finitely represented, i.e. are rational numbers, say with k-bit numerator and denominator. Can a mixed Nash equilibrium be found by communicating significantly less than m^n * k bits — roughly the size of each utility function? Maybe it can be done by communicating only polynomially many (in n, m, and k) bits? This would be a necessary condition for efficient convergence of any (uncoupled) dynamics between the players.
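To get a feel for the sizes involved, here is a back-of-the-envelope sketch (counting 2k bits per rational entry — numerator plus denominator — is my accounting convention):

```python
# Size of the private inputs: each player's utility function assigns a
# rational number (k-bit numerator and k-bit denominator) to each of the
# m^n strategy profiles.
def utility_function_bits(n, m, k):
    return (m ** n) * 2 * k   # 2k bits per entry

# Even a modest multi-player game is huge compared to poly(n, m, k):
print(utility_function_bits(n=20, m=2, k=8))   # 16777216 bits per player
```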

Before we look deeper into this question, let us look at the similar problem of finding a correlated equilibrium. In this case it turns out that a polynomial amount of communication suffices and thus only a tiny fraction of the private information needs to be transferred. You can see a previous post of mine for background on correlated equilibrium.

The Communication Complexity of Correlated Equilibrium

Let us recall that a correlated equilibrium is a probability distribution p on the strategy profiles S_1 x ... x S_n that satisfies linear inequalities of the following form: for every player i, and every two strategies s_i, s'_i of i: sum_{s_{-i}} p(s_i, s_{-i}) * [u_i(s_i, s_{-i}) - u_i(s'_i, s_{-i})] >= 0, where the sum ranges over all strategy profiles s_{-i} of the other players. The interpretation is that s_i is indeed a best reply to the conditional distribution on s_{-i} given that s_i was recommended.
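As a concrete illustration (the helper name and the example game are mine, not from the paper), here is a minimal sketch that checks these inequalities for a candidate distribution, using the classic game of "chicken":

```python
from itertools import product

# A correlated equilibrium is a distribution p over strategy profiles with,
# for every player i and every pair of strategies s_i, s_alt:
#   sum over s_{-i} of p(s_i, s_{-i}) * [u_i(s_i, s_{-i}) - u_i(s_alt, s_{-i})] >= 0
def is_correlated_eq(u, p, n_strategies, eps=1e-9):
    """u[i][profile] = utility of player i; p[profile] = probability."""
    n = len(u)
    for i in range(n):
        for s_i in range(n_strategies[i]):
            for s_alt in range(n_strategies[i]):
                if s_alt == s_i:
                    continue
                others = [range(n_strategies[j]) for j in range(n) if j != i]
                gain = 0.0
                for rest in product(*others):
                    profile = rest[:i] + (s_i,) + rest[i:]
                    deviated = rest[:i] + (s_alt,) + rest[i:]
                    gain += p.get(profile, 0.0) * (u[i][profile] - u[i][deviated])
                if gain < -eps:
                    return False   # this single inequality is violated
    return True

# Game of "chicken": strategy 0 = chicken, 1 = dare.
u1 = {(0, 0): 6, (0, 1): 2, (1, 0): 7, (1, 1): 0}
u2 = {(0, 0): 6, (0, 1): 7, (1, 0): 2, (1, 1): 0}
# The classic correlated equilibrium: 1/3 each on (0,0), (0,1), (1,0).
p = {(0, 0): 1/3, (0, 1): 1/3, (1, 0): 1/3}
print(is_correlated_eq([u1, u2], p, [2, 2]))  # True
```

Note that each inequality touches only one player's utility function — the observation the low-communication argument below relies on.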

[Edited on Oct 2nd: Albert Xin Jiang was kind enough to point out that the next paragraph is, well, wrong, so I’m striking it out. In order to show that finding a correlated equilibrium can be done with low communication complexity, one must run the Ellipsoid algorithm on the dual LP (which has polynomially many variables) rather than on the primal (which has exponentially many variables). How to do so effectively is shown in a paper by Papadimitriou and Roughgarden, and Hart and Mansour show that the algorithm can be implemented with low communication.]

As being a correlated equilibrium is defined by a set of linear inequalities, we can find a correlated equilibrium using linear programming. Let us see how the players can jointly run the Ellipsoid LP algorithm while keeping the amount of communication in check. The Ellipsoid algorithm runs in time polynomial in n, m, and k as long as it has access to a separation oracle. Such an oracle must be able to answer queries of the following form: given an infeasible point, in our case a candidate distribution p, it must find a constraint, in our case a player i and strategies s_i, s'_i, whose corresponding inequality is violated by p. The main point is that each of the inequalities can be checked by a single player since it depends only on a single utility function. Thus to implement the separation oracle, each player must either report a violated constraint or send a single bit specifying that all his constraints are satisfied — all together taking O(n log m) bits of communication per query. When the algorithm terminates — after a polynomial number of steps (polynomial in n, m, and k) — all players know the answer, which is a correlated equilibrium. (Note that even though a correlated equilibrium is an exponential-sized object, the one found will turn out to have a polynomial-sized support.)

While this low-communication algorithm cannot be viewed as natural dynamics that efficiently converge to a correlated equilibrium, it does show that some efficiently converging dynamics exist. The question of finding natural dynamics then becomes more pressing, and indeed it turns out that natural dynamics exist as well: dynamics based on regret minimization (see my previous post).

Mixed-Nash Equilibria

Now, once we have what to aim for — a similarly low-communication way of reaching a mixed Nash equilibrium — let us look at the problem carefully again. First, we must recall that Nash equilibria may be irrational even if all utilities are rational, so a Nash equilibrium cannot be "printed" as binary numbers in any finite time. Still, in our communication complexity formulation this shouldn't be a problem since any representation of the equilibrium is in principle acceptable as long as it is uniquely determined by the communication. Closely related to this is the fact that the representation of a Nash equilibrium in an n-player game may require exponentially many bits of precision, even if the utilities themselves have only short descriptions. As before, while the required precision itself is not a lower bound on the communication complexity, Hart and Mansour do show how to use this precision to obtain a lower bound. Below I give a different, somewhat stronger proof.

Let us start with the following 2-player bi-strategy game, where 0 < r < 1 is an arbitrary parameter.

r, 0      0, r
0, 1-r    1-r, 0

It is easy to see that the only Nash equilibrium of this game has each player choosing his second strategy with probability exactly r and his first with probability 1-r. This family of games suffices for giving a tight lower bound for the special case of two-player (n = 2) bi-strategy (m = 2) games.
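The indifference conditions behind this claim can be verified with a short sketch (it checks that the claimed profile is an equilibrium; uniqueness needs the separate argument that no pure equilibrium exists):

```python
from fractions import Fraction

# The 2x2 game parameterized by r (row player's payoff, column player's payoff):
#   (r, 0)     (0, r)
#   (0, 1-r)   (1-r, 0)
# Claimed equilibrium: each player plays his second strategy with probability r.
def check_equilibrium(r):
    u1 = [[r, 0], [0, 1 - r]]   # row player's payoffs
    u2 = [[0, r], [1 - r, 0]]   # column player's payoffs
    p = [1 - r, r]              # both players mix (first, second) = (1-r, r)
    # Expected payoff of each pure strategy against the opponent's mix:
    row_payoffs = [sum(q * u1[a][b] for b, q in enumerate(p)) for a in range(2)]
    col_payoffs = [sum(q * u2[a][b] for a, q in enumerate(p)) for b in range(2)]
    # In a fully mixed equilibrium both pure strategies must yield equal payoff.
    return row_payoffs[0] == row_payoffs[1] and col_payoffs[0] == col_payoffs[1]

print(check_equilibrium(Fraction(3, 8)))  # True
```

Using Fraction keeps the check exact, which matters since the whole point is that r may need many bits of precision.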

Lemma: The communication complexity of finding a Nash equilibrium in two-player bi-strategy games, where all utilities are given by k-bit integers, is Θ(k).

The upper bound is trivial and the lower bound is implied by games of the previous form where r ranges over all fractions of the form t/2^k, for integer 0 < t < 2^k (after scaling by 2^k, all utilities are k-bit integers), since each of these games has a different (unique) answer. This gives 2^k - 1 different such answers, which thus requires at least log(2^k - 1) = Ω(k) bits of communication just to be able to produce all possibilities.
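The counting argument can be made concrete (the choice k = 10 is arbitrary):

```python
import math

# Each r = t / 2^k (for integer 0 < t < 2^k) yields a game with a different
# unique equilibrium, so a protocol's transcript must distinguish all of
# them: it needs at least log2(#answers) bits.
k = 10
num_answers = 2**k - 1          # one distinct equilibrium per value of t
min_bits = math.ceil(math.log2(num_answers))
print(min_bits)                  # 10, i.e. Omega(k) bits of communication
```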

The dependence of the communication complexity on m, the number of strategies of each player, is still unclear. However, Hart and Mansour show that the dependence on the number of players, n, is exponential. The basic idea is that with n players we can get doubly-exponentially many different answers. A clean way to show this is to "simulate" utilities of representation length exponential in n, and then invoke the previous bound.

Win-Lose Simulation Lemma: For every n-player game with utilities that are k-bit integers, there exists an (n + O(log k))-player game, with the following properties:

The new game is a win-lose game, i.e. all utilities are 0 or 1.

Each of the first n players has the same set of strategies as in the original game. The utility of each of these players is fully determined (in an easy manner) by his utility in the original game.

Each of the final O(log k) players has 2 strategies. The utilities of each of these are constants not depending on the original game at all.

The Nash equilibria of the new game are in 1-1 correspondence with those of the original game: the mixed strategies of each of the first n players are identical, while the mixed strategies of the final players are fixed, independent of the original game.

From this we immediately get our theorem.

Theorem: The communication complexity of finding a Nash equilibrium in n-player bi-strategy win-lose games is 2^Θ(n).

The upper bound is trivial (each utility function can be fully described using 2^n bits) and the lower bound is implied by the Ω(k) lower bound on 2-player games with k-bit utilities via this reduction, which uses 2 + O(log k) players and thus lets us take k = 2^Ω(n).

Proof of the Win-Lose Simulation Lemma

It remains to prove the simulation lemma. We will do so in three steps: first construct a fixed game whose unique equilibrium has doubly-exponentially small probabilities, then use this game to simulate games with utilities that are all powers of two, and finally use these to simulate general games with integer utilities.

The first step of the reduction is the following construction:

Construction: for each k there exist 2k-player win-lose bi-strategy games such that in the unique Nash equilibrium, for each i = 1, ..., k, players 2i-1 and 2i choose their first strategy with probability exactly 2^(-2^(i-1)).

Proof: for k = 1, this is exactly "matching pennies", i.e. the game described above for r = 1/2 (after scaling all utilities by a factor of 2 so that they become 0 or 1). Now for the induction step we start with the k-1 pairs of players promised by the induction hypothesis, and we keep these players' utilities completely independent of the soon-to-be-introduced 2 new players, so we know that in every Nash equilibrium of the currently constructed 2k-player game these pairs will still mix exactly as stated above for k-1. In particular, the last pair of players in the (2k-2)-player game choose their first strategy with probability 2^(-2^(k-2)). The point is that with these players in place, we can easily simulate the game above for r = 2^(-2^(k-1)) (labeling the new players' strategies so that the strategy played with probability r is the "first" one). To simulate an "r entry", we define the utility to be 1 when the last pair of players in the (2k-2)-player game both play their first strategy — which happens with probability (2^(-2^(k-2)))^2 = 2^(-2^(k-1)) = r — and 0 otherwise, while to define a "1-r entry" we define the opposite.
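The doubly-exponential decay of these probabilities can be checked in a few lines (a sketch of the recursion only, not of the full game construction):

```python
from fractions import Fraction

# In the recursive construction, pair i plays the r-game with
# r_i = (r_{i-1})^2: the utility-1 event is that BOTH members of the
# previous pair play "first", and squaring doubles the exponent each time.
r = Fraction(1, 2)   # pair 1: plain matching pennies
for i in range(1, 5):
    assert r == Fraction(1, 2**(2**(i - 1)))
    print(f"pair {i}: probability 2^-{2**(i - 1)}")
    r = r * r        # probability that both members of pair i play "first"
```

So 2k players already pin down a probability of 2^(-2^(k-1)) — precision that would take 2^(k-1) bits to write down.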

Our next step is to simulate (in the sense of the lemma) games where all utilities are powers of two in the range 1, 2, 4, ..., 2^(2^k - 1), or equivalently, after scaling, in the range 2^(-(2^k - 1)), ..., 1/2, 1. This is done by adding to the original players the 2k players defined in the construction above. We can now replace each utility of the original players of the form 2^(-t), where t's binary representation is t_k...t_2 t_1, with a utility of 1 whenever, for every i with t_i = 1, new player 2i-1 plays his first strategy (and 0 otherwise). This happens with probability prod_{i : t_i = 1} 2^(-2^(i-1)) = 2^(-t), as needed.
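As a sanity check of this step (the bit-indexing convention is mine), the product over the set bits of t indeed realizes every probability 2^(-t):

```python
from fractions import Fraction

# One designated player from pair i plays "first" with probability
# 2^(-2^(i-1)).  Requiring this event for every set bit of t makes the
# utility-1 event happen with probability 2^(-t): the exponents add up.
def simulated_probability(t, k):
    prob = Fraction(1)
    for i in range(1, k + 1):
        if (t >> (i - 1)) & 1:                 # bit i-1 of t is set
            prob *= Fraction(1, 2**(2**(i - 1)))
    return prob

k = 4
for t in range(1, 2**k):
    assert simulated_probability(t, k) == Fraction(1, 2**t)
print("all powers 2^-t realized for t = 1 ..", 2**k - 1)
```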

Our final step is simulating games with utilities that are general K-bit integers, for K = 2^k, by games that have all their utilities powers of two, as in the previous step. This is done by adding another 2k bi-strategy players, paired into k independent "matching pennies" games, so that each of them always mixes evenly between his two strategies. To simulate a utility of x = sum_j x_j 2^j in the original game, we give, in the new game, a utility of 2^j whenever x_j = 1 and the new players played a sequence of strategies which is the binary representation of j (and 0 otherwise). The expected value is exactly sum_{j : x_j = 1} 2^(-k) * 2^j = x/2^k, completing the simulation (as the scaling by a factor of 2^(-k) does not matter).
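A small sketch of the expected-value computation in this step (the encoding of j by one designated player per pair is my illustration):

```python
from fractions import Fraction
from itertools import product

# k fair-coin pairs produce a uniform k-bit string; interpreting it as the
# number j, we award utility 2^j exactly when bit j of x is 1.  The expected
# utility is then sum over set bits j of 2^(-k) * 2^j = x / 2^k.
def expected_utility(x, k):
    total = Fraction(0)
    for bits in product([0, 1], repeat=k):     # each string has probability 2^-k
        j = int("".join(map(str, bits)), 2)
        if (x >> j) & 1:                        # bit j of x is set
            total += Fraction(1, 2**k) * 2**j
    return total

k = 3                       # strings index j = 0 .. 7
x = 0b10110101              # an 8-bit (= 2^k-bit) integer, x = 181
assert expected_utility(x, k) == Fraction(x, 2**k)
print(expected_utility(x, k))
```

Note how k coin pairs suffice for 2^k-bit integers — this is where the added player count stays logarithmic in the bit length.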

What remains?

The previous lower bound leaves much to be desired: while it does give an exponential lower bound in the input parameters, this is done at the cost of having an exponentially long output. If we also take the output length into account (in any representation) then the bound is no more than linear in it — trivial. Indeed, in the formal description of the computational Nash equilibrium problem, one formally has an input parameter ε that specifies how close to an equilibrium we demand the output to be, and the algorithm is allowed to run in time that is polynomial in log(1/ε). (See a previous post on the subtleties of this approximation parameter ε.) Taking this approach, the lower bound is trivial too as it is bounded from above by log(1/ε).

Thus, the main open problem that remains is determining the communication complexity of finding a mixed Nash equilibrium when the precision required at the output is only polynomial in the other parameters.

What is considered academic work? That is, what is society paying us (university professors) for? What should be considered as criteria for academic promotion and hiring?

It seems to me that there has been a continuing narrowing in the way that the academic establishment is answering this question. It is getting closer to being narrowly defined as "international journal publications" (in CS, also conference publications). The two other pillars of academic duty are losing their gravitas: teaching is often belittled in practice (despite much lip service to the contrary) and academic administrative duties (department chairing, editorships) are losing their prestige. Other forms of publication (national journals, popular press, even academic books) are getting less credit. (It used to be standard for Israeli economists to publish in Hebrew on the Israeli economy, but little academic credit is given for this anymore.) Founding commercial companies or consulting for them is frowned upon in some places, and slightly encouraged in others, but certainly not considered part of the academic work. Having a wide disciplinary knowledge (or even being a Renaissance man) is not expected or rewarded. Being involved with public policy, K-12 education, public opinion, or politics is definitely not considered part of one's academic mandate.

This is all not new. Am I just ranting here? Maybe. But what I don't quite understand is why we, tenured professors, keep going in this direction. Regarding our own work, we can do whatever we want — we are tenured — so why are we so "academic"? Regarding others — it is we who sit on all these promotion, hiring, and tenure committees — why don't we take the wider view more often?

The DoubleClick Ad Exchange is a real-time marketplace to buy and sell display advertising space. By establishing an open marketplace where prices are set in a real-time auction, the Ad Exchange enables display ads and ad space to be allocated much more efficiently and easily across the web. It’s just like a stock exchange, which enables stocks to be traded in an open way.

There are some existing competitors, but Google's entry may be game-changing. Lots of research problems suggest themselves.

An intriguing paper titled "A Theory of Goal-Oriented Communication" by Oded Goldreich, Brendan Juba, and Madhu Sudan has recently been uploaded to the ECCC, expanding a line of work started by the last two authors here and here. The basic issue studied is how it is possible to effectively communicate without agreeing on a language in advance. The basic result obtained is that, as long as the parties can "sense" whether some progress is made toward their goals, prior agreement about a language is not necessary and a "universal" protocol exists. My nerdier side cannot help but think about the application to communicating with an alien species (which I bet the authors did not mention on purpose).

It is obviously hard to tell yet whether this is the beginning of a great new intellectual journey or just a blind alley but, in any case, I suppose that this is the kind of work sought by the ICS conference. Despite the game-theoretic-sounding notion of the goals of the communicating parties that underlies the model, notions of rationality are conspicuously absent.

Much of the recent theoretical computational work on market equilibrium uses the so-called Fisher model (see chapters 5 and 6 of the AGT book or talks and papers on Vijay Vazirani's home page). The model is somewhat simpler than Arrow-Debreu markets in that one party, the seller, brings into the market all goods except for money, which is the only thing that the others bring. This seemingly small variation seems to make the model more tractable. In his ALGO invited talk, Vijay opined that Irving Fisher, the model's father, is somewhat forgotten today, probably due to his famous words just days before the stock market crash of 1929: "Stock prices have reached what looks like a permanently high plateau." Vijay pointed out that Fisher's comments from just a little later are captured on YouTube.