Monday, 5 December 2016

The basic idea behind a linear public good game is as follows: you have a group of people, typically four in the lab, who are each endowed with a certain amount of money, say $5. Each group member is independently asked how much they want to contribute towards a public good. Every dollar contributed results in everyone in the group getting a return of, say, $0.40.

From an individual's perspective contributing towards the group looks like a bad deal because you contribute $1 and only get back $0.40. Note, however, that from the group's perspective a contribution of $1 results in a total return of 4 x 0.4 = $1.60. So, from the group's perspective it is good to contribute. For instance, if all four group members contribute $5 then each gets 4 x 5 x 0.4 = $8. And $8 is better than $5.
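The arithmetic can be made concrete with a minimal sketch of the payoff calculation (the function and parameter names are mine, chosen for illustration):

```python
# Payoffs in a linear public good game: each player keeps whatever
# they do not contribute and gets a fixed return (here $0.40) on
# every dollar contributed by anyone in the group.

def payoffs(contributions, endowment=5.0, mpcr=0.4):
    """Return each player's payoff: money kept plus their share
    of the public good (mpcr = marginal per-capita return)."""
    total = sum(contributions)
    return [endowment - c + mpcr * total for c in contributions]

# Nobody contributes: everyone simply keeps their $5.
print(payoffs([0, 0, 0, 0]))   # [5.0, 5.0, 5.0, 5.0]

# Everyone contributes $5: each gets 4 x 5 x 0.4 = $8.
print(payoffs([5, 5, 5, 5]))   # [8.0, 8.0, 8.0, 8.0]

# A lone free-rider among full contributors does best individually.
print(payoffs([0, 5, 5, 5]))   # [11.0, 6.0, 6.0, 6.0]
```

The last line shows the individual incentive problem in miniature: the free-rider earns $11 while the contributors earn only $6 each.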

'Standard economic theory' gives a very simple prediction in a linear public good game. Namely, the game has a unique Nash equilibrium in which everyone contributes $0. This result follows from the basic logic that it is not in a person's material interest to give $1 and get back $0.40. But, what actually happens in the lab? Well, many people do contribute. Indeed, the absurdity of the idea that we should all contribute zero is captured in the famous Monty Python sketch about a merchant banker.

So, what happens if we try to capture people's desire to contribute for the good of the group? This depends on why people contribute. To illustrate, suppose that there are two types of individual - selfish and altruist. The selfish do best to contribute zero and maximize their own payoff while the altruists do best to contribute $5 and maximize the group payoff. In this case there is still a unique Nash equilibrium but now with positive contributions: the selfish contribute $0 and the altruists contribute $5. Total contributions will depend on the number of altruists in the group.

Altruists, however, seem pretty rare in the lab. What we typically observe are conditional co-operators. These are people who will contribute if others do. Suppose that everyone in the group is a conditional co-operator. Then it is still a Nash equilibrium for everyone to contribute $0 - if nobody else contributes then I don't want to either. But, it is also a Nash equilibrium for everyone to contribute $5 - if others contribute $5 then I am willing to as well. So, we get multiple equilibria, including ones with positive contributions. In this case, exact contributions will depend on which equilibrium the group manages to coordinate on.

Finally, consider a more realistic setting in which we have a mix of selfish types, altruists, conditional co-operators and everything in between. Then we have to work through which outcomes are Nash equilibria and which are not. For instance, in a group with one selfish person and three conditional co-operators, are the conditional co-operators willing to contribute and 'put up' with the selfish person in their midst?

In a recent paper, 'What are the equilibria in public-good experiments?', Irenaeus Wolff gives us some answers to this type of question. To do so, he asked experimental subjects to fill in a complete contribution table saying how much they would want to contribute as a function of what others are contributing. You can then group people together and work out what contributions would be consistent with Nash equilibrium.
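As a rough sketch of how such a check can work, treat each subject's contribution table as a best response to the (rounded) average contribution of the others. The tables below are invented for illustration; they are not Wolff's data:

```python
# Each 'table' maps the rounded average contribution of the other
# group members (0-5) to the player's own preferred contribution.

def is_nash(profile, tables):
    """A profile is a Nash equilibrium if every player's contribution
    equals their table's response to the others' average."""
    n = len(profile)
    for i, c in enumerate(profile):
        others_avg = (sum(profile) - c) / (n - 1)
        if tables[i][round(others_avg)] != c:
            return False
    return True

selfish = {a: 0 for a in range(6)}       # contribute 0 whatever others do
conditional = {a: a for a in range(6)}   # match the others' average

# One selfish player among three conditional co-operators:
tables = [conditional, conditional, conditional, selfish]
print(is_nash([0, 0, 0, 0], tables))  # True: zero contributions survive
print(is_nash([5, 5, 5, 5], tables))  # False: the selfish player deviates
```

Grouping real contribution tables this way is how one can work out which contribution profiles are, and are not, consistent with equilibrium for a given group.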

A somewhat depressing result is that a small proportion of selfish people can be enough to make zero contributions the unique Nash equilibrium. For instance, in one treatment only 23% of subjects were classified as selfish and yet in 60% of groups the unique Nash equilibrium would be zero contributions. So, you only need one 'bad egg'. There is, however, a more optimistic side to this in that 40% of groups should be able to do better than zero contributions. Indeed, in the two other treatments considered the proportion of groups that can do better than zero contributions is 60% and 70% respectively.

To say that groups can do better than zero contributions is, however, different to saying they will do better. In the lab we often see groups converge on zero contributions. Not only, therefore, do we need enough conditional co-operators in the group we also need them to somehow coordinate on a good outcome. The option of punishing those who free-ride is one way that has been shown to work.

Friday, 18 November 2016

One of the more bizarre aspects of the recent US Presidential election campaign was the ability of Donald Trump to tell more lies and half-truths than most of us would do in a lifetime and yet still claim that Hillary Clinton could not be trusted in office. Even more bizarre was the fact that he got away with it! How can we possibly make sense of this? Some might point to a dumb electorate. I think we can learn more by looking at guilt aversion.

The concept of guilt aversion was formally introduced into game theory by Pierpaolo Battigalli and Martin Dufwenberg with a paper published in the American Economic Review in 2007. (I should also mention a paper by Gary Charness and Martin Dufwenberg in Econometrica in 2006.) The basic idea is that a person only needs to feel guilt if they disappoint the expectations of others. To illustrate, consider Donald Trump's 'promise' to lock up Hillary Clinton. Nobody realistically expects Trump to fulfil this promise. But, because nobody realistically expects him to lock her up, he needs to feel no guilt in making and breaking the promise. It is all cheap talk.

If everything is all cheap talk then Trump can say what he likes, nobody can believe it, he can predict that nobody will believe it, and everything works out fine! And it is noticeable that post-election none of his supporters seem particularly upset that many campaign promises have fallen by the wayside. The only promise people seem to really care about is his promise to try to make America great again. If he does not fulfil that promise then he really should feel guilt.

Hillary Clinton, by contrast, seems to have been judged by different standards. She is expected to tell the truth and nothing but the truth. Which is presumably why the email controversy took on such a huge importance. Any sign that she had let down expectations was taken as a signal that she could not be trusted.

Ultimately, though, this meant we had a good idea of what Hillary was intending to do as President. With Donald Trump, by contrast, we don't have a clue. We have a good idea about some things he will not do - lock up Hillary, build a wall on the Mexican border etc. - but beyond that it is all uncertainty. Usually uncertainty is a bad thing and Trump would not have had a chance. We seem to live in a world, however, where uncertainty is becoming ever more appealing to voters. That opens the door for a whole lot more bullshit in the future.

Interestingly, the evidence of guilt aversion in the experimental lab is mixed. In particular, there is a fair amount of evidence for lie aversion. The basic idea here is that a person feels bad if they lie. In this case it is irrelevant whether or not anybody expects the person to keep their promise, the person simply does not like to break their promise. So, Donald Trump would feel averse to making a promise he knows he cannot keep. Which does not sound much like Donald Trump.

Wednesday, 19 October 2016

A few days ago we heard the story of how a waitress rescued an 86 year old lady who had been stuck in her bath for four days. The waitress contacted the police after becoming concerned that Doreen had not come in for her usual lunch and wine. A story with a happy ending.

A story with a not so happy ending is the infamous murder of Kitty Genovese in 1964 in New York. This murder caught the public's attention because of the supposed number of witnesses who did nothing to stop the crime. The exact details of what happened are debated. One thing is, however, for certain: Several people must have seen or heard the attack and none of them called the police.

To try and make sense of these conflicting stories let us look at a simple game-theoretic model. Suppose that there is someone called Doreen who needs rescuing and there are n witnesses who can call the police. If (at least) one person calls the police then Doreen is rescued and all the witnesses feel relief equal to B. Calling the police incurs a cost of c < B. Note that calling the police is a public good because everyone, not just the caller, benefits.

If n = 1 and so there is only one witness then it is a simple decision. The witness should call the police because the benefit of doing so exceeds the cost. So, Doreen is saved!

If n > 1 and so there are multiple witnesses, then things become trickier because of a free-rider problem. In short, each witness would prefer that another witness calls the police. That way they get benefit B without incurring the cost c. If we assume that all of the witnesses think alike then we need to look for a symmetric Nash equilibrium where each witness independently calls the police with some probability p. In equilibrium we require p to be at a level where all the witnesses are indifferent between calling or not calling the police. (To see why we need this indifference, suppose witnesses prefer calling. Then everyone will call. But this cannot be an equilibrium because we only need one person to call. Similarly, if witnesses prefer not calling then nobody calls. But this cannot be an equilibrium because each witness would want to rescue Doreen.) Let us look at the incentives of a typical witness called Sonia.

If Sonia calls then she is guaranteed payoff B - c because Doreen is rescued.

If Sonia does not call the police then she avoids cost c but relies on someone else calling. There are n - 1 other witnesses and so the probability that none of them call is (1 - p)^(n-1). This means the probability that at least one calls is 1 - (1 - p)^(n-1). The expected payoff from not calling is, therefore, B(1 - (1 - p)^(n-1)).

Equating the payoff from calling with the payoff from not calling we get an equilibrium probability of calling:

p = 1 - (c/B)^(1/(n-1))
Unsurprisingly, this probability is decreasing in n. In other words, the more witnesses there are, the lower the probability that any one witness will call. The following graph illustrates what happens when c/B = 0.1.

The really crucial question, though, is what happens to the overall probability of someone calling. What chance does Doreen have of being rescued? The probability that at least one person calls is:

1 - (1 - p)^n = 1 - (c/B)^(n/(n-1))
This is also decreasing in n. So, the more witnesses there are the less likely it is that Doreen gets saved! The following graph plots the probability of her being saved when c/B = 0.1.
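The two formulas can be checked numerically. A short sketch, using the same cost-benefit ratio c/B = 0.1 as the graphs:

```python
# Equilibrium probability that any single witness calls, and the
# overall probability that at least one of the n witnesses calls.

def call_probability(n, c_over_b):
    """p = 1 - (c/B)^(1/(n-1)); a lone witness simply calls."""
    if n == 1:
        return 1.0
    return 1 - c_over_b ** (1 / (n - 1))

def rescue_probability(n, c_over_b):
    """1 - (1 - p)^n = 1 - (c/B)^(n/(n-1))."""
    if n == 1:
        return 1.0
    return 1 - c_over_b ** (n / (n - 1))

for n in [1, 2, 5, 10, 20]:
    print(n, round(call_probability(n, 0.1), 3),
             round(rescue_probability(n, 0.1), 3))
```

Both columns fall as n grows: with two witnesses Doreen is rescued with probability 0.99, with twenty it is down to about 0.91, and the decline continues towards 1 - c/B = 0.9 as n grows large.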

To many this seems like a counter-intuitive result. It shows, though, the dangers of free-riding. In terms of producing public goods, more is not necessarily better.

Saturday, 8 October 2016

In economics there are two diametrically opposed ways of viewing politicians. For the most part we assume the benevolent social planner who acts to maximize social welfare. But, when it comes to specifically analysing political decision making we typically assume that politicians are just like everyone else - out to maximize their own payoff. If a politician's objectives coincide with those of society then we have no problem. But, there are, of course, lots of reasons to suppose that political and societal objectives do not coincide.

A particularly important issue is that of electoral survival. Clearly, a politician needs to get elected in order to make a living. That means it is in a politician's interest to do things that go down well with the electorate. At first sight you might think that this aligns the incentives of the politician with those of society because the politician needs to do good things to get elected. There is, however, a problem of asymmetric information. In short, voters may not know what is good for them. This is not to say that voters are dumb. It is merely to reflect that voters have busy lives and cannot be expected to be informed about everything. That is why we have experts to advise and politicians to make informed decisions.

So, what happens if a gap emerges between what voters want and what is good for them? We can hope for the benevolent social planner who does what is best. More realistically, however, we might have to accept that politicians are going to do what is popular. This, unfortunately, seems to explain why the United Kingdom is slipping ever further into disaster/farce territory.

The big issue in the UK is that of immigration. Just about every report or bit of research on the topic has shown that immigration is good for the UK. Immigrants create jobs, pay taxes, provide vital skills; international students work hard for a better future etc. etc. The popular perception, however, is that immigration is bad. Indeed, immigration seemingly has to take the blame for just about all of society's ills. We have, therefore, a worrying gap between reality and popular perception. In such circumstances, we might hope for politicians who do what is right and defend immigration. Unfortunately, though, the UK seems to be veering even more towards popularity politics. The Conservative Party Conference last week, for instance, appeared to have a strong anti-immigration vibe.

In her conference speech, the Prime Minister, Theresa May, attacked the liberal elite who don't get Brexit. Surely, however, this is to miss the point. That 52 percent of voters wanted to leave the EU is a signal that something is wrong. But what? Given that immigration is good for this country, surely we need to find the actual problem? The actual problem, I would suggest, has more to do with underfunding of public services and a basic mismatch between what people want and what they can realistically expect (given that money does not grow on trees). 'Tackling the immigration problem' is just going to make things worse. In particular, you do not tackle inequality with policies that will ultimately make the poor poorer.

A more nuanced view of things is, however, very hard to sell to the electorate. Moreover, Theresa May's popularity seems sky high at the moment and so who can blame her for playing popularity politics. From an economic perspective she is doing exactly what we would expect her to - maximizing her own payoff. More important is how she can use this popularity. The comparison with Margaret Thatcher is particularly interesting. Margaret Thatcher was quite clever in mixing popular politics (reclaiming the Falklands) with unpopular but sensible policies (taking on the unions). This allowed her to square the circle of winning elections and good policy. Let's hope for something similar again.

Sunday, 11 September 2016

Standard economic theory takes a very deterministic view of the world in that we solve for a unique equilibrium and expect that to describe what will happen. Game theory offers an alternative perspective in that most games have multiple equilibria and there is no reason to suppose that one of these equilibria is any more likely to describe what will happen than another.

For a long while the existence of multiple equilibria was seen as a 'problem'; surely the objective of economics was to say what would happen and not what might happen? Experimental economics, however, has shown that multiple equilibria are not so much a problem as a reflection of reality. When we observe two seemingly identical groups of people we often find they end up doing very different things. For instance, one group might end up cooperating and the other not cooperating.

Another way of looking at the multiple equilibrium problem is to say that small, hidden, seemingly irrelevant things can make a big difference. This view was popularized by Paul Ormerod and his book Butterfly Economics. But, the idea that economic events can turn on a knife-edge still seems relatively ignored by a profession that prefers a more straightforward, deterministic view of the world. This is a shame because asking 'what if' questions can be interesting and informative. As a case in point consider the mysterious role of confidence.

A few weeks ago Usain Bolt secured his position as the Greatest. Clearly Bolt is fit and strong. A crucial part in his success, though, is undoubtedly his ability to stay supremely relaxed under pressure. Indeed, a year ago, at the Athletics World Championships in Beijing, the Greatest tag looked under threat. Bolt stumbled through the semi-finals of the 100 metres and it looked as though Justin Gatlin would surely win gold. Bolt, though, got it right when it mattered. Later, Bolt praised his coach for reminding him that he had nothing to fear. Confidence was the difference between winning and losing. How different things could have been.

As a second example consider the remarkable success of Leicester City in winning the Premier League. Early in the season Leicester played Aston Villa. With 20 minutes to go Aston Villa were cruising, two goals to the good, and playing great football. Then, out of nowhere, Leicester scored three goals and won. This was surely a pivotal moment in the season. Leicester grew in confidence and became unstoppable while Villa fell into despair and hardly won another point for the rest of the season. If Villa had held on to win that game how different would things have been? Confidence can turn an ordinary team into world-beaters or no-hopers.

And finally, let's look at Brexit. Before the referendum on Britain's membership of the EU, just about every economist in the land was predicting that Brexit will harm the UK economy. It surely will. For now, though, large parts of the economy still seem to be trundling along quite nicely. Which has given prominent Brexiters the chance to pour scorn on economists. The missing link is confidence. A majority of people voted for Brexit and so they, presumably, are happy and expecting great things. For the rest of us there is the realisation that we are still a long way from leaving the EU and so things might hold together for a while yet. The vote to leave the EU has not, therefore, caused a sudden and dramatic change of fortunes. Reality will have to kick in at some point but it is consumer and business confidence that will determine when. And there is no predicting when that might be.

Monday, 22 August 2016

One thing that causes a lot of confusion, both in popular and academic debate, is the distinction between a pure public good and a common resource good. Concepts like the tragedy of the commons, the prisoner's dilemma, and free-riding get used far too liberally, and often incorrectly. So, here is one way of trying to untangle the differences:

Suppose that I decide, out of the goodness of my heart, to provide a 'gift' to my local community. For example, I build a children's play-area, or renovate the village hall, or play very loud grunge music. This is a form of public good in the sense that it is non-excludable - anyone in the village is free to enjoy my gift. What distinguishes a pure public good from a common resource good is rivalry in consumption - does one person's enjoyment of my gift depend on the number of other people who make use of the good? Here are some examples:

Loud music is a pure public good (or public bad) because one person opening (or closing) their window wide to enjoy the music does not in any way change the amount of the good available to others. In this case there is no rivalry in consumption.

A children's play-area can be a pure public good or a common resource good depending on how big it is. If it is big enough to easily accommodate all the children in the village then it is a pure public good because, again, one child using the play-area does not change the amount of the good available to others. More realistically, the play-area will be of a size that it can get crowded. For instance, children might have to wait to get on the swings or slide. Then we have some rivalry in consumption and the play-area is a common resource good. Clearly, the amount of rivalry can vary from low to high depending on the size of the play-area and the number of children who might want to use it.

If I drop a £10 note on the village green then it is a private good (with a price of zero). In this case there is very high rivalry in consumption because the first person to find the £10 note denies all others the chance to consume it.

More generally, we can see that there is a continuum from the very high rivalry of a private good to the zero rivalry of a pure public good. So, why do we so often see confusion between pure public goods and common resource goods? I think it is driven largely by a failure to recognise the distinct issues that arise:

When looking at a pure public good the key issue we study is how much of the good will be provided. How big a play-area will we build, how loud will the music be, etc. To free-ride in this setting is to not contribute very much towards the public good. There is little point in questioning how much people will use the public good because that is largely self-evident.

With a common resource good, by contrast, our focus is on how much people will use an existing public good. How much will people use the play-area, fish in the oceans, etc. To free-ride in this context is to consume a lot.

It is tempting to say that these two problems are similar. Indeed, the literature on framing often treats contributing to public goods and withdrawing from a common resource as strategically equivalent. They are, however, not equivalent, as pointed out by Jose Apesteguia and Frank Maier-Rigaud in an article in the Journal of Conflict Resolution. For instance, if I contribute to a public good then everyone benefits. If I withdraw from a common resource then only those using the resource will suffer. This is a subtle but important difference.
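A toy calculation makes the difference in incidence concrete (the numbers are purely illustrative):

```python
# One unit contributed to a public good benefits all n group members;
# one unit of restraint in a common resource benefits only those who
# actually use the resource.

def public_good_benefit(n, per_head=0.4):
    """Total benefit from one contributed unit: all n members gain."""
    return n * per_head

def common_resource_benefit(users, per_head=0.4):
    """Total benefit from one unit left in the resource: only the
    users gain."""
    return users * per_head

# A group of four, only two of whom use the common resource:
print(public_good_benefit(4))       # 1.6
print(common_resource_benefit(2))   # 0.8
```

Treating the two settings as strategically equivalent ignores that the incidence of the benefit differs whenever not everyone uses the resource.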
Rather than attempt 'shortcuts' it seems better and safer, therefore, to treat the issues of contributing towards public goods and withdrawing from common resource goods as distinct. Then we are less likely to ignore rivalry and draw illusory parallels.

Sunday, 10 July 2016

The Paris Climate Change Conference last December was hailed as a great success. As countries signed the Agreement in April there seemed an even stronger sense of optimism. UN Secretary-General Ban Ki-moon was quoted as saying: "Paris will shape the lives of all future generations in a profound way - it is their future that is at stake." A recent article in Nature entitled 'Paris Agreement climate proposals need a boost to keep warming well below 2°C' provides a more sombre and depressing view. And the sad reality is that the Paris Agreement is probably a step backwards, not a historic step forwards, in the fight against climate change.

The goal of previous climate agreements, like the Kyoto Protocol, was to provide binding commitments to reduce dangerous emissions. Kyoto, and all attempts at a follow-up agreement, failed dismally. But, at least the objective was a sound one - to tackle the problem. In Paris the objectives were far less ambitious. Essentially the goal was to agree what the problem is. And, truth be told, there was not much agreement even on this, as countries argued over whether humanity could survive a 1.5°C or 2°C rise in global temperatures.

In terms of how to actually tackle the problem there was very little in Paris to get excited about. Each country basically suggested ways that it could voluntarily cut emissions. Note that there is no viable compulsion for countries to do what they said. Moreover, even if every country sticks to its word there is still no chance of keeping temperatures below the 2°C cut-off. So, all we can basically take from the Paris Agreement is that we are doomed: we agree that there is a problem but have no plan to tackle it! This interpretation is, of course, not the one being espoused by politicians, campaigners, or the media. And therein lies my biggest concern, because we do not have time for false optimism if climate change is going to be brought under control.

So, why is it so difficult to reach a global climate agreement? Averting climate change is similar to providing a threshold public good. If we cut back on emissions enough then everything should be fine. If we do not cut back enough then expect the consequences. In principle, threshold public goods can be voluntarily provided. In particular, there are Nash equilibria where nations acting independently avert catastrophe. The intuition behind this result is one of criticality - if nations collectively do just enough to avert catastrophe then it is in each country's interest to do its bit because it is critical to avoiding catastrophe. Most of the economic literature on climate agreements has pretty much taken it as given that the existence of Nash equilibria where catastrophe is avoided is good news. Basically, we should be able to build an agreement that works. In reality, however, things are not so simple. I will give two reasons why.

First, the threshold around which catastrophe can be avoided is uncertain because scientists cannot say exactly how much emissions are too much. As pointed out by Scott Barrett in a 2013 paper in the Journal of Environmental Economics and Management, this is bad news. Essentially, threshold uncertainty undermines criticality in the sense that a country can no longer be sure that its contribution is critical to averting catastrophe. Maybe catastrophe would be avoided without it, or maybe its efforts can do nothing to avert catastrophe; without knowing more about the threshold we cannot say. If there is too much uncertainty then avoiding catastrophe is no longer a (Bayesian) Nash equilibrium. And clearly there is considerable uncertainty about climate change.
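A small numerical sketch, with entirely made-up numbers, shows how threshold uncertainty can flip a country's incentive. Suppose abating one unit costs c, catastrophe inflicts loss L on each country, and catastrophe is avoided if total abatement reaches a threshold T:

```python
# Expected gain to one country from abating its unit, given the
# others' total abatement and the set of possible thresholds
# (treated as equally likely). All numbers are illustrative.

c, L = 1.0, 5.0  # cost of abating one unit; loss from catastrophe

def gain_from_contributing(others_total, thresholds):
    k = len(thresholds)
    p_without = sum(others_total >= t for t in thresholds) / k
    p_with = sum(others_total + 1 >= t for t in thresholds) / k
    return (p_with - p_without) * L - c

# Known threshold T = 10, others abate 9: this country is critical,
# so abating raises the avoidance probability from 0 to 1.
print(gain_from_contributing(9, [10]))                    # 4.0

# Threshold equally likely anywhere in 6..14: one unit now shifts
# the probability by only 1/9, and abating no longer pays.
print(round(gain_from_contributing(9, range(6, 15)), 2))  # -0.44
```

With the tight threshold, contributing is a best response; with enough spread, it stops being one, which is exactly the sense in which uncertainty undermines criticality.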

Second, even if the threshold is certain there is no guarantee nations can turn a Nash equilibrium into reality. The problem here is a multiplicity of equilibria. There are lots of ways catastrophe could be avoided: the US cuts back a lot, China cuts back a lot, etc. The consequence of this is that any agreement to enact a particular set of targets can very easily unravel. In terms of criticality, the issue here is uncertainty over what others will do. For instance, if the agreement says that the US should cut back a lot but the EU expects the US not to do that then we have uncertainty and things unravel. (Note that it is in the US's interest to appear as though it will cut back less, because this incentivizes the EU to cut back more, etc.) In a recent paper with Federica Alberti published in Public Choice we argue that something stronger than Nash equilibrium is needed to be truly confident public goods can be provided. We use the concept of a collectively rational recommendation. Unfortunately, our results are of little use when applied to climate change because our approach requires someone who can credibly oversee any agreement. In other words, we need someone who can make sure the US, China etc. stick to their promises. And clearly, no such institution currently exists.

Do not, therefore, be fooled by bold pronouncements about the Paris Agreement. While it may give more clarity on the problems we face, it brings us no closer to actually solving those problems.

Friday, 8 July 2016

On the 23rd June the UK held a referendum on its membership of the EU and 52% of voters decided we should leave. To put things bluntly - this was the wrong decision. Politicians of all persuasions, however, have been queuing up after the vote to say that we must respect the 'democratic will' of the people. Clearly they do not have much choice, at this stage, given that a majority voted to leave. But, can we really talk of this referendum result as 'democratic'? I don't think so.

To make the case we can start with the impossibility theorems. These theorems, of which Kenneth Arrow's is the most famous, prove that there is no voting mechanism that is guaranteed to produce outcomes satisfying some basic desirable properties. For instance, the choice made can be critically influenced by the options on the ballot paper (even if we add options people do not like). There are two basic ways to interpret the impossibility theorems. One is to say that they show there simply is no right decision; if we ask 100 people what they think and get 100 different answers then there is no way of determining the optimal thing to do. The other interpretation is to say that a right decision does exist but voting is a very imperfect way of finding it. With either interpretation, democracy does not come out looking particularly good.

There is, though, one context where voting has been shown to work: when the decision to be made is a binary one. (This is May's theorem.) The EU referendum was a binary decision - we either remain or leave - and so this surely gives us more faith in the result? Well, the key thing to observe is that framing the question in a binary way does not stop the fundamental issue being non-binary. And this was clearly illustrated in the EU referendum debate. In particular, there were at least three different versions of what would happen if we left the EU: (1) stop immigration, (2) liberalise the economy (meaning more immigration), (3) protect workers' rights (so de-liberalise the economy). Clearly, these three visions of the future are incompatible. Which means that many who voted leave are not going to get what they wanted.

Suppose the ballot paper had four options - remain and the three versions of leave. What would the outcome have been then? Indeed, how would the vote have worked? For instance, would voters have been asked to rank choices or just vote for a preferred outcome? We are now firmly in the territory where Arrow's Theorem kicks in. And in all likelihood the outcome, the 'democratic will' of the people, would have been to remain in the EU.
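A purely hypothetical tally illustrates the vote-splitting point; only the 52/48 remain-leave split mirrors the actual result, and the breakdown of the leave vote below is invented:

```python
# Plurality winner under the actual binary ballot versus a
# hypothetical four-option ballot that splits the leave vote.

def plurality_winner(votes):
    return max(votes, key=votes.get)

two_option = {'remain': 48, 'leave': 52}
four_option = {'remain': 48,
               'leave: stop immigration': 24,
               'leave: liberalise the economy': 16,
               "leave: protect workers' rights": 12}

print(plurality_winner(two_option))    # leave
print(plurality_winner(four_option))   # remain
```

Ranked ballots would not obviously rescue the result either: aggregating rankings over four options is exactly the territory where Arrow's theorem bites.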

There is a further reason why we should be sceptical of the referendum result. Standard models of political choice take it as given that voters know what is good for them. That was sadly lacking in this instance. For decades, politicians have found it easier to blame the EU, and then immigrants, for just about every ill that beset the country rather than give the more nuanced truth. This left an open door for leave campaigners to lie and scare. Turkey, for instance, is joining the EU and 17 million Turks are moving to Britain. Etc. If voters are ill-informed then how can we possibly talk of democracy? This is not to slur everyone who voted leave because there clearly are many well-informed people who had good reasons to vote leave. The sad reality, however, is that many of the people who voted leave are those who will suffer most from leaving the EU.

So, what lessons can we take from all this? The main lesson is probably to not have referendums on important issues! Where does that leave democracy? To me, democracy is about electing, in a fair and open way, people to make decisions on our behalf. It is not about ill-informed people making decisions for everyone.

Friday, 22 April 2016

At the heart of game theory is the notion that we can strip away details of a particular setting to focus on the key strategic incentives that matter. Hence, two very different looking settings can give rise to the same game. For example, contributing to charity may be strategically equivalent to not dropping litter. This stripping away of detail makes game theory a 'general theory' rather than a list of case studies. But, the constant danger is to strip away details that matter.

Framing effects are one illustration of this danger. Daniel Kahneman and Amos Tversky very powerfully illustrated that people can make different decisions depending on how a decision is framed. (See, for example, their 1981 article in Science on 'the framing of decisions and the psychology of choice'.) Strategically irrelevant aspects of a setting may, therefore, matter. Recent years have seen a burgeoning number of studies looking at framing effects in public good games. The basic question of interest is whether people are more willing to cooperate in some frames than others. For instance, is someone more likely to give to charity than to not drop litter?

It has become routine to say that the evidence of framing effects in public good games is mixed. And this, in a sense, is undeniable because there are some studies that show large framing effects and others that show none. But why are we getting these mixed results? There are lots of dimensions along which a public good game can differ, of which here are three:

In a pioneering study, James Andreoni focussed on an externality dimension. In a positive frame the positive externality of contributing is emphasized - if you contribute others gain - while in a negative frame the negative externality of not contributing is emphasized - if you do not contribute others lose.

Much attention has also focussed on a choice dimension. In a give frame the choice people have to make is how much to contribute while in a take frame the choice people have to make is how much to not contribute.

There is also an initial allocation dimension. A person can start out holding either the private good (money) or the public good.

Just putting these three dimensions together gives at least 2x2x2 = 8 different frames. For instance, we can have a positive-give-some frame where a person starts with money, can give-some to charity and the charity emphasizes the positive externality of doing so. Or, we can have a positive-keep-some frame where a person starts with money, can keep-some from not going to the taxman and the taxman emphasizes the positive externality of paying taxes. Or, we can have a negative-leave-some frame where a person starts with a clean river, they can leave-some cleanliness (by not polluting) and environment groups emphasize the negative externality from pollution.
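The combinatorics of these three dimensions can be sketched in a few lines of code (the labels are mine, purely for illustration):

```python
from itertools import product

# Hypothetical labels for the three framing dimensions discussed above.
externality = ["positive", "negative"]   # which externality is emphasized
choice = ["give", "take"]                # how the decision is phrased
allocation = ["private", "public"]       # which good the person starts with

frames = list(product(externality, choice, allocation))
print(len(frames))  # 8 distinct frames
for frame in frames:
    print("-".join(frame))
```

The point of the enumeration is simply that a study comparing two frames has implicitly fixed the other dimensions, which is why results from different studies need not agree.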

To understand why framing results are mixed it seems clear that we should carefully distinguish between different framing dimensions. In all likelihood some framing dimensions matter and others don't. In a recent paper, published in the Journal of the Economic Science Association, I argue that currently not enough account is being taken of different framing dimensions; framing effects are being lumped together in a way that means we miss key insights. In particular, it has implicitly been assumed by many that the externality and choice dimensions are the same - positive going with give and negative with take. But these dimensions need to be kept separate - positive can go with take and negative with give.
Once we split apart dimensions the evidence seems to suggest that the externality dimension does matter while the choice dimension does not (in terms of average cooperation). This means that the mixed results in the literature become more easily understood, because some studies focus on the externality dimension (and find a strong effect) while others focus on the choice dimension (and find little effect).
In time, it would not surprise me if the 'externality dimension matters while choice dimension does not' result needs revision. For instance, only 3 of the 8 possible frames alluded to above have been widely studied. The more general argument, that we need to carefully distinguish between framing dimensions, must, however, be taken seriously if we are to find out what causes framing effects. And it is surprising how little we do know about what causes them. While the number of framing experiments grows by the month there is very little by way of theoretical work to make sense of the results. (A study by Martin Dufwenberg, Simon Gachter and Heike Hennig-Schmidt on 'the framing of games and the psychology of play' is one notable exception.) A deeper theoretical understanding should surely be a priority for future work.

Monday, 7 March 2016

In 2013 a system was introduced in Wales where restaurants and food outlets were required to display food hygiene ratings for all to see. This has seemingly led to an increase in food hygiene, prompting calls for the system to be extended across the UK. But is it necessary to force restaurants to display the rating? Textbook models of signalling would suggest not. Here's why:

Restaurants are rated on a 6-point scale, ranging from a cockroach-infested 0 to a very good 5. All restaurants have to be rated and so this is not at issue. The question is whether they should be forced to prominently display the rating. With that in mind, consider the incentives of restaurant owners:

It is easiest to start with a restaurant that got a top rating of 5. Clearly the owners have an incentive to display the high rating and show off how good they are. So they will likely display the rating whether forced to or not.

What about a restaurant with a 4 rating? You might think the owners would not want to display the rating, as this will signal that they did not get a 5. Suppose, however, that the owners do not display the rating. The customer can reason that if the restaurant had got a 5 rating it would surely have displayed it. The lack of a rating should, therefore, be interpreted as a signal that the restaurant did not get a 5. And, who knows, it might have got a 0. It is, therefore, in the interests of the owners to display their 4 rating - it is not a 5 but it is better than a 3, or a 0.
What about a restaurant with a 3 rating? These owners would surely rather hide the fact they only got a 3. Again, however, we need to consider what the absence of a rating would signal. If all restaurants with a 5 or 4 display their ratings then the absence of a rating is a signal of 3 or below. And it is better to admit a 3 than have customers infer the rating might have been 2, or 0.
This full-disclosure principle extends to those with a 2 or 1 rating. The only restaurants that have no incentive to display their ratings are those with a 0. And we would hope that those are shut down anyway.
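The unravelling logic above can be sketched in code. The behavioural assumption (mine, for illustration) is that customers infer the average rating of the non-disclosing pool, and owners disclose whenever their true rating beats that inference:

```python
# Unravelling sketch: owners disclose whenever their true rating beats what
# customers would infer from silence (the average of the non-disclosing pool).
# Iterate until nobody else wants to disclose.
def unravel(ratings):
    hidden = set(ratings)
    while True:
        inferred = sum(hidden) / len(hidden)  # customers' guess for a silent restaurant
        newly_disclosed = {r for r in hidden if r > inferred}
        if not newly_disclosed:
            return sorted(set(ratings) - hidden), sorted(hidden)
        hidden -= newly_disclosed

disclosed, silent = unravel([0, 1, 2, 3, 4, 5])
print(disclosed, silent)  # [1, 2, 3, 4, 5] [0]
```

First the 3s, 4s and 5s peel off (they beat the pool average of 2.5), then the 2s, then the 1s, leaving only the 0-rated restaurants silent - exactly the full-disclosure prediction.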
Naturally, we might question whether reality would match the prediction of voluntary disclosure. But, I think there is a lot of anecdotal evidence to suggest it will. Indeed, the success of rating sites like TripAdvisor would seem to depend on it. Suppose, for instance, that only the best hotels, restaurants etc. wanted to be featured on rating websites. Then the website would have so few rankings as to be essentially useless. Things work because it is in the interests of all (or just about all) hotels and restaurants to let people know their rating. A rating of 7.5 out of 10, for example, is not great but better that the customer knows this than thinks the rating is 6.5 or worse.

Saturday, 27 February 2016

There has been a lot of media coverage recently on the cost of attending Premier League football matches. One particular focal point was a protest by Liverpool supporters over plans to charge £77 for a ticket. In response to this protest the owners of Liverpool back-tracked and said they would keep prices pegged at current levels (where the highest price is £59). But, why should prices not go higher?

The data on Premier League attendances shows that, for the vast majority of games, stadia are full. And there is no doubt that many more would attend if they could get tickets. Such excess demand clearly means that plenty of people are willing to pay high prices. This gives a strong rationale for clubs to push prices higher and increase profit. Indeed, it is what the economic textbook says they should do.

But, the main consequence of an increase in prices is to extract surplus from supporters. Essentially supporters are pushed to the point where they are only just willing to pay to go to matches. This takes away the enjoyment of the football experience. And that is a problem because of the fairly unique nature of football.

The supporters of a club are an integral part of what that club is. This can manifest itself in many, many ways - support spurring the team on to victory, bailing out the club in hard times etc. In economic terms this means that the value of the club (and its ability to generate income) depends on its fans. Moreover, it means that fans can legitimately claim part ownership of the club. Note that this is different to most other goods. A restaurant, for example, fills up on a Saturday evening because it produces good food and not because it has a loyal fan base who cheer on the waiter.

Once we position supporters as having legitimate claim to part ownership of a club it becomes hard to justify high prices. Supporters deserve something and a ticket they can easily afford is probably a fair compensation. To say, therefore, that supporters can afford to pay high prices is not enough to justify them. And it is worth remembering that gate receipts are a relatively small proportion of revenue. At Liverpool, for instance, they are only around 20%. Much more money comes from TV rights. To squeeze every penny out of fans does not, therefore, make sense.

The protest at Liverpool showed the genuine bargaining power that fans have, if they can coordinate themselves. The owners' U-turn showed good common sense.

Wednesday, 10 February 2016

A recent study by Maxwell Burton-Chellew, Claire El Mouden and Stuart West, published in the Proceedings of the National Academy of Sciences, challenges one of experimental economics' most robust findings. They argue that conditional cooperation reflects subjects' confusion over experimental instructions and not social preferences. So, what to make of this result?

Let me begin by providing a little background on conditional cooperation. A study by Urs Fischbacher, Simon Gachter and Ernst Fehr, published in Economics Letters in 2001, looked at how people behave in a public good game when they get to see the contributions of others. More specifically:

They considered a setting with 4 people. Each person could contribute up to 20 tokens into a public project. Any tokens not contributed were worth, say, $1 to that person. Any tokens contributed were worth $0.40 to everyone in the group. For example, if a person keeps 5 tokens and total contributions to the group (including her 15) are 35 then she gets $5 + 0.4 x 35 = $19. This is the standard linear public good setting. The slight twist is to have 3 of the people choose their contribution before the 4th person. The 4th person can then condition her contribution on the observed contributions of others. The final thing to note is that a strategy method was used, meaning that subjects were asked what they would do under any eventuality; that is, what they would do if the average contribution of others was 0, or 1, or 2, and so on up to 20.
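A minimal sketch of the payoff calculation, using the parameters above (20-token endowment, $0.40 return per token contributed):

```python
# Payoff in the linear public good game described above: each token kept is
# worth $1, and each token contributed (by anyone in the group) returns
# $0.40 to every group member.
def payoff(own_contribution, total_contributions, endowment=20, mpcr=0.4):
    return (endowment - own_contribution) + mpcr * total_contributions

# The worked example: keep 5 tokens (contribute 15) when total contributions,
# including her own 15, are 35.
print(round(payoff(15, 35), 2))  # 19.0
```

The $0.40 parameter is the marginal per capita return (mpcr); contributing is individually costly (0.4 < 1) but group-beneficial (4 x 0.4 > 1), which is what makes the game a social dilemma.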

Fischbacher, Gachter and Fehr found that around 50% of subjects were conditional co-operators. To a rough approximation these subjects contributed the same as the average contributed by others. So, if the average contribution of others was 15, they contributed 15, and so forth. This finding has been replicated many, many times, including a study of my own with Denise Lovett, published in Games. Moreover, the idea of conditional cooperation fits well with the more general behaviour we observe in public good games. For instance, it can explain why contributions decline with repeated interaction - if 50% of people are free-riders and 50% are conditional cooperators, the average contribution will naturally fall over time.
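The decline is easy to see in a rough simulation. The assumptions here are mine for illustration: a group of four with two free-riders who always contribute 0 and two conditional cooperators who match the previous round's group average:

```python
# Sketch: 2 free-riders (always contribute 0) and 2 conditional cooperators
# who match the previous round's group average. The average halves every
# round, so cooperation decays towards zero - the standard declining pattern.
def simulate(rounds=5, start_avg=10.0):
    avg = start_avg
    path = []
    for _ in range(rounds):
        contributions = [0, 0, avg, avg]  # free-riders, then conditional cooperators
        avg = sum(contributions) / len(contributions)
        path.append(avg)
    return path

print(simulate())  # [5.0, 2.5, 1.25, 0.625, 0.3125]
```

With this half-and-half mix the group average halves each round; the point is qualitative, not quantitative, but it matches the downward drift seen in repeated public good experiments.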

The latest study in PNAS questions all this. Here's what they did: they gave subjects the same instructions as those used in the original study by Fischbacher, Gachter and Fehr, and then added: “Before you begin, you are going to play this game in a special case. In this special case, you will be in a group of just you and the COMPUTER; The computer will pick the decisions of the other 3 players. The computer will pick their decisions randomly and separately (so each computer player will make its own random decision); You are the only real person in the group, and only you will receive any money.”

In this special case there is absolutely no reason to contribute to the public good. If a person does contribute it merely benefits a computer, whatever that means. Yet the study found that behaviour against the computer was almost exactly the same as behaviour with people. This, it is claimed, is evidence that subjects don't understand the instructions. And conditional cooperation is an artefact of this misunderstanding. Why else would someone cooperate with a computer?

To cut to the chase, I don't buy this argument. There are several reasons. First, hundreds, if not thousands, of experimental subjects have behaved as if conditional co-operators over the years. Surely, at least some of these understood the instructions! After all, the subjects are typically students at top universities and the instructions are not particularly difficult to understand. Even so, we cannot ignore the fact that the subjects in this study did behave weirdly when playing against computers. How can we explain that?

There is a worrying circularity in the reasoning used by Burton-Chellew, El Mouden and West. In particular, they essentially claim that subjects cannot understand instructions about a public good game but can understand the bit of the instructions that tells them they are playing a computer. Well, public good games are ubiquitous in everyday life, while cooperating with a computer is an odd thing indeed. So, it seems to me much more plausible that subjects understood instructions about the public good game but did not react to the bit at the end telling them they were playing a computer.
Support for this latter interpretation is provided by the observation that behaviour against the computer was so similar to that against humans. Studies, including my own with Denise Lovett, have shown that the behaviour of conditional cooperators changes systematically with incentives. This would strongly suggest that some difference should have been observed when subjects played against computers. But no difference was observed. The easiest explanation seems to be that subjects did not comprehend what it meant to play a computer.
The authors offer a counter-argument to this line of reasoning. They show that conditional cooperation correlates with 'misunderstanding of the game'. Their measure of misunderstanding is, however, open to interpretation; they asked subjects 'In the game, if a player wants to maximize his or her earnings in any one particular round, does the amount they should contribute depend on what other people in their group contribute?'. This question is carefully worded to have a unique answer - no. And I would want someone in a game theory exam to answer no. But this is not a game theory exam! Once we acknowledge that fact, this question looks more like a measure of free-riding than understanding. Indeed, the many conversations I have had with students and experimental subjects, on this kind of issue, would suggest that those answering yes or maybe are likely to understand the game better than those answering no - they just need a bit of training before sitting a game theory exam.
Clearly this latest study challenges the economist's interpretation of conditional cooperation and has to be taken seriously. But, I think the evidence is not compelling enough to ditch 15 years of accumulated evidence quite yet. Clearly, some conditional cooperation will be down to confusion. The claim that 100% is due to confusion, however, seems extreme. Hopefully, future work can narrow down the estimate.

Sunday, 31 January 2016

Evidence (albeit somewhat anecdotal) suggests that women are being asked to pay considerably more than men for almost identical consumer products. This seems to apply to clothing, toiletries, toys, even pens. Why? It is hard to believe that it costs more to produce products for women than men. So, I think we can safely discount the idea that the difference is being driven by costs. The far more likely explanation is price discrimination.

To illustrate, consider a very simple example. Imagine you are the owner of a company making jeans. It costs £25 to produce a pair of jeans and you are currently selling them at £50 a pair. At this price you sell 100 a week to men and 50 a week to women. The key question you have to consider is what would happen to sales if you increase (or decrease) the price. Suppose that at a price of £55 a pair you estimate you would sell 80 to men and 45 to women. On the male side this is a bad deal because profit falls from (50 - 25)(100) = £2,500 to (55 - 25)(80) = £2,400. But on the female side you do well because profit increases from £1,250 to £1,350.
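These numbers are easy to check with a short script (the figures are the hypothetical ones from the example, not real data):

```python
# Profit = (price - unit cost) * quantity sold, with the example's £25 unit cost.
COST = 25

def profit(price, quantity):
    return (price - COST) * quantity

# Male market: raising the price from £50 to £55 lowers profit...
print(profit(50, 100), profit(55, 80))  # 2500 2400
# ...but in the female market the same rise raises profit.
print(profit(50, 50), profit(55, 45))   # 1250 1350
```

The male market loses £100 of profit from the price rise while the female market gains £100, which is why a uniform price is not optimal for the firm.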
In this example it clearly pays to charge women a higher price than men. But note that this is because women are less sensitive to price than men. This is different to saying that 'women are willing to pay more than men'. After all, you were selling fewer jeans to women than to men. Generalizing from this example, optimal pricing is always driven by how sensitive a market is to changes in price. So, if firms are charging lower prices to men, it would seem that men are more likely to react to price than women. This, though, is not the end of the story.
The story so far is one of 3rd degree price discrimination - different categories of consumer (namely male and female) are being charged a different price for an identical product. Textbooks will tell you that 3rd degree price discrimination can only succeed if there is no potential for arbitrage. In other words it must not be possible for women to buy from the male market. There appears, however, little to stop that happening. This suggests, therefore, that women, as well as being less sensitive to price, are also reluctant to shop around.
Arbitrage, though, is not just about buyers' willingness to shop around because someone else could do the shopping around for them. If, for instance, identical products sell for £5 in one location and £10 in another an enterprising individual can buy the product at £5 and sell it for £8 at the other location. There is simple money to be made. That this has not happened would suggest the firms involved have significant market power. Enough market power that no one can steal their market. This is unlikely to change any time soon.
But, the recent news may lead to more women shopping down the male aisle of supermarkets. Ultimately arbitrage should win through and lower the price gap between male and female products. Unless, that is, the price differential is actually being driven by something different. If, for instance, the products compared are not actually identical, and a pink pen is not really identical to a blue one, the price differential can persist. So, don't expect arbitrage to eliminate all of the gap between male and female prices.

Tuesday, 5 January 2016

On Saturday rail fares in the UK rose by an average of 1.1%. This is the latest instalment in a long running trend of fare increases. Indeed, average fares have risen by around 40% over the last decade. As usual passengers were queuing up to say how disgraceful it all is. This year the opposition leader Jeremy Corbyn joined the fray with his standard call for a renationalisation of the railways. But what is the problem with high rail fares?

The main reason I ask this question is one of revealed preference. At the same time as complaining about high fares, most passengers also complain about having to stand on over-crowded trains. Indeed, use of the railway has soared, with over 70% more journeys now than in 2002. Moreover, I don't think that anyone would seriously dispute that the UK rail network has just about reached the limit of its capacity in terms of the number of trains operating.

If a price rise is accompanied by a reduction in demand then we can start to think about firms exploiting a monopoly position to extract high profits. But, when a price rise is accompanied by an increase in demand then the only logical conclusion is that there is excess demand. And if supply, in the short term, is fixed by the extent of the railway network then there is a strong argument that prices are not high enough.

Put simply, if passengers continue to use the railways at higher prices then they reveal that prices are not too high. The response to this would no doubt be that passengers have no choice but to use the railway. But that is not true. Passengers can change where they live, change job, change the time they commute, and so on. Furthermore, enterprising souls could no doubt come up with alternatives to high rail fares, such as companies that allow working at home or set office hours that exploit off-peak fares. From an economist's perspective, passengers choose to pay the high fares.

Another objection to high fares is that it disadvantages the poor. This argument, though, does not stand up to any kind of scrutiny. The main users of the rail network are the relatively well off, not the poor. Also, despite high fares, the rail network is heavily subsidised by the taxpayer. To lower fares would require an inevitable increase in taxpayer subsidy and would, most likely, be a regressive policy that benefits the relatively well off. (A recent article in the Economist touches on these issues.)

If high rail fares are neither inefficient, nor inequitable, what is the problem? Suppose you spend £50 for a ticket on a commuter train and then have to stand up the whole journey. The economics textbook tells you that your reaction is supposed to be: I wish the fare had been £100 because then fewer people would have turned up and I would have got a seat. The common reaction, however, is presumably something like: when I pay £50 I expect a decent service, and standing up is not good enough. It just seems plain unfair to pay so much for a poor quality product. In a seminal article published in 1986 entitled 'Fairness as a constraint on profit seeking', Daniel Kahneman, Jack Knetsch and Richard Thaler show that people particularly dislike price increases that are due to shifts in demand. The outrage at rail fare increases seems to fit that picture.
So, what could we do? Lowering fares might make people feel less exploited but would only exacerbate excess demand. So, this is not the solution. Increasing supply would help but is only a long run possibility. Another option is to push fares higher but put a headline grabbing tax on the rail companies. Then, at least, passengers would not feel exploited. The train operating companies, though, do not make particularly high profits and squeezing these further is unlikely to increase the quality of rail services. Which brings us finally to renationalisation. If this is to lower fares then the taxpayer will have to pay the difference. And as I have argued this benefits the rich at the expense of the poor, which hardly seems desirable. The status quo, therefore, seems not so bad.