Monday, 26 January 2015

The willingness of skiers to leave their skis lying about outside mountain restaurants has always intrigued me, even as a young child. Skis are expensive, there is a vibrant second-hand market, and given that most skis look similar it would be easy enough to pull the 'sorry, I thought they were mine' trick. It would seem so straightforward to 'make a business' out of stealing skis! Yet skiers still leave their skis lying about. And in Switzerland they seem to leave everything about - helmet, boots, rucksack. It is the equivalent of students at the university canteen leaving all their iPhones, laptops etc. lying about unattended (without any passwords).

That skiers are trusting, and seemingly deserving of that trust, is wonderful. It frees everyone up to enjoy the mountains rather than worry about how to padlock their skis to a slopestyle rail. Our willingness to trust is, though, a real challenge to standard economic theory. The potential gain from stealing a pair of skis is positive for pretty much any level of risk aversion. So, we should all be out stealing skis. But we're not! Why?

The issue of lying and deception has largely gone under the radar of economists for a long, long while. Fortunately, the work of Dan Ariely and others has done a lot to change that in recent years. And dishonesty is now something of a hot topic. For instance, the recent Nature article by Alain Cohn, Ernst Fehr and Michel Marechal on dishonesty in business culture got a lot of media coverage. But, many intriguing questions about honesty and dishonesty remain. And I'm not convinced the current methods used to measure (dis)honesty are as good as they might be.
To help explain my scepticism it is useful to take a step back and look at a study on dishonesty I did with a student, Matheus Menezes, published in Economics Letters. We were interested in whether competition increases dishonesty. Everyone seemed to assume it should, but we demonstrated it need not - either in theory or in practice. Basically, in a very competitive environment there is little chance of winning even if you cheat, and so there is little point in cheating. This gives rise to an inverted-U-shaped relationship in which cheating is highest at intermediate levels of competition.
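The logic can be sketched with a back-of-the-envelope calculation (the numbers and functional form here are purely illustrative, not the model from our paper). Suppose k prizes of value w are on offer, lying carries a cost c, and, in the worst case, all n contestants cheat, so a cheat wins a prize with probability k/n while an honest subject wins nothing - unless k ≥ n, in which case everyone wins regardless:

```python
def gain_from_cheating(n, k, w, c):
    """Expected payoff from cheating rather than being honest,
    assuming all n contestants cheat (illustrative toy model)."""
    if k >= n:
        return -c  # everyone wins a prize anyway; lying only costs c
    # honest subjects (almost) never win; cheats split the k prizes
    return w * k / n - c

# With 2 prizes worth 10 and a lying cost of 1, cheating only pays
# at intermediate levels of competition:
for n in (2, 10, 40):
    print(n, gain_from_cheating(n, k=2, w=10, c=1))
```

With n = 2 both subjects win regardless, so cheating just incurs the cost; with n = 40 the chance of a prize is too small to justify lying; only at intermediate n does cheating pay - the inverted U.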
In our experimental study we used a standard way of measuring (dis)honesty: we got the subjects to self-report how many answers they got correct on a multiple-choice general knowledge quiz. Because subjects self-report, they can cheat 'undetected' by falsifying the number of correct answers. And because we know roughly the probability of people knowing the correct answer, we can detect cheating at an aggregate level. Most studies of dishonesty follow this approach, potentially replacing our quiz with some other random device, such as rolling a die and counting sixes.
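To sketch how aggregate detection works (the numbers here are hypothetical, not from our study): if each subject faces q questions and, when honest, knows each answer with probability p, honest scores are roughly Binomial(q, p). A group mean well above p·q signals cheating, even though no individual can be singled out:

```python
import math

def excess_reporting_z(reports, p, q):
    """z-score of the gap between the mean self-reported score and
    the honest benchmark p*q, using the binomial standard deviation.
    Hypothetical illustration of aggregate-level cheating detection."""
    n = len(reports)
    mean = sum(reports) / n
    se = math.sqrt(q * p * (1 - p) / n)  # std. error of the group mean
    return (mean - p * q) / se

# 9 subjects, 20 questions, 50% chance of knowing each answer:
# the honest benchmark is 10 correct, but reports average 13.
reports = [12, 14, 13, 11, 15, 13, 12, 14, 13]
print(round(excess_reporting_z(reports, p=0.5, q=20), 2))  # → 4.02
```

A z-score this large is wildly unlikely under honest reporting, so we can infer cheating at the group level - while still having no idea which particular subjects inflated their scores.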
In a standard set up, like that of Alain Cohn and co-authors, subjects get paid for each 'correct' answer. Clearly this provides an incentive to exaggerate the number of correct answers. But, how good a measure of dishonesty is that? By cheating the subject takes money off the experimenter. And the experimenter has deliberately set things up this way. The subject, therefore, may consider it fair game to cheat. Taking money off an experimenter who 'asks you to do it' is a world away from robbing an old lady.
The set-up we used in our study was theoretically immune from this criticism. In our case a set number of people would win a prize. For example, in one treatment the two subjects with the most correct answers got a prize (with a random device to deal with ties). In this case a subject who cheated did not take money off us. Instead they took money off another subject. For example, a subject who cheats their way to the most correct answers denies a prize to the subject with the third most correct answers. Even so, to get a prize it was pretty much essential to cheat, and so we end up with cheats taking money off other cheats. Again, this is a world away from robbing an old lady.
We know the standard measure of (dis)honesty must be picking up something because our study and others have found strong treatment effects - willingness to cheat depends on incentives. I'm sceptical, however, about how much of this is down to dishonesty. That scepticism stems largely from a conversation I had with one of the subjects after our study. He was openly willing to admit he had cheated because 'that was the game'. This is a person I would trust to be an honest, cooperative type. His willingness to say he had lied presumably testifies to that! With this in mind, I think the standard measure of dishonesty may be picking up a general willingness 'to play the game'. And that is likely to correlate with intelligence just as much as with dishonesty.
So, I would suggest there is work left to be done before we can convincingly say we have captured dishonesty in the lab. And until we do that it will be hard to know why most people, including skiers, are such an honest bunch.

Thursday, 8 January 2015

Before Christmas a bill was brought before parliament that would force large companies to disclose any pay gap between men and women. The bill has no chance of being implemented before the general election and so was largely symbolic. It still, though, raises some interesting questions. The law already requires that men and women receive the same pay for the same job. The purpose of this new bill was largely to try to enforce that rule. Would it work?

I do not think so. To see why, consider a hypothetical firm with the workforce summarized in the table below. There is a huge pay gap of £16,500 between the average man and woman. What's causing that? (a) Men are more likely to hold 'higher' positions in the firm - senior managers are disproportionately male and cleaners disproportionately female. (b) In higher positions men get paid more than women - a male senior manager gets £20,000 more than a female counterpart.

One can make the argument that both (a) and (b) are 'unfair'. The key thing I want to highlight is that (a) has a bigger effect than (b). This can create perverse incentives. Suppose, for instance, the firm was required to publish data on gender equality. A gap of £16,500 does not look good. What could the firm do?

It could raise the salary of female senior managers to £100,000 and that of junior staff to £50,000. This looks like a good deal for women but only closes the pay gap to £12,000. The firm will still look bad.

Suppose instead the firm hires an extra 10 female senior managers and an extra 10 female junior staff. This equalizes the numbers of men and women in the higher positions and again looks like a good deal for women. But it still leaves the pay gap above £12,000!

Finally, suppose the firm sacks 10 women cleaners and replaces them with 10 male cleaners. This closes the pay gap to £10,600. If the firm sacks 20 women cleaners and replaces them with 20 male cleaners the pay gap drops below £5,000. If the firm just hires an extra 20 male cleaners and sacks no female cleaners the pay gap still drops to £10,167.
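The arithmetic behind these scenarios is easy to replicate. The workforce below uses my own made-up numbers rather than the table's, but it shows the same mechanism: changing the gender mix among low-paid staff moves the headline gap far more than raising women's salaries does.

```python
def average_pay_gap(rows):
    """Average male pay minus average female pay.
    Each row: (men_count, men_salary, women_count, women_salary)."""
    men_pay = sum(m * ms for m, ms, w, ws in rows)
    men = sum(m for m, ms, w, ws in rows)
    women_pay = sum(w * ws for m, ms, w, ws in rows)
    women = sum(w for m, ms, w, ws in rows)
    return men_pay / men - women_pay / women

# Hypothetical firm: senior managers, junior staff, cleaners.
base = [(10, 100_000, 2, 80_000),    # senior managers
        (20, 40_000, 10, 35_000),    # junior staff
        (5, 15_000, 23, 15_000)]     # cleaners
print(round(average_pay_gap(base)))      # → 29143

# Equalise women's salaries in the higher positions: small effect.
raised = [(10, 100_000, 2, 100_000),
          (20, 40_000, 10, 40_000),
          (5, 15_000, 23, 15_000)]
print(round(average_pay_gap(raised)))    # → 26571

# Replace 10 female cleaners with 10 male cleaners: big effect.
swapped = [(10, 100_000, 2, 80_000),
           (20, 40_000, 10, 35_000),
           (15, 15_000, 13, 15_000)]
print(round(average_pay_gap(swapped)))   # → 16800
```

Sacking low-paid women improves the headline number even though no woman is paid a penny more - exactly the perverse incentive in the example above.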

What this toy example illustrates is that a firm will most easily be able to make itself 'look good' by playing around with the number of low skilled staff. There is very little incentive for the firm to target what are arguably the biggest causes for concern - the low pay and low number of women in higher positions. Indeed, the easiest way for the firm to improve its image may simply be to lay off women cleaners!

You might say that there is a simple solution to this - why not have firms publish the full distribution of workers and wages, as in the table above? Well, I don't think the public or policy makers will go for that. We prefer things to be one-dimensional so that we can have league tables and see who is best and worst. That leaves no room for two-dimensional statistics.

Attempting to reduce the gender gap through name-and-shame incentives seems, therefore, a bad idea to me. It will probably reduce the measured gap. But likely not in a way that benefits women or men.