
One of the problems scientists face in the lab is that human beings are really, really bad at science. There are fundamental mechanisms in the way we think that help us enormously in our everyday lives, but are the exact opposite of what a scientist needs to understand the world.

Let’s play a game.

I have a rule in my head that generates numbers, three at a time. What I’ll do is give you the first three numbers. You have to guess what the next three numbers generated by the rule will be. You will ask me “Are the next three numbers x, y, and z?” and I will say yes or no. And we can repeat this process as many times as you like until you feel you know what the rule is. The game ends when you tell me what you think the rule is, and I tell you if you’re right or wrong.

All set? Okay, let’s go.

The first three numbers are 2, 4, 6.

Which numbers do you think come next? Unfortunately we can’t play this live, but you might try it with a friend afterwards; in the meantime, I’ll provide a transcript of the time I played this game with my old friend Clint.

The SI: “The first three numbers are 2, 4, 6.”

Clint: “Are the next three numbers 8, 10, 12?”

The SI: “Yes.”

Clint: “Are the next three numbers 14, 16, 18?”

The SI: “Yes.”

Clint: “Then 20, 22, 24, 26, 28…”

The SI: “Yes, to all.”

Clint: “Okay, then I know what the rule is. The rule is, generate the next number by adding 2 to the last one. That’s it, right?”

Actually, it doesn’t matter. Clint might well be right: his theory does account for all the observed data, and has successfully predicted future results. But he has approached the problem in completely the wrong way. Yes, the n+2 rule is a viable one. But so is “the numbers always go up”. If that had been the rule and Clint had gone on to ask “29, 30, 31?”, the answer would still have been yes; but Clint never asked anything like that. Most people don’t.

What Clint did – what most people do – is come up with a theory and seek confirmation. This is a good rule of thumb that works most of the time, but it is fragile and dangerous because it can lead you in the wrong direction. You might go a considerable distance, wasting time and precious resources, before realising you’ve had it wrong all along.

The correct way to approach the problem, indeed all problems in science, is to seek disconfirmation. You have an idea: how do you prove it wrong? What are the minimum standards of scrutiny to which your theory must be subjected? What wringer can you put it through to see if it comes out unharmed?
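The difference between the two strategies can be sketched in code. This is a toy version of the game, not anything Clint and I actually ran: the hidden rule here is assumed to be “the numbers always go up”, one of many rules consistent with 2, 4, 6, and all the names are mine.

```python
def hidden_rule(triple):
    """The experimenter's secret rule: the numbers strictly increase."""
    a, b, c = triple
    return a < b < c

def n_plus_2(triple):
    """The player's hypothesis: each number is the previous one plus 2."""
    a, b, c = triple
    return b == a + 2 and c == b + 2

# Confirmation strategy: only probe triples the hypothesis says should fit.
confirming_probes = [(8, 10, 12), (14, 16, 18), (20, 22, 24)]
# Every probe comes back "yes", so the player concludes n+2 -- wrongly.
assert all(hidden_rule(t) for t in confirming_probes)

# Disconfirmation strategy: also probe triples the hypothesis says
# should FAIL. A "yes" on any of these falsifies n+2 on the spot.
for probe in [(1, 2, 3), (5, 10, 100)]:
    if hidden_rule(probe) and not n_plus_2(probe):
        print(f"{probe} fits the hidden rule but breaks n+2: falsified")
```

The confirming probes can never distinguish “n+2” from “the numbers always go up”; only a probe designed to fail can.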

This is how science is done, but it is not a natural way of thinking: our brains just aren’t wired up to work in this way. We try our best, but often get fooled; and if you take it out of the context of a lab and present it as just a game about numbers, it’s amazing to see how badly we fare.

The Tragedy of the Commons occurs when a number of people using a finite resource realise that they each stand to gain from taking more than their fair share. You know everybody else benefits from taking more, so that’s probably what they are doing; the more likely they are to be taking more, the more sense it makes for you to take more. And since everybody knows that it makes sense for you to take more… and so on, until the resource is depleted.

Fishers often know that their overfishing will drive certain species of fish to extinction, and farmers often know that overfarming will leave the land infertile; but stopping is simply not feasible for them as individuals in competition.
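The dynamic is stark even in a toy model. Here is a minimal sketch, with every number an illustrative assumption: a fish stock that regrows by 10% a season, and ten fishers who each take either a catch within the regrowth rate or double that.

```python
def seasons_until_collapse(catch_per_fisher, stock=1000.0,
                           fishers=10, regrowth=0.10, floor=1.0):
    """Count seasons until the stock falls below a viability floor
    (capped at 1000 seasons, i.e. 'effectively sustainable')."""
    seasons = 0
    while stock >= floor and seasons < 1000:
        stock *= 1 + regrowth                 # the resource renews itself...
        stock -= fishers * catch_per_fisher   # ...and is then harvested
        seasons += 1
    return seasons

fair = seasons_until_collapse(catch_per_fisher=9.0)    # within regrowth
greedy = seasons_until_collapse(catch_per_fisher=18.0) # double share
# The fair catch is sustained indefinitely; the doubled catch
# collapses the stock within a handful of seasons.
```

Nobody in the greedy scenario is irrational season by season; the collapse is built into the incentives.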

In a way, the problem of short-term benefits outweighing long-term disadvantages resembles addiction. A chocoholic knows chocolate will make him fat, but it’s just too tasty to say no to! His willpower isn’t strong enough.

But our chocoholic friend has an option: he can employ a willpower-assisting strategy. He might not feel he needs chocolate just now, but he can imagine a future point when the craving really sets in, when his willpower won’t be enough. So he acts now, while he can, and flushes the chocolate down the toilet, removing the temptation in advance. He enacts a policy that anticipates future temptations. Sometimes people entrust this to others: “Don’t bring me any chocolate, I’ll only end up eating it.”

This is one solution to the Tragedy of the Commons. The consumers make a pact: they all agree to how much they can safely extract from the resource, and agree to be punished if they take more, even if – especially if – it later becomes profitable in the short term to do so. This leads to the creation of national parks, protected areas, one-child policies, fishing quotas and so on.

This kind of contract exploits a very human quirk in our valuation of rewards: how much we value them depends on how far away in the future they are. Agreeing to protect a resource is easy when the resource is plentiful.
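That quirk is often modelled as hyperbolic discounting, where a delayed reward is worth its value divided by (1 + k × delay). The sketch below uses that standard form, with a discount rate and reward sizes chosen purely for illustration, to show the preference reversal: seen from far away, the larger-later reward wins; up close, the smaller-sooner one does. This is why the pact has to be signed in advance.

```python
def discounted(value, delay_days, k=0.1):
    """Hyperbolically discounted present value of a delayed reward,
    using the standard 1 / (1 + k * delay) form (k is illustrative)."""
    return value / (1 + k * delay_days)

# Choice: 50 units soon-ish versus 100 units a month after that.
far = (discounted(50, 300), discounted(100, 330))   # both rewards distant
near = (discounted(50, 0), discounted(100, 30))     # choice is imminent

# Viewed from afar, the larger-later reward is preferred;
# once the smaller reward is imminent, the preference flips.
```

The same person who happily signs a fishing quota in a year of plenty will find the quota agonising when the catch is right in front of him.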

The trouble is that a sufficiently plentiful resource might not even be seen as a resource. What value do you place, for example, on air? Not much, until you start to see it polluted.

There was a time when the Earth was seen by most people as an infinite source and an infinite sink. Why regulate things that will never run out? It is only after we realise that something is in danger that it makes sense to protect it – but if the realisation comes late, and we see that the resource is actually scarce, then the short-term benefits of looting it faster than the other fella become very real to us indeed.

There is a time window between thinking something too freely available to need regulation and thinking it too precious to agree to regulate, and the window is often narrow. For which resources have we already missed it? And what do we still have time to protect from our future, greedier selves?

REFERENCES

This was written on a train, having just finished The Logic of Life by Tim Harford, and with Dennett’s Freedom Evolves fresh in memory. Both are well worth reading.