There’s a very common economics experiment called a public goods game, often used to study cooperation and altruistic behavior. I’m actually planning on running a variant of such an experiment for my second-year paper.

The game is quite simple, which is part of why it is used so frequently: You are placed into a group of people (usually about four), and given a little bit of money (say $10). Then you are offered a choice: You can keep the money, or you can donate some of it to a group fund. Money in the group fund will be multiplied by some factor (usually about two) and then redistributed evenly to everyone in the group. So for example if you donate $5, that will become $10, split four ways, so you’ll get back $2.50.
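The payoff mechanics are simple enough to write down in a few lines. Here is a minimal sketch in Python; the function name and parameters are mine, purely for illustration, not from any standard experimental toolkit:

```python
def payoff(endowment, my_donation, other_donations, multiplier=2.0):
    """Return one player's final payoff in a linear public goods game.

    Each player keeps (endowment - donation); the group fund is the sum
    of all donations, multiplied and split evenly among all players.
    """
    donations = [my_donation] + list(other_donations)
    n = len(donations)
    fund_share = multiplier * sum(donations) / n
    return endowment - my_donation + fund_share

# The example from the text: $10 endowment, you donate $5, multiplier 2,
# group of four. Your $5 becomes $10, split four ways = $2.50 back.
# If nobody else donates, you end with 10 - 5 + 2.50 = $7.50.
print(payoff(10, 5, [0, 0, 0]))  # 7.5
```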

Donating more to the group will benefit everyone else, but at a cost to yourself. The game is usually set up so that the best outcome for everyone is if everyone donates the maximum amount, but the best outcome for you, holding everyone else’s choices constant, is to donate nothing and keep it all.

Yet it is a very robust finding that most people do neither of those things. There’s still a good deal of uncertainty surrounding what motivates people to donate what they do, but certain patterns have emerged:

1. Most people donate something, but hardly anyone donates everything.

2. Increasing the multiplier tends to smoothly increase how much people donate.

3. The number of people in the group isn’t very important, though very small groups (e.g. 2) behave differently from very large groups (e.g. 50).

4. Letting people talk to each other tends to increase the rate of donations.

5. Repetition of the game, or experience from previous games, tends to result in decreasing donations over time.

6. Economists donate less than other people.

Number 6 is unfortunate, but easy to explain: Indoctrination into game theory and neoclassical economics has taught economists that selfish behavior is efficient and optimal, so they behave selfishly.

Number 3 is also fairly easy to explain: Very small groups allow opportunities for punishment and coordination that don’t exist in large groups. Think about how you would respond when faced with 2 defectors in a group of 4 as opposed to 10 defectors in a group of 50. You could punish the 2 by giving less next round; but punishing the 10 would end up punishing 40 others who had contributed like they were supposed to.

Number 4 is a very interesting finding. Game theory says that communication shouldn’t matter, because there is a unique Nash equilibrium: Donate nothing. All the promises in the world can’t change what is the optimal response in the game. But in fact, human beings don’t like to break their promises, and so when you get a bunch of people together and they all agree to donate, most of them will carry through on that agreement most of the time.

Number 5 is on the frontier of research right now. There are various theoretical accounts for why it might occur, but none of the models proposed so far have much predictive power.

But my focus today will be on findings 1 and 2.

If you’re not familiar with the underlying game theory, finding 2 may seem obvious to you: Well, of course if you increase the payoff for donating, people will donate more! It’s precisely that sense of obviousness which I am going to appeal to in a moment.

In fact, the game theory makes a very sharp prediction: For N players, if the multiplier is less than N, you should always contribute nothing. Only if the multiplier becomes larger than N should you donate—and at that point you should donate everything. The game theory prediction is not a smooth increase; it’s all-or-nothing. The only time game theory predicts intermediate amounts is on the knife-edge where the multiplier exactly equals N, since there each player is indifferent between donating and not donating.
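The all-or-nothing structure is easy to verify: a purely selfish player’s return on each donated dollar is multiplier/N, so the best response never lands in between. A toy calculation of the textbook logic (the function name is mine):

```python
def selfish_best_response(endowment, multiplier, n_players):
    """Game-theoretic (selfish) prediction: donate everything if each
    donated dollar returns more than a dollar to you (multiplier > N),
    otherwise donate nothing. At multiplier == N you are indifferent."""
    marginal_return = multiplier / n_players
    if marginal_return > 1:
        return endowment  # donate everything
    if marginal_return < 1:
        return 0          # donate nothing
    return None           # knife-edge: every donation level is equally good

print(selfish_best_response(10, 2, 4))   # 0: the typical setup, contribute nothing
print(selfish_best_response(10, 10, 4))  # 10: multiplier exceeds group size
```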

But it feels reasonable that increasing the multiplier should increase donation, doesn’t it? It’s a “safer bet” in some sense to donate $1 if the payoff to everyone is $3 and the payoff to yourself is $0.75 than if the payoff to everyone is $1.04 and the payoff to yourself is $0.26. The cost-benefit analysis comes out better: In the former case, you can gain up to $2 if everyone donates, but would only lose $0.25 if you donate alone; but in the latter case, you would only gain $0.04 if everyone donates, and would lose $0.74 if you donate alone.
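The cost-benefit numbers in that paragraph can be checked directly. A quick arithmetic sketch, using the group size of 4 and $1 donations from the example (helper names are mine):

```python
def donate_alone_loss(multiplier, n=4, donation=1.0):
    """Net change in your payoff if you donate alone in a group of n."""
    return multiplier * donation / n - donation

def everyone_donates_gain(multiplier, n=4, donation=1.0):
    """Net change in your payoff if all n players donate the same amount.

    The fund is n * donation, so your share is multiplier * donation."""
    return multiplier * donation - donation

print(everyone_donates_gain(3))     # 2.0: gain $2 if everyone donates
print(donate_alone_loss(3))         # -0.25: lose $0.25 donating alone
print(everyone_donates_gain(1.04))  # ~0.04
print(donate_alone_loss(1.04))      # ~-0.74
```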

Human beings like to “sanity-check” our results against prior knowledge, making sure that everything fits together. And, of particular note for public goods games, human beings like to “hedge our bets”; we don’t like to over-commit to a single belief in the face of uncertainty.

I think this is what best explains findings 1 and 2. We don’t donate everything, because that requires committing totally to the belief that contributing is always better. We also don’t donate nothing, because that requires committing totally to the belief that contributing is always worse.

And of course we donate more as the payoffs to donating more increase; that also just seems reasonable. If something is better, you do more of it!

These choices could be modeled formally by assigning some sort of probability distribution over others’ choices, but in a rather unconventional way. We can’t simply assume that other people will randomly choose some decision and then optimize accordingly—that just gives you back the game theory prediction. We have to assume that our behavior and the behavior of others is in some sense correlated; if we decide to donate, we reason that others are more likely to donate as well.
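One crude way to formalize this correlated-choice reasoning: evaluate each of your possible donations under the belief that others’ average donation shifts toward yours, rather than staying fixed. Everything here—the blending weight, the baseline guess, the function names—is my own illustrative assumption, not an established model:

```python
def expected_payoff_correlated(my_donation, baseline_other, correlation,
                               endowment=10.0, multiplier=2.0, n=4):
    """Expected payoff if you believe others' average donation is a blend
    of a baseline guess and your own choice (weight = correlation).

    correlation = 0 recovers the standard game-theoretic assumption that
    your choice tells you nothing about others'."""
    believed_other = (1 - correlation) * baseline_other + correlation * my_donation
    fund = my_donation + (n - 1) * believed_other
    return endowment - my_donation + multiplier * fund / n

def best_donation(correlation, endowment=10.0, step=0.5, **kw):
    """Pick the donation (on a grid) maximizing the correlated expectation,
    with a baseline guess that others donate half their endowment."""
    grid = [i * step for i in range(int(endowment / step) + 1)]
    return max(grid, key=lambda d: expected_payoff_correlated(
        d, baseline_other=endowment / 2, correlation=correlation,
        endowment=endowment, **kw))

# With zero believed correlation, the best response is to donate nothing,
# exactly as game theory says; with enough believed correlation, donating
# becomes worthwhile.
print(best_donation(0.0))
print(best_donation(0.9))
```

In this linear sketch the optimum still jumps from nothing to everything as the believed correlation crosses a threshold; getting genuinely intermediate donations would require something more, such as uncertainty or diminishing returns.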

Stated like that, this sounds irrational; some economists have taken to calling it “magical thinking”. Yet, as I always like to point out to such economists: On average, people who do that make more money in the games. Economists playing other economists always make very little money in these games, because they turn on each other immediately. So who is “irrational” now?

Indeed, if you ask people to predict how others will behave in these games, they generally do better than the game theory prediction: They say, correctly, that some people will give nothing, most will give something, and hardly any will give everything. The same “reasonableness” that they use to motivate their own decisions, they also accurately apply to forecasting the decisions of others.

Of course, to say that something is “reasonable” may ultimately just be to say that it conforms well to our heuristics. To really have a theory, I need to specify exactly what those heuristics are.

“Don’t put all your eggs in one basket” seems to be one, but it’s probably not the only one that matters; my guess is that there are circumstances in which people would actually choose all-or-nothing, like if we said that the multiplier was 0.5 (so everyone giving to the group would make everyone worse off) or 10 (so that giving to the group makes you and everyone else way better off).

“Higher payoffs are better” is probably one as well, but precisely formulating that is actually surprisingly difficult. Higher payoffs for you? For the group? Conditional on what? Do you hold others’ behavior constant, or assume it is somehow affected by your own choices?

And of course, the theory wouldn’t be much good if it only worked on public goods games (though even that would be a substantial advance at this point). We want a theory that explains a broad class of human behavior; we can start with simple economics experiments, but ultimately we want to extend it to real-world choices.