The MSRP is a very simple concept: The manufacturer suggests that all retailers sell the product (at least the initial run) at precisely this price.

Why would they want to do that? There is basically only one possible reason: They are trying to sustain tacit collusion.

The game theory of this is rather subtle: It requires that both manufacturers and retailers engage in long-term relationships with one another, and can pick and choose who to work with based on the history of past behavior. Both of these conditions hold in most real-world situations—indeed, the fact that they don’t hold very well in the agriculture industry is probably why we don’t see MSRP on produce.

If pricing were decided by random matching with no long-term relationships or past history, MSRP would be useless. Each firm would have little choice but to set their own optimal price, probably just slightly over their own marginal cost. Even if the manufacturer suggested an MSRP, retailers would promptly and thoroughly ignore it.

This is because the one-shot Bertrand pricing game has a unique Nash equilibrium, at pricing just above marginal cost. The basic argument is as follows: If I price cheaper than you, I can claim the whole market. As long as it’s profitable for me to do that, I will. The only time it’s not profitable for me to undercut you in this way is if we are both charging just slightly above marginal cost—so that is what we shall do, in Nash equilibrium. Human beings don’t always play according to the Nash equilibrium, but for-profit corporations do so quite consistently. Humans have limited attention and moral values; corporations have accounting departments and a fanatical devotion to the One True Profit.
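The undercutting logic can be sketched in a few lines of Python. This is a toy model with made-up numbers (two firms, a shared marginal cost of $10, prices in 1-cent increments, cheapest firm takes the whole market), not a claim about any real industry:

```python
# One-shot Bertrand undercutting sketch. Illustrative assumptions:
# two identical firms, marginal cost $10.00, 1-cent price ticks,
# and the cheaper firm captures the entire market.
MARGINAL_COST = 10.00
TICK = 0.01  # smallest price increment

def best_response(rival_price):
    """Undercut the rival by one tick while that is still profitable;
    otherwise settle at one tick above marginal cost."""
    undercut = round(rival_price - TICK, 2)
    if undercut > MARGINAL_COST:
        return undercut
    return round(MARGINAL_COST + TICK, 2)

# Start from a high price and let the firms take turns undercutting.
price = 50.00
while True:
    new_price = best_response(price)
    if new_price == price:
        break
    price = new_price

print(price)  # converges to one tick above marginal cost: 10.01
```

However high the starting price, iterated best responses grind it down to just above marginal cost, which is the unique Nash equilibrium described above.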

But the iterated Bertrand pricing game is quite different. If instead of making only one pricing decision, we make many pricing decisions over time, always with a high probability of encountering the same buyers and sellers again in the future, then I may not want to undercut your price, for fear of triggering a price war that will hurt both of our firms.

Much like how the Iterated Prisoner’s Dilemma can sustain cooperation in Nash equilibrium while the one-shot Prisoner’s Dilemma cannot, the iterated Bertrand game can sustain collusion as a Nash equilibrium.
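A standard way to see when collusion is sustainable is the "grim trigger" calculation. As a rough sketch, with illustrative assumptions not in the text (firms split the monopoly profit evenly while colluding, a deviator undercuts and grabs the whole profit once, and the resulting price war drives everyone's profit to zero forever):

```python
# Grim-trigger check for collusion in the iterated Bertrand game.
# Illustrative assumptions: n firms split the monopoly profit pi
# each period while colluding; a deviator captures pi once, after
# which a price war drives all profits to zero.
def collusion_sustainable(n_firms, delta, pi=1.0):
    """Compare colluding forever against deviating once,
    where delta is the per-period discount factor."""
    collude_value = (pi / n_firms) / (1 - delta)   # your share, forever
    deviate_value = pi                             # the whole pie, once
    return collude_value >= deviate_value          # i.e. delta >= 1 - 1/n

# With two firms, any delta >= 0.5 sustains collusion;
# more firms require more patience.
print(collusion_sustainable(2, 0.6))   # True
print(collusion_sustainable(10, 0.6))  # False: ten firms need delta >= 0.9
```

The condition reduces to delta >= 1 - 1/n: collusion holds as a Nash equilibrium whenever firms are patient enough and expect to keep meeting each other, which is exactly the long-term-relationship requirement discussed above.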

There is in fact a vast number of possible equilibria in the iterated Bertrand game. If prices were infinitely divisible, there would be an infinite number of equilibria. In reality, there are hundreds or thousands of equilibria, depending on how finely divisible the price may be.

This makes the iterated Bertrand game a coordination game—there are many possible equilibria, and our task is to figure out which one to coordinate on.

If we had perfect information, we could deduce what the monopoly price would be, and then all choose the monopoly price; this would be what we call “payoff dominant”, and it’s often what people actually try to choose in real-world coordination games.

But in reality, the monopoly price is a subtle and complicated thing, and might not even be the same between different retailers. So if we each try to compute a monopoly price, we may end up with different results, and then we could trigger a price war and end up driving all of our profits down. If only there were some way to communicate with one another, and say what price we all want to set?

There are all sorts of subtler arguments about how MSRP is justifiable, but as far as I can tell they all fall flat. If you’re worried about retailers not promoting your product enough, enter into a contract requiring them to promote. Proposing a suggested price is clearly nothing but an attempt to coordinate tacit—frankly not even that tacit—collusion.

MSRP also probably serves another, equally suspect, function, which is to manipulate consumers using the anchoring heuristic: If the MSRP is $59.99, then when it does go on sale for $49.99 you feel like you are getting a good deal; whereas, if it had just been priced at $49.99 to begin with, you might still have felt that it was too expensive. I see no reason why this sort of crass manipulation of consumers should be protected under the law either, especially when it would be so easy to avoid.

There are all sorts of ways for firms to tacitly collude with one another, and we may not be able to regulate them all. But the MSRP is literally printed on the box. It’s so utterly blatant that we could very easily make it illegal with hardly any effort at all. The fact that we allow such overt price communication makes a mockery of our antitrust law.

This week includes Labor Day, the holiday where we are perhaps best justified in taking the whole day off from work and doing nothing. Labor Day is sort of the moderate social democratic counterpart to the explicitly socialist holiday May Day.

The right wing in this country has done everything in their power to expand the definition of “socialism”, which is probably why most young people now have positive views of socialism. There was a time when FDR was seen as an alternative to socialism; but now I’m pretty sure he’d just be called a socialist.

Because of this, I am honestly not sure whether I should be considered a socialist. I definitely believe in the social democratic welfare state epitomized by Scandinavia, but I definitely don’t believe in total collectivization of all means of production.

Indeed, I think there is reason to believe that a worker co-op is a much more natural outcome for free markets under a level playing field than a conventional corporation, and the main reason we have corporations is actually that capitalism arose out of (and in response to) feudalism.

Most things aren’t done by the top 1%. There are a handful of individuals (namely, scientists who make seminal breakthroughs: Charles Darwin, Marie Curie, Albert Einstein, Rosalind Franklin, Alan Turing, Jonas Salk) who are so super-productive that they might conceivably deserve billionaire-level compensation—but they are almost never the ones who are actually billionaires. If markets were really distributing capital to those who would use it most productively, there’s no reason to think that inequality would be so self-sustaining—much less self-enhancing as it currently seems to be.

But when you realize that capitalism emerged out of a system where the top 1% (or less) already owned most things, and did so by a combination of “divine right” ideology and direct, explicit violence, this inequality becomes a lot less baffling. We never had a free market on a level playing field. The closest we’ve ever gotten has always been through social-democratic reforms (like the New Deal and Scandinavia).

How does this result in corporations? Well, when all the wealth is held by a small fraction of individuals, how do you start a business? You have to borrow money from the people who have it. Borrowing makes you beholden to your creditors, and puts you at great risk if your venture fails (especially back in the days when there were debtor’s prisons—and we’re starting to go back that direction!). Equity provides an alternative: In exchange for giving them the downside risk if your venture fails, you also give your creditors—now shareholders—the upside risk if your venture succeeds. But at the end of the day when your business has succeeded, where did most of the profits go? Into the hands of the people who already had money to begin with, who did nothing to actually contribute to society. The world would be better off if those people had never existed and their wealth had simply been shared with everyone else.

Compare this to what would happen if we all started with similar levels of wealth. (How much would each of us have? Total US wealth of about $44 trillion, spread among a population of 328 million, is about $130,000 each. I don’t know about you, but I think I could do quite a bit with that.) When starting a business, you wouldn’t go heavily into debt or sign away ownership of your company to some billionaire; you’d gather a group of dedicated partners, each of whom would contribute money and effort into building the business. As you added on new workers, it would make sense to pool their assets, and give them a share of the company as well. The natural structure for your business would be not a shareholder corporation, but a worker-owned cooperative.
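The back-of-the-envelope figure in that parenthetical is easy to check:

```python
# Checking the equal-wealth arithmetic from the text:
# about $44 trillion in total US wealth, about 328 million people.
total_us_wealth = 44e12
us_population = 328e6

per_person = total_us_wealth / us_population
print(round(per_person))  # 134146, i.e. roughly $130,000 each
```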

I think on some level the super-rich actually understand this. If you look closely at the sort of policies they fight for, they really aren’t capitalist. They don’t believe in free, unfettered markets where competition reigns. They believe in monopoly, lobbying, corruption, nepotism, and above all, low taxes. (There’s actually nothing in the basic principles of capitalism that says taxes should be low. Taxes should be as high as they need to be to cover public goods—no higher, and no lower.) They don’t want to provide nationalized healthcare, not because they believe that private healthcare competition is more efficient (no one who looks at the data for even a few minutes can honestly believe that—US healthcare is by far the most expensive in the world), but because they know that it would give their employees too much freedom to quit and work elsewhere. Donald Trump doesn’t want a world where any college kid with a brilliant idea and a lot of luck can overthrow his empire; he wants a world where everyone owes him and his family personal favors that he can call in to humiliate them and exert his power. That’s not capitalism—it’s feudalism.

Crowdfunding also provides an interesting alternative; we might even call it the customer-owned cooperative. Kickstarter and Patreon provide a very interesting new economic model—still entirely within the realm of free markets—where customers directly fund production and interact with producers to decide what will be produced. This might turn out to be even more efficient—and notice that it would run a lot more smoothly if we had all started with a level playing field.

Establishing such a playing field, of course, requires a large amount of redistribution of wealth. Is this socialism? If you insist. But I think it’s more accurate to describe it as reparations for feudalism (not to mention colonialism). We aren’t redistributing what was fairly earned in free markets; we are redistributing what was stolen, so that from now on, wealth can be fairly earned in free markets.

In this world there are people of primitive cultures, with a population that is slowly declining, trying to survive a constant threat of violence in the aftermath of colonialism. But you already knew that, of course.

What you may not have realized is that some of these people are actively hunted by other people, slaughtered so that their remains can be sold on the black market.

I am referring of course to elephants. Maybe those weren’t the people you first had in mind?

Elephants are not human in the sense of being Homo sapiens; but as far as I am concerned, they are people in a moral sense.

Elephants take as long to mature as humans, and spend most of their childhood learning. They are born with brains only 35% of the size of their adult brains, much as we are born with brains 28% the size of our adult brains. Their encephalization quotients range from about 1.5 to 2.4, comparable to chimpanzees.

But as an economist, when I first learned about ivory-burning, it seemed like a really, really bad idea.

Why? Supply and demand. By destroying supply, you have just raised the market price of ivory. You have therefore increased the market incentives for poaching elephants and rhinos.

Yet it turns out I was wrong about this, as were many other economists. I looked at the empirical research, and changed my mind substantially. Ivory-burning is not such a bad idea after all.

Here was my reasoning before: If I want to reduce the incentives to produce something, what do I need to do? Lower the price. How do I do that? I need to increase the supply. Economists have made several proposals for how to do that, and until I looked at the data I would have expected them to work; but they haven’t.

The best way to increase supply is to create synthetic ivory that is cheap and very difficult to tell apart from the real thing. This has been done, but it didn’t work. For some reason, sellers try to hide the expensive real ivory in with the cheap synthetic ivory. I admit I actually have trouble understanding this; if you can’t sell it at full price, why even bother with the illegal real ivory? Maybe their customers have methods of distinguishing the two that the regulators don’t? If so, why aren’t the regulators using those methods? Another concern with increasing the supply of ivory is that it might reduce the stigma of consuming ivory, thereby also increasing the demand.

The most effective response to ivory trade is an absolutely categorical ban with no loopholes. To fight “ghost ivory”, we should remove exceptions for old ivory, offering buybacks for any antiques with a verifiable pedigree and a brief period of no-penalty surrender for anything with no such records. The only legal ivory must be for medical and scientific purposes, and its sourcing records must be absolutely impeccable—just as we do with human remains.

Even synthetic ivory must also be banned, at least if it’s convincing enough that real ivory could be hidden in it. You can make something you call “synthetic ivory” that serves a similar consumer function, but it must be different enough that it can be easily verified at customs inspections.

The need for a categorical ban is what makes the current US proposal dangerous. The particular exceptions it carves out are not all that large, but the fact that it carves out exceptions at all makes enforcement much more difficult. To his credit, Trump himself doesn’t seem very keen on the proposal, which may mean that it is dead in the water. I don’t get to say this often, but so far Trump seems to be making the right choice on this one.

Though the economic theory predicted otherwise, the empirical data is actually quite clear: The most effective way to save elephants from poaching is an absolutely categorical ban on ivory.

Ivory-burning is a signal of commitment to such a ban. Any ivory we find being sold, we will burn. Whoever was trying to sell it will lose their entire investment. Find more, and we will burn that too.

This post is going to be a bit different: Yes, we behave irrationally when it comes to probability. Why?

Why aren’t we optimal expected utility maximizers?

This question is not as simple as it sounds. Some of the ways that human beings deviate from neoclassical behavior are simply because neoclassical theory requires levels of knowledge and intelligence far beyond what human beings are capable of; basically anything requiring “perfect information” qualifies, as does any game theory prediction that involves solving extensive-form games with infinite strategy spaces by backward induction. (Don’t feel bad if you have no idea what that means; that’s kind of my point. Solving infinite extensive-form games by backward induction is an unsolved problem in game theory; just this past week I saw a new paper presented that offered a partial potential solution—and yet we expect people to do it optimally every time?)

I’m also not going to include questions of fundamental uncertainty, like “Will Apple stock rise or fall tomorrow?” or “Will the US go to war with North Korea in the next ten years?” where it isn’t even clear how we would assign a probability. (Though I will get back to them, for reasons that will become clear.)

No, let’s just look at the absolute simplest cases, where the probabilities are all well-defined and completely transparent: Lotteries and casino games. Why are we so bad at that?

Lotteries are not a computationally complex problem. You figure out how much the prize is worth to you, multiply it by the probability of winning—which is clearly spelled out for you—and compare that to how much the ticket price is worth to you. The most challenging part lies in specifying your marginal utility of wealth—the “how much it’s worth to you” part—but that’s something you basically had to do anyway, to make any kind of trade-offs on how to spend your time and money. Maybe you didn’t need to compute it quite so precisely over that particular range of parameters, but you need at least some idea how much $1 versus $10,000 is worth to you in order to get by in a market economy.
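As a minimal sketch of that calculation, with entirely made-up numbers (a $2 ticket, a $10,000 prize at 1-in-10,000 odds, a $30,000 starting wealth, and log utility standing in for diminishing marginal utility of wealth):

```python
import math

# Sketch of the lottery calculation described above. All numbers
# are hypothetical: $2 ticket, $10,000 prize, 1-in-10,000 odds,
# and log utility as a stand-in for diminishing marginal utility.
wealth = 30_000.0
ticket_price = 2.0
prize = 10_000.0
p_win = 1 / 10_000

def utility(w):
    return math.log(w)  # concave: each extra dollar matters less

u_no_play = utility(wealth)
u_play = (p_win * utility(wealth - ticket_price + prize)
          + (1 - p_win) * utility(wealth - ticket_price))

print(u_play < u_no_play)  # True: the ticket is a bad deal here
```

The expected dollar value is already negative (you pay $2 for an expected $1 of winnings), and risk aversion only makes the comparison worse. The whole computation is one multiplication and one comparison, which is the puzzle: it should be easy.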

Casino games are a bit more complicated, but not much, and most of the work has been done for you; you can look on the Internet and find tables of probability calculations for poker, blackjack, roulette, craps and more. Memorizing all those probabilities might take some doing, but human memory is astonishingly capacious, and part of being an expert card player, especially in blackjack, seems to involve memorizing a lot of those probabilities.

Furthermore, by any plausible expected utility calculation, lotteries and casino games are a bad deal. Unless you’re an expert poker player or blackjack card-counter, your expected income from playing at a casino is always negative—and the casino set it up that way on purpose.

Why, then, can lotteries and casinos stay in business? Why are we so bad at such a simple problem?

Clearly we are using some sort of heuristic judgment in order to save computing power, and the people who make lotteries and casinos have designed formal models that can exploit those heuristics to pump money from us. (Shame on them, really; I don’t fully understand why this sort of thing is legal.)

But why use this particular heuristic? Indeed, why use a heuristic at all for such a simple problem?

I think it’s helpful to keep in mind that these simple problems are weird; they are absolutely not the sort of thing a tribe of hunter-gatherers is likely to encounter on the savannah. It doesn’t make sense for our brains to be optimized to solve poker or roulette.

The sort of problems that our ancestors encountered—indeed, the sort of problems that we encounter, most of the time—were not problems of calculable probability risk; they were problems of fundamental uncertainty. And they were frequently matters of life or death (which is why we’d expect them to be highly evolutionarily optimized): “Was that sound a lion, or just the wind?” “Is this mushroom safe to eat?” “Is that meat spoiled?”

In fact, many of the uncertainties most important to our ancestors are still important today: “Will these new strangers be friendly, or dangerous?” “Is that person attracted to me, or am I just projecting my own feelings?” “Can I trust you to keep your promise?” These sorts of social uncertainties are even deeper; it’s not clear that any finite being could ever totally resolve its uncertainty surrounding the behavior of other beings with the same level of intelligence, as the cognitive arms race continues indefinitely. The better I understand you, the better you understand me—and if you’re trying to deceive me, as I get better at detecting deception, you’ll get better at deceiving.

Personally, I think that it was precisely this sort of feedback loop that resulted in human beings getting such ridiculously huge brains in the first place. Chimpanzees are pretty good at dealing with the natural environment, maybe even better than we are; but even young children can outsmart them in social tasks any day. And once you start evolving for social cognition, it’s very hard to stop; basically you need to be constrained by something very fundamental, like, say, maximum caloric intake or the shape of the birth canal. Where chimpanzees look like their brains were what we call an “interior solution”, where evolution optimized toward a particular balance between cost and benefit, human brains look more like a “corner solution”, where the evolutionary pressure was entirely in one direction until we hit up against a hard constraint. That’s exactly what one would expect to happen if we were caught in a cognitive arms race.

What sort of heuristic makes sense for dealing with fundamental uncertainty—as opposed to precisely calculable probability? Well, you don’t want to compute a utility function and multiply it by a probability, because that adds all sorts of extra computation and you have no idea what probability to assign in the first place. But you’ve got to do something like that in some sense, because that really is the optimal way to respond.

So here’s a heuristic you might try: Separate events into some broad categories based on how frequently they seem to occur, and what sort of response would be necessary.

Some things, like the sun rising each morning, seem to always happen. So you should act as if those things are going to happen pretty much always, because they do happen… pretty much always.

Other things, like rain, seem to happen frequently but not always. So you should look for signs that those things might happen, and prepare for them when the signs point in that direction.

Still other things, like being attacked by lions, happen very rarely, but are a really big deal when they do. You can’t go around expecting those to happen all the time, that would be crazy; but you need to be vigilant, and if you see any sign that they might be happening, even if you’re pretty sure they’re not, you may need to respond as if they were actually happening, just in case. The cost of a false positive is much lower than the cost of a false negative.

And still other things, like people sprouting wings and flying, never seem to happen. So you should act as if those things are never going to happen, and you don’t have to worry about them.

This heuristic is quite simple to apply once set up: It can simply slot in memories of when things did and didn’t happen in order to decide which category they go in—i.e., the availability heuristic. If you can remember a lot of examples of something filed under “almost never”, maybe you should move it to “unlikely” instead. If you get a really big number of examples, you might even want to move it all the way to “likely”.

Another large advantage of this heuristic is that by combining utility and probability into one metric—we might call it “importance”, though Bayesian econometricians might complain about that—we can save on memory space and computing power. I don’t need to separately compute a utility and a probability; I just need to figure out how much effort I should put into dealing with this situation. A high probability of a small cost and a low probability of a large cost may be equally worth my time.
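A toy version of this bookkeeping might look like the following. The event names, thresholds, and numbers are all hypothetical; the point is only that one score can stand in for probability and utility together:

```python
# Toy sketch of the "importance" bookkeeping described above:
# instead of storing a probability and a utility separately, each
# event gets one score based on how often examples come to mind
# and how big a deal it typically was. All numbers are made up.
def importance(times_remembered, total_memories, typical_cost):
    frequency = times_remembered / total_memories
    return frequency * typical_cost

def category(score):
    # Hypothetical cutoffs; the text leaves the right number of
    # categories (and their thresholds) as an open question.
    if score >= 10:
        return "prepare constantly"
    elif score >= 1:
        return "watch for signs"
    elif score > 0:
        return "stay vigilant, respond to any hint"
    return "ignore"

print(category(importance(300, 1000, 10)))   # rain: frequent, modest cost
print(category(importance(1, 1000, 3000)))   # lion: rare, catastrophic
# Both score about 3 and land in the same category, matching the
# point above: they may be equally worth your attention.
```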

How might these heuristics go wrong? Well, if your environment changes sufficiently, the probabilities could shift and what seemed certain no longer is. For most of human history, “people walking on the Moon” would seem about as plausible as sprouting wings and flying away, and yet it has happened. Being attacked by lions is now exceedingly rare except in very specific places, but we still harbor a certain awe and fear of lions. And of course the availability heuristic can be greatly distorted by mass media, which makes people feel like terrorist attacks and nuclear meltdowns are common and deaths from car accidents and influenza are rare—when exactly the opposite is true.

How many categories should you set, and what frequencies should they be associated with? This part I’m still struggling with, and it’s an important piece of the puzzle I will need before I can take this theory to experiment. There is probably a trade-off between more categories giving you more precision in tailoring your optimal behavior, but costing more cognitive resources to maintain. Is the optimal number 3? 4? 7? 10? I really don’t know. Even if I could specify the number of categories, I’d still need to figure out precisely what categories to assign.

There’s a very common economics experiment called a public goods game, often used to study cooperation and altruistic behavior. I’m actually planning on running a variant of such an experiment for my second-year paper.

The game is quite simple, which is part of why it is used so frequently: You are placed into a group of people (usually about four), and given a little bit of money (say $10). Then you are offered a choice: You can keep the money, or you can donate some of it to a group fund. Money in the group fund will be multiplied by some factor (usually about two) and then redistributed evenly to everyone in the group. So for example if you donate $5, that will become $10, split four ways, so you’ll get back $2.50.
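In code, the payoff works out like this, using the parameters from the example above (four players, a $10 endowment, a multiplier of 2):

```python
# Payoff in the public goods game described above: a group of 4,
# a $10 endowment, and a multiplier of 2 on the group fund.
def payoff(my_donation, others_donations, endowment=10.0,
           multiplier=2.0, group_size=4):
    fund = (my_donation + sum(others_donations)) * multiplier
    return endowment - my_donation + fund / group_size

# The example from the text: donate $5, and your own donation
# comes back to you as only $2.50 of the fund.
print(payoff(5, [0, 0, 0]))      # 10 - 5 + 10/4 = 7.5
# If everyone donates everything, everyone doubles their money:
print(payoff(10, [10, 10, 10]))  # 10 - 10 + 80/4 = 20.0
```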

Donating more to the group will benefit everyone else, but at a cost to yourself. The game is usually set up so that the best outcome for everyone is if everyone donates the maximum amount, but the best outcome for you, holding everyone else’s choices constant, is to donate nothing and keep it all.

Yet it is a very robust finding that most people do neither of those things. There’s still a good deal of uncertainty surrounding what motivates people to donate what they do, but certain patterns have emerged:

1. Most people donate something, but hardly anyone donates everything.

2. Increasing the multiplier tends to smoothly increase how much people donate.

3. The number of people in the group isn’t very important, though very small groups (e.g. 2) behave differently from very large groups (e.g. 50).

4. Letting people talk to each other tends to increase the rate of donations.

5. Repetition of the game, or experience from previous games, tends to result in decreasing donations over time.

6. Economists donate less than other people.

Number 6 is unfortunate, but easy to explain: Indoctrination into game theory and neoclassical economics has taught economists that selfish behavior is efficient and optimal, so they behave selfishly.

Number 3 is also fairly easy to explain: Very small groups allow opportunities for punishment and coordination that don’t exist in large groups. Think about how you would respond when faced with 2 defectors in a group of 4 as opposed to 10 defectors in a group of 50. You could punish the 2 by giving less next round; but punishing the 10 would end up punishing 40 others who had contributed like they were supposed to.

Number 4 is a very interesting finding. Game theory says that communication shouldn’t matter, because there is a unique Nash equilibrium: Donate nothing. All the promises in the world can’t change what is the optimal response in the game. But in fact, human beings don’t like to break their promises, and so when you get a bunch of people together and they all agree to donate, most of them will carry through on that agreement most of the time.

Number 5 is on the frontier of research right now. There are various theoretical accounts for why it might occur, but none of the models proposed so far have much predictive power.

But my focus today will be on findings 1 and 2.

If you’re not familiar with the underlying game theory, finding 2 may seem obvious to you: Well, of course if you increase the payoff for donating, people will donate more! It’s precisely that sense of obviousness which I am going to appeal to in a moment.

In fact, the game theory makes a very sharp prediction: For N players, if the multiplier is less than N, you should always contribute nothing. Only if the multiplier becomes larger than N should you donate—and at that point you should donate everything. The game theory prediction is not a smooth increase; it’s all-or-nothing. The only time game theory predicts intermediate amounts is on the knife-edge at exactly equal to N, where each player would be indifferent between donating and not donating.
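That all-or-nothing best response is easy to verify: your marginal return on each dollar donated is the multiplier divided by the group size, and a pure money-maximizer compares it to $1 regardless of what anyone else does.

```python
# The neoclassical best response in the public goods game: each
# dollar donated returns multiplier/group_size to you, so a pure
# money-maximizer donates everything if that exceeds 1 and
# nothing if it falls short, regardless of what others do.
def neoclassical_donation(multiplier, group_size, endowment=10.0):
    marginal_return = multiplier / group_size
    if marginal_return > 1:
        return endowment   # each dollar donated returns more than $1
    elif marginal_return < 1:
        return 0.0         # each dollar donated returns less than $1
    return None            # knife-edge case: indifferent

print(neoclassical_donation(2, 4))  # 0.0 -- even in the usual setup
print(neoclassical_donation(5, 4))  # 10.0 -- the all-or-nothing jump
```

Note that the prediction jumps straight from $0 to $10 as the multiplier crosses the group size; there is no smooth increase anywhere in it.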

But it feels reasonable that increasing the multiplier should increase donation, doesn’t it? It’s a “safer bet” in some sense to donate $1 if the payoff to everyone is $3 and the payoff to yourself is $0.75 than if the payoff to everyone is $1.04 and the payoff to yourself is $0.26. The cost-benefit analysis comes out better: In the former case, you can gain up to $2 if everyone donates, but would only lose $0.25 if you donate alone; but in the latter case, you would only gain $0.04 if everyone donates, and would lose $0.74 if you donate alone.
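The figures in that comparison can be checked directly (group of 4, donating $1):

```python
# Checking the cost-benefit figures in the paragraph above
# (group of 4, a $1 donation, multiplier m).
def gain_if_all_donate(m, n=4):
    # Everyone donates $1: you get n*m/n = m back, for a net of m - 1.
    return m - 1

def loss_if_alone(m, n=4):
    # Only you donate $1: you get m/n back, for a net of m/n - 1.
    return m / n - 1

print(gain_if_all_donate(3))     # 2: gain $2 if everyone donates
print(loss_if_alone(3))          # -0.25: lose $0.25 donating alone
print(gain_if_all_donate(1.04))  # ~0.04
print(loss_if_alone(1.04))       # ~-0.74
```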

Human beings like to “sanity-check” our results against prior knowledge, making sure that everything fits together. And, of particular note for public goods games, human beings like to “hedge our bets”; we don’t like to over-commit to a single belief in the face of uncertainty.

I think this is what best explains findings 1 and 2. We don’t donate everything, because that requires committing totally to the belief that contributing is always better. We also don’t donate nothing, because that requires committing totally to the belief that contributing is always worse.

And of course we donate more as the payoffs to donating more increase; that also just seems reasonable. If something is better, you do more of it!

These choices could be modeled formally by assigning some sort of probability distribution over others’ choices, but in a rather unconventional way. We can’t simply assume that other people will randomly choose some decision and then optimize accordingly—that just gives you back the game theory prediction. We have to assume that our behavior and the behavior of others is in some sense correlated; if we decide to donate, we reason that others are more likely to donate as well.

Stated like that, this sounds irrational; some economists have taken to calling it “magical thinking”. Yet, as I always like to point out to such economists: On average, people who do that make more money in the games. Economists playing other economists always make very little money in these games, because they turn on each other immediately. So who is “irrational” now?

Indeed, if you ask people to predict how others will behave in these games, they generally do better than the game theory prediction: They say, correctly, that some people will give nothing, most will give something, and hardly any will give everything. The same “reasonableness” that they use to motivate their own decisions, they also accurately apply to forecasting the decisions of others.

Of course, to say that something is “reasonable” may be ultimately to say that it conforms to our heuristics well. To really have a theory, I need to specify exactly what those heuristics are.

“Don’t put all your eggs in one basket” seems to be one, but it’s probably not the only one that matters; my guess is that there are circumstances in which people would actually choose all-or-nothing, like if we said that the multiplier was 0.5 (so everyone giving to the group would make everyone worse off) or 10 (so that giving to the group makes you and everyone else way better off).

“Higher payoffs are better” is probably one as well, but precisely formulating that is actually surprisingly difficult. Higher payoffs for you? For the group? Conditional on what? Do you hold others’ behavior constant, or assume it is somehow affected by your own choices?

And of course, the theory wouldn’t be much good if it only worked on public goods games (though even that would be a substantial advance at this point). We want a theory that explains a broad class of human behavior; we can start with simple economics experiments, but ultimately we want to extend it to real-world choices.

Lately I’ve been so occupied by Trump and politics and various ideas from environmentalists that I haven’t really written much about the cognitive economics that was originally planned to be the core of this blog. So, I thought that this week I would take a step out of the political fray and go back to those core topics.

Why do we procrastinate? Why do we overeat? Why do we fail to exercise? It’s quite mysterious, from the perspective of neoclassical economic theory. We know these things are bad for us in the long run, and yet we do them anyway.

The reason has to do with the way our brains deal with time. We value the future less than the present—but that’s not actually the problem. The problem is that we do so inconsistently.

A perfectly-rational neoclassical agent would use time-consistent discounting; what this means is that the discount applied over a given time interval doesn’t change depending on when you are asked or how large the stakes are. If having $100 in 2019 is as good as having $110 in 2020, then having $1000 in 2019 is as good as having $1100 in 2020; and if I ask you again in 2019, you’ll still agree that having $100 in 2019 is as good as having $110 in 2020. A perfectly-rational individual would have a certain discount rate (in this case, 10% per year), and would apply it consistently at all times to all things.

This is of course not how human beings behave at all.

A much more likely pattern is that you would agree, in 2018, that having $100 in 2019 is as good as having $110 in 2020 (a discount rate of 10%). But if I wait until 2019 and then offer you the choice between $100 immediately and $120 in a year, you’ll probably take the $100 immediately—even though a year ago, you told me you wouldn’t. Your discount rate rose from 10% to at least 20% in the intervening time.

The leading model in cognitive economics right now to explain this is called hyperbolic discounting. The precise functional form of a hyperbola has been called into question by recent research, but the general pattern is definitely right: We act as though time matters a great deal when discussing time intervals that are close to us, but treat time as unimportant when discussing time intervals that are far away.
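To make the contrast concrete, here is a sketch of the two rules in Python. The functional forms are the standard ones; the 10% rate and the curvature k = 0.22 are my own choices, picked so the hyperbolic agent reproduces the $100-versus-$120 reversal above:

```python
# Exponential (time-consistent) vs hyperbolic discounting: a minimal sketch.
# The rate of 0.10 and k of 0.22 are illustrative choices of mine.

def exponential(value, delay, rate=0.10):
    """Time-consistent: each year of delay divides by (1 + rate)."""
    return value / (1 + rate) ** delay

def hyperbolic(value, delay, k=0.22):
    """Hyperbolic: divide by (1 + k * delay); steep up close, flat far away."""
    return value / (1 + k * delay)

# The choice: $100 at some date, vs $120 one year after that date.
# Exponential agent, viewed a year in advance and at the moment of choice:
prefers_later_far  = exponential(120, 2) > exponential(100, 1)   # waits
prefers_later_near = exponential(120, 1) > exponential(100, 0)   # still waits

# Hyperbolic agent:
flips_far  = hyperbolic(120, 2) > hyperbolic(100, 1)   # patient in advance...
flips_near = hyperbolic(120, 1) > hyperbolic(100, 0)   # ...impulsive up close
```

The exponential agent gives the same answer at both horizons; the hyperbolic agent prefers the larger-later payoff in advance but grabs the smaller-sooner one when it arrives.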

How does this explain procrastination and other failures of self-control over time? Let’s try an example.

Let’s say that you have a project you need to finish by the end of the day Friday, which has a benefit to you, received on Saturday, that I will arbitrarily scale at 1000 utilons.

Then, let’s say it’s Monday. You have five days to work on it, and each day of work costs you 100 utilons. If you work all five days, the project will get done.

If you skip a day of work, you will need to work so much harder that one of the following days your cost of work will be 300 utilons instead of 100. If you skip two days, you’ll have to pay 300 utilons twice. And if you skip three or more days, the project will not be finished and it will all be for naught.

If you don’t discount time at all (which, over a week, is probably close to optimal), the answer is obvious: Work all five days. Pay 100+100+100+100+100 = 500, receive 1000. Net benefit: 500.

But even if you discount time, as long as you do so consistently, you still wouldn’t procrastinate.

Let’s say your discount rate is extremely high (maybe you’re dying or something), so that each day is only worth 80% as much as the previous. A benefit that’s worth 1 on Monday is worth 0.8 if it comes on Tuesday, 0.64 if it comes on Wednesday, 0.512 if it comes on Thursday, 0.4096 if it comes on Friday, and 0.32768 if it comes on Saturday. Then instead of paying 100+100+100+100+100 to get 1000, you’re paying 100+80+64+51+41 = 336 to get 328. It’s not worth doing the project; you should just enjoy your last few days on Earth. That’s not procrastinating; that’s rationally choosing not to undertake a project that isn’t worthwhile under your circumstances.
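These numbers are easy to verify; a throwaway check, using the 0.8-per-day factor from the example:

```python
# Checking the consistent discounter's numbers: a factor of 0.8 per day,
# five days of work at 100 each, and 1000 delivered on day five (Saturday).
d = 0.8
cost = sum(100 * d**t for t in range(5))   # Monday is t = 0
benefit = 1000 * d**5
# cost ≈ 336.16 and benefit ≈ 327.68, so the project isn't worth starting.
```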

Procrastinating would look more like this: You skip the first two days, then work 100 the third day, then work 300 each of the last two days, finishing the project. If you didn’t discount at all, you would pay 100+300+300=700 to get 1000, so your net benefit has been reduced to 300.

There’s no consistent discount rate that would make this rational. If it were worth giving up an extra 200 on Thursday and Friday to gain 100 on Monday and Tuesday, you must be discounting at least 26% per day. But if you’re discounting that much, you shouldn’t bother with the project at all.
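Both claims can be checked numerically. A quick sketch, treating d as the per-day discount factor 1/(1+r) for daily rate r; the bisection helper is my own:

```python
# Verifying the "no consistent discount rate" claim.
# d is the per-day discount factor, equal to 1/(1+r) for a daily rate r.

def procrastination_gain(d):
    # Gain from skipping Mon & Tue (100 now, 100*d tomorrow),
    # minus the extra 200 paid on Thursday (d**3) and Friday (d**4).
    return 100 + 100 * d - 200 * d**3 - 200 * d**4

def project_net(d):
    # Work 100 on each of five days; receive 1000 on Saturday (d**5).
    return 1000 * d**5 - sum(100 * d**t for t in range(5))

def bisect(f, lo, hi, tol=1e-9):
    """Simple bisection root-finder (assumes a sign change on [lo, hi])."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

d_procrastinate = bisect(procrastination_gain, 0.5, 0.99)  # ≈ 0.794
d_project       = bisect(project_net, 0.5, 0.99)           # ≈ 0.806
# In daily-rate terms (1/d - 1): procrastinating pays only above about 26%,
# but the project stops being worthwhile above about 24% — no overlap.
```

Since the factor that makes procrastination pay (below about 0.794) lies strictly below the factor the project needs to be worthwhile (above about 0.806), no single consistent rate produces the procrastinating behavior.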

There is, however, an inconsistent discounting scheme under which it makes perfect sense. Suppose that instead of consistently discounting some percentage each day, psychologically it feels like this: The value of a payoff is the inverse of how far away it is, counting today as day one (that’s what it means to be hyperbolic). So the same amount of benefit which is worth 1 on Monday is only worth 1/2 if it comes on Tuesday, 1/3 if on Wednesday, 1/4 if on Thursday, and 1/5 if on Friday.

So, when thinking about your weekly schedule, you realize that by pushing back Monday’s work to Thursday, you can gain 100 today at a cost of only 200/4 = 50: the extra work costs 200 (300 instead of 100), and Thursday is day 4. And by pushing back Tuesday’s work to Friday, you can gain 100/2 = 50 today at a cost of only 200/5 = 40. So now it makes perfect sense to have fun on Monday and Tuesday, start working on Wednesday, and cram the biggest work into Thursday and Friday. As for whether the project is worth doing at all, Monday’s hyperbolic arithmetic makes it look like roughly a wash: a benefit of 1000/6 = 167 against a total discounted cost of 100/3 + 300/4 + 300/5 = 168. But the procrastination schedule at least looks far cheaper than working every day, which would cost 100/1 + 100/2 + 100/3 + 100/4 + 100/5 = 228 by Monday’s reckoning, so that is the plan you settle on.

But now think about what happens when you come to Wednesday. The work today costs 100. The work on Thursday costs 300/2 = 150. The work on Friday costs 300/3 = 100. The benefit of completing the project will be 1000/4 = 250. So you are paying 100+150+100 = 350 to get a benefit of only 250. It’s not worth it anymore! You’ve changed your mind. So you don’t work Wednesday.

At that point, it’s too late, so you don’t work Thursday, you don’t work Friday, and the project doesn’t get done. You have procrastinated away the benefits you could have gotten from doing this project. If only you could have done the work on Monday and Tuesday, then on Wednesday it would have been worthwhile to continue: 100/1+100/2+100/3 = 183 is less than the benefit of 250.
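The whole reversal can be replayed in a few lines. A sketch under the “1/day” rule above, with Monday as day 1 and the crammed 300-utilon days from the procrastination plan; the dict-of-costs encoding is my own:

```python
# Replaying the week under "value = 1 / day" hyperbolic discounting,
# counting the current day as day 1. The schedules are from the example above.

def project_net(today, work):
    """Net hyperbolic value, as seen from `today`, of finishing the project."""
    value = lambda amount, day: amount / (day - today + 1)
    benefit = value(1000, 6)   # the payoff arrives on Saturday, day 6
    cost = sum(value(c, day) for day, c in work.items() if day >= today)
    return benefit - cost

procrastinate = {3: 100, 4: 300, 5: 300}                  # rest Mon/Tue, cram Thu/Fri
diligent      = {1: 100, 2: 100, 3: 100, 4: 100, 5: 100}  # 100 every day

# Seen from Wednesday (day 3): the crammed plan costs 100 + 300/2 + 300/3 = 350
# against a benefit of 250, so you quit. Had you worked Monday and Tuesday,
# the remaining 100 + 100/2 + 100/3 ≈ 183 would still be well worth the 250.
wednesday_crammed = project_net(3, procrastinate)   # negative: abandon
wednesday_diligent = project_net(3, diligent)       # positive: keep going
```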

What went wrong? The key event was the preference reversal: While on Monday you preferred having fun on Monday and working on Thursday to working on both days, when the time came you changed your mind. Someone with time-consistent discounting would never do that; they would either prefer one or the other, and never change their mind.

One way to think about this is to imagine future versions of yourself as different people, who agree with you on most things, but not on everything. They’re like friends or family; you want the best for them, but you don’t always see eye-to-eye.

Generally we find that our future selves are less rational about these choices than our present selves are. To be clear, this doesn’t mean that we’re all declining in rationality over time. Rather, it comes from the fact that future decisions are inherently closer to our future selves than they are to our current selves, and the closer a decision gets, the more our irrational time discounting distorts it.

This is why it’s useful to plan and make commitments. If starting on Monday you committed yourself to working every single day, you’d get the project done on time and everything would work out fine. Better yet, if you committed yourself last week to starting work on Monday, you wouldn’t even feel conflicted; you would be entirely willing to pay a cost of 100/8+100/9+100/10+100/11+100/12=51 to get a benefit of 1000/13=77. So you could set up some sort of scheme where you tell your friends ahead of time that you can’t go out that week, or you turn off access to social media sites (there are apps that will do this for you), or you set up a donation to an “anti-charity” you don’t like that will trigger if you fail to complete the project on time (there are websites to do that for you).
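The commit-a-week-early numbers check out the same way; a throwaway verification, with the previous Monday counted as day 1:

```python
# Checking the committed-last-week arithmetic: deciding seven days in advance,
# the work days become days 8 through 12 and the payoff arrives on day 13,
# all under the same "value = 1 / day" hyperbolic rule.
cost = sum(100 / day for day in range(8, 13))   # ≈ 51
benefit = 1000 / 13                             # ≈ 77
# From a week out, the entire project looks cheap relative to its payoff,
# which is exactly why committing in advance feels easy.
```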

There is even a simpler way: Make a promise to yourself. This one can be tricky to follow through on, but if you can train yourself to do it, it is extraordinarily powerful and doesn’t come with the additional costs that a lot of other commitment devices involve. If you can really make yourself feel as bad about breaking a promise to yourself as you would about breaking a promise to someone else, then you can dramatically increase your own self-control with very little cost. The challenge lies in actually cultivating that sort of attitude, and then in following through with making only promises you can keep and actually keeping them. This, too, can be a delicate balance; it is dangerous to over-commit to promises to yourself and feel too much pain when you fail to meet them.
But given the strong correlations between self-control and long-term success, trying to train yourself to be a little better at it can provide enormous benefits.
If you ever get around to it, that is.

But really what bothers me is not the DSGE but the GTFO (“get the [expletive] out”); it’s not that DSGE models are used, but that it’s almost impossible to get published as a macroeconomic theorist using anything else. Defenders of DSGE typically don’t even argue anymore that it is good; they argue that there are no credible alternatives. They characterize their opponents as “dilettantes” who aren’t opposing DSGE because we disagree with it; no, it must be because we don’t understand it. (Also, regarding that post, I’d just like to note that I now officially satisfy the Athreya Axiom of Absolute Arrogance: I have passed my qualifying exams in a top-50 economics PhD program. Yet my enmity toward DSGE has, if anything, only intensified.)

Of course, that argument only makes sense if you haven’t been actively suppressing all attempts to formulate an alternative, which is precisely what DSGE macroeconomists have been doing for the last two or three decades. And yet despite this suppression, there are alternatives emerging, particularly from the empirical side. There are now empirical approaches to macroeconomics that don’t use DSGE models. Regression discontinuity methods and other “natural experiment” designs—not to mention actual experiments—are quickly rising in popularity as economists realize that these methods allow us to actually empirically test our models instead of just adding more and more mathematical complexity to them.

But there still seems to be a lingering attitude that there is no other way to do macro theory. This is very frustrating for me personally, because deep down I think what I would like to do as a career is macro theory: By temperament I have always viewed the world through a very abstract, theoretical lens, and the issues I care most about—particularly inequality, development, and unemployment—are all fundamentally “macro” issues. I left physics when I realized I would be expected to do string theory. I don’t want to leave economics now that I’m expected to do DSGE. But I also definitely don’t want to do DSGE.

Fortunately with economics I have a backup plan: I can always be an “applied microeconomist” (rather the opposite of a theoretical macroeconomist, I suppose), directly attached to the data in the form of empirical analyses or even direct, randomized controlled experiments. And there certainly is plenty of work to be done along the lines of Akerlof and Roth and Shiller and Kahneman and Thaler in cognitive and behavioral economics, which is also generally considered applied micro. I was never going to be an experimental physicist, but I can be an experimental economist. And I do get to use at least some theory: In particular, there’s an awful lot of game theory in experimental economics these days. Some of the most exciting stuff is actually in showing how human beings don’t behave the way classical game theory predicts (particularly in the Ultimatum Game and the Prisoner’s Dilemma), and trying to extend game theory into something that would fit our actual behavior. Cognitive science suggests that the result is going to end up looking quite different from game theory as we know it, and with my cognitive science background I may be particularly well-positioned to lead that charge.

Still, I don’t think I’ll be entirely satisfied if I can’t somehow bring my career back around to macroeconomic issues, and particularly the great elephant in the room of all economics, which is inequality. Underlying everything from Marxism to Trumpism, from the surging rents of Silicon Valley to the crushing poverty of Burkina Faso, to the Great Recession itself, is inequality. It is, in my view, the central question of economics: Who gets what, and why?

That is a fundamentally macro question, but you can’t even talk about that issue in DSGE as we know it; a “representative agent” inherently smooths over all inequality in the economy as though total GDP were all that mattered. A fundamentally new approach to macroeconomics is needed. Hopefully I can be part of that, but from my current position I don’t feel much empowered to fight this status quo. Maybe I need to spend at least a few more years doing something else, making a name for myself, and then I’ll be able to come back to this fight with a stronger position.

In the meantime, I guess there’s plenty of work to be done on cognitive biases and deviations from game theory.