In neoclassical theory, it is assumed (explicitly or implicitly) that human beings judge probability in something like the optimal Bayesian way: We assign prior probabilities to events, and when confronted with evidence we use the observed data to update our prior probabilities to posterior probabilities. Then, when we have to make decisions, we maximize our expected utility under our posterior probabilities.

This, of course, is nothing like how human beings actually think. Even very intelligent, rational, numerate people only engage in a vague approximation of this behavior, and only when dealing with major decisions likely to affect the course of their lives. (Yes, I literally decide which universities to attend based upon formal expected utility models. Thus far, I’ve never been dissatisfied with a decision made that way.) No one decides what to eat for lunch or what to do this weekend based on formal expected utility models—or at least I hope they don’t, because at that point the computational cost far exceeds the expected benefit.

So how do human beings actually think about probability? Well, a good place to start is to look at ways in which we systematically deviate from expected utility theory. Consider the following two choices, which together form the classic Allais paradox:

In game A, you get $1 million, guaranteed.
In game B, you have a 10% chance of getting $5 million, an 89% chance of getting $1 million, but now you have a 1% chance of getting nothing.

Which do you prefer, game A or game B?

In game C, you have an 11% chance of getting $1 million, and an 89% chance of getting nothing.

In game D, you have a 10% chance of getting $5 million, and a 90% chance of getting nothing.

Which do you prefer, game C or game D?

I have to think about it for a bit and do some calculations, and even then it’s hard, because the answer depends crucially on my projected lifetime income (which could easily exceed $3 million with a PhD, especially in economics) and on the precise form of my marginal utility (I think I have constant relative risk aversion, but I’m not sure precisely what parameter to use). In general I think I want to choose game A and game C, but I actually feel really ambivalent, because it’s not hard to find plausible parameters for my utility under which I should go for the gamble.

But if you’re like most people, you choose game A and game D.

There is no coherent expected utility by which you would do this.

Why? Either a 10% chance of $5 million instead of $1 million is worth risking a 1% chance of nothing, or it isn’t. If it is, you should play B and D. If it’s not, you should play A and C. I can’t tell you for sure whether it is worth it—I can’t even fully decide for myself—but it either is or it isn’t.
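In fact, a little arithmetic shows that the two comparisons are identical: if u0, u1, and u5 are your utilities for $0, $1 million, and $5 million, then both EU(A) − EU(B) and EU(C) − EU(D) reduce to 0.11·u1 − 0.10·u5 − 0.01·u0. Here is a minimal sketch that verifies this; the particular utility values are arbitrary placeholders, not estimates of anyone’s actual utility:

```python
def eu(gamble):
    """Expected utility of a gamble given (probability, utility) pairs."""
    return sum(p * u for p, u in gamble)

def allais_differences(u0, u1, u5):
    """EU(A)-EU(B) and EU(C)-EU(D) for utilities of $0, $1M, $5M."""
    a = eu([(1.00, u1)])
    b = eu([(0.10, u5), (0.89, u1), (0.01, u0)])
    c = eu([(0.11, u1), (0.89, u0)])
    d = eu([(0.10, u5), (0.90, u0)])
    return a - b, c - d

# Both differences reduce to 0.11*u1 - 0.10*u5 - 0.01*u0, so they
# always share the same sign: A is preferred to B if and only if
# C is preferred to D, no matter what utility values you plug in.
for u0, u1, u5 in [(0, 1, 2), (0, 10, 11), (0, 1, 5)]:
    diff_ab, diff_cd = allais_differences(u0, u1, u5)
    assert abs(diff_ab - diff_cd) < 1e-12
```

So choosing A and D (or B and C) is inconsistent with every possible assignment of utilities, which is exactly what makes the paradox a paradox.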

Yet most people have a strong intuition that they should take game A but game D. Why? What does this say about how we judge probability?

The leading theory in behavioral economics right now is cumulative prospect theory, developed by the great Kahneman and Tversky, who essentially founded the field of behavioral economics. It’s quite intimidating to try to go up against them—which is probably why we should force ourselves to do it. Fear of challenging the favorite theories of the great scientists before us is how science stagnates.

I wrote about it more in a previous post, but as a brief review, cumulative prospect theory says that instead of judging based on a well-defined utility function, we consider gains and losses as fundamentally different sorts of things, in three specific ways:

First, we are loss-averse; we feel a loss about twice as intensely as a gain of the same amount.

Second, we are risk-averse for gains, but risk-seeking for losses; we assume that gaining twice as much isn’t actually twice as good (which is almost certainly true), but we also assume that losing twice as much isn’t actually twice as bad (which is almost certainly false, and indeed contradicts the previous).

Third, we judge probabilities as more important when they are close to certainty. We make a large distinction between a 0% probability and a 0.0000001% probability, but almost no distinction at all between a 41% probability and a 43% probability.

That last part is what I want to focus on for today. In Kahneman’s model, this is a continuous, monotonic function that maps 0 to 0 and 1 to 1, but systematically overestimates probabilities below but near 1/2 and systematically underestimates probabilities above but near 1/2.

It looks something like this, where red is true probability and blue is subjective probability:
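The figure isn’t reproduced here, but the shape is easy to generate. One standard functional form from the cumulative prospect theory literature is sketched below; the specific formula and the parameter value γ = 0.61 are illustrative choices on my part, not something the argument depends on:

```python
def weight(p, gamma=0.61):
    """Tversky-Kahneman style probability weighting: maps a true
    probability p to a subjective decision weight w(p). Continuous
    and monotonic, with w(0) = 0 and w(1) = 1, overweighting small
    probabilities and underweighting large ones."""
    if p in (0.0, 1.0):
        return p  # handle the endpoints exactly
    num = p ** gamma
    return num / (num + (1 - p) ** gamma) ** (1 / gamma)

assert weight(0.0) == 0.0 and weight(1.0) == 1.0
assert weight(0.01) > 0.01   # small probabilities feel bigger...
assert weight(0.99) < 0.99   # ...and near-certain ones feel smaller
```

Note how flat the middle of this curve is: a shift from 41% to 43% barely moves the subjective weight, while a shift from 0% to 0.01% moves it a lot.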

I don’t believe this is actually how humans think, for two reasons:

1. It’s too hard. Humans are astonishingly innumerate creatures, given the enormous processing power of our brains. It’s true that we have some intuitive capacity for “solving” very complex equations, but that’s almost all within our motor system—we can “solve a differential equation” when we catch a ball, but we have no idea how we’re doing it. But probability judgments are often made consciously, especially in experiments like the Allais paradox; and the conscious brain is terrible at math. It’s actually really amazing how bad we are at math. Any model of normal human judgment should assume from the start that we will not do complicated math at any point in the process. Maybe you can hypothesize that we do so subconsciously, but you’d better have a good reason for assuming that.

2. There is no reason to do this. Why in the world would any kind of optimization system function this way? You start with perfectly good probabilities, and then instead of using them, you subject them to some bizarre, unmotivated transformation that makes them less accurate and costs computing power? You may as well hit yourself in the head with a brick.

So, why might it look like we are doing this? Well, my proposal, admittedly still rather half-baked, is that human beings don’t assign probabilities numerically at all; we assign them categorically.

You may call this, for lack of a better term, categorical prospect theory.

My theory is that people don’t actually have in their head “there is an 11% chance of rain today” (unless they specifically heard that from a weather report this morning); they have in their head “it’s fairly unlikely that it will rain today”.

That is, we assign some small number of discrete categories of probability, and fit things into them. I’m not sure what exactly the categories are, and part of what makes my job difficult here is that they may be fuzzy-edged and vary from person to person, but roughly speaking, I think they correspond to the sort of things psychologists usually put on Likert scales in surveys: Impossible, almost impossible, very unlikely, unlikely, fairly unlikely, roughly even odds, fairly likely, likely, very likely, almost certain, certain. If I’m putting numbers on these probability categories, they go something like this: 0, 0.001, 0.01, 0.10, 0.20, 0.50, 0.80, 0.90, 0.99, 0.999, 1.
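To make the proposal concrete, here is a minimal sketch of what categorical judgment might look like computationally. The category labels and anchor values are the ones proposed above; the nearest-anchor snapping rule is my own simplifying assumption about how the sorting works:

```python
# Category anchors as proposed above; snapping a rough probability
# estimate to the nearest anchor is one simple model of categorical
# judgment (real category edges may be fuzzy and person-dependent).
CATEGORIES = [
    (0.0,   "impossible"),
    (0.001, "almost impossible"),
    (0.01,  "very unlikely"),
    (0.10,  "unlikely"),
    (0.20,  "fairly unlikely"),
    (0.50,  "roughly even odds"),
    (0.80,  "fairly likely"),
    (0.90,  "likely"),
    (0.99,  "very likely"),
    (0.999, "almost certain"),
    (1.0,   "certain"),
]

def categorize(p):
    """Return the category label whose anchor is closest to p."""
    anchor, label = min(CATEGORIES, key=lambda c: abs(c[0] - p))
    return label

# A shift from 0.4 to 0.6 changes nothing...
assert categorize(0.4) == categorize(0.6) == "roughly even odds"
# ...while a tiny shift from 0.001 to 0.01 crosses a boundary.
assert categorize(0.001) != categorize(0.01)
```

The assertions at the bottom illustrate the key behavioral prediction: large probability changes in the middle of the range are invisible, while small changes near 0 or 1 jump categories.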

Notice that this would preserve the same basic effect as cumulative prospect theory: You care a lot more about differences in probability when they are near 0 or 1, because those are much more likely to actually shift your category. Indeed, as written, you wouldn’t care about a shift from 0.4 to 0.6 at all, despite caring a great deal about a shift from 0.001 to 0.01.

How does this solve the above problems?

It’s easy. Not only do you avoid computing a probability and then transforming it for no reason; you never even have to compute it precisely. Just get it within some vague error bounds and that will tell you which box it goes in. Instead of computing an approximation to a continuous function, you just slot things into a small number of discrete boxes, a dozen at the most.

That explains why we would do it: It’s easy. Our brains need to conserve their capacity, and needed to even more in our ancestral environment, when we struggled to survive. Rather than iterating your approximation to arbitrary precision, you just get within 0.1 or so and call it a day. That saves time and computing power, which saves energy, which could save your life.

What new problems have I introduced?

1. It’s very hard to know exactly where people’s categories are, whether they vary between individuals or even between situations, and whether they are fuzzy-edged.

2. If you take the model I just gave literally, even quite large probability changes will have absolutely no effect as long as they remain within a category such as “roughly even odds”.

With regard to 2, I think Kahneman may himself be able to save me, with his dual process theory concept of System 1 and System 2. What I’m really asserting is that System 1, the fast, intuitive judgment system, operates on these categories. System 2, on the other hand, the careful, rational thought system, can actually make use of proper numerical probabilities; it’s just very costly to boot up System 2 in the first place, much less ensure that it actually gets the right answer.

How might we test this? Well, I think that people are more likely to use System 1 when any of the following are true:

They are under harsh time-pressure

The decision isn’t very important

The intuitive judgment is fast and obvious

And conversely they are likely to use System 2 when the following are true:

They have plenty of time to think

The decision is very important

The intuitive judgment is difficult or unclear

So, it should be possible to arrange an experiment varying these parameters, such that in one treatment people almost always use System 1, and in another they almost always use System 2. And then, my prediction is that in the System 1 treatment, people will in fact not change their behavior at all when you change the probability from 15% to 25% (fairly unlikely) or 40% to 60% (roughly even odds).

To be clear, you can’t just present people with this choice between game E and game F:

Game E: You get a 60% chance of $50, and a 40% chance of nothing.

Game F: You get a 40% chance of $50, and a 60% chance of nothing.

People will obviously choose game E. If you can directly compare the numbers and one game is strictly better in every way, I think even without much effort people will be able to choose correctly.

Instead, what I’m saying is that if you make the following offers to two completely different sets of people, you will observe little difference in their choices, even though under expected utility theory you should.
Group I receives a choice between game E and game G:

Game E: You get a 60% chance of $50, and a 40% chance of nothing.

Game G: You get a 100% chance of $20.

Group II receives a choice between game F and game G:

Game F: You get a 40% chance of $50, and a 60% chance of nothing.

Game G: You get a 100% chance of $20.

Under two very plausible assumptions about marginal utility of wealth, I can determine what the rational judgment should be in each choice.

The first assumption is that marginal utility of wealth is decreasing, so people are risk-averse (at least for gains, which these are). The second assumption is that most people’s lifetime income is at least two orders of magnitude higher than $50.

By the first assumption, group II should choose game G. The expected income is precisely the same, and being even ever so slightly risk-averse should make you go for the guaranteed $20.

By the second assumption, group I should choose game E. Yes, there is some risk, but because $50 is not a huge sum to you, your risk aversion should be small, and the higher expected income ($30 versus $20) should sway you.
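Both judgments are easy to verify numerically. Here is a sketch under logarithmic utility of wealth (one common decreasing-marginal-utility form) with an assumed baseline wealth of $100,000; both of those specific choices are mine, purely for illustration:

```python
import math

def eu_log(gamble, wealth=100_000):
    """Expected log utility of final wealth, for a gamble given as a
    list of (probability, payoff) pairs, starting from baseline wealth.
    Log utility has decreasing marginal utility, i.e. risk aversion."""
    return sum(p * math.log(wealth + x) for p, x in gamble)

game_e = [(0.6, 50), (0.4, 0)]   # 60% chance of $50
game_f = [(0.4, 50), (0.6, 0)]   # 40% chance of $50
game_g = [(1.0, 20)]             # guaranteed $20

# Group I: E beats G, because the $10 edge in expected income
# ($30 vs. $20) dwarfs the tiny risk premium at stakes this small.
assert eu_log(game_e) > eu_log(game_g)
# Group II: G beats F, because expected incomes are equal ($20 each)
# and any risk aversion at all favors the sure thing.
assert eu_log(game_g) > eu_log(game_f)
```

The margin in the second comparison is minuscule, which is exactly the point: a rational agent should still take it, but an intuitive category-based judgment plausibly won’t notice it at all.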

But I predict that most people will choose game G in both cases, and (within statistical error) the same proportion will choose F as chose E—thus showing that the difference between a 40% chance and a 60% chance was in fact negligible to their intuitive judgments.

However, this doesn’t actually disprove Kahneman’s theory; perhaps that part of the subjective probability function is just that flat. For that, I need to set up an experiment where I show discontinuity. I need to find the edge of a category and get people to switch categories sharply. Next week I’ll talk about how we might pull that off.

The wide prevalence and enormous power of bigotry should be obvious. But economists rarely talk about it, and I think I know why: Their models say it shouldn’t exist. The free market is supposed to automatically eliminate all forms of bigotry, because they are inefficient.

The argument for why this is supposed to happen actually makes a great deal of sense: If a company has the choice of hiring a White man or a Black woman to do the same job, but they know that the market wage for Black women is lower than the market wage for White men (which it most certainly is), and they will do the same quality and quantity of work, why wouldn’t they hire the Black woman? And indeed, if human beings were rational profit-maximizers, this is probably how they would think.

More recently some neoclassical models have been developed to try to “explain” this behavior, but always without daring to give up the precious assumption of perfect rationality. So instead we get the two leading neoclassical theories of discrimination, which are statistical discrimination and taste-based discrimination.

Statistical discrimination is the idea that under asymmetric information (and we surely have that), features such as race and gender can act as signals of quality because they are correlated with actual quality for various reasons (usually left unspecified), so it is not irrational after all to choose based upon them, since they’re the best you have.

Taste-based discrimination is the idea that people are rationally maximizing preferences that simply aren’t oriented toward maximizing profit or well-being. Instead, they have this extra term in their utility function that says they should also treat White men better than women or Black people. It’s just this extra thing they have.

Statistical discrimination, at least, could be part of what’s going on. Knowing that Black people are less likely to be highly educated than Asians (as they definitely are) might actually be useful information in some circumstances… then again, you list your degree on your resume, don’t you? Knowing that women are more likely to drop out of the workforce after having a child could rationally (if coldly) affect your assessment of future productivity. But shouldn’t the fact that women CEOs outperform men CEOs be incentivizing shareholders to elect women CEOs? Yet that doesn’t seem to happen. Also, in general, people seem to be pretty bad at statistics.

The bigger problem with statistical discrimination as a theory is that it’s really only part of a theory. It can explain why not all of the discrimination is irrational, but some of it still must be. You need to explain why there are these huge disparities between groups in the first place, and statistical discrimination cannot do that. In order for the statistics to differ this much, you need a past history of discrimination that wasn’t purely statistical.

Taste-based discrimination, on the other hand, is not a theory at all. It’s special pleading. Rather than admit that people are failing to rationally maximize their utility, we just redefine their utility so that whatever they happen to be doing now “maximizes” it.

This is really what makes the Axiom of Revealed Preference so insidious; if you really take it seriously, it says that whatever you do, must by definition be what you preferred. You can’t possibly be irrational, you can’t possibly be making mistakes of judgment, because by definition whatever you did must be what you wanted. Maybe you enjoy bashing your head into a wall, who am I to judge?

I mean, on some level taste-based discrimination is what’s happening; people think that the world is a better place if they put women and Black people in their place. So in that sense, they are trying to “maximize” some “utility function”. (By the way, most human beings behave in ways that are provably inconsistent with maximizing any well-defined utility function—the Allais Paradox is a classic example.) But the whole framework of calling it “taste-based” is a way of running away from the real explanation. If it’s just “taste”, well, it’s an unexplainable brute fact of the universe, and we just need to accept it. If people are happier being racist, what can you do, eh?

So I think it’s high time to start calling it what it is. This is not a question of taste. This is a question of tribal instinct. This is the product of millions of years of evolution optimizing the human brain to act in the perceived interest of whatever it defines as its “tribe”. It could be yourself, your family, your village, your town, your religion, your nation, your race, your gender, or even the whole of humanity or beyond into all sentient beings. But whatever it is, the fundamental tribe is the one thing you care most about. It is what you would sacrifice anything else for.

And what we learned on November 9 this year is that an awful lot of Americans define their tribe in very narrow terms. Nationalistic and xenophobic at best, racist and misogynistic at worst.

The wonder, in fact, is that we have made as much progress as we have. Tribal instinct is not a strange aberration of human behavior; it is our evolutionary default setting.

Indeed, perhaps it is unreasonable of me to ask humanity to change its ways so fast! We had millions of years to learn how to live the wrong way, and I’m giving you only a few centuries to learn the right way?

The problem, of course, is that the pace of technological change leaves us with no choice. It might be better if we could wait a thousand years for people to gradually adjust to globalization and become cosmopolitan; but climate change won’t wait a hundred, and nuclear weapons won’t wait at all. We are thrust into a world that is changing very fast indeed, and I understand that it is hard to keep up; but there is no way to turn back that tide of change.

Yet “turn back the tide” does seem to be part of the core message of the Trump voter, once you get past the racial slurs and sexist slogans. People are afraid of what the world is becoming. They feel that it is leaving them behind. Coal miners fret that we are leaving them behind by cutting coal consumption. Factory workers fear that we are leaving them behind by moving the factory to China or inventing robots to do the work in half the time for half the price.

And truth be told, they are not wrong about this. We are leaving them behind. Because we have to. Because coal is polluting our air and destroying our climate, we must stop using it. Moving the factories to China has raised them out of the most dire poverty, and given us a fighting chance toward ending world hunger. Inventing the robots is only the next logical step in the process that has carried humanity forward from the squalor and suffering of primitive life to the security and prosperity of modern society—and it is a step we must take, for the progress of civilization is not yet complete.

They wouldn’t have to let themselves be left behind, if they were willing to accept our help and learn to adapt. That carbon tax that closes your coal mine could also pay for your basic income and your job-matching program. The increased efficiency from the automated factories could provide an abundance of wealth that we could redistribute and share with you.

Above all, it would require them to redefine their tribe, and start listening to—and valuing the lives of—people that they currently do not.

Perhaps we need to redefine our tribe as well; many liberals have argued that we mistakenly—and dangerously—did not include people like Trump voters in our tribe. But to be honest, that rings a little hollow to me: We aren’t the ones threatening to deport people or ban them from entering our borders. We aren’t the ones who want to build a wall (though some have in fact joked about building a wall to separate the West Coast from the rest of the country, I don’t think many people really want to do that). Perhaps we live in a bubble of liberal media? But I make a point of reading outlets like The American Conservative and The National Review for other perspectives (I usually disagree, but I do at least read them); how many Trump voters do you think have ever read the New York Times, let alone Huffington Post? Cosmopolitans almost by definition have the more inclusive tribe, the more open perspective on the world (in fact, do I even need the “almost”?).

Nor do I think we are actually ignoring their interests. We want to help them. We offer to help them. In fact, I want to give these people free money—that’s what a basic income would do, it would take money from people like me and give it to people like them—and they won’t let us, because that’s “socialism”! Rather, we are simply refusing to accept their offered solutions, because those so-called “solutions” are beyond unworkable; they are absurd, immoral and insane. We can’t bring back the coal mining jobs, unless we want Florida underwater in 50 years. We can’t reinstate the trade tariffs, unless we want millions of people in China to starve. We can’t tear down all the robots and force factories to use manual labor, unless we want to trigger a national—and then global—economic collapse. We can’t do it their way. So we’re trying to offer them another way, a better way, and they’re refusing to take it. So who here is ignoring the concerns of whom?

Of course, the fact that it’s really their fault doesn’t solve the problem. We do need to take it upon ourselves to do whatever we can, because, regardless of whose fault it is, the world will still suffer if we fail. And that presents us with our most difficult task of all, a task that I fully expect to spend a career trying to do and yet still probably failing: We must understand the human tribal instinct well enough that we can finally begin to change it. We must know enough about how human beings form their mental tribes that we can actually begin to shift those parameters. We must, in other words, cure bigotry—and we must do it now, for we are running out of time.

Congratulations, America. You literally elected the candidate that was supported by Vladimir Putin, Kim Jong-un, the American Nazi Party, and the Ku Klux Klan. Now, reversed stupidity is not intelligence; being endorsed by someone horrible doesn’t necessarily mean you are horrible. But when this many horrible people endorse you, and start giving the same reasons, and those reasons are based on things you particularly have in common with those horrible people like bigotry and authoritarianism… yeah, I think it does say something about you.

Now, to be fair, much of the blame here goes to the Electoral College.

But even that is only possible because Hillary Clinton did not win the overwhelming landslide she deserved. The Electoral College should have been irrelevant, because she should have won at least 60% of every demographic in every state. Our whole nation should have declared together in one voice that we will not tolerate bigotry and authoritarianism. The fact that that didn’t happen is reason enough to be ashamed; even if Clinton does narrowly win the popular vote, that still says something truly terrible about our country.

Indeed, this is what it says:

We slightly preferred democracy over fascism.

We slightly preferred liberty over tyranny.

We slightly preferred justice over oppression.

We slightly preferred feminism over misogyny.

We slightly preferred equality over racism.

We slightly preferred reason over instinct.

We slightly preferred honesty over fraud.

We slightly preferred sustainability over ecological devastation.

We slightly preferred competence over incompetence.

We slightly preferred diplomacy over impulsiveness.

We slightly preferred humility over narcissism.

We were faced with the easiest choice ever given to us in any election, and just a narrow plurality got the answer right—and then under the way our system works, even that wasn’t enough.

Yes, I sincerely hope that he is not as bad as we think he is, though I remember saying that George W. Bush was not as bad as we thought when he was elected—and he was. He was. His Iraq War killed hundreds of thousands of people based on lies. His economic policy triggered the worst economic collapse since the Great Depression. So now I have to ask: What if he is as bad as we think?

Fortunately, I do not believe that Trump will literally trigger a global nuclear war.

I’ve been hearing some disturbing sentiments from some surprising places lately, things like “Economics is not a science, it’s just an extension of politics” and “There’s no such thing as a true model”. I’ve now met multiple economists who speak this way, who seem to be some sort of “subjectivists” or “anti-realists” (those links are to explanations of moral subjectivism and anti-realism, which are also mistaken, but in a much less obvious way, and are far more common views to express). Most of the individual statements can be read in a non-subjectivist way, but taken all together they give me the general impression that many of these economists… don’t believe in economics. (Nor do they even believe in believing it, or they’d put up a better show.)

I think what has happened is that in the wake of the Second Depression, economists have had a sort of “crisis of faith”. The models we thought were right were wrong, so we may as well give up; there’s no such thing as a true model. The science of economics failed, so maybe economics was never a science at all.

I never really thought I’d be in this position, but in such circumstances I actually feel strongly inclined to defend neoclassical economics. Neoclassical economics is wrong; but subjectivism is not even wrong.

If a model is wrong, you can fix it. You can make it right, or at least less wrong. But if you give up on modeling altogether, your theory avoids being disproven only by making itself totally detached from reality. I can’t prove you wrong, but only because you’ve given up on the whole idea of being right or wrong.

As Isaac Asimov wrote, “when people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together.”

What we might call “folk economics”, what most people seem to believe about economics, is like thinking the Earth is flat—it’s fundamentally wrong, but not so obviously inaccurate on an individual scale that it can’t be a useful approximation for your daily life. Neoclassical economics is like thinking the Earth is spherical—it’s almost right, but still wrong in some subtle but important ways. Thinking that economics isn’t a science is wronger than both of them put together.

The sense in which “there’s no such thing as a true model” is true is a trivial one: There’s no such thing as a perfect model, because by the time you included everything you’d just get back the world itself. But there are better and worse models, and some of our very best models (quantum mechanics, Darwinian evolution) are really good enough that I think it’s quite perverse not to call them simply true. Economics doesn’t have such models yet for more than a handful of phenomena—but we’re working on it (at least, I thought that’s what we were doing!).

Indeed, a key point I like to make about realism—in science, morality, or whatever—is that if you think something can be wrong, you must be a realist. In order for an idea to be wrong, there must be some objective reality to compare it to that it can fail to match. If everything is just subjective beliefs and sociopolitical pressures, there is no such thing as “wrong”, only “unpopular”. I’ve heard many people say things like “Well, that’s just your opinion; you could be wrong.” No, if it’s just my opinion, then I cannot possibly be wrong. So choose a lane! Either you think I’m wrong, or you think it’s just my opinion—but you can’t have it both ways.

But of course, the same is true of many other fields, particularly in social science. Sociologists also get blinded by their pet theories; psychologists also abuse statistics because the journals make them do it; political scientists are influenced by their funding sources; anthropologists also choose what to work on based on what’s prestigious in the field.

Moreover, natural sciences do this too. String theorists are (almost by definition) blinded by their favorite theory. Biochemists are manipulated by the financial pressures of the pharmaceutical industry. Neuroscientists publish all sorts of statistically nonsensical research. I’d be very surprised if even geologists were immune to the social norms of academia telling them to work on the most prestigious problems. If this is enough reason to abandon a field as a science, it is a reason to abandon science, full stop. That is what you are arguing for here.

And this should be fairly obvious, really. Are workers and factories and televisions actual things that are actually here? Obviously they are. Therefore you can be right or wrong about how they interact. There is an obvious objective reality here that one can have more or less accurate beliefs about.

For socially-constructed phenomena like money, markets, and prices, this isn’t as obvious; if everyone stopped believing in the US Dollar, then like Tinkerbell, the US Dollar would cease to exist. But there does remain some objective reality (or if you like, intersubjective reality) here: I can be right or wrong about the price of a dishwasher or the exchange rate from dollars to pounds.

So, in order to abandon the possibility of scientifically accurate economics, you have to say that even though there is this obvious physical reality of workers and factories and televisions, we can’t actually study that scientifically, even when it sure looks like we’re studying it scientifically by performing careful observations, rigorous statistics, and even randomized controlled experiments. Even when I perform my detailed Bayesian analysis of my randomized controlled experiment, nope, that’s not science. It doesn’t count, for some reason.

The only even remotely principled way I can see to justify such a thing is to say that once you start studying other humans you lose all possibility of scientific objectivity—but notice that by making such a claim you haven’t just thrown out psychology and economics, you’ve also thrown out anthropology and neuroscience. The statements “DNA evidence shows that all modern human beings descend from a common migration out of Africa” and “Human nerve conduction speed is at most roughly 120 meters per second” aren’t scientific? Then what in the world are they?

Or is it specifically behavioral sciences that bother you? Now perhaps you can leave out biological anthropology and basic neuroscience; there’s some cultural anthropology and behavioral neuroscience you have to still include, but maybe that’s a bullet you’re willing to bite. There is perhaps something intuitively appealing here: Since science is a human behavior, you can’t use science to study human behavior without an unresolvable infinite regress.

But there are still two very big problems with this idea.

First, you’ve got to explain how there can be this obvious objective reality of human behavior that is nonetheless somehow forever beyond our understanding. Even though people actually do things, and we can study those things using the usual tools of science, somehow we’re not really doing science, and we can never actually learn anything about how human beings behave.

Second, you’ve got to explain why we’ve done as well as we have. For some reason, people seem to have this impression that psychology and especially economics have been dismal failures, bringing us nothing but nonsense and misery.

Of course, we’ve made a lot of mistakes. We will continue to make mistakes. Many of our existing models are seriously flawed in very important ways, and many economists continue to use those models incautiously, blind to their defects. The Second Depression was largely the fault of economists, because it was economists who told everyone that markets are efficient, banks will regulate themselves, leave it alone, don’t worry about it.

But we can do better. We will do better. And we can only do that because economics is a science: it does reflect reality, and therefore we can make ourselves less wrong.