I am now back in Long Beach, hence the return to Pacific Time. Today’s post is a little less economic than most, though it’s certainly still within the purview of social science and public policy. It concerns a question that many academic researchers and in general reasonable, thoughtful people have to deal with: How do we remain unbiased and nonpartisan?

This would not be so difficult if the world were as the most devoted “centrists” would have you believe, and it were actually the case that both sides have their good points and bad points, and both sides have their scandals, and both sides make mistakes or even lie, so you should never take the side of the Democrats or the Republicans but always present both views equally.

Sadly, this is not at all the world in which we live. While Democrats are far from perfect—they are human beings after all, not to mention politicians—Republicans have become completely detached from reality. As Stephen Colbert has said, “Reality has a liberal bias.” You know it’s bad when our detractors call us the reality-based community. Treating both sides as equal isn’t being unbiased—it’s committing a balance fallacy.

Don’t believe me? Consider the objective, scientific facts that the Republican Party (and particularly its craziest subset, the Tea Party) has officially taken political stances against: anthropogenic climate change and biological evolution, to name only the two most basic.

The craziest belief by a standing Democrat I can think of is Dennis Kucinich’s belief that he saw an alien spacecraft. And to be perfectly honest, alien spacecraft are about a thousand times more plausible than Christianity in general, let alone Creationism. There almost certainly are alien spacecraft somewhere in the universe—just most likely so far away we’ll need FTL to encounter them. Moreover, this is not Kucinich’s official position as a member of Congress and it’s not something he has ever made policy based upon.

If you get to include Marxists on the left, then we get to include Nazis on the right. Or, we could be reasonable and say that only the official positions of elected officials or mainstream pundits actually count, in which case Democrats have views that are basically accurate and reasonable while the majority of Republicans have views that are still completely objectively wrong.

There’s no balance here. For every Democrat who is wrong, there is a Republican who is totally delusional. For every Democrat who distorts the truth, there is a Republican who blatantly lies about basic facts. Not to mention that for every Democrat who has had an ill-advised illicit affair, there is a Republican who has committed war crimes.

Actually war crimes are something a fair number of Democrats have done as well, but the difference still stands out in high relief: Barack Obama has ordered double-tap drone strikes that are in violation of the Geneva Convention, but George W. Bush orchestrated a worldwide mass torture campaign and launched pointless wars that slaughtered hundreds of thousands of people. Bill Clinton ordered some questionable CIA operations, but George H.W. Bush was the director of the CIA.

I wish we had two parties that were equally reasonable. I wish there were two—or three, or four—proposals on the table in each discussion, all of which had merits and flaws worth considering. Maybe if we somehow manage to get the Green Party a significant seat in power, or the Social Democrat party, we can actually achieve that goal. But that is not where we are right now. Right now, we have the Democrats, who have some good ideas and some bad ideas; and then we have the Republicans, who are completely out of their minds.

There is an important concept in political science called the Overton window; it is the range of political ideas that are considered “reasonable” or “mainstream” within a society. Things near the middle of the Overton window are considered sensible, even “nonpartisan” ideas, while things near the edges are “partisan” or “political”, and things near but outside the window are seen as “extreme” and “radical”. Things far outside the window are seen as “absurd” or even “unthinkable”.

Right now, our Overton window is in the wrong place. Things like Paul Ryan’s plan to privatize Social Security and Medicare are seen as reasonable when they should be considered extreme. Progressive income taxes of the kind we had in the 1960s are seen as extreme when they should be considered reasonable. Cutting WIC and SNAP with nothing to replace them and letting people literally starve to death are considered at most partisan, when they should be outright unthinkable. Opposition to basic scientific facts like climate change and evolution is considered a mainstream political position—when in terms of empirical evidence Creationism should be more intellectually embarrassing than being a 9/11 truther or thinking you saw an alien spacecraft. And perhaps worst of all, military tactics like double-tap strikes that are literally war crimes are considered “liberal”, while the “conservative” position involves torture, worldwide surveillance and carpet bombing—if not outright full-scale nuclear devastation.

I want to restore reasonable conversation to our political system, I really do. But that really isn’t possible when half the politicians are totally delusional. We have but one choice: We must vote them out.

I say this particularly to people who say “Why bother? Both parties are the same.” No, they are not the same. They are deeply, deeply different, for all the reasons I just outlined above. And if you can’t bring yourself to vote for a Democrat, at least vote for someone! A Green, or a Social Democrat, or even a Libertarian or a Socialist if you must. It is only by the apathy of reasonable people that this insanity can propagate in the first place.

I thought about making today’s post about the crazy currency crisis in Switzerland, but currency exchange rates aren’t really my area of expertise; this is much more in Krugman’s bailiwick, so you should probably read what Krugman says about the situation. There is one thing I’d like to say, however: I think there is a really easy way to create credible inflation and boost aggregate demand, but for some reason nobody is ever willing to do it: Give people money. Emphasis here on the people—not banks. Don’t adjust interest rates or currency pegs, don’t engage in quantitative easing. Give people money. Actually write a bunch of checks, presumably in the form of refundable tax rebates.
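To put rough numbers on the idea, here is the back-of-the-envelope arithmetic in the simplest closed-economy Keynesian model. Every figure below (the adult population, the check size, the marginal propensity to consume) is an illustrative assumption, not an estimate:

```python
# Back-of-the-envelope "give people money" arithmetic.
# All numbers are illustrative assumptions, not forecasts.
adults = 250e6        # rough number of US adults
check = 1_000         # a $1,000 refundable tax rebate per adult
mpc = 0.8             # assumed marginal propensity to consume

direct_injection = adults * check             # $250 billion in checks
multiplier = 1 / (1 - mpc)                    # simple Keynesian multiplier, ~5
demand_boost = direct_injection * multiplier  # ~$1.25 trillion in added demand

print(f"${demand_boost / 1e12:.2f} trillion")
```

The real-world multiplier would be smaller once you account for taxes, imports, and saving behavior, but the basic point survives: the checks go to people, and people spend them.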

The only reason I can think of that economists don’t do this is they are afraid of helping poor people. They wouldn’t put it that way; maybe they’d say they want to avoid “moral hazard” or “perverse incentives”. But those fears didn’t stop them from loaning $2 trillion to banks or adding $4 trillion to the monetary base; they didn’t stop them from fighting for continued financial deregulation when what the world economy most desperately needs is stronger financial regulation. Our whole derivatives market practically oozes moral hazard and perverse incentives, but they aren’t willing to shut down that quadrillion-dollar con game. So that can’t be the actual fear. No, it has to be a fear of helping poor people instead of rich people, as though “capitalism” meant a system in which we squeeze the poor as tight as we can and heap all possible advantages upon those who are already wealthy. No, that’s called feudalism. Capitalism is supposed to be a system where markets are structured to provide free and fair competition, with everyone on a level playing field.

A basic income is a fundamentally capitalist policy, which maintains equal opportunity with a minimum of government intervention and allows the market to flourish. I suppose if you want to say that all taxation and government spending is “socialist”, fine; then every nation that has ever maintained stability for more than a decade has been in this sense “socialist”. Every soldier, firefighter and police officer paid by a government payroll is now part of a “socialist” system. Okay, as long as we’re consistent about that; but now you really can’t say that socialism is harmful; on the contrary, on this definition socialism is necessary for capitalism. In order to maintain security of property, enforcement of contracts, and equality of opportunity, you need government. Maybe we should just give up on the words entirely, and speak more clearly about what specific policies we want. If I don’t get to say that a basic income is “capitalist”, you don’t get to say financial deregulation is “capitalist”. Better yet, how about you can’t even call it “deregulation”? You have to actually argue in front of a crowd of people that it should be legal for banks to lie to them, and there should be no serious repercussions for any bank that cheats, steals, colludes, or even launders money for terrorists. That is, after all, what financial deregulation actually does in the real world.

Okay, that’s enough about that.

My birthday is coming up this Monday; thus completes my 27th revolution around the Sun. With birthdays come thoughts of ancestry: Though I appear White, I am legally one-quarter Native American, and my total ethnic mix includes English, German, Irish, Mohawk, and Chippewa.

Maybe you could even make a case for the “races” East Asian, South Asian, Pacific Islander, and Native American, since the indigenous populations of these geographic areas largely do come from the same genetic clusters. Or you could make a bigger category and call them all “Asian”—but if you include Papuan and Aborigine in “Asian” you’d pretty much have to include Chippewa and Navajo as well.

But I think it tells you a lot about what “race” really means when you realize that the two “race” categories which are most salient to Americans are in fact the categories that are genetically most meaningless. “White” and “Black” are totally nonsensical genetic categorizations.

What about “White”? Well, there are some fairly well-defined European genetic populations, so if we clustered those together we might be able to get something worth calling “White”. The problem is, that’s not how it happened. “White” is a club. The definition of who gets to be “White” has expanded over time, and even occasionally contracted. Originally Hebrew, Celtic, Hispanic, and Italian were not included (and Hebrew, for once, is actually a fairly sensible genetic category, as long as you restrict it to Ashkenazi), but then later they were. But now that we’ve got a lot of poor people coming in from Mexico, we don’t quite think of Hispanics as “White” anymore. We actually watched Arabs lose their “White” card in real-time in 2001; before 9/11, they were “White”; now, “Arab” is a separate thing. And “Muslim” is even treated like a race now, which is like making a racial category of “Keynesians”—never forget that Islam is above all a belief system.

Actually, “White privilege” is almost a tautology—the privilege isn’t given to people who were already defined as “White”, the privilege is to be called “White”. The privilege is to have your ancestors counted in the “White” category so that they can be given rights, while people who are not in the category are denied those rights. There does seem to be a certain degree of restriction by appearance—to my knowledge, no population with skin as dark as Kenyans has ever been considered “White”, and Anglo-Saxons and Nordics have always been included—but the category is flexible to political and social changes.

But really I hate that word “privilege”, because it gets the whole situation backwards. When you talk about “White privilege”, you make it sound as though the problem with racism is that it gives unfair advantages to White people (or to people arbitrarily defined as “White”). No, the problem is that people who are not White are denied rights. It isn’t what White people have that’s wrong; it’s what Black people don’t have. Equating those two things creates a vision of the world as zero-sum, in which each gain for me is a loss for you.

Here’s the thing about zero-sum games: All outcomes are Pareto-efficient. Remember when I talked about Pareto-efficiency? As a quick refresher, an outcome is Pareto-efficient if there is no way for one person to be made better off without making someone else worse off. In general, it’s pretty hard to disagree that, other things equal, Pareto-efficiency is a good thing, and Pareto-inefficiency is a bad thing. But if racism were about “White privilege” and the game were zero-sum, racism would have to be Pareto-efficient.
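If you want to see that logic concretely, here is a toy sketch; the pie-splitting game and its payoffs are my own illustrative construction, not a standard example:

```python
# Toy zero-sum game: two players split a fixed pie of 10 units.
# Each outcome is a (player1, player2) payoff pair summing to 10.
outcomes = [(x, 10 - x) for x in range(11)]

def pareto_efficient(outcome, all_outcomes):
    """An outcome is Pareto-efficient if no alternative makes one player
    better off without making the other worse off."""
    return not any(
        alt[0] >= outcome[0] and alt[1] >= outcome[1] and alt != outcome
        for alt in all_outcomes
    )

# In a zero-sum game, every outcome is Pareto-efficient: helping one
# player necessarily hurts the other.
assert all(pareto_efficient(o, outcomes) for o in outcomes)

# But add an outcome where value is simply destroyed (the pie shrinks
# to 6), and that outcome is immediately Pareto-dominated:
assert not pareto_efficient((3, 3), outcomes + [(3, 3)])
```

The second assertion is the situation racism actually creates: value is destroyed outright, so everyone could in principle be made better off.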

In fact, racism is Pareto-inefficient, and that is part of why it is so obviously bad. It harms literally billions of people, and benefits basically no one. Maybe there are a few individuals who are actually, all things considered, better off than they would have been if racism had not existed. But there are certainly not very many such people, and in fact I’m not sure there are any at all. If there are, then strictly speaking eliminating racism would not be a pure Pareto improvement; but it would be very close. At the very least, the damage caused by racism is several orders of magnitude larger than any benefits incurred.

That’s why the “privilege” language, while well-intentioned, is so insidious; it tells White people that racism means taking things away from them. Many of these people are already in dire straits—broke, unemployed, or even homeless—so taking away what they have sounds particularly awful. Of course they’d be hostile to or at least dubious of attempts to reduce racism. You just told them that racism is the only thing keeping them afloat! In fact, quite the opposite is the case: Poor White people are, second only to poor Black people, those who stand the most to gain from a more just society. David Koch and Donald Trump should be worried; we will probably have to take most of their money away in order to achieve social justice. (Bill Gates knows we’ll have to take most of his money away, but he’s okay with that; in fact he may end up giving it away before we get around to taking it.) But the average White person will almost certainly be better off than they were.

Why does it seem like there are benefits to racism? Again, because people are accustomed to thinking of the world as zero-sum. One person is denied a benefit, so that benefit must go somewhere else right? Nope—it can just disappear entirely, and in this case typically does.

When a Black person is denied a job in favor of a White person who is less qualified, doesn’t that White person benefit? Uh, no, actually, not really. They have been hired for a job that isn’t an optimal fit for them; they aren’t working to their comparative advantage, and that Black person isn’t either and may not be working at all. The total output of the economy will be thereby reduced slightly. When this happens millions of times, the total reduction in output can be quite substantial, and as a result that White person was hired at $30,000 for an unsuitable job when in a racism-free world they’d have been hired at $40,000 for a suitable one. A similar argument holds for sexism; men don’t benefit from getting jobs women are denied if one of those women would have invented a cure for prostate cancer.
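A toy calculation with made-up numbers shows how the mismatch destroys output rather than transferring it; the workers, jobs, and dollar figures here are all hypothetical:

```python
# Hypothetical annual output ($) of two workers in two jobs.
# Worker A is hired via discrimination; worker B is the passed-over candidate.
productivity = {
    ("A", "job1"): 30_000, ("A", "job2"): 40_000,
    ("B", "job1"): 45_000, ("B", "job2"): 25_000,
}

# Discriminatory outcome: A gets job1 (a poor fit), B gets nothing.
discriminatory_output = productivity[("A", "job1")]

# Non-discriminatory outcome: each worker in the job that fits them best.
fair_output = productivity[("B", "job1")] + productivity[("A", "job2")]

loss = fair_output - discriminatory_output
print(f"Output lost to the mismatch: ${loss:,} per year")
```

The lost $55,000 is not transferred to anyone; it simply never gets produced. Multiply by millions of hires and the aggregate loss is enormous.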

Indeed, the empowerment of women and minorities is kind of the secret cheat code for creating a First World economy. The great successes of economic development—Korea, Japan, China, the US in WW2—had their successes precisely at a time when they suddenly started including women in manufacturing, effectively doubling their total labor capacity. Moreover, it’s pretty clear that the causation ran in this direction. Periods of economic growth are associated with increases in solidarity with other groups—and downturns with decreased solidarity—but the increase in women in the workforce was sudden and early while the increase in growth and total output was prolonged.

Racism is irrational. Indeed it is so obviously irrational that for decades now neoclassical economists have been insisting that there is no need for civil rights policy, affirmative action, etc. because the market will automatically eliminate racism by the rational profit motive. A more recent literature has attempted to show that, contrary to all appearances, racism actually is rational in some cases. Inevitably it relies upon either the background of a racist society (maybe Black people are, on average, genuinely less qualified, but it would only be because they’ve been given poorer opportunities), or an assumption of “discriminatory tastes”, which is basically giving up and redefining the utility function so that people simply get direct pleasure from being racists. Of course, on that sort of definition, you can basically justify any behavior as “rational”: Maybe he just enjoys banging his head against the wall! (A similar slipperiness is used by egoists to argue that caring for your children is actually “selfish”; well, it makes you happy, doesn’t it? Yes, but that’s not why we do it.)

There’s a much simpler way to understand this situation: Racism is irrational, and so is human behavior.

That isn’t a complete explanation, of course; and I think one major misunderstanding neoclassical economists have of cognitive economists is that they think this is what we do—we point out that something is irrational, and then high-five and go home. No, that’s not what we do. Finding the irrationality is just the start; next comes explaining the irrationality, understanding the irrationality, and finally—we haven’t reached this point in most cases—fixing the irrationality.

So what explains racism? In short, the tribal paradigm. Human beings evolved in an environment in which the most important factor in our survival and that of our offspring was not food supply or temperature or predators, it was tribal cohesion. With a cohesive tribe, we could find food, make clothes, fight off lions. Without one, we were helpless. Millions of years in this condition shaped our brains, programming them to treat threats to tribal cohesion as the greatest possible concern. We even reached the point where solidarity for the tribe actually began to dominate basic survival instincts: For a suicide bomber the unity of the tribe—be it Marxism for the Tamil Tigers or Islam for Al-Qaeda—is more important than his own life. We will do literally anything if we believe it is necessary to defend the identities we believe in.

And no, we rationalists are no exception here. We are indeed different from other groups; the beliefs that define us, unlike the beliefs of literally every other group that has ever existed, are actually rationally founded. The scientific method really isn’t just another religion, for unlike religion it actually works. But still, if push came to shove and we were forced to kill and die in order to defend rationality, we would; and maybe we’d even be right to do so. Maybe the French Revolution was, all things considered, a good thing—but it sure as hell wasn’t nonviolent.

This is the background we need to understand racism. It actually isn’t enough to show people that racism is harmful and irrational, because they are programmed not to care. As long as racial identification is the salient identity, the tribe by which we define ourselves, we will do anything to defend the cohesion of that tribe. It is not enough to show that racism is bad; we must in fact show that race doesn’t matter. Fortunately, this is easy, for as I explained above, race does not actually exist.

That makes racism in some sense easier to deal with than sexism, because the very categories of races upon which it is based are fundamentally faulty. Sexes, on the other hand, are definitely a real thing. Males and females actually are genetically different in important ways. Exactly how different in what ways is an open question, and what we do know is that for most of the really important traits like intelligence and personality the overlap outstrips the difference. (The really big, categorical differences all appear to be physical: Anatomy, size, testosterone.) But conquering sexism may always be a difficult balance, for there are certain differences we won’t be able to eliminate without altering DNA. That no more justifies sexism than the fact that height is partly genetic would justify denying rights to short people (which, actually, is something we do); but it does make matters complicated, because it’s difficult to know whether an observed difference (for instance, most pediatricians are female, while most neurosurgeons are male) is due to discrimination or innate differences.

Racism, on the other hand, is actually quite simple: Almost any statistically significant difference in behavior or outcome between races must be due to some form of discrimination somewhere down the line. Maybe it’s not discrimination right here, right now; maybe it’s discrimination years ago that denied opportunities, or discrimination against their ancestors that led them to inherit less generations later; but it almost has to be discrimination against someone somewhere, because it is only by social construction that races exist in the first place. I do say “almost” because I can think of a few exceptions: Black people are genuinely less likely to use tanning salons and genuinely more likely to need vitamin D supplements, but both of those things are directly due to skin pigmentation. They are also more likely to suffer from sickle-cell anemia, which is another convergent trait that evolved in tropical climates as a response to malaria. But unless you can think of a reason why employment outcomes would depend upon vitamin D, the huge difference in employment between Whites and Blacks really can’t be due to anything but discrimination.

Unfortunately, I don’t yet know exactly what it takes to change people’s concept of group identification. Obviously it can be done, for group identities change all the time, sometimes quite rapidly; but we simply don’t have good research on what causes those changes or how they might be affected by policy. That’s actually a major part of the experiment I’ve been trying to get funding to run since 2009, which I hope can now become my PhD thesis. All I can say is this: I’m working on it.

Whenever you introduce yourself to someone as an economist, you will typically be asked a single question: “How is the economy doing?” I’ve already experienced this myself, and I don’t have very many dinner parties under my belt.

It’s an odd question, for a couple of reasons: First, I didn’t say I was a macroeconomic forecaster. That’s a very small branch of economics—even a small branch of macroeconomics. Second, it is widely recognized among economists that our forecasters just aren’t very good at what they do. But it is the sort of thing that pops into people’s minds when they hear the word “economist”, so we get asked it a lot.

And of course that’s not what I do at all. I am a cognitive economist; I study how economic systems behave when they are run by actual human beings, rather than by infinite identical psychopaths. I’m particularly interested in what I call the tribal paradigm, the way that people identify with groups and act in the interests of those groups, how much solidarity people feel for each other and why, and what role ideology plays in that identification. I’m hoping to one day formally model solidarity and make directly testable predictions about things like charitable donations, immigration policies and disaster responses.

I do have a more macroeconomic bent than most other cognitive economists; I’m not just interested in how human irrationality affects individuals or corporations, I’m also interested in how it affects society as a whole. But unlike most macroeconomists I care more about inequality than unemployment, and hardly at all about inflation. Unless you start getting 40% inflation per year, inflation really isn’t that harmful—and can you imagine what 40% unemployment would be like? (Also, while 100% inflation is awful, 100% unemployment would be no economy at all.) If we’re going to have a “misery index”, it should weight unemployment at least 10 times as much as inflation—and it should also include terms for poverty and inequality. Frankly maybe we should just use poverty, since I’d be prepared to accept just about any level of inflation, unemployment, or even inequality if it meant eliminating poverty. This is of course yet another reason why a basic income is so great! An anti-poverty measure can really only be called a failure if it doesn’t actually reduce poverty; the only way that could happen with a basic income is if it somehow completely destabilized the economy, which is extremely unlikely as long as the basic income isn’t something ridiculous like $100,000 per year.
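For concreteness, here is what such a reweighted index might look like; the 10x weights and the rough 2014–2015 US figures are illustrative assumptions on my part, not official statistics:

```python
def misery_index(unemployment, inflation, poverty, w_u=10, w_p=10):
    """A reweighted misery index in the spirit of the text: unemployment
    and poverty each count an order of magnitude more than inflation.
    All inputs are percentages; the weights are illustrative assumptions."""
    return w_u * unemployment + inflation + w_p * poverty

def okun_misery(unemployment, inflation):
    """The classic misery index simply adds the two rates."""
    return unemployment + inflation

# Rough 2014-2015 US figures: ~5.6% unemployment, ~0.8% inflation,
# ~14.8% poverty. The classic index looks benign; the reweighted one
# makes poverty and joblessness dominate, as they should.
print(okun_misery(5.6, 0.8))
print(misery_index(5.6, 0.8, 14.8))
```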

I could probably talk about my master’s thesis; the econometric models are relatively arcane, but the basic idea of correlating the income concentration of the top 1% of 1% and the level of corruption is something most people can grasp easily enough.

Of course, that wouldn’t be much of an answer to “How is the economy doing?”; usually my answer is to repeat what I’ve last read from mainstream macroeconomic forecasts, which is generally rather banal—but maybe that’s the idea? Most small talk is pretty banal I suppose (I never was very good at that sort of thing). It sounds a bit like this: No, we’re not on the verge of horrible inflation—actually inflation is currently too low. (At this point someone will probably bring up the gold standard, and I’ll have to explain that the gold standard is an unequivocally terrible idea on so, so many levels. The gold standard caused the Great Depression.) Unemployment is gradually improving, and actually job growth is looking pretty good right now; but wages are still stagnant, which is probably what’s holding down inflation. We could have prevented the Second Depression entirely, but we didn’t because Republicans are terrible at managing the economy—all of the 10 most recent recessions and almost 80% of the recessions in the last century were under Republican presidents. Instead the Democrats did their best to implement basic principles of Keynesian macroeconomics despite Republican intransigence, and we muddled through. In another year or two we will actually be back at an unemployment rate of 5%, which the Federal Reserve considers “full employment”. That’s already problematic—what about that other 5%?—but there’s another problem as well: Much of our reduction in unemployment has come not from more people being employed but from more people dropping out of the labor force. Our labor force participation rate is the lowest it’s been since 1978, and is still trending downward. Most of these people aren’t getting jobs; they’re giving up.
At best we may hope that they are people like me, who gave up on finding work in order to invest in their own education, and will return to the labor force more knowledgeable and productive one day—and indeed, college participation rates are also rising rapidly. And no, that doesn’t mean we’re becoming “overeducated”; investment in education, so-called “human capital”, is literally the single most important factor in long-term economic output, by far. Education is why we’re not still in the Stone Age. Physical capital can be replaced, and educated people will do so efficiently. But all the physical capital in the world will do you no good if nobody knows how to use it. When everyone in the world is a millionaire with two PhDs and all our work is done by robots, maybe then you can say we’re “overeducated”—and maybe then you’d still be wrong. Being “too educated” is like being “too rich” or “too happy”.

That’s usually enough to placate my interlocutor. I should probably count my blessings, for I imagine that the first confrontation you get at a dinner party if you say you are a biologist involves a Creationist demanding that you “prove evolution”. I like to think that some mathematical biologists—yes, that’s a thing—take their request literally and set out to mathematically prove that if allele distributions in a population change according to a stochastic trend then the alleles with highest expected fitness have, on average, the highest fitness—which is what we really mean by “survival of the fittest”. The more formal, the better; the goal is to glaze some Creationist eyes. Of course that’s a tautology—but so is literally anything that you can actually prove. Cosmologists probably get similar demands to “prove the Big Bang”, which sounds about as annoying. I may have to deal with gold bugs, but I’ll take them over Creationists any day.
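In that spirit, here is a minimal Wright-Fisher-style simulation, my own toy construction rather than any particular biologist’s model, showing the allele with higher expected fitness winning on average despite random drift:

```python
import random

def wright_fisher(freq, fitness_a, fitness_b, pop=10_000, generations=200):
    """Toy Wright-Fisher model with two alleles, A and B. Each generation,
    offspring alleles are drawn in proportion to frequency times fitness.
    Drift is real (binomial sampling), but on average the fitter allele
    gains ground: 'survival of the fittest' as an expected-value statement."""
    for _ in range(generations):
        wa, wb = freq * fitness_a, (1 - freq) * fitness_b
        p = wa / (wa + wb)  # expected frequency of A after selection
        # Binomial sampling of the next generation introduces drift:
        freq = sum(random.random() < p for _ in range(pop)) / pop
    return freq

random.seed(0)
# Allele A has a 5% fitness edge; starting at 10% frequency, it should
# almost always approach fixation within 200 generations.
final = wright_fisher(freq=0.10, fitness_a=1.05, fitness_b=1.00)
print(final)
```

Any single run can wobble, which is exactly the point: the theorem is about expectations over the stochastic process, not about every individual outcome.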

What do other scientists get? When I tell people I am a cognitive scientist (as a cognitive economist I am sort of both an economist and a cognitive scientist after all), they usually just respond with something like “Wow, you must be really smart,” which I suppose is true enough, but always strikes me as an odd response. I think they just don’t know enough about the field to even generate a reasonable-sounding question, whereas with economists they always have “How is the economy doing?” handy. Political scientists probably get “Who is going to win the election?” for the same reason. People have opinions about economics, but they don’t have opinions about cognitive science—or rather, they don’t think they do. Actually most people have an opinion about cognitive science that is totally and utterly ridiculous, more on a par with Creationists than gold bugs: That is, most people believe in a soul that survives after death. This is rather like believing that after your computer has been smashed to pieces and ground back into the sand from whence it came, all the files you had on it are still out there somewhere, waiting to be retrieved. No, they’re long gone—and likewise your memories and your personality will be long gone once your brain has rotted away. Yes, we have a soul, but it’s made of lots of tiny robots; when the tiny robots stop working the soul is no more. Everything you are is a result of the functioning of your brain. This does not mean that your feelings are not real or do not matter; they are just as real and important as you thought they were. What it means is that when a person’s brain is destroyed, that person is destroyed, permanently and irrevocably. This is terrifying and difficult to accept; but it is also most definitely true. It is as solid a fact as any in modern science. Many people see a conflict between evolution and religion; but the Pope has long since rendered that one inert.
No, the real conflict, the basic fact that undermines everything religion is based upon, is not in biology but in cognitive science. It is indeed the Basic Fact of Cognitive Science: We are our brains, no more and no less. (But I suppose it wouldn’t be polite to bring that up at dinner parties.)

The “You must be really smart” response is probably what happens to physicists and mathematicians. Quantum mechanics confuses basically everyone, so few dare go near it. The truly bold might try to bring up Schrödinger’s Cat, but are unlikely to understand the explanation of why it doesn’t work. General relativity requires thinking in tensors and four-dimensional spaces—perhaps they’ll be asked the question “What’s inside a black hole?”, which of course no physicist can really answer; the best answer may actually be, “What do you mean, inside?” And if a mathematician tries to explain their work in lay terms, it usually comes off as either incomprehensible or ridiculous: Stokes’ Theorem would be either “the integral of a differential form over the boundary of some orientable manifold is equal to the integral of its exterior derivative over the whole manifold” or else something like “The swirliness added up inside an object is equal to the swirliness added up around the edges.”

Economists, however, always seem to get this one: “How is the economy doing?”

Right now, the answer is this: “It’s still pretty bad, but it’s getting a lot better. Hopefully the new Congress won’t screw that up.”

No, really, I’m asking. I strongly encourage my readers to offer in the comments any ideas they have about the measurement of happiness in the real world; this has been a stumbling block in one of my ongoing research projects.

In one sense the measurement of happiness—or more formally utility—is absolutely fundamental to economics; in another it’s something most economists are astonishingly afraid of even trying to do.

The basic question of economics has nothing to do with money, and is really only incidentally related to “scarce resources” or “the production of goods” (though many textbooks will define economics in this way—apparently implying that a post-scarcity economy is not an economy). The basic question of economics is really this: How do we make people happy?

This must always be the goal in any economic decision, and if we lose sight of that fact we can make some truly awful decisions. Other goals may work sometimes, but they inevitably fail: If you conceive of the goal as “maximize GDP”, then you’ll adopt any policy that will increase the amount of production, even if that production comes at the expense of stress, injury, disease, or pollution. (And doesn’t that sound awfully familiar, particularly here in the US? 40% of Americans report their jobs as “very stressful” or “extremely stressful”.) If you were to conceive of the goal as “maximize the amount of money”, you’d print money as fast as possible and end up with hyperinflation and total economic collapse à la Zimbabwe. If you were to conceive of the goal as “maximize human life”, you’d support methods of increasing population to the point where we had a hundred billion people whose lives were barely worth living. Even if you were to conceive of the goal as “save as many lives as possible”, you’d find yourself investing in whatever would extend lifespan even if it meant enormous pain and suffering—which is a major problem in end-of-life care around the world. No, there is one goal and one goal only: Maximize happiness.

I suppose technically it should be “maximize utility”, but those are in fact basically the same thing as long as “happiness” is broadly conceived as eudaimonia—the joy of a life well-lived—and not a narrow concept of just adding up pleasure and subtracting out pain. The goal is not to maximize the quantity of dopamine and endorphins in your brain; the goal is to achieve a world where people are safe from danger, free to express themselves, with friends and family who love them, who participate in a world that is just and peaceful. We do not want merely the illusion of these things—we want to actually have them. So let me be clear that this is what I mean when I say “maximize happiness”.

The challenge, therefore, is how we figure out if we are doing that. Things like money and GDP are easy to measure; but how do you measure happiness?
Early economists like Adam Smith and John Stuart Mill tried to deal with this question, and while they were not very successful I think they deserve credit for recognizing its importance and trying to resolve it. But sometime around the rise of modern neoclassical economics, economists gave up on the project and instead sought a narrower task, to measure preferences.

Technically this is called ordinal utility, as opposed to cardinal utility; but that terminology obscures the fundamental distinction. Cardinal utility is actual utility; ordinal utility is just preferences.

(The notion that cardinal utility is defined “up to a linear transformation” is really an eminently trivial observation, and it shows just how little physics the physics-envious economists really understand. All we’re talking about here is units of measurement—the same distance is 10.0 inches or 25.4 centimeters, so is distance only defined “up to a linear transformation”? It’s sometimes argued that there is no clear zero—like Fahrenheit and Celsius—but actually it’s pretty clear to me that there is: Zero utility is not existing. So there you go, now you have Kelvin.)
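The units-of-measurement point is easy to make concrete. Here is a minimal sketch, with utility numbers invented purely for illustration, showing that a positive linear rescaling (like inches to centimeters, or Celsius to a shifted scale) preserves everything a cardinal utility is supposed to carry:

```python
# Sketch of the "up to a linear transformation" point: rescaling a
# cardinal utility, like changing units of distance or temperature,
# preserves what matters: ordering and ratios of differences.
# The utility numbers below are invented purely for illustration.

def rescale(u, a, b):
    """Apply a positive linear transformation a*u + b (requires a > 0)."""
    return a * u + b

utilities = {"stabbing": -100.0, "nothing": 0.0, "pepsi": 10.0, "coke": 12.0}

# Ratio of two utility differences in the original units...
ratio_before = ((utilities["coke"] - utilities["pepsi"])
                / (utilities["nothing"] - utilities["stabbing"]))

# ...and after an arbitrary rescaling (a=2.5, b=40).
scaled = {k: rescale(v, 2.5, 40.0) for k, v in utilities.items()}
ratio_after = ((scaled["coke"] - scaled["pepsi"])
               / (scaled["nothing"] - scaled["stabbing"]))

print(ratio_before, ratio_after)  # both 0.02: the ratio survives rescaling
```

Nothing about the comparison changes under the rescaling, which is exactly why the “up to a linear transformation” caveat is as trivial as saying distance is defined up to a choice of units.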

Preferences are a bit easier to measure than happiness, but not by as much as most economists seem to think. If you imagine a small number of options, you can just put them in order from most to least preferred and there you go; and we could imagine asking someone to do that, or—the technique of revealed preference—use the choices they make to infer their preferences by assuming that when given the choice of X and Y, choosing X means you prefer X to Y.

Like much of neoclassical theory, this sounds good in principle and utterly collapses when applied to the real world. Above all: How many options do you have? It’s not easy to say, but the number is definitely huge—and both of those facts pose serious problems for a theory of preferences.

The fact that it’s not easy to say means that we don’t have a well-defined set of choices; even if Y is theoretically on the table, people might not realize it, or they might not see that it’s better even though it actually is. Much of our cognitive effort in any decision is actually spent narrowing the decision space—when deciding who to date or where to go to college or even what groceries to buy, simply generating a list of viable options involves a great deal of effort and extremely complex computation. If you have a true utility function, you can satisfice—choosing the first option that is above a certain threshold—or engage in constrained optimization—choosing whether to continue searching or accept your current choice based on how good it is. Under preference theory, there is no such “how good it is” and no such thresholds. You either search forever or choose a cutoff arbitrarily.
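With a real-valued utility, satisficing is almost trivial to implement; without one, there is no threshold to test against. A minimal sketch, with grocery options and scores invented for illustration:

```python
# Satisficing with a real-valued utility: stop at the first option that
# clears a threshold, instead of ranking the entire (huge) option space.
# The grocery options and scores here are invented for illustration.

def satisfice(options, utility, threshold):
    """Return the first option whose utility clears the threshold."""
    for option in options:
        if utility(option) >= threshold:
            return option
    return None  # nothing was good enough; the search is exhausted

scores = {"store_brand": 3, "mid_brand": 6, "fancy_brand": 9}
choice = satisfice(scores, scores.get, threshold=5)
print(choice)  # mid_brand: good enough, so fancy_brand is never evaluated
```

Under pure preference orderings there is no `threshold` parameter to pass; you either compare every pair of options or cut the search off arbitrarily.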

Even if we could decide how many options there are in any given choice, in order for this to form a complete guide for human behavior we would need an enormous amount of information. Suppose there are 10 different items I could have or not have; then there are 10! = 3.6 million possible preference orderings. If there were 100 items, there would be 100! = 9e157 possible orderings. It won’t do simply to decide on each item whether I’d like to have it or not. Some things are complements: I prefer to have shoes, but I probably prefer to have $100 and no shoes at all rather than $50 and just a left shoe. Other things are substitutes: I generally prefer eating either a bowl of spaghetti or a pizza, rather than both at the same time. No, the combinations matter, and that means that we have an exponentially increasing decision space every time we add a new option. If there really is no more structure to preferences than this, we have an absurd computational task to make even the most basic decisions.
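The factorial blow-up in the paragraph above is easy to verify:

```python
# The combinatorics behind the paragraph above: n items admit n!
# preference orderings, while a cardinal utility needs only n numbers.
import math

print(math.factorial(10))            # 3628800, the "3.6 million"
print(f"{math.factorial(100):.1e}")  # about 9.3e+157
```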

This is in fact most likely why we have happiness in the first place. Happiness did not emerge from a vacuum; it evolved by natural selection. Why make an organism have feelings? Why make it care about things? Wouldn’t it be easier to just hard-code a list of decisions it should make? No, on the contrary, it would be exponentially more complex. Utility exists precisely because it is more efficient for an organism to like or dislike things by certain amounts rather than trying to define arbitrary preference orderings. Adding a new item means assigning it an emotional value and then slotting it in, instead of comparing it to every single other possibility.
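The cost difference can be made concrete with a toy sketch (items and values invented for illustration):

```python
# Adding an option under a cardinal utility is one assignment; under raw
# preferences over bundles, every new item doubles the space of bundles
# that would need ranking. Items and values are invented for illustration.

utilities = {"shoes": 8.0, "pizza": 6.0, "spaghetti": 6.0}

# With a utility, a new item just gets an emotional value and slots in:
utilities["coke"] = 4.0

# With bare orderings over bundles (subsets of items), the space doubles:
bundles_before = 2 ** 3  # subsets of {shoes, pizza, spaghetti}
bundles_after = 2 ** 4   # subsets after adding coke
print(bundles_before, bundles_after)  # 8 16
```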

To illustrate this: I like Coke more than I like Pepsi. (Let the flame wars begin?) I also like getting massages more than I like being stabbed. (I imagine less controversy on this point.) But the difference in my mind between massages and stabbings is an awful lot larger than the difference between Coke and Pepsi. Yet according to preference theory (“ordinal utility”), that difference is not meaningful; instead I have to say that I prefer the pair “drink Pepsi and get a massage” to the pair “drink Coke and get stabbed”. There’s no such thing as “a little better” or “a lot worse”; there is only what I prefer over what I do not prefer, and since these can be assigned arbitrarily there is an impossible computational task before me to make even the most basic decisions.

Real utility also allows you to make decisions under risk, to decide when it’s worth taking a chance. Is a 50% chance of $100 worth giving up a guaranteed $50? Probably. Is a 50% chance of $10 million worth giving up a guaranteed $5 million? Not for me. Maybe for Bill Gates. How do I make that decision? It’s not about what I prefer—I do in fact prefer $10 million to $5 million. It’s about how much difference there is in terms of my real happiness—$5 million is almost as good as $10 million, but $100 is a lot better than $50. My marginal utility of wealth—as I discussed in my post on progressive taxation—is a lot steeper at $50 than it is at $5 million. There’s actually a way to use revealed preferences under risk to estimate true (“cardinal”) utility, developed by Von Neumann and Morgenstern. In fact they proved a remarkably strong theorem: If you don’t have a cardinal utility function that you’re maximizing, you can’t make rational decisions under risk. (In fact many of our risk decisions clearly aren’t rational, because we aren’t actually maximizing an expected utility; what we’re actually doing is something more like cumulative prospect theory, the leading cognitive economic theory of risk decisions. We overrespond to extreme but improbable events—like lightning strikes and terrorist attacks—and underrespond to moderate but probable events—like heart attacks and car crashes. We play the lottery but still buy health insurance. We fear Ebola—which has never killed a single American—but not influenza—which kills 10,000 Americans every year.)
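A minimal sketch of the von Neumann-Morgenstern logic, assuming a logarithmic utility of wealth and a starting wealth of $20,000 (both assumptions are mine, purely for illustration): the certainty equivalent of a 50/50 gamble is nearly its full expected value at small stakes, but falls far below it at large stakes.

```python
# Expected utility under risk with a concave (log) utility of wealth.
# The log utility and the $20,000 starting wealth are illustrative
# assumptions, not anything measured.
import math

def certainty_equivalent(wealth, prize, p=0.5):
    """Sure gain whose utility equals the gamble's expected utility."""
    eu = p * math.log(wealth + prize) + (1 - p) * math.log(wealth)
    return math.exp(eu) - wealth

w = 20_000.0
small = certainty_equivalent(w, 100.0)         # vs a guaranteed $50
large = certainty_equivalent(w, 10_000_000.0)  # vs a guaranteed $5,000,000

print(round(small, 2))    # about 49.94: you'd barely notice the risk
print(large < 1_000_000)  # True: risk aversion bites hard at this scale
```

This is why the $100 gamble is roughly a coin-flip decision while the $10 million gamble is not: the steepness of the marginal utility of wealth does all the work.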

A lot of economists would argue that it’s “unscientific”—Kenneth Arrow said “impossible”—to assign this sort of cardinal distance between our choices. But assigning distances between preferences is something we do all the time. Amazon.com lets us vote on a 5-star scale, and very few people send in error reports saying that cardinal utility is meaningless and only preference orderings exist. In 2000 I would have said “I like Gore best, Nader is almost as good, and Bush is pretty awful; but of course they’re all a lot better than the Fascist Party.” If we had simply been able to express those feelings on the 2000 ballot according to a range vote, either Nader would have won and the United States would now have a three-party system (and possibly a nationalized banking system!), or Gore would have won and we would be a decade ahead of where we currently are in preventing and mitigating global warming. Either one of these things would benefit millions of people.

This is extremely important because of another thing that Arrow said was “impossible”—namely, “Arrow’s Impossibility Theorem”. It should be called Arrow’s Range Voting Theorem, because simply by restricting preferences to a well-defined utility and allowing people to make range votes according to that utility, we can fulfill all the requirements that are supposedly “impossible”. The theorem doesn’t say—as it is commonly paraphrased—that there is no fair voting system; it says that range voting is the only fair voting system. A better claim is that there is no perfect voting system, which is true if you mean that no voting system can make honestly reporting your true beliefs always the best strategy. The Myerson-Satterthwaite Theorem is then the proper theorem to use; if you could design a voting system that would force you to reveal your beliefs, you could design a market auction that would force you to reveal your optimal price. But the least expressive way to vote in a range vote is to pick your favorite and give them 100% while giving everyone else 0%—which is identical to our current plurality vote system. The worst-case scenario in range voting is our current system.
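A sketch of how a range-vote tally works, and why a plurality ballot is just its least expressive special case; the ballots are invented to echo the 2000 example above:

```python
# Range voting: each voter scores every candidate 0-100; the highest
# total score wins. The ballots here are invented for illustration.

def range_winner(ballots):
    """Sum scores per candidate and return the highest-scoring one."""
    totals = {}
    for ballot in ballots:
        for candidate, score in ballot.items():
            totals[candidate] = totals.get(candidate, 0) + score
    return max(totals, key=totals.get)

ballots = [
    {"Gore": 90, "Nader": 100, "Bush": 20},
    {"Gore": 100, "Nader": 80, "Bush": 10},
    {"Gore": 40, "Nader": 30, "Bush": 100},
]
print(range_winner(ballots))  # Gore (totals: Gore 230, Nader 210, Bush 130)

# The least expressive range ballot: 100 for your favorite, 0 otherwise.
# That is exactly a plurality ballot, the worst case of range voting.
plurality_ballot = {"Gore": 0, "Nader": 100, "Bush": 0}
```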

But the fact that utility exists and matters, unfortunately doesn’t tell us how to measure it. The current state-of-the-art in economics is what’s called “willingness-to-pay”, where we arrange (or observe) decisions people make involving money and try to assign dollar values to each of their choices. This is how you get disturbing calculations like “the lives lost due to air pollution are worth $10.2 billion.”

Why are these calculations disturbing? Because they have the whole thing backwards—people aren’t valuable because they are worth money; money is valuable because it helps people. It’s also really bizarre because it has to be adjusted for inflation. Finally—and this is the point that far too few people appreciate—the value of a dollar is not constant across people. Because different people have different marginal utilities of wealth, something that I would only be willing to pay $1000 for, Bill Gates might be willing to pay $1 million for—and a child in Africa might only be willing to pay $10, because that is all he has to spend. This makes “willingness-to-pay” a basically meaningless concept unless we account for whose wealth is being spent.

Utility, on the other hand, might differ between people—but, at least in principle, it can still be added up between them on the same scale. The problem is that “in principle” part: How do we actually measure it?

So far, the best I’ve come up with is to borrow from public health policy and use the QALY, or quality-adjusted life year. By asking people macabre questions like “What is the maximum number of years of your life you would give up to not have a severe migraine every day?” (I’d say about 20—that’s where I feel ambivalent. At 10 I definitely would; at 30 I definitely wouldn’t.) or “What chance of total paralysis would you take in order to avoid being paralyzed from the waist down?” (I’d say about 20%.) we assign utility values: 80 years of migraines is worth giving up 20 years to avoid, so chronic migraine is a quality of life factor of 0.75. Total paralysis is 5 times as bad as paralysis from the waist down, so if waist-down paralysis is a quality of life factor of 0.90 then total paralysis is 0.50.
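The arithmetic in that paragraph can be written out directly; a small sketch of the QALY bookkeeping:

```python
# Quality-of-life factors from trade-off answers, as in the text:
# giving up g years out of a horizon of h to avoid a condition implies
# the condition has a quality-of-life factor of (h - g) / h.

def qol_factor(years_given_up, horizon):
    return (horizon - years_given_up) / horizon

migraine = qol_factor(20, 80)  # give up 20 of 80 years to avoid migraines
print(migraine)  # 0.75

# Total paralysis judged 5x as bad as waist-down paralysis (factor 0.90,
# i.e. a 0.10 quality loss) implies a 0.50 loss, so a factor of 0.50.
total_paralysis = 1 - 5 * (1 - 0.90)
print(round(total_paralysis, 2))  # 0.5
```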

You can probably already see that there are lots of problems: What if people don’t agree? What if, due to framing effects, the same person gives different answers to slightly different phrasings? Some conditions will directly bias our judgments—depression being the obvious example. How many years of your life would you give up to not be depressed? Suicide suggests that some people would say all of them. How well do we really know our preferences on these sorts of decisions, given that most of them are decisions we will never have to make? It’s difficult enough to make the actual decisions in our lives, let alone hypothetical decisions we’ve never encountered.

Another problem is often suggested as well: How do we apply this methodology outside questions of health? Does it really make sense to ask you how many years of your life drinking Coke or driving your car is worth?
Well, actually… it better, because you make that sort of decision all the time. You drive instead of staying home, because you value where you’re going more than the risk of dying in a car accident. You drive instead of walking because getting there on time is worth that additional risk as well. You eat foods you know aren’t good for you because you think the taste is worth the cost. Indeed, most of us aren’t making most of these decisions very well—maybe you shouldn’t actually drive or drink that Coke. But in order to know that, we need to know how many years of your life a Coke is worth.

As a very rough estimate, I figure you can convert from willingness-to-pay to QALY by dividing by your annual consumption spending. Say you spend annually about $20,000—pretty typical for a First World individual. Then $1 is worth about 50 microQALY, or about 26 quality-adjusted life-minutes. Now suppose you are in Third World poverty; your consumption might be only $200 a year, so $1 becomes worth 5 milliQALY, or 1.8 quality-adjusted life-days. The very richest individuals might spend as much as $10 million on consumption, so $1 to them is only worth 100 nanoQALY, or 3 quality-adjusted life-seconds.
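That back-of-envelope conversion is easy to reproduce; a sketch assuming a year of life in perfect health is worth exactly 1 QALY:

```python
# Rough willingness-to-pay -> QALY conversion from the text: $1 is worth
# (1 / annual consumption) QALY, assuming a year in perfect health = 1 QALY.

MINUTES_PER_YEAR = 365.25 * 24 * 60

def dollar_in_qaly(annual_consumption):
    return 1.0 / annual_consumption

def dollar_in_life_minutes(annual_consumption):
    return dollar_in_qaly(annual_consumption) * MINUTES_PER_YEAR

print(round(dollar_in_qaly(20_000) * 1e6))                # 50 microQALY
print(round(dollar_in_life_minutes(20_000)))              # about 26 minutes
print(round(dollar_in_life_minutes(200) / (24 * 60), 1))  # about 1.8 days
```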

That’s an extremely rough estimate, of course; it assumes you are in perfect health, that all your time is equally valuable, and that all your purchasing decisions are optimized at the margin. Don’t take it too literally; based on the above estimate, an hour to you is worth about $2.30, so it would be worth your while to work for even $3 an hour. Here’s a simple correction we should probably make: if only a third of your time is really usable for work, you should expect at least $6.90 an hour—and hey, that’s a little less than the US minimum wage. So I think we’re in the right order of magnitude, but the details have a long way to go.

So let’s hear it, readers: How do you think we can best measure happiness?