“People invest their identity in beliefs, and decide what beliefs to profess based on the group identities they value most.”

Today I’d like to talk about the fact that “to profess” is a very important phrase in that sentence. Part of understanding ridiculous beliefs, I think, is understanding that many, if not most, of them are not actually proper beliefs. They are what Daniel Dennett calls “belief in belief”, and what has elsewhere been called “anomalous belief”. They are not beliefs in the ordinary sense that we would line up with the other beliefs in our worldview and use to anticipate experiences and motivate actions. They are something else: lone islands of belief that are not woven into our worldview. But all the same they are invested with importance, often moral or even ultimate importance; this one belief may not cohere with anything else you believe, but you must believe it, because it is a vital part of your identity and your tribe. To abandon it would not simply be a mistake; it would be heresy, it would be treason.

How do I know this? Mainly because nobody has tried to stone me to death lately.

Yet I have met many people who profess to be “Bible-believing Christians”, and who may even oppose some of these activities (chiefly sodomy, blasphemy, and nonbelief) on the grounds that they are against what the Bible says—and yet not one has tried to arrange my execution, nor have I ever seriously feared that they might.

Is this because we live in a secular society? Well, yes—but not simply that. It isn’t just that these people are afraid of being punished by our secular government should they murder me for my sins; they believe that it is morally wrong to murder me, and would rarely even consider the option. Someone could point them to the passage in Leviticus (20:16, as it turns out) that explicitly says I should be executed, and it would not change their behavior toward me.

At first glance this is quite baffling. If I thought you were about to drink a glass of water that contained cyanide, I would stop you, by force if necessary. So if they truly believe that I am going to be sent to Hell—infinitely worse than cyanide—then shouldn’t they be willing to use any means necessary to stop that from happening? And wouldn’t this be all the more true if they believe that they themselves will go to Hell should they fail to punish me?

If these “Bible-believing Christians” truly believed in Hell the way that I believe in cyanide—that is, as a proper belief that anticipates experience and motivates action—then they would in fact try to force my conversion or execute me, and in doing so would believe that they were doing right. This used to be quite common in many Christian societies (most infamously in the Salem Witch Trials), and still is disturbingly common in many Muslim societies—ISIS doesn’t just throw gay men off rooftops and stone them as a weird idiosyncrasy; it is written in the Hadith that they’re supposed to. Nor is this sort of thing confined to terrorist groups; the “legitimate” government of Saudi Arabia routinely beheads atheists and imprisons homosexuals (though it has a very capricious enforcement system, likely so that the monarchy can trump up charges to justify executing whomever it chooses). Beheading people because the book said so is what your behavior would look like if you honestly believed, as a proper belief, that the Qur’an or the Bible or whatever holy book actually contained the ultimate truth of the universe. The great irony of calling religion people’s “deeply-held belief” is that in almost all circumstances it is the exact opposite—it is their most weakly held belief, the one they could most easily sacrifice without changing their behavior.

Yet perhaps we can’t even say that to people, because they will get equally defensive and insist that they really do hold this very important anomalous belief, and how dare you accuse them otherwise. Because one of the beliefs they really do hold, as a proper belief, and a rather deeply-held one, is that you must always profess to believe your religion and defend your belief in it, and if anyone catches you not believing it that’s a horrible, horrible thing. So even though it’s obvious to everyone—probably even to you—that your behavior looks nothing like what it would if you actually believed in this book, you must say that you do, scream that you do if necessary, for no one must ever, ever find out that it is not a proper belief.

Another common trick is to try to convince people that their beliefs do affect their behavior, even when they plainly don’t. We typically use the words “religious” and “moral” almost interchangeably, when they are at best orthogonal and arguably even opposed. Part of why so many people seem to hold so rigidly to their belief-in-belief is that they think that morality cannot be justified without recourse to religion; so even though on some level they know religion doesn’t make sense, they are afraid to admit it, because they think that means admitting that morality doesn’t make sense. If you are even tempted by this inference, I present to you the entire history of ethical philosophy. Divine Command theory has been a minority view among philosophers for centuries.

Indeed, it is precisely because your moral beliefs are not based on your religion that you feel a need to resort to that defense of your religion. If you simply believed religion as a proper belief, you would base your moral beliefs on your religion, sure enough; but you’d also defend your religion in a fundamentally different way, not as something you’re supposed to believe, not as a belief that makes you a good person, but as something that is just actually true. (And indeed, many fanatics actually do defend their beliefs in those terms.) No one ever uses the argument that if we stop believing in chairs we’ll all become murderers, because chairs are actually there. We don’t believe in belief in chairs; we believe in chairs.

And really, if such a belief were completely isolated, it would not be a problem; it would just be this weird thing you say you believe that everyone really knows you don’t and it doesn’t affect how you behave, but okay, whatever. The problem is that it’s never quite isolated from your proper beliefs; it does affect some things—and in particular it can offer a kind of “support” for other real, proper beliefs that you do have, support which is now immune to rational criticism.

For example, as I already mentioned: Most of these “Bible-believing Christians” do, in fact, morally oppose homosexuality, and say that their reason for doing so is based on the Bible. This cannot literally be true, because if they actually believed the Bible they wouldn’t merely want gay marriage taken off the books; they’d want a mass pogrom of 4–10% of the population (depending on how you count), on a par with the Holocaust. Fortunately their proper belief that genocide is wrong is overriding. But they have no such overriding belief supporting the moral permissibility of homosexuality or the personal liberty of marriage rights, so the very tenuous link to their belief-in-belief in the Bible is sufficient to tilt their actual behavior.

Similarly, if the people I meet who say they think maybe 9/11 was an inside job by our government really believed that, they would most likely be trying to organize a violent revolution; any government willing to murder 3,000 of its own citizens in a false flag operation is one that must be overturned and can probably only be overturned by force. At the very least, they would flee the country. If they lived in a country where the government is actually like that, like Zimbabwe or North Korea, they wouldn’t fear being dismissed as conspiracy theorists, they’d fear being captured and executed. The very fact that you live within the United States and exercise your free speech rights here says pretty strongly that you don’t actually believe our government is that evil. But they wouldn’t be so outspoken about their conspiracy theories if they didn’t at least believe in believing them.

I also have to wonder how many of our politicians who lean on the Constitution as their source of authority have actually read the Constitution, as it says a number of rather explicit things against, oh, say, the establishment of religion (First Amendment) or searches and arrests without warrants (Fourth Amendment) that they don’t much seem to care about. Some are better about this than others; Rand Paul, for instance, actually takes the Constitution pretty seriously (and is frequently found arguing against things like warrantless searches as a result!), but Ted Cruz for example says he has spent decades “defending the Constitution”, despite saying things like “America is a Christian nation” that directly violate the First Amendment. Cruz doesn’t really seem to believe in the Constitution; but maybe he believes in believing the Constitution. (It’s also quite possible he’s just lying to manipulate voters.)

Every subculture of humans has words, attitudes, and ideas that hold it together. The obvious example is religions, but the same is true of sports fandoms, towns, and even scientific disciplines. (I would estimate that 40-60% of scientific jargon, depending on discipline, is not actually useful, but simply a way of exhibiting membership in the tribe. Even physicists do this: “quantum entanglement” is useful jargon, but “p-brane” surely isn’t. Statisticians too: Why say the clear and understandable “unequal variance” when you could show off by saying “heteroskedasticity”? In certain disciplines of the humanities this figure can rise as high as 90%: “imaginary” as a noun leaps to mind.)

This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation.

In this original formulation by Bostrom, the argument actually makes some sense. It can be escaped, because it makes some subtle anthropic assumptions that need to be considered more carefully (in short, there could be ancestor-simulations but we could still know we aren’t in one); but it deserves to be taken seriously. Indeed, I think proposition (2) is almost certainly true, and proposition (1) might be as well; thus I have no problem accepting the disjunction.

Of course, the typical form of the argument isn’t nearly so cogent. In popular outlets as prestigious as the New York Times, Scientific American, and the New Yorker, the idea is simply presented as “We are living in a simulation.” The only major outlet I could find that properly presented Bostrom’s disjunction was PBS. Indeed, there are now some Silicon Valley billionaires who believe the argument, or at least think it merits enough attention to be worth funding research into how we might escape the simulation we are in. (Frankly, even if we were inside a simulation, it’s not clear that “escaping” would be worthwhile or even possible.)

Yet most people, when presented with this idea, think it is profoundly silly and a waste of time.

I believe this is the correct response. I am 99.9% sure we are not living in a simulation.

But it’s one thing to know that an argument is wrong, and quite another to actually show why; in that respect the Simulation Argument is a lot like the Ontological Argument for God:

However, as Bertrand Russell observed, it is much easier to be persuaded that ontological arguments are no good than it is to say exactly what is wrong with them.

To resolve this problem, I am writing this post (at the behest of my Patreons) to provide you now with a concise and persuasive argument directly against the Simulation Argument. No longer will you have to rely on your intuition that it can’t be right; you actually will have compelling logical reasons to reject it.

Note that I will not deny the core principle of cognitive science that minds are computational and therefore in principle could be simulated in such a way that the “simulations” would be actual minds. That’s usually what defenders of the Simulation Argument assume you’re denying, and perhaps in many cases it is; but that’s not what I’m denying. Yeah, sure, minds are computational (probably). There’s still no reason to think we’re living in a simulation.

To make this refutation, I should definitely address the strongest form of the argument, which is Nick Bostrom’s original disjunction. As I already noted, I believe that the disjunction is in fact true; at least one of those propositions is almost certainly correct, and perhaps two of them.

Indeed, I can tell you which one: Proposition (2). That is, I see no reason whatsoever why an advanced “posthuman” species would want to create simulated universes remotely resembling our own.

First of all, let’s assume that we do make it that far and posthumans do come into existence. I really don’t have sufficient evidence to say this is so, and the combination of millions of racists and thousands of nuclear weapons does not bode particularly well for that probability. But I think there is at least some good chance that this will happen—perhaps 10%?—so, let’s concede that point for now, and say that yes, posthumans will one day exist.

To be fair, I am not a posthuman, and cannot say for certain what beings of vastly greater intelligence and knowledge than I might choose to do. But since we are assuming that they exist as the result of our descendants more or less achieving everything we ever hoped for—peace, prosperity, immortality, vast knowledge—one thing I think I can safely extrapolate is that they will be moral. They will have a sense of ethics and morality not too dissimilar from our own. It will probably not agree in every detail—certainly not with what ordinary people believe, but very likely not with what even our greatest philosophers believe. It will most likely be better than our current best morality—closer to the objective moral truth that underlies reality.

I say this because this is the pattern that has emerged throughout the advancement of civilization thus far, and the whole reason we’re assuming posthumans might exist is that we are projecting this advancement further into the future. Humans have, on average, in the long run, become more intelligent, more rational, more compassionate. We have given up entirely on ancient moral concepts that we now recognize to be fundamentally defective, such as “witchcraft” and “heresy”; we are in the process of abandoning others for which some of us see the flaws but others don’t, such as “blasphemy” and “apostasy”. We have dramatically expanded the rights of women and various minority groups. Indeed, we have expanded our concept of which beings are morally relevant, our “circle of concern”, from only those in our tribe on outward to whole nations, whole races of people—and for some of us, as far as all humans or even all vertebrates. Therefore I expect us to continue to expand this moral circle, until it encompasses all sentient beings in the universe. Indeed, on some level I already believe that, though I know I don’t actually live in accordance with that theory—blame me if you will for my weakness of will, but can you really doubt the theory? Does it not seem likely that this is the theory to which our posthuman descendants will ultimately converge?

If that is the case, then posthumans would never make a simulation remotely resembling the universe I live in.

Maybe not me in particular, for I live relatively well—though I must ask why the migraines were really necessary. But among humans in general, there are many millions who live in conditions of such abject squalor and suffering that to create a universe containing them can only be counted as the gravest of crimes, morally akin to the Holocaust.

Indeed, creating this universe must, by construction, literally include the Holocaust. Because the Holocaust happened in this universe, you know.

So unless you think that our posthuman descendants are monsters—demons really, immortal beings of vast knowledge and power who thrive on the death and suffering of other sentient beings—you cannot think that they would create our universe. They might create a universe of some sort—but they would not create this one. You may consider this a corollary of the Problem of Evil, which has always been one of the (many) knockdown arguments against the existence of God as depicted in any major religion.

To deny this, you must twist the simulation argument quite substantially, and say that only some of us are actual people, sentient beings instantiated by the simulation, while the vast majority are, for lack of a better word, NPCs. The millions of children starving in southeast Asia and central Africa aren’t real, they’re just simulated, so that the handful of us who are real have a convincing environment for the purposes of this experiment. Even then, it seems monstrous to deceive us in this way, to make us think that millions of children are starving just to see if we’ll try to save them.

Bostrom presents it as obvious that any species of posthumans would want to create ancestor-simulations, and to make this seem plausible he compares them to the many simulations we already create with our current technology, which we call “video games”. But this is such a severe equivocation on the word “simulation” that it frankly seems disingenuous (or for the pun perhaps I should say dissimulation).

This universe can’t possibly be a simulation in the sense that Halo 4 is a simulation. Indeed, this is something that I know with near-perfect certainty, for I am a sentient being (“Cogito ergo sum” and all that). There is at least one actual sentient person here—me—and based on my observations of your behavior, I know with quite high probability that there are many others as well—all of you.

Whereas, if I thought for even a moment there was even a slight probability that Halo 4 contains actual sentient beings that I am murdering, I would never play the game again; indeed I think I would smash the machine, and launch upon a global argumentative crusade to convince everyone to stop playing violent video games forevermore. If I thought that these video game characters that I explode with virtual plasma grenades were actual sentient people—or even had a non-negligible chance of being such—then what I am doing would be literally murder.

So whatever else the posthumans would be doing by creating our universe inside some vast computer, it is not “simulation” in the sense of a video game. If they are doing this for amusement, they are monsters. Even if they are doing it for some higher purpose such as scientific research, I strongly doubt that it can be justified; and I even more strongly doubt that it could be justified frequently. Perhaps once or twice in the whole history of the civilization, as a last resort to achieve some vital scientific objective when all other methods have been thoroughly exhausted. Furthermore it would have to be toward some truly cosmic objective, such as forestalling the heat death of the universe. Anything less would not justify literally replicating thousands of genocides.

But the way Bostrom generates a nontrivial probability of us living in a simulation is by assuming that each posthuman civilization will create many simulations similar to our own, so that the prior probability of being in a simulation is so high that it overwhelms the much higher likelihood that we are in the real universe. (This is a deeply Bayesian argument; of that part, I approve. In Bayesian reasoning, the likelihood is the probability that we would observe the evidence we do given that the theory is true, while the prior is the probability that the theory is true, before we’ve seen any evidence. The probability of the theory actually being true is proportional to the likelihood multiplied by the prior.) But if the Foundation IRB will only approve the construction of a Synthetic Universe in order to achieve some cosmic objective, then the prior probability that we are in the real universe is something like 2/3, or 9/10; and thus the simulation hypothesis is no match whatsoever for the roughly 10^12-to-1 likelihood ratio in favor of this being actual reality.
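The odds-form arithmetic behind that parenthetical can be sketched in a few lines of Python. The 9/10 prior and the 10^12 likelihood ratio are the illustrative figures floated in this paragraph, not measured quantities:

```python
# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
# Illustrative figures from the text: prior P(real) = 9/10, and the evidence
# is taken to be 10^12 times more likely if this is the real universe.

prior_real = 9 / 10
likelihood_ratio = 1e12   # P(evidence | real) / P(evidence | simulated)

prior_odds = prior_real / (1 - prior_real)        # 9 to 1 in favor of "real"
posterior_odds = prior_odds * likelihood_ratio    # 9e12 to 1
p_real = posterior_odds / (1 + posterior_odds)

print(p_real)   # overwhelmingly close to 1
```

The point is only structural: a likelihood ratio that large swamps any moderate prior, in either direction.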

Just what is this so-compelling likelihood? That brings me to my next point, which is a bit more technical, but important because it’s really where the Simulation Argument truly collapses.

How do I know we aren’t in a simulation?

The fundamental equations of the laws of nature do not have closed-form solutions.

Take a look at the Schrödinger equation, the Einstein field equations, the Navier-Stokes equations, even Maxwell’s equations (which are relatively well-behaved, all things considered). These are all systems of partial differential equations, extremely difficult to solve. They are all defined over continuous time and space, which has uncountably many points in every interval (though there are some physicists who believe that spacetime may be discrete on the order of 10^-44 seconds). Not one of them has a general closed-form solution, by which I mean a formula into which you could just plug numbers for the parameters on one side of the equation and get an answer out on the other. (x^3 + y^3 = 3 is not a closed-form solution, but y = (3 – x^3)^(1/3) is.) They have such exact solutions in certain special cases, but in general we can only solve them approximately, if at all.
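To make the distinction concrete, here is a small Python sketch using the toy equation above. The closed form is evaluated directly; without it, you are reduced to an approximate method such as bisection, which is (very roughly) our actual situation with the fundamental equations:

```python
# Toy example from the text: x^3 + y^3 = 3.

def y_closed_form(x):
    """Closed form: plug in x, get y directly (assumes 3 - x**3 >= 0)."""
    return (3 - x**3) ** (1 / 3)

def y_by_bisection(x, lo=0.0, hi=2.0, tol=1e-12):
    """No closed form available: solve f(y) = x^3 + y^3 - 3 = 0 numerically.
    Assumes the root lies in [lo, hi] with a sign change across it."""
    f = lambda y: x**3 + y**3 - 3
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid    # root is in the lower half
        else:
            lo = mid    # root is in the upper half
    return (lo + hi) / 2

# Both agree: for x = 1, y = 2^(1/3) ~ 1.2599
print(y_closed_form(1.0), y_by_bisection(1.0))
```

The closed form costs one arithmetic expression; the numerical method costs an open-ended loop and only ever yields an approximation, which is the asymmetry the paragraph is pointing at.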

This is not particularly surprising if you assume we’re in the actual universe. I have no particular reason to think that the fundamental laws underlying reality should be of a form that is exactly solvable to minds like my own, or even solvable at all in any but a trivial sense. (They must be “solvable” in the sense of actually resulting in something in particular happening at any given time, but that’s all.)

But it is extremely surprising if you assume we’re in a universe that is simulated by posthumans. If posthumans are similar to us, but… more so I guess, then when they set about to simulate a universe, they should do so in a fashion not too dissimilar from how we would do it. And how would we do it? We’d code in a bunch of laws into a computer in discrete time (and definitely not with time-steps of 10^-44 seconds either!), and those laws would have to be encoded as functions, not equations. There could be many inputs in many different forms, perhaps even involving mathematical operations we haven’t invented yet—but each configuration of inputs would have to yield precisely one output, if the computer program is to run at all.

Indeed, if they are really like us, then their computers will probably only be capable of one core operation—conditional bit flipping, 1 to 0 or 0 to 1 depending on some state—and the rest will be successive applications of that operation. Bit shifts are many bit flips, addition is many bit shifts, multiplication is many additions, exponentiation is many multiplications. We would therefore expect the fundamental equations of the simulated universe to have an extremely simple functional form, literally something that can be written out as many successive steps of “if A, flip X to 1” and “if B, flip Y to 0”. It could be a lot of such steps mind you—existing programs require billions or trillions of such operations—but one thing it could never be is a partial differential equation that cannot be solved exactly.
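As a purely illustrative sketch, here is that tower in Python: addition built up from nothing but single-bit logic, each column being exactly the kind of "if A, flip X" step described above:

```python
# Ripple-carry addition from single-bit operations only.
# Each column's sum bit is a pair of conditional flips (XOR);
# the carry rule is built from AND and OR on single bits.

def bit_add(a_bits, b_bits):
    """Add two equal-length little-endian bit lists; returns one extra bit."""
    out, carry = [], 0
    for a, b in zip(a_bits, b_bits):
        out.append(a ^ b ^ carry)             # sum bit: conditional flips
        carry = (a & b) | (carry & (a ^ b))   # carry-out of this column
    out.append(carry)
    return out

def to_bits(n, width):
    """Integer -> little-endian bit list."""
    return [(n >> i) & 1 for i in range(width)]

def from_bits(bits):
    """Little-endian bit list -> integer."""
    return sum(b << i for i, b in enumerate(bits))

print(from_bits(bit_add(to_bits(13, 8), to_bits(29, 8))))  # prints 42
```

Multiplication would be a loop of such additions, exponentiation a loop of multiplications, and so on up the tower; at no point does anything other than a computable, discrete function appear.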

What fans of the Simulation Argument seem to forget is that while this simple set of operations is extremely general, capable of generating quite literally any possible computable function (Turing proved that), it is not capable of generating any function that isn’t computable, much less any equation that can’t be solved into a function. So unless the laws of the universe can actually be reduced to computable functions, it’s not even possible for us to be inside a computer simulation.

What is the probability that all the fundamental equations of the universe can be reduced to computable functions? Well, it’s difficult to assign a precise figure of course. I have no idea what new discoveries might be made in science or mathematics in the next thousand years (if I did, I would make a few and win the Nobel Prize). But given that we have been trying to get closed-form solutions for the fundamental equations of the universe and failing miserably since at least Isaac Newton, I think that probability is quite small.

Then there’s the fact that (again unless you believe some humans in our universe are NPCs) there are 7.3 billion minds (and counting) that you have to simulate at once, even assuming that the simulation only includes this planet and yet somehow perfectly generates an apparent cosmos that even behaves as we would expect under things like parallax and redshift. There’s the fact that whenever we try to study the fundamental laws of our universe, we are able to do so, and never run into any problems of insufficient resolution; so apparently at least this planet and its environs are being simulated at the scale of nanometers and femtoseconds. This is a ludicrously huge amount of data, and while I cannot rule out the possibility of some larger universe existing that would allow a computer large enough to contain it, you have a very steep uphill battle if you want to argue that this is somehow what our posthuman descendants will consider the best use of their time and resources. Bostrom uses the video game comparison to make it sound like they are just cranking out copies of Halo 917 (“Plasma rifles? How quaint!”) when in fact it amounts to assuming that our descendants will just casually create universes of 10^50 particles running over space intervals of 10^-9 meters and time-steps of 10^-15 seconds that contain billions of actual sentient beings and thousands of genocides, and furthermore do so in a way that somehow manages to make the apparent fundamental equations inside those universes unsolvable.

Indeed, I think it’s conservative to say that the likelihood ratio is 10^12—observing what we do is a trillion times more likely if this is the real universe than if it’s a simulation. Therefore, unless you believe that our posthuman descendants would have reason to create at least a billion simulations of universes like our own, you can assign a probability that we are in the actual universe of at least 99.9%.
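That closing arithmetic is easy to check directly. With the text's illustrative figures, if posthumans ran N simulations per real universe, the prior odds of being simulated are N to 1, and a 10^12 likelihood ratio favoring "real" leaves posterior odds of 10^12/N in favor of reality:

```python
# Posterior odds that this is the real universe, as a function of how many
# simulations (N) each real universe spawns. Figures are illustrative,
# taken from the argument in the text.

likelihood_ratio = 1e12   # P(evidence | real) / P(evidence | simulated)

for n_sims in (1e3, 1e6, 1e9):
    odds_real = likelihood_ratio / n_sims
    p_real = odds_real / (1 + odds_real)
    print(f"N = {n_sims:.0e}:  P(real) = {p_real:.9f}")
```

At N = 10^9 this gives P(real) = 1000/1001, i.e. just over 99.9%, which is where the threshold in this paragraph comes from.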

One of the most unfortunate facts in the world—indeed, perhaps the most unfortunate fact, from which most other unfortunate facts follow—is that it is quite possible for a human brain to sincerely and deeply hold a belief that is, by any objective measure, totally and utterly ridiculous.

And to be clear, I don’t just mean false; I mean ridiculous. People having false beliefs is an inherent part of being finite beings in a vast and incomprehensible universe. Monetarists are wrong, but they are not ludicrous. String theorists are wrong, but they are not absurd. Multiregionalism is wrong, but it is not nonsensical. Indeed, I, like anyone else, am probably wrong about a great many things, though of course if I knew which ones I’d change my mind. (Indeed, I admit a small but nontrivial probability of being wrong about the three things I just listed.)

I mean ridiculous beliefs. I mean that any rational, objective assessment of the probability of that belief being true would be vanishingly small, 1 in 1 million at best. I’m talking about totally nonsensical beliefs, beliefs that go against overwhelming evidence; some of them are outright incoherent. Yet millions of people go on believing them.

I love the term “extrasensory perception” because it is such an oxymoron; if you’re perceiving, it is via senses. “Sixth sense” is better, except that we actually already have at least nine senses: The ones you probably know, vision (sight), audition (hearing), olfaction (smell), gustation (taste), and tactition (touch)—and the ones you may not know, thermoception (heat), proprioception (body position), vestibulation (balance), and nociception (pain). These can probably be subdivided further—vision and spatial reasoning are dissociated in blind people, heat and cold are separate nerve pathways, pain and itching are distinct systems, and there are a variety of different sensors used for proprioception. So we really could have as many as twenty senses, depending on how you’re counting.

What about telepathy? Well, that is not actually impossible in principle; it’s just that there’s no evidence that humans actually do it. Smartphones do it almost literally constantly, transmitting data via high-frequency radio waves back and forth to one another. We could have evolved some sort of radio transceiver organ (perhaps an offshoot of an electric defense organ such as that of electric eels), but as it turns out we didn’t. Actually in some sense—which some might say is trivial, but I think it’s actually quite deep—we do have telepathy; it’s just that we transmit our thoughts not via radio waves or anything more exotic, but via sound waves (speech) and marks on paper (writing) and electronic images (what you’re reading right now). Human beings really do transmit our thoughts to one another, and this truly is a marvelous thing we should not simply take for granted (it is one of our most impressive feats of Mundane Magic); but somehow I don’t think that’s what people mean when they say they believe in psychic telepathy.

And lest you think this is a uniquely American phenomenon: The particular beliefs vary from place to place, but bizarre beliefs abound worldwide, from conspiracy theories in the UK to 9/11 “truthers” in Canada to HIV denialism in South Africa (fortunately on the wane). The American examples are more familiar to me and most of my readers are Americans, but wherever you are reading from, there are probably ridiculous beliefs common there.

I could go on, listing more objectively ridiculous beliefs that are surprisingly common; but the more I do that, the more I risk alienating you, in case you should happen to believe one of them. When you add up the dizzying array of ridiculous beliefs one could hold, odds are that most people you’d ever meet will have at least one of them. (“Not me!” you’re thinking; and perhaps you’re right. Then again, I’m pretty sure that the 4% or so of people who believe in the Reptilians think the same thing.)

Which brings me to my real focus: How do we reach these people?

One possible approach would be to just ignore them, leave them alone, or go about our business with them as though they did not have ridiculous beliefs. This is in fact the right thing to do under most circumstances, I think; when a stranger on the bus starts blathering about how the lizard people are going to soon reveal themselves and establish the new world order, I don’t think it’s really your responsibility to persuade that person to realign their beliefs with reality. Nodding along quietly would be acceptable, and it would be above and beyond the call of duty to simply say, “Um, no… I’m fairly sure that isn’t true.”

But this cannot always be the answer, if for no other reason than the fact that we live in a democracy, and people with ridiculous beliefs frequently vote according to them. Then people with ridiculous beliefs can take office, and make laws that affect our lives. Actually this would be true even if we had some other system of government; there’s nothing in particular to stop monarchs, hereditary senates, or dictators from believing ridiculous things. If anything, the opposite; dictators are known for their eccentricity precisely because there are no checks on their behavior.

So we really do need to find a way to talk to people who have ridiculous beliefs, and engage with them, understand why they think the way they do, and then—hopefully at least—tilt them a little bit back toward rational reality. You will not be able to change their mind completely right away, but if each of us can at least chip away at their edifice of absurdity, then all together perhaps we can eventually bring them to enlightenment.

Of course, a good start is probably not to say you think that their beliefs are ridiculous, because people get very defensive when you do that, even—perhaps especially—when it’s true. People invest their identity in beliefs, and decide what beliefs to profess based on the group identities they value most.

This is the link that we must somehow break. We must show people that they are not defined by their beliefs, that it is okay to change your mind. We must be patient and compassionate—sometimes heroically so, as people spout offensive nonsense in our faces, sometimes nonsense that attacks us personally. (“Atheists deserve Hell”, taken literally, is something like a death threat, only infinitely worse. To the speaker it is very likely just a slogan; but to the atheist listening, it says that you believe they are so evil, so horrible, that they deserve eternal torture for believing what they do. And you get mad when we say your beliefs are ridiculous?)

We must also remind people that even very smart people can believe very dumb things—indeed, I’d venture a guess that most dumb things are in fact believed by smart people. Even the most intelligent human beings can only glimpse a tiny fraction of the universe, and all human brains are subject to the same fundamental limitations, the same core heuristics and biases. Make it clear that you are saying their beliefs are false, not that they are stupid or crazy. And make it clear to yourself that this is in fact what you believe, because it ought to be. It can be tempting to think that only an idiot would believe something so ridiculous—and you are safe, for you are no idiot!—but the truth is far more humbling: human brains are subject to many flaws, and guarding the fortress of the mind against error and deceit is a 24/7 occupation. Indeed, I hope you will ask yourself: “What beliefs do I hold that other people might find ridiculous? Are they, in fact, ridiculous?”

Even then, it won’t be easy. Most people are strongly resistant to any change in belief, however small, and it is in the nature of ridiculous beliefs that they require radical changes in order to restore correspondence with reality. So we must try in smaller steps.

Maybe don’t try to convince them that 9/11 was actually the work of Osama bin Laden; start by pointing out that yes, steel does bend much more easily at the temperature at which jet fuel burns. Maybe don’t try to persuade them that astrology is meaningless; start by pointing out the ways that their horoscope doesn’t actually seem to fit them, or could be made to fit anybody. Maybe don’t try to get across the real urgency of climate change just yet, and instead point out that the “study” they read showing it was a hoax was clearly funded by oil companies, who would perhaps have a vested interest here. And as for ESP? I think it’s a good start just to point out that we have more than five senses already, and there are many wonders of the human brain that actual scientists know about well worth exploring—so who needs to speculate about things that have no scientific evidence?

Eliezer Yudkowsky (founder of the excellent blog forum Less Wrong) has a term he likes to use to distinguish his economic policy views from liberal, conservative, or even libertarian ones: “econoliterate”, meaning the sort of economic policy ideas one comes up with when one actually knows a good deal about economics.

In general I think Yudkowsky overestimates this effect; I’ve known some very knowledgeable economists who disagree quite strongly over economic policy, often along the conventional political lines of liberal versus conservative: liberal economists want more progressive taxation and more Keynesian monetary and fiscal policy, while conservative economists want to reduce taxes on capital and remove regulations. Theoretically you can want all these things—as Miles Kimball does—but it’s rare. Conservative economists hate the minimum wage, and lean on the theory that says it should be harmful to employment; liberal economists are ambivalent about the minimum wage, and lean on the empirical data showing it has almost no effect on employment. Which is more reliable? The empirical data, obviously—and until more economists start thinking that way, economics will never truly be the science it should be.

Here is one place where economists and the public sharply diverge: not unemployment, which both economists and almost everyone else agree is bad, but people losing their jobs. The general consensus among the public seems to be that people losing jobs is always bad, while economists generally consider it a sign of an economy that is running smoothly and efficiently.

To be clear, of course losing your job is bad for you; I don’t mean to imply that if you lose your job you shouldn’t be sad or frustrated or anxious about that, particularly not in our current system. Rather, I mean to say that policy which tries to keep people in their jobs is almost always a bad idea.

I think the problem is that most people don’t quite grasp that losing your job and not having a job are not the same thing. People not having jobs who want to have jobs—unemployment—is a bad thing. But losing your job doesn’t mean you have to stay unemployed; it could simply mean you get a new job. And indeed, that is what it should mean, if the economy is running properly.

The red line shows hires—people getting jobs. The blue line shows separations—people losing jobs or leaving jobs. During a recession (the most recent two are shown on this graph), people don’t actually leave their jobs faster than usual; if anything, slightly less. Instead what happens is that hiring rates drop dramatically. When the economy is doing well (as it is right now, more or less), both hires and separations are at very high rates.
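You can see this pattern in a toy flow model. The rates below are made-up numbers for illustration, not the actual figures from the graph: hold the separation rate fixed, collapse the hiring rate, and unemployment jumps.

```python
# Toy labor-market flow model (illustrative rates, not real data):
# each month, a fraction of the employed separate from their jobs
# and a fraction of the unemployed get hired.

def simulate(months, hire_rate, sep_rate, employed=0.95):
    unemployed = 1.0 - employed
    for _ in range(months):
        separations = sep_rate * employed
        hires = hire_rate * unemployed
        employed += hires - separations
        unemployed += separations - hires
    return unemployed

# Same separation rate in both scenarios; only hiring changes.
normal = simulate(24, hire_rate=0.40, sep_rate=0.02)
slump = simulate(24, hire_rate=0.15, sep_rate=0.02)
print(f"unemployment with normal hiring:    {normal:.1%}")
print(f"unemployment with collapsed hiring: {slump:.1%}")
```

Even though nobody loses their job any faster in the second scenario, unemployment more than doubles, simply because the people who do separate stay unemployed far longer.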

Why is this? Well, think about what a job is, really: it’s something that needs doing, that no one wants to do for free, so someone pays someone else to do it. Once that thing gets done, what should happen? The job should end. It’s done. The purpose of the job was not to provide for your standard of living; it was to accomplish the task at hand. Once it no longer needs doing, why keep doing it?

We tend to lose sight of this, for a couple of reasons. First, we don’t have a basic income, and our social welfare system is very minimal; so a job usually is the only way people have to provide for their standard of living, and they come to think of this as the purpose of the job. Second, many jobs don’t really “get done” in any clear sense; individual tasks are completed, but new ones always arise. After every email sent is another received; after every patient treated is another who falls ill.

But even that is really only true in the short run. In the long run, almost all jobs do actually get done, in the sense that no one has to do them anymore. The job of cleaning up after horses is done (with rare exceptions). The job of manufacturing vacuum tubes for computers is done. Indeed, the job of being a computer—that used to be a profession, young women toiling away with slide rules—is very much done. There are no court jesters anymore, no town criers, and very few artisans (and even then, they’re really more like hobbyists). There are more writers now than ever, and occasional stenographers, but there are no scribes—no one powerful but illiterate pays others just to write things down, because no one powerful is illiterate (and few people who are not powerful are illiterate either, and fewer all the time).

When a job “gets done” in this long-run sense, we usually say that it is obsolete, and again think of this as somehow a bad thing, as though we are losing the ability to do something. No: we are gaining the ability to do something better. Jobs don’t become obsolete because we can’t do them anymore; they become obsolete because we don’t need to do them anymore. Instead of “computer” being a profession of people toiling with slide rules, computers are now thinking machines that fit in our pockets; and there are plenty of jobs for software engineers, web developers, network administrators, hardware designers, and so on as a result.

Soon, there will be no coal miners, and very few oil drillers—or at least I hope so, for the sake of our planet’s climate. There will be far fewer auto workers (robots already do most of that work), but far more construction workers who install rail lines. There will be more nuclear engineers, more photovoltaic researchers, even more miners and roofers, because we need to mine uranium and install solar panels on rooftops.

Yet even by saying that I am falling into the trap: I am making it sound like the benefit of new technology is that it opens up more new jobs. Typically it does do that, but that isn’t what it’s for. The purpose of technology is to get things done.

Remember my parable of the dishwasher. The goal of our economy is not to make people work; it is to provide people with goods and services. If we could invent a machine today that would do the job of everyone in the world and thereby put us all out of work, most people think that would be terrible—but in fact it would be wonderful.

Or at least it could be, if we did it right. See, the problem right now is that while poor people think that the purpose of a job is to provide for their needs, rich people think that the purpose of poor people is to do jobs. If there are no jobs to be done, why bother with them? At that point, they’re just in the way! (Think I’m exaggerating? Why else would anyone put a work requirement on TANF and SNAP? To do that, you must literally think that poor people do not deserve to eat or have homes if they aren’t, right now, working for an employer. You can couch that in cold economic jargon as “maximizing work incentives”, but that’s what you’re doing—you’re threatening people with starvation if they can’t or won’t find jobs.)

What would happen if we tried to stop people from losing their jobs? Typically, inefficiency. When you aren’t allowed to lay people off once they are no longer doing useful work, you end up with a large segment of the population being paid but not doing useful work—and unlike the situation with a basic income, those people would lose their income, at least temporarily, if they quit to do something more useful. There is still considerable uncertainty in the empirical literature about just how much “employment protection” (laws that make it hard to lay people off) actually creates inefficiency and reduces productivity and employment, so this effect could be small—but even so, such laws do not seem to have the desired effect of reducing unemployment either. It may be like the minimum wage, where the effect just isn’t all that large. But employment protection is probably not saving people from being unemployed; it may simply be shifting the distribution of unemployment, so that people with protected jobs are almost never unemployed and people without them are unemployed much more frequently. (This doesn’t have to be written into law, either: tenure for university professors is a matter of custom rather than statute, but it quite clearly makes tenured professors vastly more secure, at the cost of making employment tenuous and underpaid for adjuncts.)
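To make that “shifting the distribution” point concrete, here is a toy calculation with purely hypothetical numbers: protection can leave the average unemployment rate exactly where it was, while concentrating all of the joblessness on the unprotected.

```python
# Hypothetical numbers: employment protection leaves average unemployment
# unchanged but concentrates it on the unprotected "outsiders".

def average_unemployment(groups):
    """groups: list of (population_share, unemployment_rate) pairs."""
    return sum(share * rate for share, rate in groups)

# Without protection: everyone faces the same 6% unemployment risk.
uniform = average_unemployment([(1.0, 0.06)])

# With protection: 70% of workers are protected insiders at 1% unemployment;
# the remaining 30% absorb all the rest of the joblessness.
outsider_rate = (uniform - 0.7 * 0.01) / 0.3
with_protection = average_unemployment([(0.7, 0.01), (0.3, outsider_rate)])

print(f"average rate, no protection:   {uniform:.1%}")
print(f"average rate, with protection: {with_protection:.1%}")
print(f"outsider rate:                 {outsider_rate:.1%}")
```

The average is 6% either way; the only difference is that in the second scenario a minority of workers bears nearly an 18% unemployment rate so that the majority can enjoy near-total security.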

There are better policies than employment protection: active labor market policies like those in Denmark, which make it easier to find a good job. Yet even then, we’re assuming that everyone needs jobs, and increasingly that just isn’t true.

So, when we invent a new technology that replaces workers, workers are laid off from their jobs—and that is as it should be. What happens next is what we do wrong, and it’s not even anybody in particular; this is something our whole society does wrong: All those displaced workers get nothing. The extra profit from the more efficient production goes entirely to the shareholders of the corporation—and those shareholders are almost entirely members of the top 0.01%. So the poor get poorer and the rich get richer.

And, perhaps most instructively, here are the quintiles of people who own their homes versus those who rent. (The rent is too damn high!)

All that is just within the US, and already the figures range from the mean net wealth of the lowest quintile of people under 35 (-$45,000, yes negative—student loans) to the mean net wealth of the highest quintile of people with graduate degrees ($3.8 million). All but the top quintile of renters are poorer than all but the bottom quintile of homeowners. And the median Black or Hispanic person has less than one-tenth the wealth of the median White or Asian person.

If we look worldwide, wealth inequality is even starker. Based on UN University figures, 40% of world wealth is owned by the top 1%; 70% by the top 5%; and 80% by the top 10%. There is less total wealth in the bottom 80% than in the 80-90% decile alone. According to Oxfam, the richest 85 individuals own as much net wealth as the poorest 3.7 billion. They are the 0.000001%.

If we had an equal distribution of capital ownership, people would be happy when their jobs became obsolete, because it would free them up to do other things (either new jobs, or simply leisure time), while not decreasing their income—because they would be the shareholders receiving those extra profits from higher efficiency. People would be excited to hear about new technologies that might displace their work, especially if those technologies would displace the tedious and difficult parts and leave the creative and fun parts. Losing your job could be the best thing that ever happened to you.

The business cycle would still be a problem; we have good reason not to let recessions happen. But stopping the churn of hiring and firing wouldn’t actually make our society better off; it would keep people in jobs where they don’t belong and prevent us from using our time and labor for its best use.

Perhaps the reason most people don’t even think of this solution is precisely because of the extreme inequality of capital distribution—and the fact that it has been more or less this way since the dawn of civilization. It doesn’t seem to even occur to most people that capital income is a thing that exists, because they are so far removed from having any amount of capital sufficient to generate meaningful income. Perhaps when a robot takes their job, on some level they imagine that the robot is getting paid, when of course it’s the shareholders of the corporations that made the robot and the corporations using the robot in place of workers who get paid. Or perhaps they imagine that those shareholders worked so hard that they deserve all that money for the hours they put in.

Because pay is for work, isn’t it? The reason you get money is because you’ve earned it by your hard work?

No. This is a lie, told to you by the rich and powerful in order to control you. They know full well that income doesn’t just come from wages—most of their income doesn’t come from wages! Yet this is even built into our language; we say “net worth” and “earnings” rather than “net wealth” and “income”. (Parade magazine has a regular segment called “What People Earn”; it should be called “What People Receive”.) Money is not your just reward for your hard work—at least, not always.

The reason you get money is that this is a useful means of allocating resources in our society. (Remember, money was created by governments for the purpose of facilitating economic transactions. It is not something that occurs in nature.) Wages are one way to do that, but they are far from the only way; they are not even the only way currently in use. As technology advances, we should expect a larger proportion of our income to go to capital—but what we’ve been doing wrong is setting it up so that only a handful of people actually own any capital.

Fix that, and maybe people will finally be able to see that losing your job isn’t such a bad thing; it could even be satisfying, the fulfillment of finally getting something done.

If you’ve been reading my blog for a while, you have likely noticed me occasionally drop the hashtag #ScandinaviaIsBetter; I am in fact quite enamored of the Scandinavian (or more generally Nordic) model of economic and social policy.

But this is not a consensus view (except perhaps within Scandinavia itself), and I haven’t actually gotten around to presenting a detailed argument for just what it is that makes these countries so great.

I was inspired to do this by discussion with a classmate of mine (who shall remain nameless) who emphatically disagreed; he actually seems to think that American economic policy is somewhere near optimal (and to be fair, it might actually be near optimal, in the broad space of all possible economic policies—we are not Maoist China, we are not Somalia, we are not a nuclear wasteland). He couldn’t disagree with the statistics on how wealthy and secure and happy Scandinavian countries are, so instead he came up with this: “They are parasites.”

What he seemed to mean by this is that Scandinavian countries somehow achieve their success by sapping wealth from other countries, perhaps the rest of Europe, perhaps the world more generally. On this view, it’s not that Norway and Denmark are rich because they have economic policy basically figured out; no, they are somehow draining those riches from elsewhere.

This could scarcely be further from the truth.

But first, consider a couple of countries that are parasites, at least partially: Luxembourg and Singapore.

No, what makes Luxembourg a parasite is the fact that 36% of their GDP is due to finance. Compare the US, where 12% of our GDP is finance—and we are clearly overfinancialized. Over a third of Luxembourg’s income doesn’t involve actually… doing anything. They hold onto other people’s money and place bets with it. Even insofar as finance can be useful, it should be only very slightly profitable, and definitely not more than 10% of GDP. As Stiglitz and Krugman agree (and both are Nobel Laureate economists), banking should be boring.

And at least oil actually does things. Oil exporting countries aren’t parasites so much as they are drug dealers. The world is “rolling drunk on petroleum”, and until we manage to get sober we’re going to continue to need that sweet black crude. Better we buy it from Norway than Saudi Arabia.

But in general, I think if you assembled a general index of overall prosperity of a country (or simply used one that already exists like the Human Development Index), you would find that Scandinavian countries are disproportionately represented at the very highest rankings. This calls out for some sort of explanation.

Is it simply that they are so small? They are certainly quite small; Norway and Denmark each have fewer people than the core of New York City, and Sweden has slightly more people than the Chicago metropolitan area. Add in Finland and Iceland (which aren’t quite Scandinavia), and all together you have about the population of the New York City Combined Statistical Area.

But some of the world’s smallest countries are also its poorest. Samoa and Kiribati each have populations comparable to the city of Ann Arbor and per-capita GDPs 1/10 that of the US. Eritrea is the same size as Norway, and 70 times poorer. Burundi is slightly larger than Sweden, and has a per-capita GDP PPP of only $3.14 per day.

There’s actually a good statistical reason to expect that the smallest countries should vary the most in their incomes; you’re averaging over a smaller sample so you get more variance in the estimate. But this doesn’t explain why Norway is rich and Eritrea is poor. Incomes aren’t assigned randomly. This might be a reason to try comparing Norway to specifically New York City or Los Angeles rather than to the United States as a whole (Norway still does better, in case you were wondering—especially compared to LA); but it’s not a reason to say that Norway’s wealth doesn’t really count.
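That statistical point is easy to demonstrate with purely synthetic numbers: draw every individual income from one and the same distribution, and the “national averages” over small populations still spread out far more than the averages over large ones.

```python
# Synthetic demonstration that averages over small populations vary more
# than averages over large ones (the variance of a mean shrinks like 1/n).
import random
import statistics

random.seed(0)

def national_average(population):
    # Every individual income comes from the same lognormal distribution,
    # so any difference between "countries" is pure sampling noise.
    return statistics.fmean(
        random.lognormvariate(10, 1) for _ in range(population)
    )

small = [national_average(100) for _ in range(100)]     # 100 tiny countries
large = [national_average(10_000) for _ in range(100)]  # 100 big countries

print(f"spread of small-country averages: {statistics.stdev(small):,.0f}")
print(f"spread of large-country averages: {statistics.stdev(large):,.0f}")
```

With a hundred-fold difference in population, the spread of the small-country averages comes out roughly ten times larger, even though every “citizen” was drawn from an identical distribution.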

Moreover, there are some very ethnically homogeneous countries that are in horrible shape. North Korea is almost completely ethnically homogeneous, for example, as is Haiti. There does seem to be a correlation between higher ethnic diversity and lower economic prosperity, but Canada and the US are vastly more diverse than Japan and South Korea yet significantly richer. So clearly ethnicity is not the whole story here.

I do think ethnic homogeneity can partly explain why Scandinavian countries have the good policies they do; because humans are tribal, ethnic homogeneity engenders a sense of unity and cooperation, a notion that “we are all in this together”. That egalitarian attitude makes people more comfortable with some of the policies that make Scandinavia what it is, which I will get into at the end of this post.

But this difficulty in falsification is a reason to be cautious about such a hypothesis; it should be a last resort when all the more testable theories have been ruled out. I’m not saying culture doesn’t matter; it clearly does. But unless you can test it, “culture” becomes a theory that can explain just about anything—which means that it really explains nothing.

I can’t really disagree with “good diet”, except to say that almost everywhere eats a better diet than the United States. The homeland of McDonald’s and Coca-Cola is frankly quite dystopian when it comes to rates of heart disease and diabetes. Given our horrible diet and ludicrously inefficient healthcare system, the only reason we live as long as we do is that we are an extremely rich country (so we can afford to pay the most for healthcare, for certain definitions of “afford”), and almost no one here smokes anymore. But good diet isn’t so much Scandinavian as it is… un-American.

But none of these things adequately explains why poverty and inequality are so much lower in Scandinavia than in the United States, and there’s really a quite simple explanation.

Why is it that #ScandinaviaIsBetter? They’re not afraid to make rich people pay higher taxes so they can help poor people.

In the US, this idea of “redistribution of wealth” is anathema, even taboo; simply accusing a policy of being “redistributive” or “socialist” is for many Americans a knock-down argument against that policy. In Denmark, “socialist” is a meaningful descriptor; some policies are “socialist”, others “capitalist”, and these aren’t particularly loaded terms; it’s like saying here that a policy is “Keynesian” or “Monetarist”, or if that’s too obscure, saying that it’s “liberal” or “conservative”. People will definitely take sides, and it is a matter of political importance—but it’s inside the Overton Window. It’s not almost unthinkable, as it is here.

If culture has an effect here, it likely comes from Scandinavia’s long traditions of egalitarianism. Going at least back to the Vikings, in theory at least (clearly not always in practice), people—or at least fellow Scandinavians—were considered equal participants in society, no one “better” or “higher” than anyone else. Even today, it is impolite in Denmark to express pride at your own accomplishments; there’s a sense that you are trying to present yourself as somehow more deserving than others. Honestly this attitude seems unhealthy to me, though perhaps preferable to the unrelenting narcissism of American society; but insofar as culture is making Scandinavia better, it’s almost certainly because this thoroughgoing sense of egalitarianism underlies all their economic policy. In the US, the rich are brilliant and the poor are lazy; in Denmark, the rich are fortunate and the poor are unlucky. (Which theory is more accurate? Donald Trump. I rest my case.)

To be clear, Scandinavia is not communist; and they are certainly not Stalinist. They don’t believe in total collectivization of industry, or complete government control over the economy. They don’t believe in complete, total equality, or even a hard cap on wealth: Stefan Persson is an 11-figure billionaire. Does he pay high taxes, living in Sweden? Yes he does, considerably higher than he’d pay in the US. He seems to be okay with that. Why, it’s almost like his marginal utility of wealth is now negligible.

In fact, because Scandinavian countries tax differently, it’s not necessarily the case that people always pay higher taxes there. But they pay more transparent taxes, and taxes with sharper incidence. Denmark’s corporate tax rate is only 22% compared to 35% in the US; but their top personal income tax bracket is 59% while ours is only 39.6% (though it can rise over 50% with some state taxes). Denmark also has a land value tax and a VAT, both of which most economists have clamored for for generations. (The land value tax I totally agree with; the VAT I’m a little more ambivalent about.) Moreover, filing your taxes in Denmark is not a month-long stress marathon of gathering paperwork, filling out forms, and fearing that you’ll get something wrong and be audited as it is in the US; they literally just send you a bill. You can contest it, but most people don’t. You just pay it and you’re done.

Now, that does mean the government is keeping track of your income; and I might think that Americans would never tolerate such extreme surveillance… and then I remember that PRISM is a thing. Apparently we’re totally fine with the NSA reading our emails, but God forbid the IRS fill out our 1040s for us (which they are going to read anyway). And there’s no surveillance involved in requiring retail stores to incorporate sales tax into the listed price, as they do in Europe, instead of making us do math at the cash register as we do here. It’s almost like Americans are trying to make taxes as painful as possible.

Indeed, I think Scandinavian socialism is a good example of how high taxes are a sign of a free society, not an authoritarian one. Taxes are a minimal incursion on liberty. High taxes are how you fund a strong government and maintain extensive infrastructure and public services while still being fair and following the rule of law. The lowest tax rates in the world are in North Korea, which has ostensibly no taxes at all; the government just confiscates whatever they decide they want. Taxes in Venezuela are quite low, because the government just owns all the oil refineries (and also uses multiple currency exchange rates to arbitrage seigniorage). US taxes are low by First World standards, but not by world standards, because we combine a free society with a staunch opposition to excessive taxation. Most of the rest of the free world is fine with paying a lot more taxes than we do. In fact, even using Heritage Foundation data, there is a clear positive correlation between higher tax rates and higher economic freedom:

What’s really strange, though, is that most Americans actually support higher taxes on the rich. They often have strange or even incoherent ideas about what constitutes “rich”; I have extended family members who have said they think $100,000 is an unreasonable amount of money for someone to make, yet somehow are totally okay with Donald Trump making $300,000,000. The chant “we are the 99%” has always been off by a couple orders of magnitude; the plutocrat rentier class is the top 0.01%, not the top 1%. The top 1% consists mainly of doctors and lawyers and engineers; the top 0.01%, to a man—and they are nearly all men, in fact White men—either own corporations or work in finance. But even adjusting for all this, it seems like at least a bare majority of Americans are all right with “redistributive” “socialist” policies—as long as you don’t call them that.

So I suppose that’s sort of what I’m trying to do; don’t think of it as “socialism”. Think of it as #ScandinaviaIsBetter.