Some fairly serious scientists have endorsed predictions of imminent collapse that haven’t panned out, and many continue to do so. This Guardian article should be hilarious to statisticians: it takes trends that are going one direction, maps them onto a theory that arbitrarily decides they’ll suddenly reverse, and then says “the theory fits the data”. This should be taught in statistics courses as a lesson in how not to fit models. More data distortion occurs in this Scientific American article, which contains the phrase “food per capita is decreasing”; that’s true if you look only at the last couple of years, but according to FAOSTAT, food production per capita in 2012 (the most recent year in FAOSTAT) was higher than literally every other year on record except 2011. So if you allow for even the slightest amount of random fluctuation, it’s very clear that food per capita is increasing, not decreasing.

Still, when I sat down to study this it was remarkable to me just how good the outlook is for future sustainability. The Index of Sustainable Economic Welfare was created essentially in an attempt to show how our economic growth is largely an illusion driven by our rapacious natural resource consumption, but it has since been discontinued, perhaps because it didn’t show that. Using the US as an example, I reconstructed the index as best I could from World Bank data, and here’s what came out for the period since 1990:

The top line is US GDP as normally measured. The bottom line is the ISEW. The gap between those lines expands on a linear scale, but not on a logarithmic scale; that is to say, GDP and ISEW grow at almost exactly the same rate, so ISEW is always a constant (and large) proportion of GDP. By construction it is necessarily smaller (it basically takes GDP and subtracts various costs from it), but the fact that it is growing at the same rate shows that our economic growth is not being driven by depletion of natural resources or the military-industrial complex; it’s being driven by real improvements in education and technology.
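To see why parallel lines on a log scale mean equal growth rates, here is a minimal sketch; the 2.5% growth rate and the 70% proportion are made-up illustrative numbers, not the actual World Bank figures:

```python
import math

# Hypothetical series: GDP growing 2.5% per year, ISEW a constant 70% of GDP.
# (Both numbers are invented purely for illustration.)
years = range(1990, 2015)
gdp = [100 * 1.025 ** (y - 1990) for y in years]
isew = [0.7 * g for g in gdp]

# On a linear scale the absolute gap widens every year...
linear_gaps = [g - i for g, i in zip(gdp, isew)]
assert linear_gaps[-1] > linear_gaps[0]

# ...but on a log scale the gap is constant: log(GDP) - log(ISEW) = -log(0.7).
log_gaps = [math.log(g) - math.log(i) for g, i in zip(gdp, isew)]
assert max(log_gaps) - min(log_gaps) < 1e-9
```

A constant ratio between two series is exactly the same thing as a constant vertical gap on a log plot, which is why the two observations in the paragraph above are one and the same.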

Of course, I don’t deny that there are serious environmental problems, and we need to make policies to combat them; but we are doing that. Humanity is not mindlessly plunging headlong into an abyss; we are taking steps to improve our future.

And who knows, maybe the extremist doomsayers are necessary to set the Overton Window for the rest of us. I think we of the center-left (toward which reality has a well-known bias) often underestimate how much we rely upon the radical left to pull the discussion away from the radical right and make us seem more reasonable by comparison. It could well be that “climate change will kill tens of millions of people unless we act now to institute a carbon tax and build hundreds of nuclear power plants” is easier to swallow after hearing “climate change will destroy humanity unless we act now to transform global capitalism to agrarian anarcho-socialism.” Ultimately I wish people could be persuaded simply by the overwhelming scientific evidence in favor of the carbon tax/nuclear power argument, but alas, humans are simply not rational enough for that; and you must go to policy with the public you have. So maybe irrational levels of pessimism are a worthwhile corrective to the irrational levels of optimism coming from the other side, like the execrable sophistry of “in praise of fossil fuels” (yes, we know our economy was built on coal and oil—that’s the problem. We’re “rolling drunk on petroleum”; when we’re trying to quit drinking, reminding us how much we enjoy drinking is not helpful).

But I worry that this sort of irrational pessimism carries its own risks. First there is the risk of simply giving up, succumbing to learned helplessness and deciding there’s nothing we can possibly do to save ourselves. Second is the risk that we will do something needlessly drastic (like a radical socialist revolution) that impoverishes or even kills millions of people for no reason. The extreme fear that we are on the verge of ecological collapse could lead people to take a “by any means necessary” stance and end up with a cure worse than the disease. So far the word “ecoterrorism” has mainly been applied to what was really ecovandalism; but if we were in fact on the verge of total civilizational collapse, I can understand why someone would think quite literal terrorism was justified (actually the main reason I don’t is that I just don’t see how it could actually help). Just about anything is worth it to save humanity from destruction.

It is a controversy that has lasted throughout the ages: Is the world getting better? Is it getting worse? Or is it more or less staying the same, changing in ways that don’t really constitute improvements or detriments?

The most obvious and indisputable change in human society over the course of history has been the advancement of technology. At one extreme there are techno-utopians, who believe that technology will solve all the world’s problems and bring about a glorious future; at the other extreme are anarcho-primitivists, who maintain that civilization, technology, and industrialization were all grave mistakes, removing us from our natural state of peace and harmony.

I am not a techno-utopian—I do not believe that technology will solve all our problems—but I am much closer to that end of the scale. Technology has solved a lot of our problems, and will continue to solve a lot more. My aim in this post is to convince you that progress is real, that things really are, on the whole, getting better.

Diamond fortunately avoids the usual argument based solely on modern hunter-gatherers, which is a selection bias if ever I heard one. Instead his main argument seems to be that paleontological evidence shows an overall decrease in health around the same time as agriculture emerged. But that’s still an endogeneity problem, albeit a subtler one. Maybe agriculture emerged as a response to famine and disease. Or maybe they were both triggered by rising populations; higher populations increase disease risk, and are also basically impossible to sustain without agriculture.

“I keep reminding readers (see Further Reading), the evidence is overwhelming that war is a relatively recent cultural invention. War emerged toward the end of the Paleolithic era, and then only sporadically. A new study by Japanese researchers published in the Royal Society journal Biology Letters corroborates this view.

“Six Japanese scholars led by Hisashi Nakao examined the remains of 2,582 hunter-gatherers who lived 12,000 to 2,800 years ago, during Japan’s so-called Jomon Period. The researchers found bashed-in skulls and other marks consistent with violent death on 23 skeletons, for a mortality rate of 0.89 percent.”

That is supposed to be evidence that ancient hunter-gatherers were peaceful? The global homicide rate today is 62 homicides per million people per year. Using the worldwide life expectancy of 71 years (which is biasing against modern civilization because our life expectancy is longer), that means that the worldwide lifetime homicide rate is 4,400 homicides per million people, or 0.44%—that’s less than half the homicide rate of these “peaceful” hunter-gatherers. If you compare just against First World countries, the difference is even starker; let’s use the US, which has the highest homicide rate in the First World. Our homicide rate is 38 homicides per million people per year, which at our life expectancy of 79 years is 3,000 homicides per million people, or an overall homicide rate of 0.3%, slightly more than a third of this “peaceful” ancient culture. The most peaceful societies today—notably Japan, where these remains were found—have homicide rates as low as 3 per million people per year, which is a lifetime homicide rate of 0.02%, forty times smaller than their supposedly utopian ancestors. (Yes, all of Japan has fewer total homicides than Chicago. I’m sure it has nothing to do with their extremely strict gun control laws.) Indeed, to get a modern homicide rate as high as these hunter-gatherers, you need to go to a country like Congo, Myanmar, or the Central African Republic. To get a substantially higher homicide rate, you essentially have to be in Latin America. Honduras, the murder capital of the world, has a lifetime homicide rate of about 6.7%.
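The conversion from an annual rate to a lifetime rate is just multiplication; here is the arithmetic, using the rates and life expectancies quoted above:

```python
def lifetime_homicide_rate(annual_per_million, life_expectancy_years):
    """Convert an annual homicide rate (per million people per year)
    into a lifetime homicide risk, expressed as a percentage."""
    return annual_per_million * life_expectancy_years / 1_000_000 * 100

world = lifetime_homicide_rate(62, 71)  # ≈ 0.44%
us = lifetime_homicide_rate(38, 79)     # ≈ 0.30%
jomon = 23 / 2582 * 100                 # ≈ 0.89%, from the skeletal remains

# The "peaceful" Jomon rate is roughly double the modern worldwide rate.
assert jomon > 2 * world
```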

Again, how did I figure these things out? By reading basic information from publicly-available statistical tables and then doing some simple arithmetic. Apparently these paleoanthropologists couldn’t be bothered to do that, or didn’t know how to do it correctly, before they started proclaiming that human nature is peaceful and civilization is the source of violence. After an oversight as egregious as that, it feels almost petty to note that a sample size of a few thousand people from one particular region and culture isn’t sufficient data to draw such sweeping judgments or speak of “overwhelming” evidence.

Of course, in order to decide whether progress is a real phenomenon, we need a clearer idea of what we mean by progress. Using per-capita GDP would beg the question, since there can be absolutely no doubt that technology and capitalism do in fact raise per-capita GDP. If we measure by inequality, modern society clearly fares much worse (our top 1% share and Gini coefficient may be higher than Classical Rome’s!), but that is biased in the opposite direction, because the main way we have raised inequality is by raising the ceiling, not lowering the floor. Most of our really good measures (like the Human Development Index) only exist for the last few decades and can barely even be extrapolated back through the 20th century.

How about babies not dying? This is my preferred measure of a society’s value. It seems like something that should be totally uncontroversial: Babies dying is bad. All other things equal, a society is better if fewer babies die.

I suppose it doesn’t immediately follow that all things considered a society is better if fewer babies die; maybe the dying babies could be offset by some greater good. Perhaps a totalitarian society where no babies die is in fact worse than a free society in which a few babies die, or perhaps we should be prepared to accept some small number of babies dying in order to save adults from poverty, or something like that. But without some really powerful overriding reason, babies not dying probably means your society is doing something right. (And since most ancient societies were in a state of universal poverty and quite frequently tyranny, these exceptions would only strengthen my case.)

Well, get ready for some high-yield truth bombs about infant mortality rates.

Let me make a graph for you here, of the approximate rate of babies dying over time from 10,000 BC to today:

Let’s zoom in on the last 250 years, where the data is much more solid:

I think you may notice something in these graphs. There is quite literally a turning point for humanity, a kink in the curve where we suddenly begin a rapid decline from an otherwise constant mortality rate.

That point occurs around or shortly before 1800—that is, it coincides with the rise of industrial capitalism. Adam Smith (not to mention Thomas Jefferson) was writing at just about the point in time when humanity made a sudden and unprecedented shift toward saving the lives of millions of babies.

So now, think about that the next time you are tempted to say that capitalism is an evil system that destroys the world; the evidence points to capitalism quite literally saving babies from dying.

How would it do so? Well, there’s that rising per-capita GDP we previously ignored, for one thing. But more important seems to be the way that industrialization and free markets support technological innovation, and in this case especially medical innovation—antibiotics and vaccines. Our higher rates of literacy and better communication, also a result of raised standard of living and improved technology, surely didn’t hurt. I’m not often in agreement with the Cato Institute, but they’re right about this one: Industrial capitalism is the chief source of human progress.

Billions of babies would have died but we saved them. So yes, I’m going to call that progress. Civilization, and in particular industrialization and free markets, have dramatically improved human life over the last few hundred years.

In a future post I’ll address one of the common retorts to this basically indisputable fact: “You’re making excuses for colonialism and imperialism!” No, I’m not. Saying that modern capitalism is a better system (not least because it saves babies) is not at all the same thing as saying that our ancestors were justified in using murder, slavery, and tyranny to force people into it.

Yet as often seems to happen, there are two extremes in this debate and I think they’re both wrong.
The really disturbing side is “Torture works and we have to use it!” The preferred mode of argumentation for this is the “ticking time bomb scenario”, in which we have some urgent disaster to prevent (such as a nuclear bomb about to go off) and torture is the only way to stop it from happening. Surely then torture is justified? This argument may sound plausible, but as I’ll get to below, this is a lot like saying, “If aliens were attacking from outer space trying to wipe out humanity, nuclear bombs would probably be justified against them; therefore nuclear bombs are always justified and we can use them whenever we want.” If you can’t wait for my explanation, The Atlantic skewers the argument nicely.

Yet the opponents of torture have brought this sort of argument on themselves, by staking out a position so extreme as “It doesn’t matter if torture works! It’s wrong, wrong, wrong!” This kind of simplistic deontological reasoning is very appealing and intuitive to humans, because it casts the world into simple black-and-white categories. To show that this is not a strawman, here are several different people all making this same basic argument, that since torture is illegal and wrong it doesn’t matter if it works and there should be no further debate.

But the truth is, if it really were true that the only way to stop a nuclear bomb from leveling Los Angeles was to torture someone, it would be entirely justified—indeed obligatory—to torture that suspect and stop that nuclear bomb.

The problem with that argument is not just that this is not our usual scenario (though it certainly isn’t); it goes much deeper than that:

That scenario makes no sense. It wouldn’t happen.

To use the example the late Antonin Scalia used from an episode of 24 (perhaps the most egregious Fictional Evidence Fallacy ever committed), if there ever is a nuclear bomb planted in Los Angeles, that would literally be one of the worst things that ever happened in the history of the human race—literally a Holocaust in the blink of an eye. We should be prepared to cause extreme suffering and death in order to prevent it. But not only is that event (fortunately) very unlikely, torture would not help us. When it comes to the effectiveness of torture, there are basically four possibilities:

1. Torture is vastly more effective than the best humane interrogation methods.

2. Torture is slightly more effective than the best humane interrogation methods.

3. Torture is as effective as the best humane interrogation methods.

4. Torture is less effective than the best humane interrogation methods.

The evidence points most strongly to case 4, which would make rejecting torture a no-brainer: if it doesn’t even work as well as other methods, it’s absurd to use it. You’re basically kicking puppies at that point—purely sadistic violence that accomplishes nothing. But the data isn’t clear enough for us to rule out case 3 or even case 2. There is only one case we can strictly rule out, and that is case 1.

But it was only in case 1 that torture could ever be justified!

If you’re trying to justify doing something intrinsically horrible, it’s not enough that it has some slight benefit.

People seem to have this bizarre notion that we have only two choices in morality: either rigid deontological rules that may never be broken no matter the consequences, or a crude consequentialism on which the ends always justify the means.

But what utilitarianism actually says (and I consider myself some form of nuanced rule-utilitarian, though actually I sometimes call it “deontological consequentialism” to emphasize that I mean to synthesize the best parts of the two extremes) is not that the ends always justify the means, but that the ends can justify the means—that it can be morally good or even obligatory to do something intrinsically bad (like stabbing children with needles) if it is the best way to accomplish some greater good (like saving them from measles and polio). But the good actually has to be greater, and it has to be the best way to accomplish that good.

To see why this latter proviso is important, consider the real-world ethical issues involved in psychology experiments. The benefits of psychology experiments are already quite large, and poised to grow as the science improves; one day the benefits of cognitive science to humanity may be even larger than the benefits of physics and biology are today. Imagine a world without mood disorders or mental illness of any kind; a world without psychopathy, where everyone is compassionate; a world where everyone is achieving their full potential for happiness and self-actualization. Cognitive science may yet make that world possible—and I haven’t even gotten into its applications in artificial intelligence.

To achieve that world, we will need a great many psychology experiments. But does that mean we can just corral people off the street and throw them into psychology experiments without their consent—or perhaps even their knowledge? That we can do whatever we want in those experiments, as long as it’s scientifically useful? No, it does not. We have ethical standards in psychology experiments for a very good reason, and while those ethical standards do slightly reduce the efficiency of the research process, the reduction is small enough that the moral choice is obviously to retain the ethics committees and accept the slight reduction in research efficiency. Yes, randomly throwing people into psychology experiments might actually be slightly better in purely scientific terms (larger and more random samples)—but it would be terrible in moral terms.

Along similar lines, even if torture works about as well or even slightly better than other methods, that’s simply not enough to justify it morally. Making a successful interrogation take 16 days instead of 17 simply wouldn’t be enough benefit to justify the psychological trauma to the suspect (and perhaps the interrogator!), the risk of harm to the falsely accused, or the violation of international human rights law. And in fact a number of terrorism suspects were waterboarded for months, so even the idea that it could shorten the interrogation is pretty implausible. If anything, torture seems to make interrogations take longer and give less reliable information—case 4.

A lot of people seem to have this impression that torture is amazingly, wildly effective, that a suspect who won’t crack after hours of humane interrogation can be tortured for just a few minutes and give you all the information you need. This is exactly what we do not find empirically; if he didn’t crack after hours of talk, he won’t crack after hours of torture. If you literally only have 30 minutes to find the nuke in Los Angeles, I’m sorry; you’re not going to find the nuke in Los Angeles. No adversarial interrogation is ever going to be completed that quickly, no matter what technique you use. Evacuate as many people to safe distances or underground shelters as you can in the time you have left.

This is why the “ticking time-bomb” scenario is so ridiculous (and so insidious); that’s simply not how interrogation works. The best methods we have for “rapid” interrogation of hostile suspects take hours or even days, and they are humane—building trust and rapport is the most important step. The goal is to get the suspect to want to give you accurate information.

For the purposes of the thought experiment, okay, you can stipulate that it would work (this is what the Stanford Encyclopedia of Philosophy does). But now all you’ve done is made the thought experiment more distant from the real-world moral question. The closest real-world examples we’ve ever had involved individual crimes, probably too small to justify the torture (as bad as a murdered child is, think about what you’re doing if you let the police torture people). But by the time the terrorism to be prevented is large enough to really be sufficient justification, it (1) hasn’t happened in the real world and (2) surely involves terrorists who are sufficiently ideologically committed that they’ll be able to resist the torture. If such a situation arises, of course we should try to get information from the suspects—but what we try should be our best methods, the ones that work most consistently, not the ones that “feel right” and maybe happen to work on occasion.

Indeed, the best explanation I have for why people use torture at all, given its horrible effects and mediocre effectiveness at best, is that it feels right.

When someone does something terrible (such as an act of terrorism), we rightfully reduce our moral valuation of them relative to everyone else. If you are even tempted to deny this, suppose a terrorist and a random civilian are both inside a burning building and you only have time to save one. Of course you save the civilian and not the terrorist. And that’s still true even if you know that once the terrorist was rescued he’d go to prison and never be a threat to anyone else. He’s just not worth as much.

In the most extreme circumstances, a person can be so terrible that their moral valuation should be effectively zero: If the only person in a burning building is Stalin, I’m not sure you should save him even if you easily could. But it is a grave moral mistake to think that a person’s moral valuation should ever go negative, yet I think this is something that people do when confronted with someone they truly hate. The federal agents torturing those terrorists didn’t merely think of them as worthless—they thought of them as having negative worth. They felt it was a positive good to harm them. But this is fundamentally wrong; no sentient being has negative worth. Some may be so terrible as to have essentially zero worth; and we are often justified in causing harm to some in order to save others. It would have been entirely justified to kill Stalin (as a matter of fact he died of a stroke), to remove the continued threat he posed; but to torture him would not have made the world a better place, and actually might well have made it worse.

Yet I can see how psychologically it could be useful to have a mechanism in our brains that makes us hate someone so much we view them as having negative worth. It makes it a lot easier to harm them when necessary, and makes us feel a lot better about ourselves when we do. The idea that any act of homicide is a tragedy but some of them are necessary tragedies is a lot harder to deal with than the idea that some people are just so evil that killing or even torturing them is intrinsically good. But some of the worst things human beings have ever done ultimately came from that place in our brains—and torture is one of them.

Bigotry has been a part of human society since the beginning—people have been hating people they perceive as different since as long as there have been people, and maybe even before that. I wouldn’t be surprised to find that different tribes of chimpanzees or even elephants hold bigoted beliefs about each other.

Yet it may surprise you that neoclassical economics has basically no explanation for this. There is a famous long-standing argument that bigotry is inherently irrational: if you hire based on anything aside from actual qualifications, you are leaving money on the table for your company. Because women CEOs are paid less and perform better, simply ending discrimination against women in top executive positions could save any typical large multinational corporation tens of millions of dollars a year. And yet, they don’t! Fancy that.

More recently there has been work on the concept of statistical discrimination, under which it is rational (in the sense of narrowly-defined economic self-interest) to discriminate because categories like race and gender may provide some statistically valid stereotype information. For example, “Black people are poor” is obviously not true across the board, but race is strongly correlated with wealth in the US; “Asians are smart” is not a universal truth, but Asian-Americans do have very high educational attainment. In the absence of more reliable information, that might be your best option for making good decisions. Of course, this creates a vicious cycle where people in the positive stereotype group are better off and have more incentive to improve their skills than people in the negative stereotype group, thus perpetuating the statistical validity of the stereotype.

But education doesn’t seem to explain the full effect. One theory to account for this is what’s called last-place aversion—a highly pernicious heuristic where people are less concerned about their own absolute status than they are about not having the worst status. In economic experiments, people are usually more willing to give money to people worse off than them than to those better off than them—unless giving it to the worse-off would make those people better off than they themselves are. I think we actually need to do further study to see what happens if it would make those other people exactly as well-off as they are, because that turns out to be absolutely critical to whether people would be willing to support a basic income. In other words, does being “tied for last” still count as last? Would they rather play a game where everyone gets $100, or one where they get $50 but everyone else only gets $10?

I would hope that humanity is better than that—that we would want to play the $100 game, which is analogous to a basic income. But when I look at the extreme and persistent inequality that has plagued human society for millennia, I begin to wonder if perhaps there really are a lot of people who think of the world in such zero-sum, purely relative terms, and care more about being better than others than they do about doing well themselves. Perhaps the horrific poverty of Sub-Saharan Africa and Southeast Asia is, for many First World people, not a bug but a feature; we feel richer when we know they are poorer. Scarcity seems to amplify this zero-sum thinking; racism gets worse whenever we have economic downturns. Precisely because discrimination is economically inefficient, this can create a vicious cycle where poverty causes bigotry which worsens poverty.

There is also something deeper going on, something evolutionary; bigotry is part of what I call the tribal paradigm, the core aspect of human psychology that defines identity in terms of in-groups which are good and out-groups which are bad. We will probably never fully escape the tribal paradigm, but this is not a reason to give up hope; we have made substantial progress in reducing bigotry in many places. What seems to happen is that people learn to expand their mental tribe, so that it encompasses larger and larger groups—not just White Americans but all Americans, or not just Americans but all human beings. Peter Singer calls this the Expanding Circle (also the title of his book on it). We may one day be able to make our tribe large enough to encompass all sentient beings in the universe; at that point, it’s just fine if we are only interested in advancing the interests of those in our tribe, because our tribe would include everyone. Yet I don’t think any of us are quite there yet, and some people have a really long way to go.

But with these expanding tribes in mind, perhaps I can leave you with a fact that is as counter-intuitive as it is encouraging, and even easier still to take out of context: Racism was better than what came before it. What I mean by this is not that racism is good—of course it’s terrible—but that in order to be racism, to define the whole world into a small number of “racial groups”, people already had to enormously expand their mental tribe from where it started. When we evolved on the African savannah millions of years ago, our tribe was 150 people; to this day, that’s about the number of people we actually feel close to and interact with on a personal level. We could have stopped there, and for millennia we did. But over time we managed to expand beyond that number, to a village of 1,000, a town of 10,000, a city of 100,000. More recently we attained mental tribes of whole nations, in some cases hundreds of millions of people. Racism is about that same scale, if not a bit larger; what most people (rather arbitrarily, and in a way that changes over time) call “White” constitutes about a billion people. “Asian” (including South Asian) is almost four billion. These are astonishingly huge figures, some seven orders of magnitude larger than what we originally evolved to handle. The ability to feel empathy for all “White” people is just a little bit smaller than the ability to feel empathy for all people period. Similarly, while today the gender in “all men are created equal” is jarring to us, the idea at the time really was an incredibly radical broadening of the moral horizon—Half the world? Are you mad?

Therefore I am confident that one day, not too far from now, the world will take that next step, that next order of magnitude, which many of us already have (or try to), and we will at last conquer bigotry, and if not eradicate it entirely then force it completely into the most distant shadows and deny it its power over our society.

This topic was decided by vote of my Patreons (there are still few enough that the vote usually has only two or three people, but hey, what else can I do?).

When it comes to climate change, I have good news and bad news.

First, the bad news:

We are not going to be able to stop climate change, or even stop making it worse, any time soon. Because of this, millions of people are going to die and there’s nothing we can do about it.

Now, the good news:

We can do a great deal to slow down our contribution to climate change, reduce its impact on human society, and save most of the people who would otherwise have been killed by it. It is currently forecasted that climate change will cause somewhere between 10 million and 100 million deaths over the next century; if we can hold to the lower end of that error bar instead of the upper end, that’s 90 million lives saved: fifteen Holocausts prevented.

There are three basic approaches to take, and we will need all of them:

Let’s take a look at how the world currently produces energy. The leading source is “liquids”, an odd euphemism for oil: about 175 quadrillion BTU per year, 30% of all production. This is closely followed by coal, at about 160 quadrillion BTU per year (28%). Then we have natural gas, about 130 quadrillion BTU per year (23%); wind, solar, hydroelectric, and geothermal, altogether about 60 quadrillion BTU per year (11%); and nuclear fission, only about 40 quadrillion BTU per year (7%).
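Those shares are easy to sanity-check from the BTU figures themselves; a quick sketch using the round numbers quoted above:

```python
# World energy production, quadrillion BTU per year (figures quoted above).
sources = {
    "liquids (oil)": 175,
    "coal": 160,
    "natural gas": 130,
    "wind/solar/hydro/geothermal": 60,
    "nuclear fission": 40,
}

total = sum(sources.values())  # 565 quadrillion BTU per year
shares = {name: round(100 * btu / total) for name, btu in sources.items()}

# Oil leads the pack; nuclear fission brings up the rear at about 7%.
assert max(sources, key=sources.get) == "liquids (oil)"
assert shares["nuclear fission"] == 7
```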

The best power source is solar power, hands-down. In the long run, the goal should be to convert as much as possible of the grid to solar. Wind, hydroelectric, and geothermal are also very useful, though wind power peaks at the wrong time of day for high energy demand and hydro and geothermal require specific geography to work. Solar is also the most scalable; as long as you have the raw materials and the technology, you can keep expanding solar production all the way up to a Dyson Sphere.

“But nuclear power is dangerous!” people will say. France has indeed had several nuclear accidents in the last 40 years; guess how many deaths those accidents have caused? Zero. Deepwater Horizon killed more people than the sum total of all nuclear accidents in all First World countries. Worldwide, there was one Black Swan horrible nuclear event—Chernobyl (which still only killed about as many people as die in the US each year from car accidents or lung cancer), and other than that, nuclear power is safer than every form of fossil fuel.

“Where will we store the nuclear waste?” Well, that’s a more legitimate question, but you know what? It can wait. Nuclear waste doesn’t accumulate very fast, precisely because fission is thousands of times more efficient than combustion; so we’ll have plenty of room in existing facilities or easily-built expansions for the next century. By that point, we should have fusion or a good way of converting the whole grid to solar. We should of course invest in R&D in the meantime. But right now, we need fission.

So, after we’ve converted the electricity grid to nuclear, what next?
1B. To reduce the effect of agriculture, we need to eat less meat; among agricultural sources, livestock is the leading contributor of greenhouse emissions, followed by land use “emissions” (i.e., deforestation). Those could also be reduced by shifting production from meat to vegetables, because vegetables are much more land-efficient (and just-about-everything-else-efficient).

1C. To reduce the effect of transportation, we need huge investments in public transit, as well as more fuel-efficient vehicles like hybrids and electric cars. Switching to public transit could cut private transportation-related emissions in half. 100% electric cars are too much to hope for, but by implementing a high carbon tax, we might at least raise the cost of gasoline enough to incentivize makers and buyers of cars to choose more fuel-efficient models.
The biggest gains in fuel efficiency happen on the most gas-guzzling vehicles—indeed, so much so that our usual measure “miles per gallon” is highly misleading.

Quick: Which of the following changes would reduce emissions more, assuming all the vehicles drive the same amount? Switching from a hybrid of 50 MPG to a zero-emission electric (infinity MPG!), switching from a normal sedan of 20 MPG to a hybrid of 50 MPG, or switching from an inefficient diesel truck of 3 MPG to a modern diesel truck of 7 MPG?

The diesel truck, by far.

If each vehicle drives 10,000 miles per year: The first switch will take us from consuming 200 gallons to consuming 0 gallons—saving 200 gallons. The second switch will take us from consuming 500 gallons to consuming 200 gallons—saving 300 gallons. But the third switch will take us from consuming 3,334 gallons to consuming only 1,429 gallons—saving a whopping 1,905 gallons. Even slight increases in the fuel efficiency of highly inefficient vehicles have a huge impact, while you can raise an already-efficient vehicle to perfect efficiency and barely notice a difference.

We really should measure in gallons per mile—or better yet, liters per megameter. (Most of the world already uses liters per 100 km, which is nearly there!)
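The comparison above is easy to check; here is a quick sketch of the arithmetic, using the same hypothetical vehicles and the same 10,000 miles per year:

```python
# Gallons consumed over a year of driving, at a given fuel efficiency.
def gallons(mpg, miles=10_000):
    return miles / mpg

# Savings from each switch (the electric car consumes zero gallons):
print(round(gallons(50) - 0))            # hybrid -> electric: 200 gallons
print(round(gallons(20) - gallons(50)))  # sedan -> hybrid: 300 gallons
print(round(gallons(3) - gallons(7)))    # old truck -> new truck: 1905 gallons
```

Expressed as gallons per mile (the reciprocal of MPG), the truck's improvement is obvious at a glance, which is exactly why the reciprocal is the better unit.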

2A. There are some exotic proposals out there for geoengineering (putting sulfur into the air to block out the Sun; what could possibly go wrong?), and maybe we’ll end up using some of them. I think iron fertilization of the oceans is one of the more promising options. But we need to be careful to make sure we actually know what these projects will do; we got into this mess by doing things without appreciating their long-run environmental impact, so let’s not make the same mistake again.

But even if we do all that, at this point we probably can’t do enough fast enough to actually stop climate change from causing damage. After we’ve done our best to slow it down, we’re still going to need to respond to its effects and find ways to minimize the harm. That’s strategy 3, adaptation.

3A. Coastal regions around the world are going to have to turn into the Netherlands, surrounded by dikes and polders. First World countries already have the resources to do this, and will most likely do it on our own (many cities already have plans to); but other countries need to be given the resources to do it. We’re responsible for most of the emissions, and we have the most wealth, so we should pick up the tab for most of the adaptation.

I strongly support the implementation of a financial transaction tax; like a basic income, it’s one of those economic policy ideas that are so brilliantly simple it’s honestly a little hard to believe how incredibly effective they are at making the world a better place. You mean we might be able to end stock market crashes just by implementing this little tax that most people will never even notice, and it will raise enough revenue to pay for food stamps? Yes, a financial transaction tax is that good.

TruthOut proposes a 10% transaction tax on stocks and a 1% transaction tax on the notional value of derivatives, then offers a “compromise” of 5% on stocks and 0.5% on derivatives. Their revenue projections clearly amount to nothing but multiplying the current volume of transactions by the tax rate, which is so completely wrong that we now officially have a left-wing counterpart to trickle-down voodoo economics.

Their argument is basically like this (I’m paraphrasing): “If we have to pay 5% sales tax on groceries, why shouldn’t you have to pay 5% on stocks?”

But that’s not how any of this works.

Demand for most groceries is very inelastic, especially in the aggregate. While you might change which groceries you’ll buy depending on their respective prices, and you may buy in bulk or wait for sales, over a reasonably long period (say a year) across a large population (say all of Michigan or all of the US), total amount of spending on groceries is extremely stable. People only need a certain amount of food, and they generally buy that amount and then stop.

So, if you implement a 5% sales tax that applies to groceries (actually sales tax in most states doesn’t apply to most groceries, but honestly it probably should—offset the regressiveness by providing more social services), people would just… spend about 5% more on groceries. Probably a bit less than that, actually, since suppliers would absorb some of the tax; but demand is much less elastic for groceries than supply, so buyers would bear most of the incidence of the tax. (It does not matter how the tax is collected; see my tax incidence series for further explanation of why.)

Other goods like clothing and electronics are a bit more elastic, so you’d get some deadweight loss from the sales tax; but at a typical 5% to 10% in the US this is pretty minimal, and even the hefty 20% or 30% VATs in some European countries only have a moderate effect. (Denmark’s 180% sales tax on cars seems a bit excessive to me, but it is Pigovian to disincentivize driving, so it also has very little deadweight loss.)

But what would happen if you implemented a 5% transaction tax on stocks? The entire stock market would immediately collapse.

A typical return on stocks is between 5% and 15% per year. As a rule of thumb, let’s say about 10%.

If you pay the 5% tax and trade once per year, the tax just cut your return in half: 1.10 × 0.95 ≈ 1.045, so your 10% return becomes 4.5%.

Even if you only trade once every five years, a 5% sales tax means that instead of your stocks being worth 61% more after those 5 years they are only worth 53% more. Your annual return has been reduced from 10% to 8.9%.

But in fact there are many perfectly legitimate reasons to trade as often as monthly, and a 5% tax would make monthly trading completely unviable.
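A minimal sketch of this arithmetic in Python, assuming the 10% gross annual return from the rule of thumb above and modeling the tax as paid once per round-trip holding period:

```python
# Annualized return after a flat transaction tax, assuming the tax is
# paid once each time the position is turned over.
def annualized_after_tax(gross=0.10, tax=0.05, years=5):
    value = (1 + gross) ** years * (1 - tax)  # grow, then pay the tax on sale
    return value ** (1 / years) - 1

print(f"{annualized_after_tax(years=1):.1%}")     # trade yearly: 4.5%, half of 10%
print(f"{annualized_after_tax(years=5):.1%}")     # trade every 5 years: 8.9%
print(f"{annualized_after_tax(years=1/12):.1%}")  # trade monthly: deeply negative
```

The monthly case is the striking one: paying 5% twelve times a year swamps a 10% gross return entirely, which is what makes monthly trading completely unviable under such a tax.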

Even if you could somehow stop everyone from pulling out all their money just before the tax takes effect, you would still completely dry up the stock market as a source of funding for all but the most long-term projects. Corporations would either need to finance their entire operations out of cash or bonds, or collapse and trigger a global depression.

Derivatives are even more extreme. The notional value of derivatives is often ludicrously huge; we currently have over a quadrillion dollars in notional value of outstanding derivatives. Assume that, say, 10% of those are traded every year, and we’re talking $100 trillion in notional value of transactions. At 0.5% you’re trying to take in a tax of $500 billion. That sounds fantastic—so much money!—but what you should actually be thinking is: that’s a really strong avoidance incentive. Do you really think banks won’t find a way to restructure their trading practices—or stop trading altogether—to avoid this tax?
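The naive projection being criticized here is just current volume times rate. A back-of-envelope check, where the 10% annual turnover share is an illustrative assumption rather than a measured figure:

```python
notional_outstanding = 1e15  # ~ $1 quadrillion in outstanding derivatives
turnover_share = 0.10        # assume 10% of notional trades each year
tax_rate = 0.005             # the proposed 0.5% "compromise" rate

annual_volume = notional_outstanding * turnover_share  # ~ $100 trillion
naive_revenue = annual_volume * tax_rate               # ~ $500 billion
print(f"${naive_revenue / 1e9:.0f} billion")  # the projection that ignores avoidance
```

The whole problem is that last comment: the calculation holds trading volume fixed, which is precisely what a tax this large would not do.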

Honestly, maybe a total end to derivatives trading would be tolerable. I certainly think we need to dramatically reduce the amount of derivatives trading, and much of what is being traded—credit default swaps, collateralized debt obligations, synthetic collateralized debt obligations, etc.—really should not exist and serves no real function except to obscure fraud and speculation. (Credit default swaps are basically insurance you can buy on other people’s companies. There’s a reason you’re not allowed to buy insurance on other people’s stuff!) Interest rate swaps aren’t terrible (when they’re not being used to perpetrate the largest white-collar crime in history), but they also aren’t necessary. You might be able to convince me that commodity futures and stock options are genuinely useful, though even these are clearly overrated. (Fun fact: Futures markets have been causing financial crises since at least Classical Rome.) Exchange-traded funds are technically derivatives, and they’re just fine (actually ETFs are very low-risk, because they are inherently diversified—which is why you should probably be buying them); but actually their returns are more like stocks, so the 0.5% might not be insanely high in that case.

But stocks? We kind of need those. Equity financing has been the foundation of capitalism since the very beginning. Maybe we could conceivably go to a fully debt-financed system, but it would be a radical overhaul of our entire financial system and is certainly not something to be done lightly.

Indeed, TruthOut even seems to think we could apply the same sales tax rate to bonds, which means that debt financing would also collapse, and now we’re definitely talking about global depression. How exactly is anyone supposed to finance new investments if they can’t sell stocks or bonds? A 5% tax on the face value of stocks or bonds is, for all practical purposes, saying that you can’t sell stocks or bonds: no one would want to buy them.

“Wealthy investors buying of stocks and bonds is essentially no different than average folks buying food, clothing or other real ‘goods and services.’”

Yes it is. It is fundamentally different.

People buy goods to use them. People buy stocks to make money selling them.

This seems perfectly obvious, but it is a vital distinction that seems to be lost on TruthOut.

When you buy an apple or a shoe or a phone or a car, you care how much it costs relative to how useful it is to you; if we make it a bit more expensive, that will make you a bit less likely to buy it—but probably not even one-to-one so that a 5% tax would reduce purchases by 5%; it would probably be more like a 2% reduction. Demand for goods is inelastic. Taxing them will raise a lot of revenue and not reduce the quantity purchased very much.

But when you buy a stock or a bond or an interest rate swap, you care how much it costs relative to what you will be able to sell it for—you care about not its utility but its return. So a 5% tax will reduce the amount of buying and selling by substantially more than 5%—it could well be 50% or even 100%. Demand for financial assets is elastic. Taxing them will not raise much revenue but will substantially reduce the quantity purchased.
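To see how elasticity flips the revenue logic, here is a toy model; the $1 trillion turnover figure and the demand-drop percentages are illustrative assumptions drawn from the rough numbers above, not estimates:

```python
# Revenue from a flat transaction tax, given how much the tax shrinks
# the quantity traded (a crude back-of-envelope model).
def tax_revenue(annual_turnover, tax_rate, demand_drop):
    return annual_turnover * (1 - demand_drop) * tax_rate

# Same 5% tax, same hypothetical $1 trillion of pre-tax annual turnover:
groceries = tax_revenue(1e12, 0.05, 0.02)  # inelastic: quantity falls ~2%
stocks    = tax_revenue(1e12, 0.05, 0.90)  # elastic: most trading disappears
print(f"groceries: ${groceries / 1e9:.0f}B, stocks: ${stocks / 1e9:.0f}B")
```

The inelastic market delivers nearly the full naive projection; the elastic one delivers a small fraction of it, and that is before counting the damage from the trading that vanished.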

“Okay, half it again, to a 2.5 percent tax on stocks and bonds and a 0.25 percent on derivative trades. That certainly won’t discourage stock and bond trading by the rich (not that that is an all bad idea either).”

Yes it will. By a lot. That’s the whole point.

A financial transaction tax is a great idea whose time has come; let’s not ruin its reputation by setting it at a preposterous value. Just as a $15 minimum wage is probably a good idea but a $250 minimum wage is definitely a terrible idea, a 0.1% financial transaction tax could be very beneficial but a 5% financial transaction tax would clearly be disastrous.

It’s now beginning to look like an ongoing series: “Reasons to be optimistic about our democracy.”

Super PACs, in case you didn’t know, are a bizarre form of legal entity, established after the ludicrous Citizens United ruling (“Corporations are people” and “money is speech” are literally Orwellian), which allows corporations to donate essentially unlimited funds to political campaigns with minimal disclosure and zero accountability. This creates an arms race where even otherwise-honest candidates feel pressured to take more secret money just to keep up.

At the time, a lot of policy wonks said “Don’t worry, they already give tons of money anyway, what’s the big deal?”

Well, those wonks were wrong—it was a big deal. Corporate donations to political campaigns exploded in the era of Super PACs. The Citizens United ruling was made in 2010, and take a look at this graph of total “independent” (i.e., not tied to candidate or party) campaign spending (using data from OpenSecrets):

It’s a small sample size, to be sure, and campaign spending was already rising. But 2010 and 2014 were very high by the usual standards of midterm elections, and 2012 was absolutely unprecedented—over $1 billion spent on campaigns. Moreover, the only reason 2016 looks lower than 2012 is that we’re not done with 2016 yet; I’m sure spending will rise a lot higher than it is now, and very likely overtake 2012. (If it doesn’t, it’ll be because Bernie Sanders and Donald Trump made very little use of Super-PACs, for quite different reasons.) Total 2016 spending was projected to exceed $4 billion, though I doubt it will actually get quite that high.

Worst of all, this money is coming from a handful of billionaires: 41% of Super-PAC funds come from the same 50 households. That’s fifty. Even including everyone living in those households, this group of people could easily fit inside an average lecture hall—and they account for two-fifths of independent campaign spending in the US.

Hillary Clinton is winning, and will probably win the election; and she does have the most Super-PAC money among candidates still in the race (at $76 million, about what the Clintons themselves make in 3 years). Ted Cruz also has $63 million in Super-PAC money. But Bernie Sanders has only $600,000 in Super-PAC money (also about 3 times his household income, coincidentally), and Donald Trump only $2.7 million. Both of these are less than John Kasich’s $13 million in Super-PAC spending, and yet Kasich and Cruz have now dropped out and only Trump remains.

But more importantly, the largest amount of Super-PAC money went to none other than Jeb Bush—a whopping $121 million—and it did basically nothing for him. Marco Rubio had $62 million in Super-PAC money, and he dropped out too. Martin O’Malley had more Super-PAC money than Bernie Sanders, and where is he now? In fact, literally every Republican candidate had more Super-PAC money than Bernie Sanders, and every Republican but Rick Santorum, Jim Gilmore, and George Pataki (you’re probably thinking: “Who?” Exactly.) had more Super-PAC money than Donald Trump.

You wouldn’t immediately see that from our current Presidential race; while Rubio raised $117 million and Jeb! raised $155 million and both of them lost, the winners also raised a great deal. Hillary Clinton raised $256 million, Bernie Sanders raised $180 million, Ted Cruz raised $142 million, and Donald Trump raised $48 million. Even that last figure is mainly so low because Donald Trump is a master at getting free publicity; the media effectively gave Trump an astonishing $1.89 billion in free publicity. To be fair, a lot of that was bad publicity—but it still got his name and his ideas out there and didn’t cost him a dime.

So, just from the overall spending figures, it looks like maybe total campaign spending is important, even if Super-PACs in particular are useless.