I’m trying to read more books. But it’s important to actually learn from the books I read, and a great way to do that is to take notes. Therefore, from now on, I’m aiming to also publish notes on the books I read. These notes are mainly for my own recollection, and they were not written to be a thorough summary. They are probably more helpful to those who have already read the book than to those trying to get an overview without reading it.

The Tragedy of the Commons and The Tragedy of Commonsense Morality

Greene opens the book by discussing the classic “Tragedy of the Commons”. Here, there is a pasture with herders, who want their cows to graze on the grass. If all the herders have their cows graze as much as possible, there will be no grass left, and eventually the entire herder community will starve. Therefore, it ends up being rational for the herders to band together and create a mutual agreement to only graze as much as will allow sustainable grazing. But, each herder still knows that if they defect from the agreement and graze a lot while all the other poor shmucks stick to the agreement, then they can get a significant advantage without leading to environmental ruin later on (since it’s only the combination of everyone grazing that ruins the pasture). The tension between the collective desire to stick to the agreement and the individual desire to defect is the “Tragedy of the Commons”.
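The payoff structure of the commons can be made concrete with a small sketch (my own illustration, not from the book; all the numbers are arbitrary assumptions). A lone defector comes out ahead while the pasture survives, but universal defection collapses it and leaves everyone worse off:

```python
# A toy model of the Tragedy of the Commons. Each herder either grazes
# the agreed sustainable amount or a larger "defect" amount. The pasture
# only collapses if TOTAL grazing exceeds its capacity, so one defector
# among cooperators profits without causing ruin.

def payoffs(choices, capacity=15, sustainable=2, defect=5):
    """choices: list of True (defect) or False (cooperate), one per herder.
    Returns each herder's payoff."""
    graze = [defect if d else sustainable for d in choices]
    total = sum(graze)
    # If the pasture is overgrazed it collapses; model the lost future
    # grazing as a flat penalty shared by everyone.
    penalty = 20 if total > capacity else 0
    return [g - penalty for g in graze]

everyone_cooperates = payoffs([False] * 5)            # [2, 2, 2, 2, 2]
one_defector        = payoffs([True] + [False] * 4)   # [5, 2, 2, 2, 2]
everyone_defects    = payoffs([True] * 5)             # [-15] * 5
```

The single defector earns 5 while the cooperators earn 2, yet if everyone reasons that way, all five end up far worse off than if all had cooperated, which is exactly the tension Greene describes.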

Greene argues that the moral intuitions that make it feel bad to defect from our own communities provide the solution to the “Tragedy of the Commons” (though there is some nuance). However, he argues that averting this disaster creates a much bigger problem for modern communities – the “Tragedy of Commonsense Morality”. Here, the herders arrange themselves into tribes (in particular, moral tribes) with values shared within each tribe, but that may conflict with those of other tribes. When these values of fairness clash (think of a libertarian’s idea of fair vs. a communist’s), conflict happens, despite all the morality.

Greene seeks to resolve this problem via a “meta-morality” known as “utilitarianism”, which is definitely not something I’m into or anything. He thinks we should avoid the “Tragedy of Commonsense Morality” via “Six Rules for Moral Herders” provided at the end of the book, which offer a pretty good structure for summarizing the whole thing:

1. In the face of moral controversy, consult, but do not trust, your moral instincts
2. Rights are not for making arguments; they’re for ending arguments
3. Focus on the facts, and make others do the same
4. Beware of biased fairness
5. Use common currency [of value]
6. Give

Consult, But Don’t Trust Moral Instincts

Two Systems of Morality

Greene is a moral psychologist and speaks a lot about what psychology has to say about morality. The central thesis of his book is “dual-process morality”. A lot of psychological research has found that we have two systems of thinking – one of automatic processes, intuitions, and emotions; and another of deep thinking, logic, and rationality. These two systems were popularized in Daniel Kahneman’s Thinking Fast and Slow. Greene argues that these two processes extend to morality as well.

Kahneman says that we shouldn’t see the deep-thinking system (System 2) as better than the intuitive system (System 1), because while System 1 is more prone to error, it is a lot faster at generating judgement, and can allow us to make the split-second decisions we need to make. It also allows us to make a lot of decisions without burdening our conscious mind with every trivial detail.

Greene agrees that our moral intuitions are important for securing cooperation and morality on a day-to-day basis. However, just as our intuitions fail us in other domains (for example, fearing things that aren’t actually scary, experiencing optical illusions, or being prone to racism), our moral intuitions can fail us as well. For one example, people experience “scope neglect” – if you ask one group of people how much they’d pay to clean two rivers of pollution, they’ll give roughly the same answer as a different group asked how much they’d pay to clean twenty rivers.

The Switch and The Footbridge

Greene then advances a slightly more controversial claim – that our deeper-thinking moral system is a primarily utilitarian one – and he spends the large bulk of the book arguing for this. Greene starts by offering a wide range of thought experiments where people generally agree they would, all else being equal, rather save five lives than one life, and save one person from dying than five people from having sprained ankles – both utilitarian judgements.

Greene then focuses on the classic thought-experiment in moral psychology – that of the Trolley Problem:

> A runaway trolley is speeding down the tracks toward five people, who will be killed if it proceeds on its present course. You can save these five people by flipping a switch that diverts the trolley onto a side track, where it will kill one person instead of five.

So would you flip the switch? Greene finds that many people would, thinking it much better to save five lives than one life.


However, what if we altered the problem a little?

> As before, a runaway trolley threatens to kill five people. This time, you are standing next to a large man on a footbridge spanning the tracks. The only way to save the five is to push this man off the footbridge and into the trolley’s path; his body will stop the trolley, but he will die.

Here, people are much less sure. But what morally relevant difference is there? Here, we’re making a non-utilitarian judgement based on our intuitions, which say that pushing someone off a footbridge is wrong. Greene argues that all our defenses of not pushing in the footbridge case are just rationalizations. It turns out that, according to Greene’s psychological research, we are intuitively sensitive to actions that (a) harm someone with personal force and (b) use that personal force as a means to achieve a goal.

The reason for (a), our sensitivity to harming with personal force (as opposed to a more personally “distant” action like flipping a switch), is evolutionary – we have an innate mental “event inspector” that fires a warning anytime we consider an action that would involve personal force. The reason for (b) echoes the doctrine of double effect: we object to harm that is both personal and used as a means to an end, because this event inspector is myopic to side effects. Greene suggests we evolved such an inspector because advanced planning made us clever enough to survive and dominate, but we didn’t want that cleverness turned against our fellow humans.

These automatic processes should not be taken as infallible, but also should not be disregarded.

The evolutionary debunking argument may not work well absent moral realism, but it might be rescued by casting these as the kind of intuitions we’d ignore or only loosely trust in other domains (like a fear of the dark).

It’s still good to have and use these intuitions (just as it’s still good to have a fear of the dark), for example to prevent the rise of dictators.

The act/omission distinction comes from our psychological difficulty in recognizing the absence of something, and from the sheer number of possible omissions.

It’s harder to keep track of all the actions you’re not doing, so it might be better as a policy to hold people responsible for their own messes.

How to Create More Utilitarians

Things that encourage more deliberation (less time pressure, knowing one has been burned by intuition before, mirth(?), a high need for cognition, emotional impairment) also encourage utilitarian judgements.

People in public health, but not doctors, are more likely to be utilitarian

What Utilitarianism is Not

Utilitarianism has a bad name; Greene suggests “deep pragmatism” may be a better one. Deep pragmatism means giving up your convictions to do what works best.

It’s important that utilitarianism not be near-sighted; in real-life trolley scenarios, inaction is likely best.

Another problem with moral intuitions is that they miss harmful actions that are not intentional or personal, like harm to the environment.

Rights Aren’t Arguments

Focus on the Facts

Ask people to explain how the policies they support will actually work.

Beware Biased Fairness

The entire problem of “The Tragedy of Commonsense Morality” comes from “biased fairness”, a subconscious way in which “fairness” is twisted toward what we personally want. Greene cites psychological studies of “negotiating games”, where a “prosecutor” and a “defender” are paired together and allowed to negotiate, with positive-sum trades available whereby the prosecutor and defender can reach a deal that is better for both of them if each compromises a little.

Greene notes that when people negotiate from the perspective of their self-interest, they often agree to these mutually beneficial deals, accepting small costs in order to secure them. However, when people negotiate from a perspective of “justice” or “what is most fair”, both the prosecutor and defender insist that their side is in the right and that it would be unjust or unfair to compromise, even if a greater benefit could be achieved.

This is a bad thing; however, it comes as a side-effect of something with very good effects: pro-social punishment, whereby we take costs upon ourselves to impose larger costs on people who defect from society. This allows us to keep defectors in check and secure cooperation.

People are biased toward their tribe – even when they say they want to do what is best, they will reject it in favor of their own philosophy (biased fairness).

Use Common Currency

Religion and reason are both inadequate to ground morality, because each reduces to fundamentally unprovable axioms that aren’t self-evident (unlike math’s).

Give

Lastly, Greene urges us all to take note that we have so many more resources than many of the world’s poorest people and are in a position to help them. Therefore, he argues, since we can help someone else so much at such little cost to ourselves, we ought to do so. Greene specifically refers to the research of GiveWell, which reviews hundreds of charities to figure out where donations can go the furthest.

This is obviously a conclusion that I’m sympathetic to, and one that I personally take to heart. However, it opens up an interesting discussion that I’m not sure Greene explored to its fullest:
* The “why do you care about that?” test for finding ultimate values: it mostly bottoms out in happiness.
* We’d stay up late fearing the loss of a pinky but not thousands of distant deaths, despite rationally endorsing the opposite. Is this the automatic mode again?
* Utilitarianism seems obvious, though the fact that we won’t give up some of our income to help the poor seems to throw a wrench into that idea.