Society According to Kevin: Part 1

From the comments on my Introduction to this series, it appears I have discovered a controversial topic. Good. My first objective will be to illustrate why we cannot rely on moral compasses to guide society. After some thought, I have decided to break the topic of moral compasses into two posts: how they fail and why they fail.

During this series, I will use the term “society” to mean a group of people with a common set of explicit and implicit rules living in the same geographic region. Obviously, there is a loose hierarchy where a larger society may include smaller societies. As you move up the hierarchy, the number of common rules diminishes as the geographic area increases. Eventually, we reach the “Global Society”. The members of a society also share a significant number of resources and some kind of semi-stable identity.

The number of people in a leaf-level society varies with their economic interdependence and communication channels. Less advanced societies require fewer members to sustain coherence. In areas with high mobility and mass media, the smallest unit I would consider a coherent society contains on the order of a million people. So Palo Alto is not a society. Silicon Valley may be one. The San Francisco Bay Area definitely is.

I’ll start by outlining my position in contrast to three points brought up in the comments to the Introduction:

People have moral compasses. I absolutely agree. They appear to be a combination of evolutionarily directed hardwired behaviors and childhood indoctrination into the social group.

Moral compasses are useful. I absolutely agree. We wouldn’t have a civilization without them. However, they are useful in a limited set of situations, few of which apply at the society level.

The moral compass point of view is as legitimate as the incentive structure point of view. I’m sorry, but… no. Rather, the moral compass is an extremely narrow and coarse approximation of incentive structure.

The moral compass is an internal voice that, when faced with a choice, answers the question, “What’s the right thing to do?” People might describe it as a “feeling”, “instinct”, or “belief”. The good things about moral compasses are that they are fast and cheap. If you need an answer in seconds or the amount of value in question is small, your moral compass is pretty much the only reasonable tool.

However, at the level of a modern society, where we have time to consider and the value in question is large, the moral compass breaks down. There are five major flaws with trying to apply moral compasses at this scale.

They return mostly binary answers. Is “it” right or wrong, good or bad, safe or dangerous? Our brains want to categorize rather than measure. So we get discrete rather than continuous output.

They vary significantly among people in a society. For example, in California, we have fairly even splits on important questions such as gay marriage, gun rights, death penalty, abortion, and euthanasia. Water rights, welfare, and environmental policy top the list of contentious economic issues.

They are opaque to introspection. Most people have difficulty articulating any reasons behind their position. Those who do, frankly, end up sounding like they’re rationalizing. In fact, there’s evidence that people decide things before they have a conscious reason to.

They are sensitive to framing. “Undecided” people often respond differently to controversial questions depending on the framing. Is gay marriage a fairness or a moral issue? Are gun rights a safety or a freedom issue? Is the death penalty a life or a punishment issue?

They are hard to change. My experience is that, on controversial issues, people are very unlikely to change their minds once they’ve firmly staked out a position. They will blatantly ignore evidence against their view in favor of anecdotal data points that confirm their pre-existing belief. This experience is backed up by the research behind cognitive dissonance: your beliefs change to match your actions.

The only thing that allows us to overcome these barriers is trust. Humans are hardwired for cooperation. Unfortunately, this trust and willingness to cooperate usually only extends to a relatively small “in group” with whom we have tight social ties (for an excellent series of blog posts exploring this topic, go to the first one at Life with Alacrity).

The exact limit is debatable, but it is on the order of 100, so four orders of magnitude less than a modern society. That means it will be impossible to coordinate a modern society using moral compasses. Once you reach a certain number of people, the chance of reaching an impasse on any but the most fundamental issues approaches certainty. Moreover, the number of people is too large to rely on social trust to overcome entrenched positions.
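The arithmetic behind this claim can be sketched with a toy model. The 60/40 opinion split and the “unanimity” criterion below are my illustrative assumptions, not figures from the post; the point is only how fast agreement collapses as group size grows past the trust limit:

```python
import math

# Toy model: each group member independently holds one of two positions
# on a binary issue, with a 60/40 split (an illustrative assumption).
# "Impasse" is modeled crudely as the group failing to be unanimous.

def p_unanimous(n: int, p: float = 0.6) -> float:
    """Probability that all n members land on the same side."""
    return p**n + (1 - p)**n

for n in [10, 100, 1_000]:
    print(f"n={n:>5}: P(unanimity) ~ {p_unanimous(n):.3g}")

# The trust limit (~100 people) vs. the smallest coherent society (~1,000,000):
gap = math.log10(1_000_000 / 100)
print(f"gap = 10^{gap:.0f}")  # four orders of magnitude
```

Even at the trust limit of ~100 people, the chance of spontaneous agreement under this model is already vanishingly small, which is the sense in which impasse “approaches certainty.”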

In the next post, I’ll examine why moral compasses break down. As a result, I hope you will see that moral compasses are really an approximation of a more general approach that we can employ more directly.

7 Responses

I don’t have any disagreements here, except maybe the first #3 above. I’m not sure what that means. If it’s prescriptive, I agree, if it’s descriptive I disagree.

As I said about the introduction post: I assert that individuals sometimes — and with some people it’s often — use their moral compasses as reasons for action even when this contradicts the “rational” incentive-based choice. This is descriptive. I don’t see anything above which refutes that, and I believe it’s consistent with the model you’ve presented so far.

I agree that people try to use their moral compass to make society-level policy. However, they typically fail to achieve their goals. That’s what I mean by “the way society works”. If you try to use your moral compass, you will either fail or create an identifiable loss in social welfare. So if you care about getting what you want, you should rely on incentives.

I assume “if you care about getting what you want” refers to the collective you, as in society, or at least policy makers. If so, we are in violent agreement 🙂 The key being what you can rely on prescriptively. And as you point out, the larger the group the more incentives take over as the prime mover of individual action.

Descriptively I would still argue that you still find aggregate and emergent behaviors that are better explained by a model in which moral compasses exist than one in which only incentives exist.

Of course there’s the argument that moral compasses are simply the integral of all incentives (direct and indirect) influencing the individual, but I think that obfuscates things in the same way that behaviorism obfuscates what’s going on in human psychology.

I think the argument can be made that the moral compasses provided to us through genetic and cultural evolution are insufficient to meet the challenges of our times. However, I don’t think you make this argument convincingly.

In response to the five flaws you see in human moral compasses,

1. I don’t agree with this. I think most people understand well that though some ethical decisions are binary, others are nuanced and have a range of answers to consider.

2. We humans disagree on many specific issues, but largely agree on the basics: that people have a right to live healthy, productive, free lives. (At least on the in-group level. Certainly there are times in history where these rights were not applied to out-group members.)

3. Yes, there’s reason to believe that ethical decisions are made more with the gut than with the mind. But this isn’t a priori a problem if your gut is telling you the right thing.

4. This one I fully agree with, and it’s an important point.

5. True, individual beliefs don’t change much. But we have seen that beliefs can change drastically on a societal level, over the scale of only a generation or so.

(1) The problem here is that you’re confusing what a person’s moral compass returns with what comes out of people’s mouths. There’s a fair amount of research to show that there are actually two modes of reasoning about ethical dilemmas: an intuitionist mode (what we’re calling the moral compass) and the rational mode. If you ask people their position, they will try to appear nuanced by using their rational mode. But research shows they act more on their visceral emotional reactions (see for instance http://www.mc.edu/campus/users/sbaldwin/emotional%20dog%20rational%20tail.pdf).

(2) Uh, actually I think the people you know and I know agree on these things. Some substantial fraction of the world’s population still believes in honor killings, revenge killings, seizing property, coercive subjugation, or stratified society. In fact, if you total up all the humans of the last 200 years (the time limit on traits formed by genetic selection), we’re probably in the minority.

(3) Uh, this is all well and good if you’re willing to believe that _your_ moral compass is somehow special = right. But what about all those other people’s moral compasses? How do we decide which answer is “right”?

(5) This scares me most of all. My ancestors even 200 years ago enslaved a race of people, genocided another, and settled their disputes with violence. I’ve been indoctrinated differently, but I’m terrified of relying on a mechanism that is formed by what people tell you is right when you’re a toddler. Given the rate of change in technology and societal structure, do you really want to rely on something that slow and unreliable for converging to the “right” answer?

~ All of this is very interesting, and adds to my pooled ideas and theories.

I have not studied anything like you have, and so if you could forgive any misuse of language or obvious lack of understanding, then that would be great. Just let me know.

My morals are based on my energetic interactions (on all levels) and so represent only that which I have experienced. Instantly, you can see that therefore they cannot be correct in all perspectives, because they were created only through one perspective.
The idea that my moral compass could ever point in the ‘right’ direction is simply absurd, as for it to point in the right direction it would have to point in all directions at once – or at least all directions that I have experienced other people pointing towards.

This means that ideally there would be another, better system by which I can base my moral decisions on.

The other system being offered here, as far as I can tell, is computed rationalism, based on incentive and therefore an implied reward.
This again succumbs to the realisation that all is based on experience – rationalism included.
The idea that both of these systems, indeed all systems, result in a conclusion, either a propagating conclusion or a non-propagating conclusion (sorry, lost word), implies that everything is binary to some extent. Even the idea that things are… or are not… etc…

So conclusion…. Umm, I guess that I am trying to explain to myself why people and societies through individuals, base their moral decisions on faulty mechanisms. I reckon it must be that they have only ever experienced such behaviour and so it seems intrinsic to their very structure and survival that they continue to use these mechanisms.