Monthly Archives: February 2014

Up until about the age of seven, children across the world show similar levels of sharing behaviour as revealed by their choices in a simple economic game. The finding comes courtesy of Bailey House and his colleagues who tested 326 children aged three to fourteen from six different cultural groups: urban Americans from Los Angeles; horticultural Shuar from Ecuador; horticultural and marine foraging Fijians from Yasawa Island; hunter-gathering Akas from the Central African Republic; pastoral, horticultural Himbas from Namibia; and hunter-gatherer Martus from Australia.

[…]

The older children’s choices tended to mirror the behaviour of the adults from their culture on similar games, suggesting they were gradually acquiring the social norms around altruism and reciprocity for their specific society.

The emergence of cultural differences in the older children’s choices only appeared for this costly version of the game, in which giving to another person meant sacrificing their own gain. In a different version, in which they could be generous at no cost to themselves, no such differences emerged across cultures.

This makes sense, to me, of how pro-social behavior (which includes religiosity) is partially genetic and partially cultural. It also seems to imply that the crucial window to either inculcate or inject religious belief to maximum effect runs from around age three up until the teenage years. Of course, religious groups already know that.

“This is one of the first studies to examine the link between perceptions of addiction to online pornography and religious beliefs,” said Joshua Grubbs, a doctoral student in psychology and lead author of the study.

The research, “Transgression as Addiction: Religiosity and Moral Disapproval as Predictors of Perceived Addiction to Pornography,” will be published today in the journal Archives of Sexual Behavior.

“We were surprised that the amount of viewing did not impact the perception of addiction, but strong moral beliefs did,” Grubbs said. He defined Internet pornography as viewing online sexually explicit pictures and videos.

[…]

Grubbs also discovered that half of the more than 1,200 books about pornography addiction on Amazon.com were listed in the religious and spirituality sections. And many of the books were personal testimonials [my link] about the struggles with this addiction, he said.

[…]

The information may help therapists understand that the perception of addiction is more about religious beliefs than actual viewing, researchers concluded.

“We can help the individual understand what is driving this perception,” Grubbs said, “and help individuals better enjoy their faith.”

In other words, extreme religiosity has a tendency to pathologize normal human behavior.

So on the Facebook I had a conversation with someone regarding what constitutes a good explanation. The person rejoined with something like “real life isn’t a probability game” so I gave up the thread at that.

But why would someone think that the laws of probability don’t apply to everyday situations, or for making sense of the world? We intuitively use probability to make many of our decisions during the day. The decision to not walk down a dark alley in a bad neighborhood at 3am is one based on probability. There is neither a zero percent chance that you’ll get robbed nor a 100% chance that you will get robbed. The decision to fly in a plane, get in a car, check the weather, type on a keyboard… these are all decisions based on knowing the odds in your favor.

There are a few posts on Less Wrong about how probability follows from logic, and also a few books. But I think those are a bit too high level for the type of person who would think that probability doesn’t apply for making sense of the world. So something more intuitive and obvious is needed.

Let’s see if I can try this “probability follows from logic” thing with as few assumptions as possible.

Assumptions: A = A. A != ~A. Meaning that if I say the word “ball”, you know I mean “ball” and not “not-ball”. This is a fundamental assumption for normal human communication to be possible. You’ve already made this assumption in order to comprehend this post!

A normal argument:

Premise 1: If A Then B
Premise 2: A
Conclusion: B

This is also pretty straightforward. The word to describe this inference if you want to run someone over with fancy Latin words is modus ponens.

Another type of inference:

P1: If A Then B
P2: not-B
C: not-A

Not as straightforwardly intuitive as modus ponens. This one is called modus tollens. Which means that if we have a material conditional where the consequent is false, then asserting that the antecedent is also false follows logically.

These types of inferences color a lot of our explanatory language. If my computer is on, then I can type this blog post. My computer is on, so a reasonable conclusion — based on the premise that if my computer is on then I can type this post — is that I can type this post. It seems very redundant because we should know this sort of stuff already.

Now for the challenging part.

P1: A does not equal C
P2: If A Then B
P3: If C Then B
P4: B
C: …?

In this case, if you write as the conclusion A or C, that is a logical fallacy called affirming the consequent. Due to the logic of truth tables, you can’t conclude the antecedent of a material conditional if only given the consequent as a premise.
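These validity claims can all be checked mechanically by enumerating the truth table. Here is a minimal Python sketch (the helper names are my own): an argument is valid when the conclusion is true in every row where all the premises are true.

```python
from itertools import product

def implies(p, q):
    # Material conditional: "If p then q" is false only when p is true and q is false.
    return (not p) or q

def valid(premises, conclusion):
    # Valid means: the conclusion holds in every truth-table row
    # in which all the premises hold.
    return all(conclusion(a, b)
               for a, b in product([True, False], repeat=2)
               if all(p(a, b) for p in premises))

# Modus ponens: If A Then B; A; therefore B.
print(valid([lambda a, b: implies(a, b), lambda a, b: a],
            lambda a, b: b))

# Modus tollens: If A Then B; not-B; therefore not-A.
print(valid([lambda a, b: implies(a, b), lambda a, b: not b],
            lambda a, b: not a))

# Affirming the consequent: If A Then B; B; therefore A.
print(valid([lambda a, b: implies(a, b), lambda a, b: b],
            lambda a, b: a))
```

The first two print True (valid inferences); the last prints False, because the row where A is false and B is true satisfies the premises but not the conclusion.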

What if you want to find out the true antecedent for B? You could just write the conclusion as A or C, but you also have to take into account that there might be other causes for B besides A and C; that would be your unknown ???. So it would follow logically that the conclusion could be A or C or ???. But that doesn’t really help, does it? The conclusion could always be written as ??? and we would be “correct”.

Or say you were doing a code review of a program that had two separate conditional statements:

If (A == true) then B;
If (C == true) then B;

You run the program and B executes. Which condition in the code was satisfied to run B? Was it A or C? Or some other unknown state? Assuming the same setup as above (A is not equal to C), if it were me, I would look for other evidence that would also follow from A or C having run. But we also don’t know what other processes or code could produce B, so those have to be included in the possibilities.
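As a runnable sketch of that review scenario (a toy example; the function name is mine), every hidden state in which A or C is true produces the exact same observation:

```python
def program(A, C):
    """Mirror the two conditionals above; a reviewer only observes whether B ran."""
    B = False
    if A:
        B = True
    if C:
        B = True
    return B

# Three different hidden states, one indistinguishable observation:
print(program(True, False), program(False, True), program(True, True))
# Only the all-false state fails to run B:
print(program(False, False))
```

Knowing that B ran narrows the possibilities but cannot, by itself, tell you which branch fired.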

To make this easier, let’s say that instead of a program, we’re dealing with something in real life. The prototypical example is wet grass. If it rains, then the grass is wet. If the sprinklers turn on, then the grass is wet. When I say “rain” I don’t secretly mean “sprinklers” or vice versa; rain is not equal to sprinklers (though it could rain and the sprinklers turn on, but for simplicity’s sake let’s assume that doesn’t happen). This also follows the same rules of logic as above. Just because the grass is wet doesn’t mean that it rained nor that the sprinklers turned on; that is the same affirming the consequent fallacy because some unknown other causes might make the grass wet.

So let’s say we ran this code 100 times, or in real life checked the grass 100 days in a row. 10 out of those 100 times we checked the grass (i.e. ran the program), the grass was wet (B executed). Of those 10 times, 2 times the grass was wet due to the sprinklers. 5 times the grass was wet, it rained. And the remaining 3 times the grass was wet, it neither rained nor did the sprinklers turn on; it was due to some unknown cause.

What makes things easy in this case is the assumption that every time it rains, the grass is wet, and every time the sprinklers turn on, the grass is wet. But what if that wasn’t the case? It follows that the probability of the grass being wet, given that it rained or that the sprinklers turned on, would be some sort of fraction, depending on how tightly coupled rain/sprinklers were to wet grass. So to run with this thought for a moment, let’s say that 5 out of 6 times that it rained, the grass was wet, and 2 out of 4 times that the sprinklers turned on, the grass was wet. Maybe the reason for the inconsistency is a really arid climate that dries out the grass before it’s checked. Maybe the rain/sprinklers didn’t last long enough to really make the grass wet. Who knows.

Now, we have a bunch of fractions floating around. We need to keep track of them.

10 out of 100 days, the grass was wet.
Of those 10 times:
2 out of 10 times, the grass was wet due to sprinklers
5 out of 10 times, the grass was wet due to rain
3 out of 10 times, the grass was wet due to some other reason

What about the total times that the sprinklers turned on out of those 100 days, and how many times it rained during those days? Those are also numbers we need to be aware of. In this hypothetical, it rained 6 times total and the sprinklers turned on 4 times total. So more fractions:

6 out of 100 days it rained
4 out of 100 days the sprinklers turned on

It might be better to start thinking in terms of probability, right? So what is the probability that it rained, given that the grass is wet? Well, that would be the fraction of the times the grass was wet that were due to rain, or Pr(rained | grass wet). The grass was wet 10 times, and of those 10 times, 5 were due to rain; 5 out of 10, or 50%.

So let’s back up a bit. We are starting to get into the laws of probability. So as an example, what is the probability of flipping heads twice in a row? This would be 50% * 50%, which is 25%. Meaning that if you flipped a pair of coins 100 times, you should expect the sequence heads-heads about 25 times. This is a straight multiplication because each flip is independent of the other; the outcome of the first flip doesn’t affect the outcome of the second flip. However, if the events are dependent, then a different multiplication rule applies.
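A quick simulation (my own sketch, with an arbitrary seed for reproducibility) shows the multiplication rule for independent flips at work:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

trials = 100_000
# Each trial flips two fair coins; count how often both come up heads.
both_heads = sum(
    random.random() < 0.5 and random.random() < 0.5
    for _ in range(trials)
)
print(both_heads / trials)  # close to 0.5 * 0.5 = 0.25
```

The simulated frequency hovers around 25%, which is just the product of the two independent 50% chances.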

Let’s say that I pick a card from a deck, a queen. What is the probability of picking a queen? 4 out of 52. If I then ask what the probability is of picking the jack of spades next, my first choice obviously affects my second choice. The probability of picking the jack of spades is no longer 1 out of 52 but 1 out of 51, since the queen has already been removed. This is dependence. It then becomes Pr(Queen) * Pr(Jack of Spades | Queen). This is 4/52 * 1/51, which is 4 out of 2652. Meaning that the sequence “picked a queen” and then “jack of spades” is really unlikely, since we are conditioning “jack of spades” on the previous event of having “picked a queen”. They are dependent.

It just so happens that Pr(Queen)*Pr(Jack of Spades | Queen) = Pr(Jack of Spades)*Pr(Queen | Jack of Spades). Try it out:

4/52 * 1/51 = 1/52 * 4/51
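Python’s fractions module can verify that identity exactly (a small sketch):

```python
from fractions import Fraction

# Pr(Queen) * Pr(Jack of Spades | Queen)
lhs = Fraction(4, 52) * Fraction(1, 51)

# Pr(Jack of Spades) * Pr(Queen | Jack of Spades)
rhs = Fraction(1, 52) * Fraction(4, 51)

print(lhs, rhs, lhs == rhs)  # 1/663 1/663 True
```

Both orderings of the product reduce to the same fraction, 4/2652 = 1/663, which is exactly the symmetry that Bayes’ theorem exploits.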

This “Pr(x) * Pr(y | x)” pattern also applies to the coin flips. Since the coin flips are independent, Pr(heads) is equal to Pr(heads | heads), so the probability of flipping heads twice in a row is Pr(heads) * Pr(heads | heads). It is still 25%.

Back to the task at hand. We counted directly that Pr(rained | grass wet) is 50%. But notice we could also have calculated it. We know Pr(rained): 6 out of 100, or 6%. We know Pr(grass wet | rained): 5 out of 6. And we know Pr(grass wet): 10 out of 100, or 10%. This means we have Pr(rained)*Pr(grass wet | rained) = Pr(grass wet)*Pr(rained | grass wet). Meaning that the next time we find the grass wet, and we want to find out Pr(rained | grass wet), we have all of the info we need to calculate it. So if we want to figure that out, we divide both sides of the equation Pr(rained)*Pr(grass wet | rained) = Pr(grass wet)*Pr(rained | grass wet) by Pr(grass wet). What does that formula become?
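Carrying out that division with the story’s numbers (a sketch; the variable names are mine, the counts come from the hypothetical above) recovers the 50% we counted directly. This is Bayes’ theorem:

```python
from fractions import Fraction

p_rained = Fraction(6, 100)           # it rained on 6 of the 100 days
p_wet_given_rained = Fraction(5, 6)   # the grass was wet on 5 of those 6 rainy days
p_wet = Fraction(10, 100)             # the grass was wet on 10 of the 100 days

# Rearranging Pr(rained)*Pr(wet | rained) = Pr(wet)*Pr(rained | wet)
# by dividing both sides by Pr(wet):
p_rained_given_wet = p_rained * p_wet_given_rained / p_wet
print(p_rained_given_wet)  # 1/2, matching the 5-out-of-10 direct count
```

The formula becomes Pr(rained | grass wet) = Pr(rained) * Pr(grass wet | rained) / Pr(grass wet), and the fractions work out to exactly one half.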

As you can see, since we don’t know what the probability of A or C is, we can’t conclude anything definite about A or C given only that B happened. If Pr(A) were 100%, then by the premise If A Then B, Pr(B) would also be 100%, and that would just be modus ponens. Anything less than 100% and we can’t safely conclude A to the exclusion of C or any other cause, just as affirming the consequent says.

What we should be aware of, however, is that modus ponens itself only works if we have 100% certainty in our premises. If Pr(B | A) were 100% but Pr(A) were only 50%, then modus ponens fails to deliver certainty in B, since our conclusion can only be as strong as our weakest premise. Hence Occam’s Razor.

Notice that the logical fallacy of affirming the consequent can be viewed as weak Bayesian evidence: if Pr(B | A) > Pr(B | ~A), then observing B makes A more likely than it was before… which means affirming the consequent — observing B — might be weak or strong Bayesian evidence for A, depending on how lopsided those two likelihoods are.
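To make that concrete, here is a small sketch (the numbers are my own, chosen for illustration): suppose A starts out as a coin flip, but B is much more likely under A than under ~A.

```python
from fractions import Fraction

p_a = Fraction(1, 2)               # prior probability of A
p_b_given_a = Fraction(9, 10)      # B is likely when A is true
p_b_given_not_a = Fraction(1, 10)  # B is unlikely when A is false

# Total probability of observing B at all (law of total probability)
p_b = p_a * p_b_given_a + (1 - p_a) * p_b_given_not_a

# Bayes' theorem: how much does observing B raise our credence in A?
p_a_given_b = p_a * p_b_given_a / p_b
print(p_a_given_b)  # 9/10 -- observing B raised Pr(A) from 1/2 to 9/10
```

The more lopsided the two likelihoods, the stronger the evidence B provides; if they were equal, observing B would tell us nothing at all about A.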

So real life is certainly a probability game. It’s much more of a probability game than a logic game, even though we can’t function at all in society without adhering to a system of logic. Like I said, you assumed logic just to comprehend the words in this post, even though logic only works if we have 100% certainty in our premises. We live in the real world, where 100% certainty in anything isn’t reasonable. We live in a world of uncertainty, not logic, making the laws of probability more relevant than logic, even though, again, we need to assume logic to comprehend basic communication and to make sense of our everyday lives.

Since I always link back to the original post, I thought I would re-blog what I wrote all those years ago so that it would have a fresh life. It also got me included in the top 100 rationality quotes over at Less Wrong. So, here it is:

A newspaper is better than a magazine. A seashore is a better place than the street. At first it is better to run than to walk. You may have to try several times. It takes some skill, but it is easy to learn. Even young children can enjoy it. Once successful, complications are minimal. Birds seldom get too close. Rain, however, soaks in very fast. Too many people doing the same thing can also cause problems. One needs lots of room. If there are no complications, it can be very peaceful. A rock will serve as an anchor. If things break loose from it, however, you will not get a second chance.

Is this paragraph comprehensible or meaningless? Feel your mind sort through potential explanations. Now watch what happens with the presentation of a single word: kite. As you reread the paragraph, feel the prior discomfort of something amiss shifting to a pleasing sense of rightness. Everything fits; every sentence works and has meaning. Reread the paragraph again; it is impossible to regain the sense of not understanding. In an instant, without due conscious deliberation, the paragraph has been irreversibly infused with a feeling of knowing.

Try to imagine other interpretations for the paragraph. Suppose I tell you that this is a collaborative poem written by a third-grade class, or a collage of strung-together fortune cookie quotes. Your mind balks. The presence of this feeling of knowing makes contemplating alternatives physically difficult.

This is a quote from Robert Burton’s pretty good book On Being Certain: Believing You Are Right Even When You’re Not. This quote was pretty profound to me, because it made me realize that the feeling of certainty is involuntary; I can’t just will myself to feel certain. But up until I read that book, I operated as though it was something I could control and I would rationalize any sort of belief or conclusion since it was premised on this feeling of certainty. I was well aware of rationalizing feeling angry, or love, or many other emotions and I had pretty good practice at girding myself against those emotions clouding my judgement. But the feeling of certainty also needs to be girded against!

I can’t even imagine how other people — people who think they are being rational — can evaluate issues rationally without knowing this fundamental (well, fundamental to me) aspect of our cognition. Just because something feels correct doesn’t mean that it actually is. And people never question that “just feels right” feeling. They skip right over it and delve into their arguments.

This post subsequently fed into a lot of my later cognitive science posts, like the Thief and the Wizard and the Intuitionists and the Rationalists. But reading this quote makes you (well, me) put a spotlight on what one aspect of your thief/intuitionist feels like, and what to look for when dealing with evaluating issues rationally. So here is that spotlight again, because it needs to stay on.

And the biggest takeaway: Feeling certain feels good! That’s what makes it extra sneaky. So what do you feel certain about but haven’t actually looked at critically? What haven’t you examined rationally because you don’t want to lose the good feels of feeling certain? Are all of the arguments that you put forth only there to make sure you don’t lose that good feeling?

I’ve written a bit about the science of persuasion and marketing, and also how someone’s identity is a good indicator of how they can be persuaded. With the recent popular debate between Ken Ham and Bill Nye, I think the ability to persuade should be more in the spotlight, especially in rationalist circles (the other side of that coin is the necessity to also be aware of dark arts persuasion). It’s not enough that you think you have the correct answers; you should also know how the human brain works in order to communicate those ideas in a way humans are more likely to accept. But when most people — even smart people — attempt to argue their case, they usually handicap themselves before even getting out the door. Why?

Because being criticized is the first roadblock to effective persuasion!

If you get offended, or if you offend someone, the likelihood of having a fruitful exchange of ideas plummets. Sure, you can minimize how offended you can become by keeping your identity small, but you have no control over the identity of a person you’re trying to communicate with.

Almost everyone I know these days has heard of confirmation bias. It’s probably one of the first biases people hear about when they start reading about why people believe the things they do. Or maybe they read it on some Tumblr “debate” thread. By now a lot of people should know that the human tendency for confirmation bias is overwhelming, which also leads to its ankle-grabbing sibling disconfirmation bias. Just take a look at how many articles there are on Google Scholar related to motivated skepticism.

Considering that disconfirmation bias is our default mode of operation when encountering dissenting information, adding an insult — either real or perceived — to the debate can only lead to a defensive position, increasing the offended party’s proclivity for disconfirmation bias to even higher than baseline levels. As I always quote, arguments are soldiers; once you know which side you’re on you defend that side at all costs. Any real or perceived deviation from the dress right dress of your soldier’s formation is seen as an attack from the enemy. So calling someone anti-science when trying to convince them of some scientific position is much more likely to have the unintended effect of increasing the gain on someone’s disconfirmation bias (summarized):

Background: The pro/anti vaccine debate has been hot recently. Many pro-vaccine people often say, “The science is strong, the benefits are obvious, the risks are negligible; if you’re anti-vaccine then you’re anti-science”.

A series of studies over recent years have found that if you make people feel uncertain or anxious, they’ll respond by turning up the intensity of their religious faith.

Quite why this happens isn’t known. It might be that unhappy people turn to their gods. Or it might be the implicit threat to their well being that’s triggering the response.

[…]

[…B]asically people who did the self-affirmation task were inoculated against the effects of uncertainty (at least insofar as religiosity goes). People who hadn’t done the self-affirmation task got religion, as expected.

Studies of social–political debate, health–risk assessment, and responses to team victory or defeat have shown that people respond to information in a less defensive and more open–minded manner when their self–worth is buttressed by an affirmation of an alternative source of identity. Self–affirmed individuals are more likely to accept information that they would otherwise view as threatening, and subsequently to change their beliefs and even their behavior in a desirable fashion. Defensive biases have an adaptive function for maintaining self–worth, but maladaptive consequences for promoting change and reducing social conflict.

If this is the case, then it’s probably in one’s best interests to read up on how to effectively persuade someone, since the science of persuasion implicitly uses techniques to make someone like you more and/or affirm their self-worth. To recap what I posted about persuasion and marketing:

1) Reciprocity: You’re more likely to get something if you give it first. Smile at someone and they will smile back.

2) Social proof: People are more likely to buy into something if they see that other people have bought into it.

3) Liking: If you discover that you have things in common with the salesman, or he uncovers likable things about you, then you’re more likely to like him and therefore buy from him. This could include cold reading/Barnum statements (e.g. “people who do X innocuous behavior are/have Y awesome personality trait”). You can also improve how much someone likes you by mirroring them; mirroring others to bond with them or get them to like you seems to be a fundamental aspect of our evolutionary psychology.

4) Authority: Robert Cialdini says authority is “[s]omeone who is perceived as a credible source of information that people can use to make good choices.” You’re more likely to accept someone’s argument for why you should e.g. buy a car from them if they seem like they really know what they’re talking about.

5) [Internal] Consistency (also known as the Benjamin Franklin Effect or “Cached Selves” on Less Wrong): If you comply with a small, innocuous request from someone, you are more likely to comply with a big request from them later, especially if they slowly ramp up to less and less innocuous requests. This is the strangest one to me and seems pretty counterintuitive, but it seems to work. The same escalation happens between signing a throwaway survey that says “Keep the Earth beautiful” and then later agreeing to put up a huge sign on your lawn that says “Drive safely”, as the Less Wrong article clearly states.

6) False Bayesian Updates: The more arguments you list in favor of something, regardless of the quality of those arguments, the more people tend to accept what you are selling.

7) Slightly contradictory/nonsensical words/phrases, metaphors, poetry/rhyming: Our brains are not optimized for intellectual activity, but for social activity. Metaphorical language is built into our everyday, highly social languages; if I say I have a big day tomorrow, you know what I mean. The day won’t be literally big since that doesn’t make sense. But we all implicitly associate largeness with importance, and smallness with unimportance due to our embodied brain. And singing, thus rhyming, is much more fundamental to our thought processes than vanilla talking, and is much more primitive, as it lights up your entire brain.

8) Halo Effect: We act nicer to people we find attractive. Attractive people are seen as funnier, more trustworthy, more intelligent, more extroverted, etc. than less attractive people. This includes attributing “good vibes” to products that are seen in the presence of or being used by attractive people, and so on (claiming that marketers are trying to enforce a standard of beauty is putting the cart before the horse, as there’s no money to be made from such a complicated conspiracy theory). For example, taller people earn more money than shorter people (this is probably one reason why women make less money than men): “In the work setting, the halo effect is most likely to show up in a supervisor’s appraisal of a subordinate’s job performance. In fact, the halo effect is probably the most common bias in performance appraisal.”

Hostage negotiators are people whose job it is to establish empathy and rapport in a much more hostile environment than pop debates or internet discussion boards. So it’s probably a good idea to try to apply some of the techniques that have been honed in life-or-death situations when engaging with, say, creationists or anti-vaccine proponents:

It’s not something that only works with barricaded criminals wielding assault rifles — it applies to most any form of disagreement.

There are five steps:

Active Listening: Listen to their side and make them aware you’re listening.

Empathy: You get an understanding of where they’re coming from and how they feel.

Rapport: Empathy is what you feel. Rapport is when they feel it back. They start to trust you.

Influence: Now that they trust you, you’ve earned the right to work on problem solving with them and recommend a course of action.

Behavioral Change: They act. (And maybe come out with their hands up.)

The problem is, you’re probably screwing it up.

What you’re doing wrong

In all likelihood you usually skip the first three steps. You start at 4 (Influence) and expect the other person to immediately go to 5 (Behavioral Change).

And that never works.

Saying “Here’s why I’m right and you’re wrong” might be effective if people were fundamentally rational.

But they’re not.

[…]

The most critical step in the Behavioral Change Staircase is actually the first part: Active listening.

[…]

1. Ask open-ended questions

You don’t want yes/no answers, you want them to open up.

A good open-ended question would be “Sounds like a tough deal. Tell me how it all happened.” It is non-judgmental, shows interest, and is likely to lead to more information about the man’s situation.

2. Effective pauses

Pausing is powerful. Use it for emphasis, to encourage someone to keep talking or to defuse things when people get emotional.

Eventually, even the most emotionally overwrought subjects will find it difficult to sustain a one-sided argument, and they again will return to meaningful dialogue with negotiators. Thus, by remaining silent at the right times, negotiators actually can move the overall negotiation process forward.

3. Minimal Encouragers

Brief statements to let the person know you’re listening and to keep them talking.

Even relatively simple phrases, such as “yes,” “O.K.,” or “I see,” effectively convey that a negotiator is paying attention to the subject. These responses will encourage the subject to continue talking and gradually relinquish more control of the situation to the negotiator.

4. Mirroring

Repeating the last word or phrase the person said to show you’re listening and engaged. Yes, it’s that simple — just repeat the last word or two:

For example, a subject may declare, “I’m sick and tired of being pushed around,” to which the negotiator can respond, “Feel pushed, huh?”

5. Paraphrasing

Repeating what the other person is saying back to them in your own words. This powerfully shows you really do understand and aren’t merely parroting.

6. Emotional Labeling

Give their feelings a name. It shows you’re identifying with how they feel. Don’t comment on the validity of the feelings — they could be totally crazy — but show them you understand.

A good use of emotional labeling would be “You sound pretty hurt about being left. It doesn’t seem fair.” because it recognizes the feelings without judging them. It is a good Additive Empathetic response because it identifies the hurt that underlies the anger the woman feels and adds the idea of justice to the actor’s message, an idea that can lead to other ways of getting justice.

A poor response would be “You don’t need to feel that way. If he was messing around on you, he was not worth the energy.” It is judgmental. It tells the subject how not to feel. It minimizes the subject’s feelings, which are a major part of who she is. It is Subtractive Empathy.

So there you have it. There actually isn’t one weird trick for advertising science. But thinking that there was is what got you to read this blog post in the first place, which is itself a roundabout way of advertising for science!

tl;dr: If you’re trying to communicate effectively with someone, you’ll do a much better job if they like you and/or you don’t offend or accuse them.


Comments Off on On Advertising Science: You Won’t Believe How Easy It Is With This One Weird Trick…

Continuing, I guess, my posts on the link between morality and religion, I came across another study that linked two concepts that I should have seen the connection to but didn’t. Now that I’ve read it, it makes total sense.

In my dissertation and my other early studies, I told people short stories in which a person does something disgusting or disrespectful that was perfectly harmless (for example, a family cooks and eats its dog, after the dog was killed by a car). I was trying to pit the emotion of disgust against reasoning about harm and individual rights.

I found that disgust won in nearly all groups I studied (in Brazil, India, and the United States), except for groups of politically liberal college students, particularly Americans, who overrode their disgust and said that people have a right to do whatever they want, as long as they don’t hurt anyone else.

[…]

Most traditional societies care about a lot more than harm/care and fairness/justice. Why do so many societies care deeply and morally about menstruation, food taboos, sexuality, and respect for elders and the Gods? You can’t just dismiss this stuff as social convention. If you want to describe human morality, rather than the morality of educated Western academics, you’ve got to include the Durkheimian view that morality is in large part about binding people together.

From a review of the anthropological and evolutionary literatures, Craig Joseph (at Northwestern University) and I concluded that there were three best candidates for being additional psychological foundations of morality, beyond harm/care and fairness/justice. These three we label as ingroup/loyalty (which may have evolved from the long history of cross-group or sub-group competition, related to what Joe Henrich calls “coalitional psychology”); authority/respect (which may have evolved from the long history of primate hierarchy, modified by cultural limitations on power and bullying, as documented by Christopher Boehm); and purity/sanctity, which may be a much more recent system, growing out of the uniquely human emotion of disgust, which seems to give people feelings that some ways of living and acting are higher, more noble, and less carnal than others.

My UVA colleagues Jesse Graham, Brian Nosek, and I have collected data from about 7,000 people so far on a survey designed to measure people’s endorsement of these five foundations. In every sample we’ve looked at, in the United States and in other Western countries, we find that people who self-identify as liberals endorse moral values and statements related to the two individualizing foundations primarily, whereas self-described conservatives endorse values and statements related to all five foundations. It seems that the moral domain encompasses more for conservatives—it’s not just about Gilligan’s care and Kohlberg’s justice. It’s also about Durkheim’s issues of loyalty to the group, respect for authority, and sacredness.

I hope you’ll accept that as a purely descriptive statement. You can still reject the three binding foundations normatively—that is, you can still insist that ingroup, authority, and purity refer to ancient and dangerous psychological systems that underlie fascism, racism, and homophobia, and you can still claim that liberals are right to reject those foundations and build their moral systems using primarily the harm/care and fairness/reciprocity foundations.

But just go with me for a moment that there is this difference, descriptively, between the moral worlds of secular liberals on the one hand and religious conservatives on the other. There are, of course, many other groups, such as the religious left and the libertarian right, but I think it’s fair to say that the major players in the new religion wars are secular liberals criticizing religious conservatives. Because the conflict is a moral conflict, we should be able to apply the four principles of the new synthesis in moral psychology.

So the five moral foundations are harm/care, fairness/justice, ingroup/loyalty, authority/respect, and purity/sanctity. As Haidt wrote above, conservative, and thus religious, morality focuses on more than just fairness and harm/care; religiosity also involves a concentration on things like respect for authority, sanctity, and loyalty. Indeed, when people do something immoral, physically washing themselves makes them feel better about it. Note that the Greek “etymology” of the odd word Nazoraios (i.e. “Nazarene”, as in Jesus the Nazarene; Ἰησοῦς ὁ Ναζωραῖος) is caught up with the concept of sanctity.

The feeling of disgust likely evolved as a mechanism to detect and avoid pathogens in the environment, but it also may explain why some people are more socially conservative than others, according to newly published research.

[…]

Across four separate studies, the researchers found that those who were more easily disgusted and more afraid of contamination were more likely to be female and socially conservative. The four studies together comprised 980 undergraduate students.

The link between disgust and conservatism is bolstered by previous studies.

Research published in 2012 in Social Psychological and Personality Science found that disgust sensitivity was positively associated with political conservatism and the intention to vote for Republican presidential candidate John McCain. Another study, published in 2011 in PLoS One, found that conservatives had stronger physiological reactions than liberals when shown gross pictures. Research published in the journal Emotion showed that disgust sensitivity was associated with unfavorable moral judgments about same-sex relationships.

[…]

Disgust, in turn, encourages “the preference of ingroup members over outgroup members, because outgroup members pose a greater disease threat,” the researchers wrote. This preference towards members of one’s own group manifests itself as socially conservative attitudes, like religious fundamentalism.

“In other words, disgust sensitivity prepares individuals to have a negative perception of others who may be a source of contamination and to avoid them.”

And here’s another conclusion that seems to follow from the fact that women are more religious than men, though it is at first glance counterintuitive. If women are more religious than men, and the values that incline people towards religion — values like sanctity and ingroup/outgroup thinking — also incline people towards conservative attitudes like racism, then we should expect religious people to be more racist than non-religious people. This is true; racism and religiosity are positively correlated. By the same reasoning, we should expect women to be more racist than men. That also seems to be true:

This Valentine’s Day … relatively few women on mainstream dating sites will bother to respond to overtures from men of Asian descent. Likewise, black women will be disproportionately snubbed by men of all races. … Chemistry.com requires users to identify their ethnicity; like eHarmony, it considers members’ racial preferences when suggesting matches. Match.com lets users filter their searches by race. The site’s profiles include space to indicate interest (or lack thereof) in various racial and ethnic groups. …

Among the women, 73% stated a [racial] preference. Of these, 64% selected whites only, while fewer than 10% included East Indians, Middle Easterners, Asians or blacks. … 59% of [men] stated a racial preference. Of these, nearly half selected Asians, but fewer than 7% did for black women. … In October, [OkCupid.com], 80% of whose members choose to input their race, studied the messaging patterns of more than a million users and concluded on its official blog that “racism is alive and well.”

Although a considerable body of research explores alterations in women’s mating-relevant preferences across the menstrual cycle, investigators have yet to examine the potential for the menstrual cycle to influence intergroup attitudes. We examined the effects of changes in conception risk across the menstrual cycle on intergroup bias and found that increased conception risk was positively associated with several measures of race bias. This association was particularly strong when perceived vulnerability to sexual coercion was high. Our findings highlight the potential for hypotheses informed by an evolutionary perspective to generate new knowledge about current social problems an[d] avenue[s] that may lead to new predictions in the study of intergroup relations.

Studies of gender differences in orientation toward others have found that women are more strongly concerned than men with affective processes and are more likely to be other-focused, while men tend to be more instrumental and more self-oriented. Recent research has extended this finding to include gender differences in racial attitudes, and reports that women also are more favorable than men in their racial outlooks.

Though I suppose the two aren’t mutually exclusive, if the norms of the group that someone belongs to include racial tolerance: collectivist tendencies in a group that values racial concern might lead to signaling racial tolerance without actually practicing it.

————————–
* (a word reserved for banishing someone to the status of the outgroup, one that seems to have overwhelming popularity among women, though not so much among men)