Ethnic Tension And Meaningless Arguments

I.

Part of what bothers me – and apparently several others – about yesterday’s motte-and-bailey discussion is that here’s a fallacy – a pretty successful fallacy – that depends entirely on people not being entirely clear on what they’re arguing about. Somebody says God doesn’t exist. Another person objects that God is just a name for the order and beauty in the universe. Then this somehow helps defend the position that God is a supernatural creator being. How does that even happen?

“Sir, you’ve been accused of murdering your wife. We have three witnesses who said you did it. What do you have to say for yourself?”

“Well, your honor, I think it’s quite clear I didn’t murder the President. For one thing, he’s surrounded by Secret Service agents. For another, check the news. The President’s still alive.”

“Huh. For some reason I vaguely remember thinking you didn’t have a case. Yet now that I hear you talk, everything you say is incredibly persuasive. You’re free to go.”

While motte-and-bailey is less subtle, it seems to require a similar sort of misdirection. I’m not saying it’s impossible. I’m just saying it’s a fact that needs to be explained.

When everything works the way it’s supposed to in philosophy textbooks, arguments go one of a couple of ways:

1. Questions of empirical fact, like “Is the Earth getting warmer?” or “Did aliens build the pyramids?”. You debate these by presenting factual evidence, like “An average of global weather station measurements show 2014 is the hottest year on record” or “One of the bricks at Giza says ‘Made In Tau Ceti V’ on the bottom.” Then people try to refute these facts or present facts of their own.

2. Questions of morality, like “Is it wrong to abort children?” or “Should you refrain from downloading music you have not paid for?” You can only debate these well if you’ve already agreed upon a moral framework, like a particular version of natural law or consequentialism. But you can sort of debate them by comparing to examples of agreed-upon moral questions and trying to maintain consistency. For example, “You wouldn’t kill a one day old baby, so how is a nine month old fetus different?” or “You wouldn’t download a car.”

If you are very lucky, your philosophy textbook will also admit the existence of:

3. Questions of policy, like “We should raise the minimum wage” or “We should bomb Foreignistan”. These are combinations of competing factual claims and competing values. For example, the minimum wage might hinge on factual claims like “Raising the minimum wage would increase unemployment” or “It is very difficult to live on the minimum wage nowadays, and many poor families cannot afford food.” But it might also hinge on value claims like “Corporations owe it to their workers to pay a living wage,” or “It is more important that the poorest be protected than that the economy be strong.” Bombing Foreignistan might depend on factual claims like “The Foreignistanis are harboring terrorists”, and on value claims like “The safety of our people is worth the risk of collateral damage.” If you can resolve all of these factual and value claims, you should be able to agree on questions of policy.

None of these seem to allow the sort of vagueness of topic mentioned above.

II.

A question: are you pro-Israel or pro-Palestine? Take a second, actually think about it.

Some people probably answered pro-Israel. Other people probably answered pro-Palestine. Other people probably said they were neutral because it’s a complicated issue with good points on both sides.

Probably very few people answered: Huh? What?

This question doesn’t fall into any of the three Philosophy 101 forms of argument. It’s not a question of fact. It’s not a question of particular moral truths. It’s not even a question of policy. There are closely related policies, like whether Palestine should be granted independence. But if I support a very specific two-state solution where the border is drawn upon the somethingth parallel, does that make me pro-Israel or pro-Palestine? At exactly which parallel of border does the solution under consideration switch from pro-Israeli to pro-Palestinian? Do you think the crowd of people shouting and waving signs saying “SOLIDARITY WITH PALESTINE” have an answer to that question?

But it’s even worse, because this question covers much more than just the borders of an independent Palestinian state. Was Israel justified in responding to Hamas’ rocket fire by bombing Gaza, even with the near-certainty of collateral damage? Was Israel justified in building a wall across the Palestinian territories to protect itself from potential terrorists, even though it severely curtails Palestinian freedom of movement? Do Palestinians have a “right of return” to territories taken in the 1948 war? Who should control the Temple Mount?

These are four very different questions which one would think each deserve independent consideration. But in reality, what percent of the variance in people’s responses do you think is explained by a general “pro-Palestine vs. pro-Israel” factor? 50%? 75%? More?

In a way, when we round people off to the Philosophy 101 kind of arguments, we are failing to respect their self-description. People aren’t out on the streets saying “By my cost-benefit analysis, Israel was in the right to invade Gaza, although it may be in the wrong on many of its other actions.” They’re waving little Israeli flags and holding up signs saying “ISRAEL: OUR STAUNCHEST ALLY”. Maybe we should take them at face value.

This is starting to look related to the original question in (I). Why is it okay to suddenly switch points in the middle of an argument? In the case of Israel and Palestine, it might be because people’s support for any particular Israeli policy is better explained by a General Factor Of Pro-Israeliness than by the policy itself. As long as I’m arguing in favor of Israel in some way, it’s still considered by everyone to be on topic.

III.

Some moral philosophers got fed up with nobody being able to explain what the heck a moral truth was and invented emotivism. Emotivism says there are no moral truths, just expressions of little personal bursts of emotion. When you say “Donating to charity is good,” you don’t mean “Donating to charity increases the sum total of utility in the world,” or “Donating to charity is in keeping with the Platonic moral law” or “Donating to charity was commanded by God” or even “I like donating to charity”. You’re just saying “Yay charity!” and waving a little flag.

Seems a lot like how people handle the Israel question. “I’m pro-Israel” doesn’t necessarily imply that you believe any empirical truths about Israel, or believe any moral principles about Israel, or even support any Israeli policies. It means you’re waving a little flag with a Star of David on it and cheering.

So here is Ethnic Tension: A Game For Two Players.

Pick a vague concept. “Israel” will do nicely for now.

Player 1 tries to associate the concept “Israel” with as much good karma as she possibly can. Concepts get good karma by doing good moral things, by being associated with good people, by being linked to the beloved in-group, and by being oppressed underdogs in bravery debates.

“Israel is the freest and most democratic country in the Middle East. It is one of America’s strongest allies and shares our Judeo-Christian values.”

Player 2 tries to associate the concept “Israel” with as much bad karma as she possibly can. Concepts get bad karma by committing atrocities, being associated with bad people, being linked to the hated out-group, and by being oppressive big-shots in bravery debates. Also, she obviously needs to neutralize Player 1’s actions by disproving all of her arguments.

“Israel may have some level of freedom for its most privileged citizens, but what about the millions of people in the Occupied Territories that have no say? Israel is involved in various atrocities and has often killed innocent protesters. They are essentially a neocolonialist state and have allied with other neocolonialist states like South Africa.”

The prize for winning this game is the ability to win the other three types of arguments. If Player 1 wins, the audience ends up with a strongly positive General Factor Of Pro-Israeliness, and vice versa.

Remember, people’s capacity for motivated reasoning is pretty much infinite. Remember, a motivated skeptic asks if the evidence compels them to accept the conclusion; a motivated credulist asks if the evidence allows them to accept the conclusion. Remember, Jonathan Haidt and his team hypnotized people to have strong disgust reactions to the word “often”, and then tried to hold in their laughter when people in the lab came up with convoluted yet plausible-sounding arguments against any policy they proposed that included the word “often” in the description.

I’ve never heard of the experiment being done the opposite way, but it sounds like the sort of thing that might work. Hypnotize someone to have a very positive reaction to the word “often” (for most hilarious results, have it give people an orgasm). “Do you think governments should raise taxes more often?” “Yes. Yes yes YES YES OH GOD YES!”

Once you finish the Ethnic Tension Game, you’re replicating Haidt’s experiment with the word “Israel” instead of the word “often”. Win the game, and any pro-Israel policy you propose will get a burst of positive feelings and tempt people to try to find some explanation, any explanation, that will justify it, whether it’s invading Gaza or building a wall or controlling the Temple Mount.

So this is the fourth type of argument, the kind that doesn’t make it into Philosophy 101 books. The trope namer is Ethnic Tension, but it applies to anything that can be identified as a Vague Concept, or paired opposing Vague Concepts, which you can use emotivist thinking to load with good or bad karma.

IV.

Now motte-and-bailey stands revealed:

Somebody says God doesn’t exist. Another person objects that God is just a name for the order and beauty in the universe. Then this somehow helps defend the position that God is a supernatural creator being. How does that even happen?

The two-step works like this. First, load “religion” up with good karma by pitching it as persuasively as possible. “Religion is just the belief that there’s beauty and order in the universe.”

Wait, I think there’s beauty and order in the universe!

“Then you’re religious too. We’re all religious, in the end, because religion is about the common values of humanity and meaning and compassion sacrifice beauty of a sunrise Gandhi Buddha Sufis St. Francis awe complexity humility wonder Tibet the Golden Rule love.”

Then, once somebody has a strongly positive General Factor Of Religion, it doesn’t really matter whether someone believes in a creator God or not. If they have any predisposition whatsoever to do so, they’ll find a reason to let themselves. If they can’t manage it, they’ll say it’s true “metaphorically” and continue to act upon every corollary of it being true.

(“God is just another name for the beauty and order in the universe. But Israel definitely belongs to the Jews, because the beauty and order of the universe promised it to them.”)

If you’re an atheist, you probably have a lot of important issues on which you want people to consider non-religious answers and policies. And if somebody can maintain good karma around the “religion” concept by believing God is the order and beauty in the universe, then that can still be a victory for religion even if it is done by jettisoning many traditionally “religious” beliefs. In this case, it is useful to think of the “order and beauty” formulation as a “motte” for the “supernatural creator” formulation, since it’s allowing the entire concept to be defended.

But even this is giving people too much credit, because the existence of God is a (sort of) factual question. From yesterday’s post:

Suppose we’re debating feminism, and I defend it by saying it really is important that women are people, and you attack it by saying that it’s not true that all men are terrible. What is the real feminism we should be debating? Why would you even ask that question? What is this, some kind of dumb high school debate club? Who the heck thinks it would be a good idea to say ‘Here’s a vague poorly-defined concept that mind-kills everyone who touches it – quick, should you associate it with positive affect or negative affect?!’

Who the heck thinks that? Everybody, all the time.

Once again, if I can load the concept of “feminism” with good karma by making it so obvious nobody can disagree with it, then I have a massive “home field advantage” when I’m trying to convince anyone of any particular policy that can go under the name “feminism”, even if it’s unrelated to the arguments that gave feminism good karma in the first place.

Or if I’m against feminism, I just post quotes from the ten worst feminists on Tumblr again and again until the entire movement seems ridiculous and evil, and then you’ll have trouble convincing anyone of anything feminist. “That seems reasonable…but wait, isn’t that a feminist position? Aren’t those the people I hate?”

(compare: most Americans oppose Obamacare, but most Americans support each individual component of Obamacare when it is explained without using the word “Obamacare”)

V.

We have our node “Israel”, which has either good or bad karma. Then there’s another node close by marked “Palestine”. We would expect these two nodes to be pretty anti-correlated. When Israel has strong good karma, Palestine has strong bad karma, and vice versa.

Now suppose you listen to Noam Chomsky talk about how strongly he supports the Palestinian cause and how much he dislikes Israel. One of two things can happen:

“Wow, a great man such as Noam Chomsky supports the Palestinians! They must be very deserving of support indeed!”

Or: “Chomsky supports the Palestinians? Maybe he’s not as great a thinker as I believed.”

So now there is a third node, Noam Chomsky, that connects to both Israel and Palestine, and we have discovered it is positively correlated with Palestine and negatively correlated with Israel. It probably has a pretty low weight, because there are a lot of reasons to care about Israel and Palestine other than Chomsky, and a lot of reasons to care about Chomsky other than Israel and Palestine, but the connection is there.

I don’t know anything about neural nets, so maybe this system isn’t actually a neural net, but whatever it is I’m thinking of, it’s a structure where eventually the three nodes reach some kind of equilibrium. If we start with someone liking Israel and Chomsky, but not Palestine, then either that’s going to shift a little bit towards liking Palestine, or shift a little bit towards disliking Chomsky.

Now we add more nodes. Cuba seems to really support Palestine, so they get a positive connection with a little bit of weight there. And I think Noam Chomsky supports Cuba, so we’ll add a connection there as well. Cuba is socialist, and that’s one of the most salient facts about it, so there’s a heavily weighted positive connection between Cuba and socialism. Palestine kind of makes noises about socialism but I don’t think they have any particular economic policy, so let’s say very weak direct connection. And Che is heavily associated with Cuba, so you get a pretty big Che – Cuba connection, plus a strong direct Che – socialism one. And those pro-Palestinian students who threw rotten fruit at an Israeli speaker also get a little path connecting them to “Palestine” – hey, why not – so that if you support Palestine you might be willing to excuse what they did and if you oppose them you might be a little less likely to support Palestine.
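The node-and-weight picture in the last few paragraphs can be made concrete as a toy relaxation network. This is only a sketch under invented numbers – the node names, edge weights, starting values, and update rule are all assumptions for illustration, not anything from the neural-net literature:

```python
# Toy "karma network": each node holds a sentiment in [-1, 1], and signed
# weighted edges pull connected nodes toward (positive weight) or away from
# (negative weight) each other. Every name, weight, and starting value here
# is invented for illustration.
import math

karma = {"Israel": 0.8, "Palestine": -0.8, "Chomsky": 0.8}

edges = [
    ("Israel", "Palestine", -0.9),   # heavily weighted opposition
    ("Chomsky", "Palestine", +0.3),  # Chomsky supports Palestine (low weight)
    ("Chomsky", "Israel", -0.3),     # ...and dislikes Israel (low weight)
]

def step(state, edges, rate=0.1):
    """One relaxation step: each node drifts toward the weighted sum of its
    neighbors' karma, squashed back into [-1, 1] by tanh."""
    pull = {node: 0.0 for node in state}
    for a, b, w in edges:
        pull[a] += w * state[b]
        pull[b] += w * state[a]
    return {node: math.tanh(state[node] + rate * pull[node]) for node in state}

state = dict(karma)
for _ in range(50):
    state = step(state, edges)

print({node: round(v, 2) for node, v in state.items()})
```

Starting from “likes Israel and Chomsky, dislikes Palestine”, the low-weight node is the one that gives: the heavily weighted Israel–Palestine opposition stays put while the Chomsky node drifts downward – the “shift a little bit towards disliking Chomsky” branch of the equilibrium.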

Back up. This model produces crazy results, like that people who like Che are more likely to oppose Israel bombing Gaza. That’s such a weird, implausible connection that it casts doubt upon the entire…

Oh. Wait. Yeah. Okay.

I think this kind of model, in its efforts to sort itself out into a ground state, might settle on some kind of General Factor Of Politics, which would probably correspond pretty well to the left-right axis.

In Five Case Studies On Politicization, I noted how fresh new unpoliticized issues, like the Ebola epidemic, were gradually politicized by connecting them to other ideas that were already part of a political narrative. For example, a quarantine against Ebola would require closing the borders. So now there’s a weak negative link between “Ebola quarantine” and “open borders”. If your “open borders” node has good karma, now you’re a little less likely to support an Ebola quarantine. If “open borders” has bad karma, a little more likely.

I also tried to point out how you could make different groups support different things by changing your narrative a little:

Global warming has gotten inextricably tied up in the Blue Tribe narrative: Global warming proves that unrestrained capitalism is destroying the planet. Global warming disproportionately affects poor countries and minorities. Global warming could have been prevented with multilateral action, but we were too dumb to participate because of stupid American cowboy diplomacy. Global warming is an important cause that activists and NGOs should be lauded for highlighting. Global warming shows that Republicans are science denialists and probably all creationists. Two lousy sentences on “patriotism” aren’t going to break through that.

If I were in charge of convincing the Red Tribe to line up behind fighting global warming, here’s what I’d say:

In the 1950s, brave American scientists shunned by the climate establishment of the day discovered that the Earth was warming as a result of greenhouse gas emissions, leading to potentially devastating natural disasters that could destroy American agriculture and flood American cities. As a result, the country mobilized against the threat. Strong government action by the Bush administration outlawed the worst of these gases, and brilliant entrepreneurs were able to discover and manufacture new cleaner energy sources. As a result of these brave decisions, our emissions stabilized and are currently declining.

Unfortunately, even as we do our part, the authoritarian governments of Russia and China continue to industrialize and militarize rapidly as part of their bid to challenge American supremacy. As a result, Communist China is now by far the world’s largest greenhouse gas producer, with the Russians close behind. Many analysts believe Putin secretly welcomes global warming as a way to gain access to frozen Siberian resources and weaken the more temperate United States at the same time. These countries blow off huge disgusting globs of toxic gas, which effortlessly cross American borders and disrupt the climate of the United States. Although we have asked them to stop several times, they refuse, perhaps egged on by major oil producers like Iran and Venezuela who have the most to gain by keeping the world dependent on the fossil fuels they produce and sell to prop up their dictatorships.

We need to take immediate action. While we cannot rule out the threat of military force, we should start by using our diplomatic muscle to push for firm action at top-level summits like the Kyoto Protocol. Second, we should fight back against the liberals who are trying to hold up this important work, from big government bureaucrats trying to regulate clean energy to celebrities accusing people who believe in global warming of being ‘racist’. Third, we need to continue working with American industries to set an example for the world by decreasing our own emissions in order to protect ourselves and our allies. Finally, we need to punish people and institutions who, instead of cleaning up their own carbon, try to parasitize off the rest of us and expect the federal government to do it for them.

In the first paragraph, “global warming” gets positively connected to concepts like “poor people and minorities” and “activists and NGOs”, and gets negatively connected to concepts like “capitalism”, “American cowboy diplomacy”, and “creationists”. That gives global warming really strong good karma if (and only if) you like the first two concepts and hate the last three.

In the next three paragraphs, “global warming” gets positively connected to “America”, “the Bush administration” and “entrepreneurs”, and negatively connected to “Russia”, “China”, “oil producing dictatorships like Iran and Venezuela”, “big government bureaucrats”, and “welfare parasites”. This is going to appeal to, well, a different group.

Notice two things here. First, the exact connection isn’t that important, as long as we can hammer in the existence of a connection. I could probably just say GLOBAL WARMING! COMMUNISM! GLOBAL WARMING! COMMUNISM! GLOBAL WARMING! COMMUNISM! several hundred times and have the same effect if I could get away with it (this is the principle behind attack ads which link a politician’s face to scary music and a very concerned voice).

Second, there is no attempt whatsoever to challenge the idea that the issue at hand is the positive or negative valence of a concept called “global warming”. At no point is it debated what the solution is, which countries the burden is going to fall on, or whether any particular level of emission cuts would do more harm than good. It’s just accepted as obvious by both sides that we debate “for” or “against” global warming, and if the “for” side wins then they get to choose some solution or other or whatever oh god that’s so boring can we get back to Israel vs. Palestine.

Some of the scientists working on IQ have started talking about “hierarchical factors”, meaning that there’s a general factor of geometry intelligence partially correlated with other things into a general factor of mathematical intelligence partially correlated with other things into a general factor of total intelligence.

I would expect these sorts of things to work the same way. There’s a General Factor Of Global Warming that affects attitudes toward pretty much all proposed global warming solutions, which is very highly correlated with a lot of other things to make a General Factor Of Environmentalism, which itself is moderately highly correlated with other things into the General Factor Of Politics.

VI.

Speaking of politics, a fruitful digression: what the heck was up with the Ashley Todd mugging hoax in 2008?

Back in the 2008 election, a McCain campaigner claimed (falsely, it would later turn out) to have been assaulted by an Obama supporter. She said he slashed a “B” (for “Barack”) on her face with a knife. This got a lot of coverage, and according to Wikipedia:

John Moody, executive vice president at Fox News, commented in a blog on the network’s website that “this incident could become a watershed event in the 11 days before the election,” but also warned that “if the incident turns out to be a hoax, Senator McCain’s quest for the presidency is over, forever linked to race-baiting.”

Wait. One Democrat, presumably not acting on Obama’s direct orders, attacks a Republican woman. And this is supposed to alter the outcome of the entire election? In what universe does one crime by a deranged psychopath change whether Obama’s tax policy or job policy or bombing-scary-foreigners policy is better or worse than McCain’s?

Even if we’re willing to make the irresponsible leap from “Obama is supported by psychopaths, therefore he’s probably a bad guy,” there are like a hundred million people on each side. Psychopaths are usually estimated at about 1% of the population, so any movement with a million people will already have 10,000 psychopaths. Proving the existence of a single one changes nothing.

I think insofar as this affected the election – and everyone seems to have agreed that it might have – it hit President Obama with a burst of bad karma. Obama something something psychopath with a knife. Regardless of the exact content of those something somethings, is that the kind of guy you want to vote for?

Then when it was discovered to be a hoax, it was McCain something something race-baiting hoaxer. Now he’s got the bad karma!

This sort of conflation between a cause and its supporters really only makes sense in the emotivist model of arguing. I mean, this shouldn’t even get dignified with the name ad hominem fallacy. Ad hominem fallacy is “McCain had sex with a goat, therefore whatever he says about taxes is invalid.” At least it’s still the same guy. This is something the philosophy textbooks can’t bring themselves to believe really exists, even as a fallacy.

But if there’s a General Factor Of McCain, then anything bad remotely connected to the guy – goat sex, lying campaigners, whatever – reflects on everything else about him.

This is the same pattern we see in Israel and Palestine. How many times have you seen a news story like this one: “Israeli speaker hounded off college campus by pro-Palestinian partisans throwing fruit. Look at the intellectual bankruptcy of the pro-Palestinian cause!” It’s clearly intended as an argument for something other than just not throwing fruit at people. The causation seems to go something like “These particular partisans are violating the usual norms of civil discussion, therefore they are bad, therefore something associated with Palestine is bad, therefore your General Factor of Pro-Israeliness should become more strongly positive, therefore it’s okay for Israel to bomb Gaza.” Not usually said in those exact words, but the thread can be traced.

VII.

Here is a prediction of this model: we will be obsessed with what concepts we can connect to other concepts, even when the connection is totally meaningless.

Suppose I say: “Opposing Israel is anti-Semitic”. Why? Well, the Israelis are mostly Jews, so in a sense by definition being anti- them is “anti-Semitic”, broadly defined. Also, p(opposes Israel|is anti-Semitic) is probably pretty high, which sort of lends some naive plausibility to the idea that p(is anti-Semitic|opposes Israel) is at least higher than it otherwise could be.
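That last step is just Bayes’ rule, and it is worth seeing how weak the “naive plausibility” really is. The probabilities below are made-up numbers for illustration, not survey data:

```python
# Bayes' rule with made-up numbers, purely to show why a high
# p(opposes Israel | anti-Semitic) licenses only a weak update on
# p(anti-Semitic | opposes Israel) when the base rate is low.
p_anti = 0.02                # hypothetical base rate of anti-Semites
p_opp_given_anti = 0.95      # hypothetical: nearly all of them oppose Israel
p_opp_given_not = 0.30       # hypothetical: many non-anti-Semites do too

p_opp = p_opp_given_anti * p_anti + p_opp_given_not * (1 - p_anti)
p_anti_given_opp = p_opp_given_anti * p_anti / p_opp

print(round(p_anti_given_opp, 3))  # ~0.061: above the 2% base rate, but small
```

So even under these generous assumptions, “opposes Israel” moves the odds of anti-Semitism from one in fifty to about one in sixteen – “higher than it otherwise could be,” and not much more.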

Maybe we do our research and we find exactly what percent of opponents of Israel endorse various anti-Semitic statements like “I hate all Jews” or “Hitler had some bright ideas”. We’ve replaced the symbol with the substance. Problem solved, right?

Maybe not. In the same sense that people can agree on all of the characteristics of Pluto – its diameter, the eccentricity of its orbit, its number of moons – and still disagree on the question “Is Pluto a planet”, one can agree on every characteristic of every Israel opponent and still disagree on the definitional question “Is opposing Israel anti-Semitic?”

(fact: it wasn’t until proofreading this essay that I realized I had originally written “Is Israel a planet?” and “Is opposing Pluto anti-Semitic?” I would like to see Jonathan Haidt hypnotize people until they can come up with positive arguments for those propositions.)

I think it’s about drawing a line between the concept “anti-Semitism” and “oppose Israel”. If your head is screwed on right, you assign anti-Semitism some very bad karma. So if we can stick a thick line between “anti-Semitism” and “oppose Israel”, then you’re going to have very bad feelings about opposition to Israel and your General Factor Of Pro-Israeliness will go up.

Notice that this model is transitive, but shouldn’t be.

That is, let’s say we’re arguing over the definition of anti-Semitism, and I say “anti-Semitism just means anything that hurts Jews”. This is a dumb definition, but let’s roll with it.

First, I load “anti-Semitism” with lots of negative affect. Hitler was anti-Semitic. The pogroms in Russia were anti-Semitic. The Spanish Inquisition was anti-Semitic. Okay, negative affect achieved.

Then I connect “wants to end the Israeli occupation of Palestine” to “anti-Semitism”. Now wanting to end the Israeli occupation of Palestine has lots of negative affect attached to it.

It sounds dumb when you put it like that, but when you put it like “You’re anti-Semitic for wanting to end the occupation” it’s a pretty damaging argument.

This is trying to be transitive. It’s trying to say “anti-occupation = anti-Semitism, anti-Semitism = evil, therefore anti-occupation = evil”. If this were arithmetic, it would work. But there’s no Transitive Property Of Concepts. If anything, concepts are more like sets. The logic is “anti-occupation is a member of the set anti-Semitic, the set anti-Semitic contains members that are evil, therefore anti-occupation is evil”, which obviously doesn’t check out.

(compare: “I am a member of the set ‘humans’, the set ‘humans’ contains the Pope, therefore I am the Pope”.)
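The difference between the arithmetic reading and the set reading can be written out directly. The labels below are stand-ins chosen to mirror the argument, not claims about any category’s real membership:

```python
# Sets make the fallacy explicit: membership in a set does not transfer
# the properties of the set's other members. The labels are stand-ins.
evil = {"the Holocaust", "the pogroms", "the Inquisition"}

# Define "anti-Semitic" broadly, as the argument above does:
anti_semitic = evil | {"opposing the occupation"}

x = "opposing the occupation"
assert x in anti_semitic                               # x is in the set
assert any(member in evil for member in anti_semitic)  # the set has evil members
assert x not in evil                                   # ...but x isn't one of them

# Same shape as the Pope example:
humans = {"me", "the Pope"}
assert "me" in humans and "the Pope" in humans
assert "me" != "the Pope"  # shared membership doesn't make two members identical
```

Equality is transitive; set membership is not, which is exactly the gap the rhetorical trick exploits.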

Anti-Semitism is generally considered evil because a lot of anti-Semitic things involve killing or dehumanizing Jews. Opposing the Israeli occupation of Palestine doesn’t kill or dehumanize Jews, so even if we call it “anti-Semitic” by definition, there’s no reason for our usual bad karma around anti-Semitism to transfer over. But by an unfortunate rhetorical trick, it does – you can gather up bad karma into “anti-Semitic” and then shoot it at the “occupation of Palestine” issue just by clever use of definitions.

This means that if you can come up with sufficiently clever definitions and convince your opponent to accept them, you can win any argument by default just by having a complex system of mirrors in place to reflect bad karma from genuinely evil things to the things you want to tar as evil. This is essentially the point I make in Words, Words, Words.

If we kinda tweak the definition of “anti-Semitism” to be “anything that inconveniences Jews”, we can pull a trick where we leverage people’s dislike of Hitler to make them support the Israeli occupation of Palestine – but in order to do that, we need to get everyone on board with our slightly non-standard definition. Likewise, the social justice movement insists on their own novel definitions of words like “racism” that don’t match common usage, any dictionary, or etymological history – but which do perfectly describe a mirror that reflects bad karma toward opponents of social justice while making it impossible to reflect any bad karma back. Overreliance on this mechanism explains why so many social justice debates end up being about whether a particular mirror can be deployed to transfer bad karma in a specific case (“are trans people privileged?!”) rather than any feature of the real world.

But they are hardly alone. Compare: “Is such and such an organization a cult?”, “Is such and such a policy socialist?”, “Is abortion or capital punishment or war murder?” All entirely about whether we’re allowed to reflect bad karma from known sources of evil to other topics under discussion.

Look around you. Just look around you. Have you worked out what we’re looking for? Correct. The answer is The Worst Argument In The World. Only now, we can explain why it works.

VIII.

From the self-esteem literature, I gather that the self is also a concept that can have good or bad karma. From the cognitive dissonance literature, I gather that the self is actively involved in maintaining good karma around itself through as many biases as it can manage to deploy.

I’ve mentioned this study before. Researchers make participants fill out a questionnaire about their romantic relationships. Then they pretend to “grade” the questionnaire, actually assigning scores at random. Half the participants are told their answers indicate they have the tendency to be very faithful to their partner. The other half are told they have very low faithfulness and their brains just aren’t built for fidelity. Then they ask the participants their opinion on staying faithful in a relationship – very important, moderately important, or not so important?

People who are told they are bad at fidelity show a strong tendency to say that fidelity is unimportant, and people who are told they are especially faithful show an equally strong tendency to say that fidelity is a great and noble virtue that must be protected.

The researchers conclude that people want to have high self-esteem. If I am terrible at fidelity, and fidelity is the most important virtue, that makes me a terrible person. If I am terrible at fidelity and fidelity doesn’t matter, I’m fine. If I am great at fidelity, and fidelity is the most important virtue, I can feel pretty good about myself.

This doesn’t seem too surprising. It’s just the more subtle version of the effect where white people are a lot more likely to be white supremacists than members of any other race. Everyone likes to hear that they’re great. The question is whether they can defend it and fit it in with their other ideas. The answer is “usually yes, because people are capable of pretty much any contortion of logic you can imagine and a lot that you can’t”.

I had a bad experience when I was younger where a bunch of feminists attacked and threatened me because of something I wrote. It left me kind of scarred. More importantly, the shape of that scar was a big anticorrelated line between self-esteem and the “feminism” concept. If feminism has lots of good karma, then I have lots of bad karma, because I am a person feminists hate. If feminism has lots of bad karma, then I look good by comparison, the same way it’s pretty much a badge of honor to be disliked by Nazis. The result was a permanent haze of bad karma around “feminism” unconnected to any specific feminist idea, which I have to be constantly on the watch for if I want to be able to evaluate anything related to feminism fairly or rationally.

Good or bad karma, when applied to yourself, looks like high or low self-esteem; when applied to groups, it looks like high or low status. In the giant muddle of a war for status that we politely call “society”, this makes beliefs into weapons and the karma loading of concepts into the difference between lionization and dehumanization.

The Trope Namer for emotivist arguments is “ethnic tension”, and although it’s most obvious in the case of literal ethnicities like the Israelis and the Palestinians, the ease with which concepts become attached to different groups creates a whole lot of “proxy ethnicities”. I’ve written before about how American liberals and conservatives are seeming less and less like people who happen to have different policy prescriptions, and more like two different tribes engaged in an ethnic conflict quickly approaching Middle East levels of hostility. More recently, a friend on Facebook described the-thing-whose-name-we-do-not-speak-lest-it-appear-and-destroy-us-all, the one involving reproductively viable worker ants, as looking more like an ethnic conflict about who is oppressing whom than any real difference in opinions.

Once a concept has joined up with an ethnic group, either a real one or a makeshift one, it’s impossible to oppose the concept without simultaneously lowering the status of the ethnic group, which is going to start at least a little bit of a war. Worse, once a concept has joined up with an ethnic group, one of the best ways to argue against the concept is to dehumanize the ethnic group it’s working with. Dehumanizing an ethnic group has always been easy – just associate them with a disgust reaction, portray them as conventionally unattractive and unlovable and full of all the worst human traits – and now it is profitable as well, since it’s one of the fastest ways to load bad karma into an idea you dislike.

The tenth virtue is precision. One comes and says: The quantity is between 1 and 100. Another says: the quantity is between 40 and 50. If the quantity is 42 they are both correct, but the second prediction was more useful and exposed itself to a stricter test. What is true of one apple may not be true of another apple; thus more can be said about a single apple than about all the apples in the world. The narrowest statements slice deepest, the cutting edge of the blade. As with the map, so too with the art of mapmaking: The Way is a precise Art. Do not walk to the truth, but dance. On each and every step of that dance your foot comes down in exactly the right spot. Each piece of evidence shifts your beliefs by exactly the right amount, neither more nor less. What is exactly the right amount? To calculate this you must study probability theory. Even if you cannot do the math, knowing that the math exists tells you that the dance step is precise and has no room in it for your whims.

The official description is of literal precision, as in specific numerical precision in probability updates. But is there a secret interpretation of this virtue?


Precision as separation. Once you’re debating “religion”, you’ve already lost. Precision as sticking to a precise question, like “Is the first chapter of Genesis literally true?” or “Does Buddhist meditation help treat anxiety disorders?” and trying to keep these issues as separate from any General Factor Of Religiousness as humanly possible. Precision such that “God the supernatural Creator exists” and “God the order and beauty in the Universe exists” are as carefully sequestered from one another as “Did the defendant kill his wife?” and “Did the defendant kill the President?”

I want to end by addressing a point a commenter made in my last post on motte-and-bailey:

In the real world, the particular abstract questions aren’t what matter – the groups and people are what matter. People get things done, and they aren’t particularly married to particular abstract concepts, they are married to their values and their compatriots. In order to deal with reality, we must attack and defend groups and individuals. That does not mean forsaking logic. It requires dealing with obfuscating tactics like those you outline above, but that’s not even a real downside, because if you flee into the narrow, particular questions all you’re doing is covering your eyes to avoid perceiving the monsters that will still make mincemeat of your attempts to change things.

The world is a scary place, full of bad people who want to hurt you, and in the state of nature you’re pretty much obligated to engage in whatever it takes to survive.

But instead of sticking with the state of nature, we have the ability to form communities built on mutual disarmament and mutual cooperation. Despite artificially limiting themselves, these communities become stronger than the less-scrupulous people outside them, because they can work together effectively and because they can boast a better quality of life that attracts their would-be enemies to join them. At least in the short term, these communities can resist races to the bottom and prevent the use of personally effective but negative-sum strategies.

One such community is the kind where members try to stick to rational discussion as much as possible. These communities are definitely better able to work together, because they have a powerful method of resolving empirical disputes. They definitely offer a better quality of life, because you don’t have to deal with constant insult wars and personal attacks. And the existence of such communities provides positive externalities to the outside world, since they are better able to resolve difficult issues and find truth.

But forming a rationalist community isn’t just about having the will to discuss things well. It’s also about having the ability. Overcoming bias is really hard, and so the members of such a community need to be constantly trying to advance the art and figure out how to improve their discussion tactics.

As such, it’s acceptable to try to determine and discuss negative patterns of argument, even if those patterns of argument are useful and necessary weapons in a state of nature. If anything, understanding them makes them easier to use if you’ve got to use them, and makes them easier to recognize and counter from others, giving a slight advantage in battle if that’s the kind of thing you like. But moving them from unconscious to conscious also gives you the crucial choice of when to deploy them and allows people to try to root out ethnic tension in particular communities.

I mean, I wouldn’t have thought to do that if I hadn’t seen a Tumblr post weeks ago which made this joke. You’d have to realize that it’s the words that are important, rather than the general concept or something else entirely, and that’s not obvious from the post.

Yeah. It sounded very much like the kind of thing where people in the know will figure it out and people who aren’t in the know have no recourse but to ask, and those things are more frequently totally impenetrable than solvable by a relatively straightforward Google.

I thought that too, but I tried the Google search anyway and was very surprised at the very first result being so straightforward. I thought it would take a bit of digging. My first thought was an earlier SSC article mentioning a particular drug in a circumspect way to avoid spam (and a drawn-out hint-giving in the comments which, if I remember correctly, I figured out well after the original asker), and my second was “that sounds like something that could be said of neoreaction, but I highly doubt that would be censored here”.

I suspect that if you try to think up examples of things which cannot be figured out by a Google search, more than half of what you come up with will be solvable by a Google search. If you indulge me and actually perform such an experiment, please do report the results!

(compare: most Americans oppose Obamacare, but most Americans support each individual component of Obamacare when it is explained without using the word “Obamacare”)

Caveat: while this is posted a lot, it’s not really coherent. The popular parts of the Affordable Care Act are popular, but the unpopular parts are really really unpopular, and the whole isn’t the average of its parts. There’s a reason your link quickly glosses over the individual mandate, and related reasons it completely ignores a number of less well-known but still highly controversial components.

Many opponents of the Affordable Care Act still hate it without knowing its technical details precisely because they don’t like the people who passed it, true. But they’d probably not like many of the technical details, and saying that they like three of the most popular details doesn’t actually show that.

Researchers have participants fill out a questionnaire about their romantic relationships. Then they pretend to “grade” the questionnaire, actually assigning scores at random.

Yet if /you/ want to do research, it’s IRBs this, minimal harm that…

Heh.

Here is a prediction of this model: we will be obsessed with what concepts we can connect to other concepts, even when the connection is totally meaningless.
I don’t know anything about neural nets, so maybe this system isn’t actually a neural net, but whatever it is I’m thinking of, it’s a structure where eventually the three nodes reach some kind of equilibrium….
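Whatever the right formal name for it, the picture being gestured at is easy to sketch. The following toy model is entirely my own construction (the node names, weights, damping, and clipping rules are all assumptions, not anything from the post): each concept holds a karma value in [-1, 1], signed connections pull linked concepts together or apart, one node is pinned as an axiomatic bad, and the system is iterated until the nodes reach equilibrium:

```python
# Toy "karma network" (my own construction, not from the post).
# Each node's karma is nudged toward the weighted sum of its
# neighbors' karma, clipped to [-1, 1]; pinned nodes never move.

def settle(weights, karma, fixed=(), damping=0.5, iters=200):
    karma = list(karma)
    n = len(karma)
    for _ in range(iters):
        new = []
        for i in range(n):
            if i in fixed:
                new.append(karma[i])  # axiomatic nodes stay put
                continue
            pull = sum(weights[i][j] * karma[j] for j in range(n) if j != i)
            v = (1 - damping) * karma[i] + damping * pull
            new.append(max(-1.0, min(1.0, v)))
        karma = new
    return karma

# Hypothetical nodes: 0 = "Hitler" (pinned bad), 1 = "anti-Semitism",
# 2 = "policy X". Positive weight = association.
w = [[0.0, 0.9, 0.0],
     [0.9, 0.0, 0.8],
     [0.0, 0.8, 0.0]]
result = settle(w, [-1.0, 0.0, 0.0], fixed={0})
print(result)  # node 2 ends up with bad karma purely by association
```

The policy node settles around -0.8 even though it has no direct connection to the pinned node: the equilibrium encodes exactly the two-steps-removed karma transfer the post describes.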

This seems like a really, really obviously flawed way of looking at the universe at first, and leads to such obviously bad conclusions. But let’s try to model it a different way.

If you like the actions of the As and dislike the actions of the Bs, someone’s opinions on either group aren’t just going to hit points on a General Factor of Politics: the accuracy of those opinions gives you information about that person’s expertise, and conveniently your priors are all your initial likes and dislikes. This becomes far more interesting if you think of it in terms of denialized updating.

Agree that the ACA is a bad choice, because even if people like the general goals or even strategies of a proposal, they might not like the actual details of the implementation, and since the law has been–in parts–in effect for some time now, people have experience with it.
For example, someone may think that a mandate is good if the fine is $x but cruel if it is $3x, and thus quite rationally give answers in line with the polls (even apart from the criticisms above).

The popular parts of the Affordable Care Act are popular, but the unpopular parts are really really unpopular, and the whole isn’t the average of its parts. There’s a reason your link quickly glosses over the individual mandate, and related reasons it completely ignores a number of less well-known but still highly controversial components.

This strikes me as a case of “when the tool you have is a hammer, every problem looks like a nail”. Scott identifies some sort of actual fallacy or problem, and then goes around applying it far too broadly, to cases that seem to pattern-match to that problem, even if the actual details don’t match up.

I’m beginning to think that this sequence of posts may represent genuine progress towards a General Theory of How to Productively Discuss Political Things That Are Hard to Productively Discuss. Wouldn’t that be something?

I agree, and I wish it was all collected into a paper book that I could recommend to that majority which still doesn’t read long things on screens. I promise I’d buy at least 250 dollars worth of copies of that book.

I’ve recommended HPMOR to lots of people, but the only measure that I was ever very successful with was to print out the first couple of chapters, put them in a nice binding, and give them out as gifts…

That may have more to do with the social baggage around gift-giving than any preference for paper. I have had success recommending HPMOR by writing the website link and recommendation on a fancy card and giving it as a gift at Christmas-time (along with an Amazon gift-card so as not to seem cheap).

Probably the wrong place to shill this, but: I work at a self-publishing company. I would love a fancy bound hardcover copy of SSC essays. SCOTT IF YOU’RE LISTENING, PLEASE GIVE US THIS GIFT FOR CHRISTMAS

To be quite frank, I think a physical book of Scott’s posts on politics and reasonable discussion would be much more effective than HPMOR or the Sequences, and I would happily purchase or pre-order it. If we get enough people to commit (and Scott obliges us), it can happen.

But forming a rationalist community isn’t just about having the will to discuss things well. It’s also about having the ability. Overcoming bias is really hard, and so the members of such a community need to be constantly trying to advance the art and figure out how to improve their discussion tactics.

A few related thoughts.

“Rationality” is hard to come by in the areas you’re discussing because feedback is missing. People correct themselves based on incentives. Get things wrong when predicting stock prices and you lose money. Buy a house in the wrong neighborhood and you get polar bear hunted. Get a policy question wrong (I hope that we can agree that although there may not be a “right” policy in some matter, there are infinite numbers of stupid proposals that can be held for infinite numbers of stupid reasons) and the consequence is what? Actually nothing. Now, if you’re just some guy then no big deal about being wrong, right? On the other hand, when you’re a faceless cog in the machine and your stupid ideas steer the machine in a wrong and destructive direction, you still don’t feel any personal consequences. A rationalist should conclude that that system will tend towards disaster in the long run, for the same reason that cancer kills people.

Ok, nothing punishes you for having wrong beliefs. What about having correct beliefs? Any benefit for that? How about if those correct beliefs are really unpopular?

Can you please restrain yourself from doing this? The rest of your comment is highly intelligent and you’re distracting from that by throwing in this sentence, which I would say is 0 of true, necessary to the rest of your comment, or kind, and you should be willing to admit is at most 1 out of 3.

He’s talking about Matt Yglesias, who described his attack here. (More details were discussed later; since the attackers were all black and nothing was stolen, it seems reasonable to suspect it was a hate crime.)

I wouldn’t have thought this was a real thing (I hadn’t even really heard of it) until it happened to someone I know. Walking through downtown San Francisco in the early evening, he got the shit beat out of him. They took his phone and tried to take his coat, but didn’t touch his wallet.

He came out of it with a serious head injury and no memory of the event, but was shown security camera footage of it by the police.

I doubt he would have even mentioned it to me — we’re acquaintances, not really friends, at the time he was either a metamour or an ex-metamour — except that he showed up to an event the next day with a huge black eye looking like absolute shit.

It occurs to me that if, hypothetically speaking, you were discussing social-justice-type questions in a diverse space that somehow managed to keep some reasonable norms going, you actually would have a feedback mechanism: the reactions of the other participants. Suppose I make some suggestion about how best to meet the needs of [marginalized group], and a [member of marginalized group] says something like “What you just said makes me feel really uncomfortable.” (Contrast Tumblr, where people don’t say that, but instead make their displeasure known in other ways.) That’s evidence that my suggestion as I gave it wasn’t quite on the mark, because not feeling threatened is an important human need. (How strong evidence it is depends on all different factors, of course.)

You couldn’t consistently arrive at the correct answers to policy questions this way, but I think you’d learn a surprising amount.

It’s not an “I win” button. Of course if it keeps getting abused cynically for political gains then that dilutes its evidentiary value, but I don’t think that happens anywhere near as often as you seem to think it does. Far more common, I think, is the situation where one party genuinely feels threatened and the other doesn’t want that to be counted as points against their side, with the usual justification being that you shouldn’t allow meta-level ingroup-outgroup considerations to weigh into object-level value judgments. Which is a valid point, but I think we sometimes forget that in matters as messy as these you can’t draw a bright line between the object-level and the meta-level.

Also, it works both ways; if the rationalists in the room all think something is fishy, then that tells you something useful too.

Yes, that’s a real concern (hence why I specified “cynically”) and we can’t always give people what they say they need. At the same time, I do feel like much of this community is too quick to deny people what they say they need on the basis of arguments from abstract principles, which can easily be a cover for our own tribal allegiances.

The node structure thing you describe sounds like PageRank with reciprocal or undirected edges; interesting to think that PageRank applied straightforwardly to moral reasoning might work well (or at least better than not at all).

Moral reasoning, not really (although the eigenmorality thing is sort of interesting). But on e.g. formation of political beliefs, levels of alcohol consumption or anything else where you’re influenced by those close to you in the network, you bet.
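For the curious, here is roughly what “PageRank with undirected edges” cashes out to: power iteration on a symmetric adjacency matrix, i.e. eigenvector centrality. (The tiny example graph and the +I shift below are my own choices; real PageRank additionally normalizes by out-degree and adds a damping/teleport term.)

```python
# Eigenvector centrality by power iteration on an undirected graph.
# Iterating v <- (A + I) v and renormalizing converges to the
# dominant eigenvector; the +I shift prevents the oscillation that
# plain A*v exhibits on bipartite graphs.

def eigen_scores(adj, iters=100):
    n = len(adj)
    v = [1.0] * n
    for _ in range(iters):
        v = [v[i] + sum(adj[i][j] * v[j] for j in range(n))
             for i in range(n)]
        m = max(abs(x) for x in v) or 1.0
        v = [x / m for x in v]
    return v

# Tiny example graph: a path 0-1-2 plus an isolated node 3.
adj = [[0, 1, 0, 0],
       [1, 0, 1, 0],
       [0, 1, 0, 0],
       [0, 0, 0, 0]]
scores = eigen_scores(adj)
print(scores)  # middle node scores highest; isolated node decays to ~0
```

On this graph the scores converge to (1/√2, 1, 1/√2, 0): the middle node, endorsed by two neighbors, dominates, and the node nobody links to fades away.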

Really? POPEHAT? All the blogs on my sidebar, and they complain about Popehat? The people who post hilarious articles about why you shouldn’t start frivolous defamation lawsuits? And also Clark, but we don’t talk about him?

Really, if Popehat is outside the Overton Window now it’s narrowed to a peephole.

“One such community is the kind where members try to stick to rational discussion among themselves. These communities are definitely better able to work together, because they have a powerful method of resolving empirical disputes. They’re definitely better quality of life, because you don’t have to deal with constant insult wars and personal attacks. And the existence of such communities provides positive externalities to the outside world, since they are better able to resolve difficult issues and find truth.

But forming a rationalist community isn’t just about having the will to discuss things well. It’s also about having the ability. Overcoming bias is really hard, and so the members of such a community need to be constantly trying to advance the art and figure out how to improve their discussion tactics.”

It’s meant to concisely make a point in a blunt and performative manner, which is Max’s shtick in general.

To unpack: the side that’s superior by local standards is not invulnerable, and can be defeated. (See: the Mongols.) Standards like the ones that exist locally seem to correlate with reluctance to take up serious self-defense.

You can sit down and debate another rationalist; you can’t sit down and debate Genghis Khan. But one of the problems with this is that people who don’t actually practice rationality in this sense will try to lay claim to it and draw the borders of rationality to exclude everyone elthedish to them. This is a common leftist move: anyone who isn’t a leftist is irrational, because rationality necessarily implies leftism.

If you’re a rationalist, you can’t debate the merits of using Dark Arts against another rationalist, just like if you’re Britain, you can’t debate the merits of breaking your seven-centuries-old treaty and going to war with Portugal. But when the Mongol hordes show up, you’d better have some Unforgivable Curses ready.

“But forming a rationalist community isn’t just about having the will to discuss things well. It’s also about having the ability. Overcoming bias is really hard, and so the members of such a community need to be constantly trying to advance the art and figure out how to improve their discussion tactics.”

How does this work in a world where ability is not uniformly distributed?

One useful thing about being part of a community is that members can often partially compensate for deficiencies by delegating tasks they can’t do well to other members for whom these tasks are easier.

Think of the types of accommodation offered to physically disabled members of most communities. If a paraplegic participates in a rock-climbing community, they can benefit a great deal by delegating certain tasks to other people. Meanwhile, the people helping them strap in or pull themselves over a ledge pay a mostly negligible time/effort cost.

Or take Ozy’s recent post on spoons/forks: people with excess spoons can help out those who have reached their self-motivational limit. This doesn’t cost them anything (because they have more spoons than they need to accomplish everything on their agenda) and not only significantly benefits the other person, but also the group as a whole (a group that offers some degree of reliable protection from Moloch is attractive to new recruits and more likely to inspire fidelity on the part of current members).

What really matters is each member knowing how to optimally delegate: What am I sub-optimal at? Can I self-modify to improve my ability to do this? Or is it more efficient to use another brain for this particular task?

Of course the risk here is every brain delegating out everything to every other brain creating a vast network of idiots (a procession of the blind leading the blind). This is a reason that linking status to doing your own brainwork can be a useful norm. Link to one of the Sequences and you save yourself the time/effort of trying to explain something really complicated while also boosting Eliezer’s rockstar status. Win/win.

I’ve often said that tribalism is the root of all evil. Our need to belong to some group or other, and then to defend that group (because we’re really defending ourselves) leads to so many problems. I haven’t had much luck trying to convince people to disassociate from all their tribes though 😉

A postrationalist take, or perhaps just (meta-)rationalization: reasoning by concept valence is a form of party-line voting, which is a means of deferring to people with similar values to but more information than yourself.

So to go with the feminism example: by my naive moral intuitions I’d probably be against abortion. But it seems like people who share my values overall are pro-availability-of-abortion, i.e., “feminist.” “Feminism” here really does serve a useful function insofar as I grant that my own reasoning has to be balanced against the collective reasoning of people running my utility function.

(Of course this makes things vulnerable to feedback loops. I suspect the general way to cut these off would be to allow consensus-forming discussion and action to remain separate – say, I “hypocritically” argue for restrictions on abortion online (or maybe only with other feminists) whilst voting for pro-choice candidates, or whatever – or by designating a group of ideological specialists who are encouraged to be more independent-minded and whose collective wisdom we consult.)

I think this is the first time I’ve ever seen someone seriously propose that party-line opinions are established by more intelligent informed reasoners than oneself rationally considering what the correct policies would be given their values.

Well, I’ll second the progressive conjurer inasmuch as it makes sense to study a few issues as time permits in depth, find someone that agrees with you, and take their advice on the topics one doesn’t have the time for. Specialization, simply. Although as he alludes it is fragile if they are doing likewise.
Whether those people are better rationalists or more informed than the people posting here is debatable, perhaps, but it does seem like a decent strategy for the median voter.
Actually, on closer thought, it seems like the definition of “Republic.”

Absolutely. It’s how representative democracy should work: we say “I don’t understand the details of how topics X Y and Z work, so rather than call for the policy which makes most sense to me based on my limited understanding, let’s elect someone who will make it their job to understand those topics and determine appropriate policies.”

What percentage of the time representative democracy manages to work that way is a different matter.

I think this is the first time I’ve ever seen someone seriously propose that party-line opinions are established by more intelligent informed reasoners than oneself rationally considering what the correct policies would be given their values.

Compositional fallacy: every particular reasoner could be dumber and less informed than you (though n.b. you’re dumber and less informed than you think you are*) but the collectivity has passed through more generations of arguments and has a larger sample size for error correction and so on than you starting from point 0. (For certain decisions, like many forms of voting, error correction via sample size is already baked in, so adjust for circumstances.)

Of course, as with the market’s being smarter than any particular individual depending on lots of individuals thinking that they’re smarter than the market, good citizenship here entails adding some Hegel to the Burke, as it were. (Also like the market, it almost certainly systematically fucks up in certain ways. Unlike markets, I’m not sure there’s a ton of research on what forms of discourse-structures aggregate information efficiently. Robin Hanson’s found an interesting hobbyhorse in suggesting that markets could aggregate a lot more than they do, so maybe the solution is futarchy IDK – or maybe “Catholic” (large, hierarchical, centralized, credentialized structures, like the HRCC or CPSU, vulnerable to institutional capture) or “Protestant” (less so, vulnerable to opportunistic mutations) structures are particularly good at these kinds of things. Scott already mentioned this in the Ecclesiology post and, credit where credit is due, NRx have already done a lot of thinking about this. So have Marxists, but it’s buried pretty deep in organizational and historical debates that I don’t have enough of a handle on.**)

*My primary concern with “rationalism,” and why I don’t identify as a “rationalist,” is that this insight really seems to be the core of everything useful about it, but the dynamics of the community surrounding it (possibly necessarily so) more or less suppress it in practice.

**I was about to say that this inferential distance makes those argument-thickets less deep, because they can’t talk to each other as easily, but of course it’s also a reflection of their depth. So maybe depth is self-limiting IDK.

I think this is the first time I’ve ever seen someone seriously propose that party-line opinions are established by more intelligent informed reasoners than oneself rationally considering what the correct policies would be given their values.

Well, that’s kind of how the Catholic “party” justifies “deferring to the Magisterium” to itself.

>So to go with the feminism example: by my naive moral intuitions I’d probably be against abortion. But it seems like people who share my values overall are pro-availability-of-abortion, i.e., “feminist.”

Yeah, that happened to me once. I came to the conclusion that just because an opinion has people claiming it’s “liberal” doesn’t always mean it’s right. Politics has a bit of a free-rider problem.

(I later came to the further conclusion that my views had been systematically biased by my tribal affiliation, but honestly I’m not sure I’ve successfully fixed that problem.)

>people running my utility function

I’m … pretty sure feminists aren’t running on a different utility function to the rest of humanity.

Does this also help explain why you so often get exactly two competing coalitions?

Node connections seem to have either a positive or a negative value- if left-handed people are sinister, then left-handed scissors will be seen as bad by association. But in practice, it also seems to be the case that this will cause right-handed scissors to become more virtuous and gain positive affect. Connections can be used to do more than reflect evil from known bads- by building an inverse connection to a bad-thing, you can make something good.

(You know what I hate? Nazis. Man, Nazis are terrible.)

My gut feeling is that if I went all Stuart Kauffman and modeled a bunch of randomly formed networks with both correlative and anti-correlative connections, and with ‘up’ and ‘down’ positions and amplitudes for each node, and then allowed the connections to propagate positive and negative affect through different nodes, then I would see that a given network would most often be stable in one of two configurations. And further that these two states would tend to be matched on many nodes and the exact inverse of each other on all the rest. Just a gut feeling though; the actual math is pretty heinous. I’m also especially curious how often such a network would be stable, or fall into stable loops.
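The math is heinous analytically, but the simulation described here is cheap to try. Below is a rough sketch under assumed rules (Hopfield-style: nodes take values ±1 rather than continuous amplitudes, weights are symmetric, updates are asynchronous sign-flips – all my simplifications, not the commenter’s). Because flipping every node leaves such a network’s “energy” unchanged, every stable configuration provably has a stable mirror image, which matches the gut feeling about paired inverse states:

```python
import random

# Random network with both correlative (+) and anti-correlative (-)
# symmetric connections; run to a fixed point from many random starts
# and collect the distinct stable configurations.

random.seed(0)
N = 12
w = [[0.0] * N for _ in range(N)]
for i in range(N):
    for j in range(i + 1, N):
        w[i][j] = w[j][i] = random.uniform(-1, 1)

def run_to_fixed_point(state):
    """Flip nodes one at a time to agree with their weighted input;
    with symmetric weights this always terminates at a stable state."""
    state = list(state)
    changed = True
    while changed:
        changed = False
        for i in range(N):
            h = sum(w[i][j] * state[j] for j in range(N))
            s = 1 if h >= 0 else -1
            if s != state[i]:
                state[i] = s
                changed = True
    return tuple(state)

attractors = set()
for _ in range(300):
    attractors.add(run_to_fixed_point([random.choice([-1, 1]) for _ in range(N)]))

# Check that the mirror image of every stable state is itself stable.
mirrored = all(run_to_fixed_point([-x for x in a]) == tuple(-x for x in a)
               for a in attractors)
print(len(attractors), mirrored)
```

How many attractor pairs appear depends on N and the weight distribution – random weights typically yield several mirrored pairs rather than exactly one – but the pairing itself is exact, so the “exact inverse of each other” half of the guess holds up.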

One way to avoid excessive identification with the Blue or Red tribe is to pick a couple of signature issues, one identified with the Red tribe and one identified with the Blue tribe, and keep reminding yourself “These are the people who are wrong about X.”

OTOH, this can cause you to be called both a Right-Wing Nut Job and a Marxist.

All the stuff about experiments with hypnosis giving people disgust reactions to certain words and it warping their opinions makes me think A: “This doesn’t sound like the things people say hypnosis is actually capable of,” and B: “It is obviously an emotional disgust reaction to say that people willing to mess with human minds like this should be killed, but I’m going to stick with it anyway.”

Tagging along here because you’re the first reply I see expressing something similar to my thought:

I would love to hear opinions, especially from Scott, on whether this _is_ something that hypnosis seems to be capable of. I have spent some time trying to work out how much of what hypnosis has been claimed to do is real, and I just can’t seem to converge. (I have gone so far as having a friend who is a practitioner attempt to hypnotize me. I suspect she counted the result as evidence in favor whereas I counted it as evidence against, which gives me some idea how the available secondhand evidence can be so mixed.)

Isn’t the General Factor of John, where all of John’s characteristics are connected in the sort of network you describe, the halo/horns effect?

Clearly the fact that John is a goat fucker must mean he is a bad driver.

Unless you’re really into goat fucking, in which case clearly it means he is a good driver.

(And since ‘the halo effect’ is connected to rationality, which has good karma, and I have managed to make a connection between the halo effect and your article, and thus you, I now like you a bit more, and so you must be a slightly better driver).

But isn’t Scott connected much more strongly to rationality? I think his driving habits are far more determined by that strong connection than by the relatively weak connection via the halo effect (even taking into account that we love to double-count things).

“Little flow diagram things make everything better. Let’s make a little flow diagram thing.”
Is it nit-pickingly pedantic of me to point out that you didn’t actually make a diagram, and that describing a hypothetical diagram probably isn’t any more illuminating than describing the concepts and relationships said diagram would be representing if it were, in fact, made? Yes? Then forget I brought it up.

I made a nice little diagram in my head, which I would not have done had it not been mentioned.
Also, why use Visio when there is ASCII?
Numbers in brackets indicate strength of connection (1 means fully dependent on each other).

“compare: most Americans oppose Obamacare, but most Americans support each individual component of Obamacare when it is explained without using the word “Obamacare””

It’s definitely true that partisan and ethnocentric attitudes make Obamacare less popular when primed, but the poll linked from that link (which is a selective political discussion of the poll) doesn’t quite say what you cite it for.

It looks like the 2012 electorate liked provisions that mandate that various people get insurance at below-market rates, but they disliked the individual mandate to force other people to buy insurance at above-market rates to pay for the below-market insured.

The voters also dislike higher taxes, higher insurance premiums, and having their insurance policies cancelled. In general, you get broad support for spending programs and mandated benefits (except foreign aid) in the abstract without mentioning costs, but when you combine costs and benefits (“more teachers!” vs “raise taxes to hire more teachers!”) the polling often changes substantially.

Kaiser has an ongoing tracking poll with lots of detailed data (the trend has been towards unfavorable since 2012):

So I’m guessing that means you don’t want me to link this post to they-who-must-not-be-named on Twitter? I’ll respect your choice but I think it’s a shame, I think they could really get a lot out of it, and some of it wouldn’t even just be better weapons to attack their enemies with.

I guess this is an issue of trading off reach of ideas for a nice clean comments section, and your idea of clean is different from mine.

The weird thing is that I get stats on incoming links, and even if I don’t mention it, half of the times people cite my articles on Reddit or Twitter or wherever, it’s to make a commentary about reproductively mature ants.

Here I am trying to narrow down the basic principles of all logical discussion, and the only area in which people feel like this might be useful is the discussion of ant reproduction. I am totally flummoxed by this.

Well, ant reproduction is a hot topic. It can also be reasonably described as a *Gray vs. Blue* topic, one that has taken shapes matching topics you have written about very well. And not only have you written about those extraordinarily well, but you also did it in a way that’s a lot more friendly to bluish-gray sympathies than most other available resources.

Yes, it’s a clever euphemism for a nasty Internet culture-war clusterfuck that we don’t want to name directly lest we draw the baleful gaze of its unsleeping lidless eye. Google “reproductively viable worker ants” for the referent.

I don’t know anything about neural nets, so maybe this system isn’t actually a neural net

It’s not. The rules for what qualifies as a “neural network” are specific enough to disqualify the sorts of operations you have going on (and for that matter, disqualify the way the brain actually works). If you just said “network,” then you’d be totally in the clear.

Could you go into more detail? I am not particularly familiar with neural nets, but the familiarity I do have made me think this was legal. What specifics forbid this, and are they the same ones that forbid brains?

Recurrent neural networks are totally a thing, and are totally neural networks. In an ML context, neural networks are prototypically feedforward (i.e. acyclic and stateless) but not necessarily… We’re having an argument about what “standard” means here.

Markov Random Fields (MRFs), in particular the pairwise kind, might be the mathematical model being pointed to here. MRFs have nodes that can take on a good/bad karma and have the plausibility of the global configurations of these karmas depend on both inherent features of the nodes and on local agreement along edges between neighbors.

Strictly speaking, the model doesn’t have a concept of time, so there’s no idea of path-dependence as maybe determining if some node ends up good or bad. But nodes with marginal probabilities near 0.5 (say, of being good) could be interpreted as ones which historical accidents or chance or luck could have pushed either way. Also, the techniques that folks in machine learning and stats use to work with these models can have a very dynamic and agent-y character, e.g., the evocatively named belief propagation!
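For concreteness, here is a toy pairwise MRF of the kind described, with hypothetical unary scores and edge weights (all numbers invented for illustration): each node’s karma is ±1, plausibility depends on inherent features plus local agreement along edges, and marginals are computed by brute-force enumeration, so you can see which nodes end up near the 0.5 “could have gone either way” zone.

```python
import itertools
import math

# Toy pairwise MRF: 5 nodes, each taking karma +1 ("good") or -1 ("bad").
# Unary scores encode a node's inherent features; pairwise weights reward
# agreement along positive edges and disagreement along negative edges.
# (All numbers are invented for illustration.)
unary = [0.8, -0.5, 0.1, 0.0, -0.2]
edges = {(0, 1): 1.0, (1, 2): -0.7, (2, 3): 0.9, (3, 4): 0.6, (0, 4): -0.4}

def score(cfg):
    """Unnormalized plausibility of a global karma configuration."""
    s = sum(u * x for u, x in zip(unary, cfg))
    s += sum(w * cfg[i] * cfg[j] for (i, j), w in edges.items())
    return math.exp(s)

# Brute-force marginals p(node k is "good"); fine for 2**5 configurations.
configs = list(itertools.product([1, -1], repeat=len(unary)))
Z = sum(score(c) for c in configs)
marginals = [sum(score(c) for c in configs if c[k] == 1) / Z
             for k in range(len(unary))]
for k, p in enumerate(marginals):
    print(f"node {k}: p(good) = {p:.3f}")
```

Real applications replace the brute-force sum with belief propagation or sampling, since enumeration explodes exponentially in the number of nodes.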

I agree with nico’s comment upthread that Scott was using NN informally, and don’t think it’s particularly important whether a more correct model that captures his coherent extrapolated vision is formally a NN or not.

Recommendation taken, though it’s going to be a long time before I can read anything not at least tangentially dissertation-related. Fwiw, my views on graphical models are mostly borrowed from people who take the Wainwright and Jordan (2008) “paper” as foundational. There might be other perspectives I’m unaware of.

I was at least a bit unclear. The joint/posterior distribution on a graphical model is a fixed, static, well-defined thing. That graphical model may represent something static, like the depth of pixels in an image (pairwise MRF for stereo vision), dynamic, like a sequence of words in a text (HMM), or something in-between, like a sequence of bases in DNA (HMM) or the equilibrium distribution of states in a spin glass (pairwise MRF). In any case, computations involving these models may have a dynamic character that does not necessarily correspond to any dynamics in the system being modeled.

I just meant people sometimes like to encode temporal directionality via directed graphical models. This does not mean you need to give up the undirected part if you also need it (maybe you want to track a social network over time, so need both undirected and directed relationships). This is what chain graphs are for.

—

I am aware of what graphical models are, and how sum/product algs work, etc. Thank you for taking the time to clarify what you meant :).

Your argument in section VII presupposes that there are no possible actual arguments for thinking that opposition to Israel is evidence of anti-semitism, without explicitly stating this as a hypothesis. Since I think there are such arguments, I found it quite confusing. I got it on the second or third read through by modelling someone who does believe that and imagining at each step what they’d now be thinking.

This comment should not be construed as showing a desire to discuss political questions here.

Changing the subject now. I think people often vote based not on policy but on a comparison of each politician’s General Factor of Character and a General Factor of Shares My Values. This is not such a bad thing. Trying to intersect my policy opinions with each politician’s policy opinions and turn that into a comparator function across all politicians is… not impossible, but I don’t have the time. Instead everything I read or hear about a politician or party feeds into those nodes, either positively or negatively, and I then make my decision based on those node values.

I’m comfortable with this decision procedure. It doesn’t mean I don’t think carefully or deeply about specific issues, but once I’m done thinking about them and ready to move on, most of what’s left relevant to my vote is captured in the changes to those node values.

Actually, that wasn’t a change of subject. I think you’re saying (again in VII) that there’s no value to knowing if someone is “anti-semitic” over and above knowing all of their specific policy opinions. Since it’s impossible to know all current and future policy opinions of a candidate, I think it could be quite valuable indeed, and a feeder into the two nodes mentioned above.

Not only that, but it might actually be counterproductive to vote on policy. You don’t know what the politician whose policy positions you like will be able to do in office, whether for pragmatic or political reasons. If you agree with 95% of their positions, but the 5% you don’t like is the 5% they have in common with the other party and so they end up acting on it way more often, you’ve just shot yourself in the foot. When you’re picking a politician, you’re also delegating a negotiator, so it’s far more useful to know what values they’ll prioritize and whose interests they’ll protect than what policies they’d enact with zero friction.

Scott actually acknowledges your point on section VII – he’s saying in the second paragraph that, strictly speaking, it’s plausible that someone is more likely to be anti-semitic if they oppose Israel. The language is admittedly unusual if you don’t read much rationalist stuff.

Some replies to your second point:

I think people often vote based not on policy but on a comparison of each politician’s General Factor of Character and a General Factor of Shares My Values. This is not such a bad thing.

For the record, I basically agree.

Instead everything I read or hear about a politician or party feeds into those nodes, either positively or negatively, and I then make my decision based on those node values.

This is the part I have a problem with. This might work (and probably worked for a long time) in a relatively small group where basically everybody knows each other by name. But in a modern political context, the system gets absolutely flooded with noise. People exploit the model so that it ends up spitting out nonsense. Here’s how this can be done in the Israel/anti-semitism case.

Somebody with a big megaphone starts shouting about how opposing Israel is anti-semitic, and if someone opposes Israel we know they are an anti-semite.

They get challenged, because people who “oppose Israel” feel like they are being associated with Nazis. But the guy with the megaphone says that all they really mean by “anti-semite” is someone who supports policies that would hurt Jews (even if that’s not why they support the policy).

If we take him at his word, this means that when Megaphone Guy calls someone an anti-semite it doesn’t actually tell us whether they fit in the group of people that we associate with the term. But the node network doesn’t know that. It has a concept for anti-semitism that it associates with Nazis, and megaphone guy is building a connection between opposing Israel and Nazis through the anti-semitism node by making up a new definition of anti-semitism. Now think for a second about how vague the term “opposing Israel” is.

So your decision-making procedure gets hijacked by skilled manipulators, who then win elections. Eventually, instead of people trying to provide direct evidence of their candidate’s character you get nothing but attempts to associate candidates with bad things, regardless of evidence. Does this sound familiar at all?

Yes, I saw that P expression, but interpreted the surrounding context as implying that the only reason we might (naively) think that value would be high is because the inverse expression is high, and often a high probability of one goes with a high probability of the other.

But the guy with the megaphone says that all they really mean by “anti-semite” is someone who supports policies that would hurt Jews

Do you mean this hypothetically? Like, this is how an argument of this sort could get played out? Because in my experience usually when someone says someone else is anti-semitic, they are happy to admit that they really mean it. And not just in the happens-to-oppose-Israel sense. I’m writing from Europe, btw, so perhaps the terms of the debate in the US are different and require that people not press the issue and back off when challenged?

Anyway, assuming either that it’s a hypothetical or the debate in the US is very different from the debate here…

“Yes, I saw that P expression, but interpreted the surrounding context as implying that the only reason we might (naively) think that value would be high is because the inverse expression is high.”

Scott states that “p(opposes Israel|is anti-Semitic) is probably pretty high”. If we read that as “p(opposes Israel|is anti-Semitic) > p(opposes Israel|is not anti-Semitic)”, then that is equivalent to asserting that opposing Israel is evidence of anti-semitism. That’s what being evidence for a claim means: an observation that is more likely when the claim is true than when it is false is evidence for the claim.
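A quick numeric illustration of the evidence criterion (all probabilities hypothetical, chosen only to make the arithmetic visible): E counts as evidence for H exactly when p(E|H) > p(E|¬H), which is the same as saying that observing E raises p(H).

```python
# Hypothetical numbers, purely to illustrate the evidence criterion.
p_H = 0.02             # prior: p(is anti-Semitic)
p_E_given_H = 0.70     # p(opposes Israel | anti-Semitic): "pretty high"
p_E_given_notH = 0.30  # p(opposes Israel | NOT anti-Semitic)

# Law of total probability, then Bayes' rule.
p_E = p_E_given_H * p_H + p_E_given_notH * (1 - p_H)
p_H_given_E = p_E_given_H * p_H / p_E

print(f"prior:     p(H)   = {p_H:.4f}")
print(f"posterior: p(H|E) = {p_H_given_E:.4f}")

# E is evidence for H exactly when p(E|H) > p(E|not-H); equivalently,
# exactly when observing E raises the probability of H.
assert (p_E_given_H > p_E_given_notH) == (p_H_given_E > p_H)
```

Note that with a small prior the posterior can remain small even when E is genuinely evidence; “is evidence for” and “makes probable” are different claims.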

I would think opposition to Israel would be pretty poor evidence of anti-semitism in America, insofar as I would expect most Americans who dislike Jews to dislike Moslems as much or more. Of course, I could be wrong, and it is surely different in other places (especially, obviously, among Moslems).

Wait. One Democrat, presumably not acting on Obama’s direct orders, attacks a Republican woman. And this is supposed to alter the outcome of the entire election? In what universe does one crime by a deranged psychopath change whether Obama’s tax policy or job policy or bombing-scary-foreigners policy is better or worse than McCain’s?

Even if we’re willing to make the irresponsible leap from “Obama is supported by psychopaths, therefore he’s probably a bad guy,” there are like a hundred million people on each side. Psychopaths are usually estimated at about 1% of the population, so any movement with a million people will already have 10,000 psychopaths. Proving the existence of a single one changes nothing.

[nitpick]
This isn’t as irrelevant as it might seem – what matters is not if a leader attracts some evil or crazy followers, because there are indeed enough evil and/or crazy people that any sufficiently large group will include some, but whether or not the leader is doing something to make them bolder and more likely to act violently.
[/nitpick]

In the 20th Century, American Catholic businessmen who felt a lot of team spirit and competitive animal spirits didn’t engage all that much in pro-Catholic foreign policy activism regarding the Vatican, Ireland, Italy, Poland, Spain, Mexico, Quebec or wherever. Why not? One reason is because they had the Notre Dame Fighting Irish football team as a proxy to root for.

Similarly, Protestant businessmen invest huge amounts of money in their home state college football teams. T. Boone Pickens, for example, has given somewhere around $175,000,000 to boost Oklahoma State’s football team. Phil Knight of Nike may have given more to the U. of Oregon. If we had college football back in 1861-1865, maybe we wouldn’t have had the Late Unpleasantness.

In contrast, their American Jewish counterparts never had a college football team to root for and invest in, so they’ve rooted for and invested in Israel.

That helps explain when peace will come to the Middle East: about the same time that Notre Dame and USC negotiate an end to conflict on the football field.

As a counterpoint, consider the soccer teams that were famously supported by narcoterrorists. Extravagant sums were spent on the teams, but team victories and losses were not so much a substitute for violence as an excuse for additional violence.

But instead of sticking with the state of nature, we have the ability to form communities built on mutual disarmament and mutual cooperation. Despite artificially limiting themselves, these communities become stronger than the less-scrupulous people outside them, because they can work together effectively and because they can boast a better quality of life that attracts their would-be enemies to join them. At least in the short term, these communities can resist races to the bottom and prevent the use of personally effective but negative-sum strategies.

Come, let us create an in-group to escape the in-groups!

You can curate to some extent, but you’ll just shift your group’s blind spot to a different location, and it’s not at all apparent that this is optimal.

It seems to me that the best personal strategy is to choose a side that you think will win (ideally it lines up with your friends and family), join it, totally ignore political arguments beyond perfunctory affirmations that you are still part of the in-group, and move on with optimizing other parts of your life. Communes might align more closely with your personal preferences, but you sharply reduce the size of the social network you can pull on. The con is that it will mean some necessary mindkilling on one’s self in order to not be a source of stress. In return it gets you cities, organizations, and neighborhoods that stand with your side, rather than a few thousand people on the internet and a few tens in your area. That sounds to me like a good recipe for survival and for happiness, even though accuracy of your beliefs may suffer.

I REALLY, REALLY, REALLY HATE YOUR COMMENT! (not you personally). This is the same as encouraging defection in a prisoner’s dilemma. Yes, creating a non-mindkilled world requires a lot of effort, a lot of coordination. Yes, it is an ambitious goal.

Nevertheless, the meme that you should actually be intellectually honest should spread, for the sake of the world. A world where your political movement won but the truth was lost in the process is a world not worth having. Everything else can be achieved.

Even if it is true that you can become rich by using fraud and deception, you shouldn’t encourage people to commit these crimes.

I understand that if the enemy is telling lies and maybe even winning, committing to not lying feels like unilateral disarmament. But you should also understand that in this case it is not a literal war – it is a memetic one. You cannot win the war of good against evil by becoming evil yourself, even if evil has its hands unconstrained and has more weapons, because in that case evil has won. You must not be so short-sighted as to think that this is a literal war, where copying all of the enemy’s new weapons is a good strategy. This is a memetic war, which is totally unlike a literal one, and it is no surprise that some metaphors break down. The weapons you use are inseparable from you and your identity.

To quote:

Go again and see not just the film and the play, but also read the text of Robert Bolt’s wonderful play “A Man for All Seasons” – I am sure some of you must have seen it – where Sir Thomas More decides that he would rather die than lie or betray his faith. At one moment he is arguing with a particularly vicious witch-hunting prosecutor, a servant of the king and a hungry and ambitious man.

And More says: “You’d break the law to punish the devil, wouldn’t you?”

The prosecutor says: “Break it? I’d cut down every law in England if that’s what it took to catch him.”

“Yes, you would, wouldn’t you?” And then: “And when you had cornered the devil, and the devil turned round to meet you, where would you run for protection, all the laws of England having been cut down and flattened? Who would protect you then?”

Once you throw truth and epistemic rationality out of the window, who will protect you when the next tide of lies is turned against you? You may become a symbolic scapegoat for some new political movement; think of how many stories, unremarkable in themselves, became internationally infamous because the media, social media, and various activists decided to focus on those particular stories as part of their moral crusade. And those movements wouldn’t use the presumption of innocence. They would use a “presumption of whatever would make the best symbolic victory for our movement”.

In the past (and still in some parts of the world, which may include some parts of your town) the person who had the most muscles (or who could gather the gang with the most muscles combined) and could beat up others was the king. Nowadays (outside the said parts of the world) the person who has the most meme-spreading power and the power to ruin reputations is the king. We have a police and court system that is supposed to deal with the first kind of gangster. As gangsterism of the second type is much harder to define, that solution is hardly applicable. We ourselves need to be vigilant defenders of truth.

In a world where truth has some value, it can protect you against false accusations (made, for example, because some group at that particular point in time was in dire need of some kind of symbolic victory). In a world where there is no one to fight for the truth, meme-gangsters are opposed only by other meme-gangsters, we are in a state of war of everyone against everyone, there is absolutely no incentive for those gangsters to reduce the intensity of the war, and there is no truth that can act as a shield, as a defense against them.

Well, how convinced the other person is does provide me with information that I take into account when I have to estimate whether a certain statement is correct and I have no other sources to check. And we do not even have to know anything about Aumann’s theorem, or about the idea of aspiring to be more rational.

For example, I talked with a friend and one of us (I can’t remember who) made the statement that “object X is in part Y of the city”. After “No”, “Yes”, “No”, “Yes”, “No”, “Yes” we sort of agreed that it probably is in the said part of the city (in that particular case this turned out to be correct), even though we exchanged no factual information except our levels of belief!
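A toy version of that exchange (invented numbers, and a far simpler setup than Aumann’s actual theorem): if each friend’s announced confidence reveals their private evidence as log-odds, then one round of honest exchange lets both pool the evidence and land on the same answer, even though no factual information changed hands.

```python
import math

# A toy model of agreeing by exchanging only confidence levels.
# (Invented numbers; a much simpler setup than Aumann's theorem.)
# Both friends share a 50/50 prior on "object X is in part Y of the
# city", and each holds independent private evidence, as log-odds.
prior_logodds = 0.0
evidence_a = 1.2     # A's memory mildly favors "yes"
evidence_b = -0.4    # B's memory mildly favors "no"

def prob(logodds):
    """Convert log-odds back to a probability."""
    return 1 / (1 + math.exp(-logodds))

print(f"A alone believes: {prob(prior_logodds + evidence_a):.2f}")
print(f"B alone believes: {prob(prior_logodds + evidence_b):.2f}")

# Each announced posterior reveals that friend's private log-odds, so
# after one honest exchange both pool the evidence and must agree:
pooled = prior_logodds + evidence_a + evidence_b
print(f"after exchanging beliefs, both believe: {prob(pooled):.2f}")
```

The real-life “No”, “Yes”, “No”, “Yes” back-and-forth is a noisier version of the same thing: each round leaks a little of the other person’s confidence until both estimates converge.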

Argument…is not what it seems to me suitable to do with opinions. What one does with opinions – all one needs to do with them, having found that one has them – is to enjoy them, display them, use them, develop them, in order to cajole, press, bully, soothe and sneer other people into sharing (or being affronted by) them. To argue them is, it seems to me, a very vulgar, debating-society sort of activity.

I suspect that almost no one would agree with this outright, yet almost everyone acts as if it is true. A question is how one can turn people into more principled rhetoricians, and I fear that the answer is cajoling, pressing, bullying, soothing and sneering.

I used to think this was false, but this post of Scott’s, and his previous ones, have pretty much convinced me that I was wrong, and Maurice Cowling is right.

If you want to convince people, become very good at rhetoric. If you want to get along with people, avoid mentioning any opinions that don’t match theirs. If you want to convince people, but aren’t any good at rhetoric, then give up and keep your opinions to yourself, because you have no chance of convincing anyone of anything.

I think you are far too uncharitable to people’s reasoning abilities about politics, because you don’t sufficiently understand where they are coming from, so you round off their beliefs too roughly. For example:

(compare: most Americans oppose Obamacare, but most Americans support each individual component of Obamacare when it is explained without using the word “Obamacare”)

This isn’t true. People support each individual component of Obamacare except the individual mandate and the tax increases (and are pretty pissed off that they liked their old coverage but can’t keep it). In other words, they like the benefits, they don’t like the costs. Why round this off to mood affiliation/affective dislike, when there’s a far simpler story?

Second example: Your global warming thing. You are quite right that there’s a general factor of “Global Warming” and that specific problems/solutions aren’t really discussed. But you are quite wrong that your essay from last time, or chanting “Global Warming is Communism” could switch the sides. The reason that global warming is a leftist issue is that it presents a huge, society-wide problem that could plausibly need a MASSIVE CRUSADE to stop. And the drive to perfect humanity via crusading activism is the emotional heart of leftism. Meanwhile, the emotional heart of the right is that central planning cannot solve societal problems, so global warming cannot possibly be a problem that needs central planning to solve.

Note: our theories make different predictions! Your theory suggests, due to path-dependence, that across many countries, left and right are roughly equally likely to be “alarmist” or “denialist.” My theory suggests that the right is almost always the more “denialist” side. Compare across Western countries (left and right are rather different outside of this context) and there’s your answer.

The emotional heart of the right is a kind of nostalgia for a river they stepped into once before (or thought they did), and wish to step into again. Tolkien’s Middle Earth world is what resonates with the heart of the right: it’s all ruins of former splendor, and hearts of men growing weak.

Various left wing sorts have nostalgia for the regulatory glories of our past – Glass-Steagall, for example. See also talk about bringing manufacturing jobs back and the golden era of unions, or the golden era we had before Clinton’s welfare reform.

I wish there were a better term than ‘nostalgia’ here. ‘Nostalgia’, I think, connotes an unrealistic, idealized memory of some past state, but I also see it used to refer to accurate accounts of particular past facts.

The reason that global warming is a leftist issue is that it presents a huge, society-wide problem that could plausibly need a MASSIVE CRUSADE to stop. And the drive to perfect humanity via crusading activism is the emotional heart of leftism. Meanwhile, the emotional heart of the right is that central planning cannot solve societal problems, so global warming cannot possibly be a problem that needs central planning to solve.

OTOH, these characterizations of the emotional hearts of the left/right tend to be favored by those on the right (who tend to care more about issues of central planning). I’d say the emotional heart of the left is more along the lines of reducing the harm that humans cause to (the rest of) the planet. IOW, it’s unlikely that their positions are mirror images, and more likely that one side takes a position for/against an issue that the other side really doesn’t care that much at all about.

It was a Republican issue in the States during the late 19th and very early 20th centuries, too, but the Republicans of the time didn’t much resemble the current party. The Democrats adopted some conservation goals as part of the New Deal, but I wouldn’t call it a plank of the party until at least the Seventies.

Note also that “environmentalism” is a very large bucket. Global warming wouldn’t have even been on the radar back then. I imagine (and this is admittedly not well-sourced) that back then it would have revolved much more around things like major, visible air pollution, contaminated water, and deforestation. These are all basically solved problems in the modern West.

tldr; comparing past and present left/right views on “environmentalism” a century ago is probably about as illuminating as comparing past and present left/right views on our relations with Europe. (Democrats used to think we should bomb Germany! Why have they flip-flopped?)

Pollution was more of a late 20th century issue: think rivers catching fire in Ohio, Silent Spring, Captain Planet.

The early conservation movement basically revolved around land-use issues: deforestation, overdevelopment of sites like Yellowstone or Yosemite, overexploitation of fish and game resources. It’s why we now have things like national parks, national forests, and hunting licenses.

Historically the emotional heart of leftism was that the world could be made a much better place for all people by radical changes to society, and the old order should be overthrown. Today, what you describe (a very rightist view really) has infected the left and is consuming it from the inside out. I really wish I could kick it out and bring back the utopianism of the past.

(does this make me some sort of strange retro-futuristic reactionary progressive radical?)

Your theory suggests, due to path-dependence, that across many countries, left and right are roughly equally likely to be “alarmist” or “denialist.” My theory suggests that the right is almost always the more “denialist” side.

There are elements of the right that could be defined thus, running from Hayek and through Rand and Friedman, but it does not apply to Burke, or Carlyle, or Kirk, or Oakeshott, or Scruton.

Besides, how about the War on Terror? Many of the Big-C Conservatives who rail against climate change alarmism and international efforts to reduce carbon expenditure were up in arms about the threat of jihadism and all for attempts to curb it through ambitious state action. I suspect that the most influential bias begins with the alleged cause of AGW – the consumerism that their tribe defends. (Conversely, some leftists are doubtless inclined towards believing in it as it affirms everything they felt about commerce.)

There are elements of the right that could be defined thus, running from Hayek and through Rand and Friedman, but it does not apply to Burke, or Carlyle, or Kirk, or Oakeshott, or Scruton.

Not in Burke??? I guess I imagined it when he wrote that society is an unplanned, eternal, cross-generational compact between the dead, the living, and the yet-to-be-born, that society is best expressed not by the state but by the little platoons of civil society, and that it’s barbarous to take a sword to that compact, or those little platoons, supposing you can do better by “reason” at the behest of a central state. Was it some other Edmund Burke?

That’s precisely what I identified as the emotional heart of the right, and it’s also what anyone with even a passing acquaintance with Burke will immediately identify as his central point. There’s a very similar message in Oakeshott too, although the language is different. Scruton, Carlyle and Kirk I am less familiar with, but they can all be found saying similar things at times.

Many of the Big-C Conservatives who rail against climate change alarmism and international efforts to reduce carbon expenditure were up in arms about the threat of jihadism and all for attempts to curb it through ambitious state action.

People who think Conservatives are being hypocritical for supporting a big-government solution here are misunderstanding the Conservative critique of big government. Conservatives don’t believe that the government is bad at everything. I understand why people get the impression that they do; most modern debates revolve around getting the government involved in areas where Conservatives think it’s incompetent, so “small government” becomes convenient shorthand.

But actually, Conservatives think the government is extremely good at one particular thing: killing people. (You can see this sentiment expressed most purely in the Libertarian insistence that all government action is ultimately based on force.)

Since they believe that fighting jihadism involves a lot of killing they have no trouble involving the government.

The topic isn’t the proper use of government, but alarmism. Traditionally, conservatives were isolationists. Maybe they want a big military for defensive purposes, but they don’t want to use it at the drop of a hat.

That’s a decent justification, but another answer is that there is more than one right encapsulated in the coalition bloc that we tend to call the right as a whole, and that while they might generally hold to similar policy prescriptions, they don’t necessarily derive the same positions from the same basic principles. A religious conservative who emphasizes social positions rather than economic ones may deny global warming for different reasons than a Ron Paul type would. Or it may simply be that the social conservative who is very heavily into government intervention in one sense has internalized the values of the coalition as a whole without picking apart what the precise role of the government should be in a general sense.

Given that modern conservatism descends from the fusion philosophy of liberal conservatism, which descends from classical liberalism and Tory social conservatism, we might chalk reticence about environmental regulation up to the economic classical liberal bias. The right follows non-intervention (beyond protecting property rights and blah blah, but we are talking narratives here more than anything) when it is to do with issues supposed to be economic (I would object that all issues are economic really, but that’s beside the point), but not when it is to do with the set of issues that are designated to be “social”; those are up for grabs for government regulation.

The left formed its own coalition long ago. Some classical liberals were influenced by socialist ideals, and so the counterpart to liberal conservatism was social liberalism, with the issue of government regulation inverted; social freedom, economic regulation (the simple narrative goes). These two forms battled it out until they took over the main part of the political spectrum, and language shift caused the “liberal” part to drop off of “conservative” in America, and social liberals to be known simply as “liberals”. We can see the evidence of the coalition-splitting, when we see that conservatives in Australia are called liberals instead; the conservative part dropped off from the liberal part instead of the other way around.

So the right as a whole may be denialist because of the demands of being in a coalition with free market capitalists who are denialist, and not because each part of the coalition has a consistent stance that central planning cannot fix societal issues, even if that’s what they profess to have.

A lot of attempts to describe the right and left with one or two principles get confused because we forget that they are really coalitions of multiple viewpoints. They’ve lasted long enough that the values of their coalitions have been internalized, but perhaps with a lot of twisting of rationale so that people who want a drug war and sexual regulations can also want lower taxes and less central government. The same is true for the left, and the existence of one philosophically inconsistent coalition helps hold the other together. People want to be unlike their political enemies in as many respects as possible, as it avoids getting hit by crossfire.

The thrive or survive story may describe some general emotional tendency, but it’s also true that coalitions of convenience can turn into coalitions with momentum, and that over time they can come to be seen as complete worldviews despite lacking any one principle to underpin them. It’s instructive here to go back far enough in history to find anti-capitalist rightists like George Fitzhugh, or racist leftists like Mikhail Bakunin and Pierre-Joseph Proudhon. Coalitions eventually formed in which such combinations of views would risk you being hit by crossfire.

I think it’s a significant weakness in your early argument that you assume the Israel/Palestine conflict consists of many independent questions. Because the conflict consists of one side reacting to the other, the other reacting to the first, and so on, you can easily have a situation where one side being justified makes the other’s reaction unjust, which makes the first’s reaction to that reaction just, and so on. In that case, saying you believe Israel is justified or Palestine is justified is much more natural than discussing some specific action of either.

I’ve been trying to practice empathy more, especially seeking to understand why people I disagree with feel the way they do. (My motto is: “I understand how you could feel that way; If I were you, I’d feel that way too.”) I then find myself super frustrated when people playing the ethnic tensions game slam on people I now have a better understanding for. I foolishly leap in to defend them, which leads to me getting rounded off to the closest stereotype.

This is why all my friends think I’m a conservative MRA who’s pro-reproductively viable worker ants, even though I’m a liberal feminist who doesn’t give a shit about ants. (I think I may have broken the metaphor, there.)

The obviously correct action is not to defend the evil outgroup who I have now taken the effort to humanize in my own mind. Or at least not to be mad at members of my ingroup who haven’t chosen to make that effort.

I seem to have strayed some from the plot here, but oh well. I’m going to post it anyway, because if I posted it on facebook, it would be mistaken for a status ploy. (I’m more empathetic than you are. Ha!)

I think to succeed at all at this, you have to establish your bona fides as a liberal feminist, or whatever, before you can pull a Nixon goes to China by acting against type in empathizing with the outgroup.

The chief obstacle you’ll run into is the “even the liberal New Republic” or Bruce Bartlett/David Frum problem. It was sort of interesting at first when “even the liberal New Republic” magazine took a conservative position, or GOP partisans Bartlett and Frum took Democratic positions, but eventually all that happened is that people on the left lost respect for the New Republic and people on the right lost respect for Bartlett and Frum.

Likewise, what may eventually happen even if you first establish your liberal feminist bona fides is that people may decide that you’re not a *real* liberal feminist because you don’t hate the outgroup enough. Perhaps it’s a kind of social capital: you can empathize with the outgroup once, or maybe twice, but if you keep being argumentatively civil toward them, people will just see you as a quisling (“the Alan Colmes of X”) and loathe you as a traitor.

I think to succeed at all at this, you have to establish your bona fides as a liberal feminist, or whatever, before you can pull a Nixon goes to China by acting against type in empathizing with the outgroup.

Yeah. Unfortunately, in the stupid social media world I live in, you establish your bona fides by reposting thoughtless snark against the outgroup. Tribal signalling uber alles, with lack of signalling being suspicious, and actually extending empathy towards the other side being a full confirmation that you were always with them in the first place.

So my best bet seems to be to give the blue tribe the finger, and hang out with the grey tribe, who seem less likely to demonize me for seriously considering that my opponents might have a good reason for believing what they do. Or just not talking about political things. Ever. (But see my inability to put up with thoughtless snark once I’ve empathized with someone. Argh.)

I’m not sure the grey tribe is actually better, mind you. They just seem to be less likely to declare you to be the Great Satan for considering opposing memes. I’m actively looking into this.

Since my root concern here is empathy, the best view is that I’m not sure that the grey tribe is actually more empathetic. (So, better at empathy.) However, if it is true that the grey tribe will not attack you for having empathy with the outgroup, then it’s certainly better. (Here, better = better for me to be in.)

However, I’m not sure. This may just be a case of who counts as the outgroup. The grey tribe is willing to put up with worker ants and MRAs, as long as they can say that they came to their opinions rationally, but I suspect that deontologists are probably not very welcome.

I’m not sure it’s possible to have a group that allows expressing a significant amount of empathy for the outgroup. Is going grey just switching sides to a coalition that contains the worker ants? Is that a bad thing?

How would I be able to tell if a group was pro-empathy for outsiders? Is that even possible?

This may just be a case of who counts as the outgroup. The grey tribe is willing to put up with worker ants and MRAs, as long as they can say that they came to their opinions rationally, but I suspect that deontologists are probably not very welcome.

I think you might be conflating our weird little initiatory warrior society with the Gray tribe at large. Deontological thinking isn’t uncommon among Grays — it just isn’t the Reds’ Christian traditionalist deontology or the Blues’ social deontology.

That’s true to some extent, though maybe not to the extent you’re thinking. Some highly respected LW people (e.g., Alicorn) are deontologists. And from the effective altruism survey, Peter Hurford reports: “there were more EAs who took our survey that have non-consequentialist moral philosophies than there were EAs who took our survey that were women”. Though maybe that says more about our gender diversity than our ethical diversity.

Rather than gaining status by refusing to empathize with outgroups, I suggest trying to gain status by empathizing more with ingroups. In particular, the way to do this without saying ‘[group X] is right in all their disagreements with Outgroup’ is to seek out internal interactions and analysis within/between ingroups.

Basically, model participation in your ingroup as ‘research’, not as ‘debate’. If you want to fit in with Catholics without siding with them on contentious issues, express an interest in and curiosity about theology and Church history, and act on that interest/curiosity. If you want to fit in with anti-racism activists, try to learn about the experiences and perspectives of racial minorities, cite minorities’ words, learn relevant jargon and history, etc. The same should hold for rationalists.

I suspect the grey tribe is better along that axis (and several others) in large part because it *has no power*. If they started winning elections THEN they could get away with casually demonizing other tribes – that’s something one does from a position of strength.

The thing *I* like most about the greys is that they tend to be more honest and sensible in what they say. That, too, is a result of being small and powerless.

(If grey politicians and pundits had much chance of actually WINNING ELECTIONS or influencing policy, I’m sure they’d start falsifying preferences to the degree the other tribes do.)

I don’t remember exactly, but the general feeling was that she was most of the way there and needed either a couple of years of normal intellectual development or some particular combination of personal circumstances which may arrive later in life. I’m not comfortable talking about it beyond that, though.

I am generalizing from one example here so take this with a planet sized bucket of salt.

I have spent years arguing in favor of unpopular opinions (as have many on this site). This sounds bad and crazy, but I finally found a way to freely argue my beliefs without losing status or friends. Whenever someone insults me or misstates my position, I immediately and viciously attack their intelligence and rationality. If their attack was really unfair, they must have said something nonsensical. I will pounce on this, expound on the vast foolishness of their errors, and argue that talking to them on this topic is a waste of my time. I am only interested in people who are capable of seriously considering the issues.

The ideal result is that onlookers get two impressions. The clear one is that if you attack me, I am going to swing back hard. It doesn’t matter if you win every fight as long as you hurt your attacker; the goal is to discourage aggression towards you. And frequently humans will perceive you as intelligent and high status for being dismissive toward your opponent. A mental advantage of this strategy is that, while you should never say anything false or misrepresent the facts, once you have drawn your sword you no longer care at all about being polite or kind to the opponent. While they may be less kind than you in general, they may not realize that you have fully taken the gloves off.

It’s possible I only adopt this attitude because I have gotten hurt so badly by people for expressing my views. And one should be very, very careful not to be too quick in deciding someone is worthy of attack. But while I would never do this on LessWrong (or similar places), I don’t think the wider world is safe enough for people to disarm. It’s a sad state of affairs.

Examples of good comments:

“That is not what ze said (briefly explain why). You seem to lack basic reading abilities. Perhaps you are better off staying silent when people who can read are speaking.”

“A decent person would not take such a strong stance without a reasonable understanding of the arguments on both sides. Given that you just said/did (Stuff) you have no idea what this argument is about. Please be quiet.”

“That source is obviously bullshit (for X reason). I have better things to do than talk to people who just spew misinformation if it supports their side. Maybe try to learn to be less biased. Though I doubt you are capable of this, so maybe just give up.”

You read one thing whose context you have no idea of, and your response is a petty denunciation and a posturing attempt to anchor the conversation to what you and your friends like, hoping thereby to invoke the vestiges of some abject insecurity that the effigy for your posturing, who is in fact a real person, might have had abused into them.

Sadly, I have lost enough social status in the circles where this is an issue* that when I say, “have you even done the research? It’s clear that you haven’t.” They can just respond, “yes I have,” and I look like the asshole.

Maybe if I cultivate a zero tolerance attitude they will be less likely to engage, but then I just look like a bitter, nasty, conservative MRA worker ant.

I’d like to retain the ability to discuss things with people who are not fully prepared for battle.

It helps a lot to have solid theoretical models to use. Read a lot. For example, Excluded by Julia Serano has a lot of insight into this stuff. Likewise, this article by Katherine Cross. (Read her whole blog.)

It looks like this: I might confront a feminist who is on a “fat smelly neckbeard virgin” rant by pointing out how that is an awful thing to say according to feminist theory. Since when do we fat-shame? Virgin-shame? Care about how someone chooses to display their body hair? Is this what feminism teaches?

These arguments are easy to make, cuz they’re true.

Next I confront the notion that I am centering the menz. I agree as feminists we do not want to center men. So let us not do that. In fact, we can simply leave these men out of the current debate. There is no reason to target nerds-in-general because of the actions of a certain (really terrible) subset of nerds. This mess is not their fault.

Then out with the big guns: point out how the actual folks pushing the worker-ant-agenda vary in their nerdy-neckbeard status, and that we should confront their anti-feminism and misogyny directly instead of attacking social and physical features that are shared by people who are not worker-ants.

One more thing, do not try to confront someone right in the middle of an angry hate-rant. They will not listen. Choose when to speak. Often you should not try to engage those people directly, but publish your thoughts in a more general way.

Also, some people are a lost cause, at least right now. They are on a path. This is where they are today.

I agree with the sentiment, but, speaking as someone who’s been observing worker ants for a couple of months now, I think this is still partly based on partly inaccurate information.

I really like this analysis, and will link to it instead of making controversial claims here. It matches my observations very well, even though it’s coming from someone who’s been watching from the opposite side, which raises my confidence in my conclusions. Of the information I could add about the ants, the most important is that they have already identified several of the unaffiliated, legitimately third party trolls who’ve been responsible for probably most of the really terrible behavior, including groups like the GNAA, who’ve openly bragged about impersonating the ants (that this kind of thing isn’t more widely known adds to their confusion and occasional tinfoil hat behavior).

While you are totally correct in all your points here, I feel like you may be addressing the wrong problem. I’m not specifically interested in schooling alleged feminists about how their behavior towards worker ants is actually anti-feminist. (It is, for all the reasons you’ve noted.) Rather, I, in trying to understand the worker ants, have been practicing empathy towards them, and now see them as people reacting in a normal fashion to their circumstances. (Normal, not rational.) Then, my friends accuse me of being a worker ant myself.

I mean, I’d love for them to stop being nasty mean-spirited people. But what I really want is for them not to assume that I must be a member of the group they’re targeting, just because I don’t hate them enough.

I remember reading _Black like me_, and the copy I had had a quote about how the author was vilified after it came out, and people “said he had been secretly black all along.” Like seriously, once someone had expressed empathy for people of another _race_, it was assumed that he must be a member.

Part of the ethnic tension antipattern seems to be the totalizing part of it. You can’t be neutral — any sort of empathy for a side means you must be a member of that side! It’s terribly frustrating.

(Also, I’m sure if you wrote a blog post titled “How the opposition to worker ants is anti-feminist,” you’d be very popular among the worker ants, and get anathematized by the internet feminists. You could be the next Christina Hoff Sommers just by spending an extra hour on your post.)

You sound like you are bothered that they are doing something illogical, inferring that you are an ant. But do they really believe that? Probably they just mean that you are evil.

Why are you practicing empathy to understand people you disagree with? Maybe you should start closer to home, and understand your “friends,” with whom you ostensibly agree on a political program but disagree on tactics. (An advantage to your approach is that it may be easier to understand people further away because you are disinterested.)

(Warning: abortion is apparently less controversial than ants. I can’t even.)

I understand the reasons why the people I agree with feel the way they do, because I feel that way too. Am I “practicing empathy” when I agree with my liberal echo chamber that legal abortion is an important part of a woman’s right to choose if she wants to be a parent or not? I mean I feel all the same feels that make me think that’s a reasonable argument. I have the same values. It’s a damn easy kind of empathy to empathize with people who you already agree with and share values and experiences with.

I have to use empathy to understand those who I disagree with. When someone tells me that abortion is unacceptable, I have to work hard to put myself in their shoes. I have to remind myself that this is another human being with feelings and values that are just as valid as mine. I have to seek out their values and feelings and understand how they get to their conclusions from the steps of goodness, rather than assuming that they’re just an evil mutant.

Empathy for my allies is easy; automatic, even. Empathy for outsiders is much harder.

I don’t really care that it’s illogical. I care that it hurts me. And I think that we should encourage empathy, not discourage it. I’m upset that they seem to value tribal unity over empathy. If they think I am evil because I’m willing to do the work to extend empathy towards outsiders… well, I’ll think they’re evil right back for valuing unity over empathy.

I guess I would be less upset if they said, “I hate you because you’re extending empathy towards outsiders” than when they say, “I hate you because you’re secretly a conservative.” If they were honest about it, I could at least write them off as assholes.

This seems to me to be a relative of punishing non-punishers. But rather than explicitly say you’re going to punish non-punishers, they say that non-punishers must be secretly members of the root group we want to punish. Like, the Quakers are all secretly Communists? That requires a pretty twisted world-view.

Obviously, I’m failing to have empathy right now, because the best explanation I can figure out for the behavior is “mindkilled by bias”. That explanation sells well in areas associated with OvercomingBias, but does not pass the test of understanding how you could reach the conclusion without being an evil mutant. So I’m not happy with it.

I meant your last paragraph. The word “frustrate” suggests that you don’t understand your allies. That’s important because (as you demonstrate) your allies can hurt you more than your enemies. If you did understand them, you would act differently and elicit different reactions from them.

. . .

Actually I think introspection is really hard, so no, I don’t think that you’re practicing empathy on them/yourself for your points of agreement, but I don’t want to have that conversation. In any event, applying empathy to your disagreements with your allies and enemies both sound easier.

obMM: No, the Quakers are not secretly Communists. It is the Communists that are secretly Quakers.

I place a high value on empathy in general, and on understanding my enemies rather than demonizing them. My allies seem to prefer demonizing to empathizing*, which is frustrating, because of the values gap.

Introspection is hard. Empathy is hard. “Moar Empathy” seems like a good solution, in general.

* This is a very un-empathetic evaluation, and I would like to replace it as soon as possible.

It seems to me that I should start with whoever I’m in front of, right this very moment. It’s not fair to my allies to skip empathizing with them just because we already agree, but it’s not fair to outsiders to say, “sorry, you’ll have to wait until I’ve empathized with all my friends first, before I’ll take you seriously.”

None of which helps with the “I saw you be empathetic to an ant, I’m going to round off to the nearest misogynist and shun you” problem.

I’m not saying to empathize with your allies because you agree, but because you disagree. I’m saying to empathize with their reaction to your apologetics. Also, your allies often are in front of you. You tried starting with your enemies and you got burned. Understanding your allies to head off future danger is a valuable step towards empathizing with your enemies. Maybe that’s not “fair” to your enemies, but… hey, your username suggests that you’re too concerned with that word. Well, you are special in that you’re the only one you can control and that has to shape all of your actions.

Yes, the borscht is a reference to Scott’s spy dialog. One anonymous person trying to figure out if another anonymous person is actually a real person that they kind-of know. Your style (and points) in the above conversation reminds me strongly of someone I know in meatspace. I figured it was worth checking.

“You’re A for being B” often just means P(A|B) is very high. You got so close by suggesting a high P(A|B) lends naive plausibility, but you never stopped to consider that P(A|B) is what the statement actually means rather than just that P(A|B) might support it (Substitute in “anti-Semitic” and “end the occupation” if you wish). Related, it may also mean “the reasons people typically have for B would lead to A as well”. Other possible interpretations are “the movement to do B is infested with people who want A and you support them or are willfully ignorant about them” or even “the vocabulary you are using to describe B is an applause light for A believers”. Don’t try to parse natural language literally; “You’re A for being B” is rarely if ever meant to be “B logically implies A”.

And clearly someone who wants to end the occupation because he had space aliens directly tell him to end the occupation and do nothing else is not anti-Semitic, but in practice, not many of the people wanting to end the occupation fall in this category.
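The Bayesian reading above can be made concrete with a toy calculation. This is purely illustrative; the base rates are invented numbers, and “A” and “B” stand in for whatever trait and position you like. The point it demonstrates is that P(A|B) can be far above the base rate of A without B coming anywhere close to logically implying A:

```python
# Toy Bayes calculation for reading "You're A for being B" as "P(A|B) is high".
# All numbers below are invented for illustration only.

p_a = 0.05              # assumed base rate of trait A in the population
p_b_given_a = 0.90      # assumed: people with A almost always endorse position B
p_b_given_not_a = 0.10  # assumed: a minority without A also endorse B

# Law of total probability: P(B) = P(B|A)P(A) + P(B|~A)P(~A)
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

# Bayes' rule: P(A|B) = P(B|A)P(A) / P(B)
p_a_given_b = p_b_given_a * p_a / p_b

print(round(p_a_given_b, 3))  # prints 0.321
```

So under these made-up numbers, learning that someone endorses B raises the probability they have A from 5% to about 32% — a big update, and enough to drive the accusation — yet roughly two-thirds of B-endorsers still don’t have A. Which is exactly why “B logically implies A” is the wrong way to parse the sentence.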

“Remember, Jonathan Haidt and his team hypnotized people to have strong disgust reactions to the word “often”, and then tried to hold in their laughter when people in the lab came up with convoluted yet plausible-sounding arguments against any policy they proposed that included the word “often” in the description”

– I am **sure** I wouldn’t fall for that; I’d love to try it. How strong was the effect btw?

> The world is a scary place, full of bad people who want to hurt you,
> and in the state of nature you’re pretty much obligated to engage
> in whatever it takes to survive.

Exactly. In your social Bubble, you can be concerned with how you “determine and discuss negative patterns of argument.”

But in the world, far nastier things are a concern.

Step out of the Bubble and ask how we can argue more fairly about whether ISIS is a good thing or not. We could consider whether we are treating ISIS’s perspectives with a mindless and unfair tribalism. But that would be a Bubbly approach. An approach more suited to ISIS’s own mindset (if you choose to deal with them) is to fight them to the death and leave out the debate over the moral value of sex slavery. It is a real luxury to have an enemy that you can actually debate with.

In fact, your analysis of discussion about Israel is not about the actual conflict over here; it is about how residents of the US and other countries talk (and talk and talk) about it, possibly because they consider Israel to be inside the Bubble and so worth this sort of discussion.

But instead of sticking with the state of nature, we have the ability to form communities built on mutual disarmament and mutual cooperation. Despite artificially limiting themselves, these communities become stronger than the less-scrupulous people outside them, because they can work together effectively and because they can boast a better quality of life that attracts their would-be enemies to join them. At least in the short term, these communities can resist races to the bottom and prevent the use of personally effective but negative-sum strategies.

One of the great delights of my childhood was Star Trek paperbacks. One of them, the title of which I sadly can’t recall, was a bunch of vignettes about the Starfleet Academy adventures of the original series crew. In one tale, Cadet Kirk and his classmates are sent to a space station with stun-only phasers to play an “assassin” game: one randomly (and secretly) selected cadet can only win by stunning everybody, whereas everybody else wins by surviving. Kirk blocks off all but one of the doors to a cafeteria, and only lets in cadets who leave their phasers outside. The walled garden of civility Scott advocates reminds me of that. (If I were nerdy in a different way, it probably would’ve reminded me of Hobbes’ Leviathan and the state monopoly on legitimate force instead.)

Of course, whether entering that cafeteria was a good strategy for any given cadet depended on whether Kirk was the assassin or not, and if so whether he’d left his phaser outside. I honestly can’t remember how the vignette ends, but I think it was with Kirk not being the assassin and his walled garden saving everybody. Here, I think the equivalent to Kirk being the assassin and a rationalist walled garden turning out poorly might be the whole “phyg” thing—not that I’m saying that it is a phyg, just that I think that would be how my analogy cashes out.

Honestly, the image of Scott’s advocated rationalist walled garden saving the argumentative day like Cadet Kirk warms my heart, even if it does seem utopian. I am very much a well-wisher toward a rationalist “Kirk’s cafeteria” strategy.

As such, it’s acceptable to try to determine and discuss negative patterns of argument, even if those patterns of argument are useful and necessary weapons in a state of nature. If anything, understanding them makes them easier to use if you’ve got to use them, and makes them easier to recognize and counter from others, giving a slight advantage in battle if that’s the kind of thing you like. But moving them from unconscious to conscious also gives you the crucial choice of when to deploy them and allows people to try to root out ethnic tension in particular communities.

Well, one of the things that impresses me about rationalists’ fight for Scott’s “Elua” values is having the good sense to investigate a kind of atheist ecclesiology to see if any useful tools can be acquired from earlier movements’ past practices. Since helping your side understand opponents’ tactics without sinking to their level isn’t a new ecclesiological problem for reformists of any stripe, I offer this snippet of the Bible, with its nice redolence of game theory jargon; it obviously isn’t offering exactly the same advice as Scott, but reminds me of Scott’s advice quite a bit, and the pretty language might make a nice rhetorical tool:

Behold, I send you forth as sheep in the midst of wolves: be ye therefore wise as serpents, and harmless as doves.

For the first cadet to enter the cafeteria, it’s a gamble. For everyone else, they can see that Kirk hasn’t killed the prior cadets, and they can see that there are many cadets in a public room, not all of whom can be the assassin, who could assault Kirk and take his phaser if he suddenly began stunning. (Or so I would imagine, it’s been at least 15 years since I read that story.)

The best way to break the initial deadlock would probably be for Kirk to loudly explain to several cadets at once that, if they are not the assassin, they should all be glad to publicly volunteer to form the initial block with him. I don’t recall that being in the story though.

Same. Not that I want a car. I can’t drive and would have nowhere to put it. But the idea of being able to download a car is just too cool to pass up in favour of “sensible decision making”, “being an adult”, “not destroying the lounge” etc.

And in reply to the global warming example, I’ll say something I said last time: your attempt to rephrase global warming in a conservative way doesn’t work because the current situation is not that liberal positions are used to justify belief in global warming, it’s that global warming positions are used to justify liberalism. The arrow is in the opposite direction from what you think it is.

Actually rephrasing global warming in a conservative way would mean using global warming as justification for doing conservative things, like “global warming is bad, so we should encourage India and China to become Christian, which would give them motivation to stop destroying the earth” or some such.

the current situation is not that liberal positions are used to justify belief in global warming, it’s that global warming positions are used to justify liberalism.

Agreed. The tribal loyalty precedes the policy position. That’s part of why using less oil as a way to defund jihadism, despite being an interesting take on things, never really took off except as a rhetorical cudgel for people who were already on the left to use against SUV drivers and their ilk.

I interpret “are you pro-Israel or pro-Palestine” as “do you think that, overall, the Israel or the Palestine side has better arguments and better goals”, to which I would then give an answer. It’s not a meaningless question. But when Scott asked it it was clear he was presenting it as a meaningless question and couldn’t figure out what people actually mean by it.

While this kind of posturing is common (especially in these circles), it is almost never true, and when it is, it belies total ignorance, not tolerance.

In reality, informed adults are incapable of excising their secret allegiances. Yes, there may be groups on both sides you dislike, but deep down, there is one which is YOURS and another which is NOT. There is one group which you consider to be the real source of the problem, and one which, while it has its assholes, is basically correct. Ingroup-outgroup. This is psychology so basic they were writing political tracts about it a hundred years ago.

The problem for “rationalists” (And I loosely include myself here) is that the rational part of our brain does not make our decisions for us. Our irrational midbrain does, all fear and sex and tribalism. These decisions are then sent to the forebrain to get a “rational” justification. It is physically impossible for a human being to be truly rational. To attempt it with “free will” and “logic” is to try to alter the course of Jupiter with a toothpick.

And this criticism must extend to Scott’s utopian ending of the OP. The same tribalism which infects every other human also extends to those who think deeply and intelligently about the human condition. This is why we have the same basic problems we always have had. If being smart and logical solved everything (or anything at all), Socrates would have handled it.

I read Cadmium as saying “far less polarizing for most non-Jewish non-Muslim Westerners”, whereas Tarrou seems to be claiming that it is mind-killingly polarizing.

(If Tarrou was also just claiming that *some* issues were mindkillingly polarizing but not Israel for the average westerner, then Cadmium may have misread him. I agree with Cadmium’s reading, though I’m less sure about his objection)

It has nothing to do with the polarization of the subject. Every subject, once understood in the vaguest terms (and let’s be honest, few humans ever get past the vaguest of understandings) is interpreted by the midbrain in tribal terms.

Our brain has decided. We may make a huge show of being even-handed, ostentatiously pick apart each side for faults and praise their strong points. Try desperately to prove our “objectivity”. But at the end of it all, baseline, gun-to-the-head make a choice time? We’ve all decided the minute we heard about the issue. It really is as simple as blue and red. And yes, those categories are meaningless, but the construct they symbolize is all-consuming.

Who do you want exalted in society? Who do you want crushed, driven before you, forced to do your will? These are the questions that matter. Because they will predict the answers for everything. Unfortunately, they are also the least likely to be answered honestly.

In the so far almost complete absence of environments which don’t go out of their way to destroy people’s minds, it’s more in line with the evidence to classify those tendencies under anthropology than psychology.

Somebody says God doesn’t exist. Another person objects that God is just a name for the order and beauty in the universe. Then this somehow helps defend the position that God is a supernatural creator being.

You’ve used this example so many times that I finally have to ask: is this a thing that actually occurs in the wild? Is this among the more serious apologists, or youtube comment level?

I’ve never seen it myself (which, admittedly, doesn’t mean a whole lot). What I have seen are arguments that you can’t logically believe in beauty/order/reason without ultimately believing in God, but these are ontological arguments, not definition shifts.

Some fools somewhere must have made this argument–there are plenty of stupid people. Frankly, though, it mostly reads to me as a strawman (or a weak man) of better arguments, being a variant of a strawman that usually goes like this:

Jerry Coyne: “Sophisticated theology”TM has nothing to do with the God of the Bible, who goes around zapping disobedient Bronze Age shepherds!

Some poor sap of a theist blogger: There are big thick books about how the God of the Philosophers = the God of the Bible, or at least could = the God of the Bible. Summa Contra Gentiles, e.g., tries to walk the reader through the (many, many) inferential steps.

P.Z. Myers: Courtiers Reply!

Edward Feser: Myers shuffle!

And then it just descends into more snarky tribalism. As a theist, I blame Coyne et al. for “starting it.” But there’s probably a more sympathetic reading of Coyne that I’m missing.

I get annoyed by the “Bronze Age” tag since (a) that seems to be regarded as the height of wit (b) it’s mere chronological snobbery and proves nothing; it’s as much to say “Pshaw! Why should I be bound by some Bronze Age shibboleth about ‘murder’ being ‘wrong’? That may have been all very well back in the days when Ug hit Thog over the head with a rock, but I now have access to semi-automatic weapons with which I may shoot up a school and murder tens of my disliked and loathed fellow students and teachers, therefore by the magic and mystic passage of millennia between Ug and me, I no longer am fettered by these outworn notions!”

You’re correct, I was ignoring the mistake in historical periods. I suppose that is an artefact of me being so hardened, at this stage, to seeing things like “The Renaissance brought Europe out of the Dark Ages” and being too tired to correct that no, Mediaeval Period is not the same as Dark Ages for the 5,000th time.

Don’t get me started on the Library of Alexandria posts and some goddamn awful graph some idiot pulled out of their arse about how we’d all be colonising multiple star systems by now if it hadn’t burned down.

The historical illiteracy makes me pull my hair out, but what can you do when the nearest notion of what “the past” was like is something they half-remember out of “Monty Python and the Holy Grail” and they sneer at history as one of the useless humanities subjects, unlike Real Manly Men Hard Science STEM subjects?

Which reminds me: those who say biology is for girls and it’s not ‘real’ science, why yes – you are correct. Biology is for girls; Irish girls who may be in some way responsible for feeding the planet fifty years from now.

I don’t know if such a thing exists on YouTube or similar, but something similar certainly exists within my family. Having been told that belief in God is just semantics over a particular emotional state so I must believe in God really, the very same family members are getting very worried that another family member will get very distressed soon and won’t cope with dying because they are an atheist (they’re not ill, just over a hundred years old).

The motte is that god is an emotional state and the bailey is that an afterlife exists.

Somebody says God doesn’t exist. Another person objects that God is just a name for the order and beauty in the universe. Then this somehow helps defend the position that God is a supernatural creator being.

I have to disagree here, because in my (limited) experience, the types who go “Oh, no, no, ‘God’ is just a name for the order and beauty in the universe” are generally dumping the supernatural out of religion as fast as they can and ending up with an ethical system predicated on Niceness.

The ones who try to get to the end-point of “supernatural creator being” start off with “There is order and beauty in the universe, do we agree?” and then go step-by-step to “This order and beauty comes from a supernatural creator being” (not “is a supernatural creator being”, which would be at least panentheism and more or less heretical according to how strongly it is held, and in its strongest form probably would be pantheism).

I think this underestimates the number of people who just don’t have particularly coherent beliefs. Sometimes you sit next to someone on a plane who believes in god and x, y, or z mainstream Christian doctrine, but they’re just not dogmatic, you know? We all worship something and let’s just be friends. They’re not dumping the supernatural bits of religion, but they’re not particularly interested in them as substantive beliefs either.

Oh, there’s certainly a share of nice people with Vague Deity Concept who are all “you believe in Buddha and I believe in Christ and he believes in Krishna and they believe in Mohammed and let’s all just agree there are many paths up the mountain”. I have little quarrel with them, though that doesn’t stop me being snarky at times.

The ones that drive me up the wall are the ones who cling on to something residual they are calling Christianity but who are so busy spinning in circles with one eye on the Spirit of the Age so they can discard anything that may seem too difficult for the Modern Person to believe, that they end up in a position where I go “So why bother remaining a cleric or bishop? Why not write a nice series of popular pabulum along the lines of ‘Chicken Soup for the Soul’ only a bit more high-minded, and leave religion alone?”

Also, they tend to latch on to just the latest but one scientific idea, so that they’re always that bit slightly behind the curve even when they’re talking about “post-Copernican universes” and what-not. So they still exhibit what I’ve seen called “Like X, only Christian!” when it comes to making tacky Christian versions of the latest (five years ago) trend in pop music, or movies, or expressions of popular culture, only they tend to do it with science.

That’s probably my personal make-up, though; I have to have the lumpy, thick bits in religion – the supernatural. A nice, clear, thin ‘all the gristly bits strained out’ liquid religion is not for me. It’s either the supernatural or atheism; I’m not smart or sophisticated enough for a ‘reasonable’, ‘rational’ religion 🙂

I’m also probably too literal-minded to be able to get the mindset of those (who are probably perfectly sincere in their convictions) when they talk about the uses of ritual and liturgy and the rest of the historical apparatus of the Church as symbols and poetry and inspirational something-or-other but without needing to believe all that old-fashioned stuff anymore. It seems pointless: if you want symbolism, there is plenty of poetry and secular rituals you can adapt that are better suited to the times than “let’s use the Eucharist as a metaphor for incorporating all humankind into sharing one big family meal”.

It’s youtube-comment level, but you’d have to go somewhere relatively quiet and un-argumentative to find it. I often find comments to the effect of “this is so beautiful how can anyone doubt the existence of god” on sacred music videos, and I think they’re mostly serious. It’s not exactly a definition shift, but it is changing the subject.

“The god atheists don’t believe in — I don’t believe in him either” is an actual religious slogan. Originally said by an orthodox rabbi, IIRC. I don’t think the original speaker (or most of the people quoting him) expected prayers of healing to actually heal the sick. On the other hand, he probably did refrain from shellfish and mixed fabrics.

One of the main examples in this essay doesn’t actually fit into a simple one-dimensional left-right axis.

In America, Palestine-Israel matches left-right. This might make you think that antisemitism is left-wing, but it isn’t. As you say, people try to attach the antisemitic label to anti-Israel sentiment, but people who are convincingly labeled antisemitic are right-wing. (I think the same is true in Europe.)

It’s fine as an example of the topic of the essay, but when you suggest that these micro-dynamics lead to the macro-phenomenon of a left-right axis, it doesn’t fit.

“[C]onvincingly” is doing a lot of work there. I think people who found “they hate our freedom” to be a plausible explanation for 9/11 would presumably find charges of anti-Semitism flung at people on the left to be plausible and (not unrelatedly) pleasant to contemplate.

I don’t understand what you’re trying to do with this essay. That people think in this way is highly obvious. I’m disappointed because I generally come away from your arguments with several new ideas and several refinements on old ideas. I’ve come away from this with nothing except a little less free time.

Another prediction of this model is a point you’ve made recently about the Koch brothers. I think it’s worth reiterating.

Keeping track of correlations in beliefs is hard, so if you start out with a mild belief that the Koch brothers are evil and that global warming is a very important issue, then hearing that the Koch brothers fund research critical of global warming will reinforce both beliefs. Since you don’t track the correlations, you don’t realize that your strong beliefs are circular.
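The circular-reinforcement mechanism described above can be sketched as a toy simulation. Everything here is invented for illustration — the priors, the learning rate, and the update rule are mine, not taken from any actual cognitive model; the point is only that re-using one piece of evidence whose force depends on the other belief pushes both beliefs up together.

```python
# Toy sketch of belief updating that ignores correlations between
# evidence sources. All numbers and the update rule are invented.

def update(belief, evidence_strength, lr=0.1):
    """Nudge a belief in [0, 1] toward 1, in proportion to evidence_strength."""
    return belief + lr * evidence_strength * (1 - belief)

koch_evil = 0.6          # mild prior: "the Koch brothers are evil"
warming_important = 0.6  # mild prior: "global warming matters"

# One piece of news: "the Kochs fund anti-warming research". Its
# persuasive force for each belief is proportional to the *other*
# belief, so the same item gets double-counted every time it is
# recalled -- and nothing tracks that the updates are correlated.
for _ in range(10):
    koch_evil = update(koch_evil, warming_important)
    warming_important = update(warming_important, koch_evil)

print(round(koch_evil, 2), round(warming_important, 2))
# both beliefs end up substantially stronger than either prior,
# on the strength of a single (recycled) piece of evidence
```

Running it shows both beliefs climbing from 0.6 toward certainty, even though only one fact was ever observed — a minimal picture of the circularity the comment describes.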

Arguably- since our brains are based on a connectionist technology (neural networks), this sort of bias is impossible to *not* have. It’s not really a bug, or a feature, just a completely natural implication of our underlying architecture.

(Granted, the degree to which we suffer from this is partially under our control and should be minimized.)

I wonder if it’s possible to somehow exploit that to fix the current problems. Form a lot of positive connections with disliking tribalism, or something like that. That could create new problems, of course, so the question is which is worse?

Scott I apologize for my earlier charge against you. I no longer think that you are engaged in a political battle against the SJ movement. I think you have successfully steered clear of the sort of vagueness that you are talking about here.

A couple times I’ve noticed that I instinctively trust characters in books who have the same name as my best friends. The same thing can also happen the other way around, too, when I meet someone who has the same name as my favorite character in a book. The effect is generally diluted over time for common names, so I notice it most with fairly rare names.

It’s easier to see it happen with names, but I wonder if the same thing can apply to words that have multiple meanings? Say, if someone is struggling in calculus, will they have a more negative view of financial derivatives?

Correlational research shows that people are (slightly) more likely to marry people whose names rhyme with theirs, or start with the same letter. They are also more likely to live in a place similar to their name (i.e., there are more Charleses [sp?] in Charleston than elsewhere, more Marys in Maryland, etc.).

Choosing to move to a place with one’s own name would suggest a rather strong concern. But if a Mary or a Charles was born and raised in the so-named place, then it might suggest a concern of their parents, or whoever gave them the names. Another factor might be, that Charles and Mary are ‘old-fashioned’ names, and Charleston and Maryland are Southern areas, where names are likely to be handed down from older relatives.

Disclaimer: I have no idea who Lena Durham is or what causes she is associated with, but my own opinion is that if you are a person in the public eye and you plonk a revelation like that down on the table (if I am getting the story right, that she did some quasi-sexual experimentation with her sister when they were both young?) in the way she did – that she phrased it in the language of “grooming”, as used when talking about how paedophiles “groom” those they intend to abuse – and when you deliberately invoke that kind of comparison, especially when child sexual abuse is the one sin the great majority of society, no matter their political, religious or other views, all consider is the one thing (or one of the very few things) that can really be called a sin and that is really evil – then you are inviting people to take sides and express opinions.

I really wish that the reaction of the public at large to this kind of baiting was to yawn and ignore it on the grounds of “Well, who cares what she did when she was seven, for heaven’s sake?”

But really, if you come out with something that can be made sound like “I planned to sexually abuse my little sister”, and you are (as far as I can make out) someone who is savvy about the media and publicity, what else are you doing but being deliberately provocative and hoping to evoke and stoke outrage?

In this and many other scandals, I think the timeline of how it unfolded is important to understanding it.

The initial article which pointed out Lena’s “I found rocks in my sister’s vagina” story gave her age as 17. I suspect this was a mistake, and at any rate they later corrected it to 7. Had it started with 7, people may well have brushed it off as “kids do crazy things” (I’m not sure. I’ll get back to you when I have a 7 year old and a good idea of what’s within the normal range of behavior.) If you thought she did this at 17, then yes, clear sexual abuse. By the time the real age came out, sides had already formed.

But! there were other anecdotes in the book, which also came out as a result. For example, that she bribed her sister to kiss her for extended periods, which she herself described as “anything a sexual predator might do to woo a small suburban girl I was trying.” Which, yeah, if she were an older brother rather than sister, nobody would hesitate to say she abused her sister.

So it’s starting to look like the original accusations were fake but accurate.

(For bonus points: Throw in the cultural baggage of conservatives saying gay acceptance is the gateway to acceptance of incest and pedophilia. Observe who is trying to brush off Dunham’s incest and pedophilia. Fitting a pre-existing tribal warfare narrative is great way for a story to gain traction.)

Suppose a rock star wrote a memoir wherein he describes fucking an unconscious groupie. In particular, he details the lecherous nature of his actions. He goes on about how when the groupie learned about what happened they were okay with it. People call him a rapist despite it not being the typical rape. He has outspoken political positions so tribes take sides.

Do you think this would be pure worst argument too? From what I’ve seen a lot of people really think what she did was abuse and caused by her family’s perverse culture.

As I said in the original, I’m confident there would be no less outrage if Bristol Palin had revealed some odd sexualized things she did with her siblings as a child.

—

I guess the question is what role Dunham plays in our culture. Is she holding herself out as an exemplar of how we (or at least young women) should manage our romantic lives? I’m not so sure, since, though I’ve never seen the show, my understanding is that it also exposes the downsides of some of these behaviors.

In short, even if we stipulate that Dunham is guilty of molesting her sister, I’m not sure what that proves. It seems to be a kind of Underpants Gnome Political Plan.

I don’t think most people are trying to prove anything. It looks more like people getting off on shared disgust reactions to me. A rationalist thinking that people must be trying to prove something may be the typical mind fallacy at work.

In 2012, the Democrats were effective in associating Republicans and specific policies with a War on Women.

They tried to do it this time around again, and seem to have failed.

Did people get smarter to it this time around? Did the non-WOW based reasons for those policies take that long to get through? Did the Republicans make substantial shifts in policy that made this no longer operable?

Or, was this just replaced with this year’s flavor of association (“Obamacare?”), which will in turn be replaced by something else in 2 years?

Did the Republicans make substantial shifts in policy that made this no longer operable?

Sadly, I don’t think people wised up to the fact that the Republicans aren’t emotively anti-woman, but simply have religious beliefs that as policy disproportionately affect women. There were high profile gaffes regarding women by Republicans in 2012, mostly involving bad word choice but also some ignorance. There just don’t seem to have been any gaffes nearly as juicy this time around as, say, Todd Akin’s. Either way, yes, there’s a shelf-life. Outrage has to be served fresh for maximum tribal enjoyment.

Impressive essay! The model you’ve constructed bears close resemblance to Milton Lodge’s and Charles Taber’s ‘John Q. Public’ model of political information processing. These two political psychologists have done all sorts of fascinating lab research on how people come to political judgments, summarized in an eye-opening book called ‘The Rationalizing Voter’. With diagrams!

The process of attaching emotion to symbols is usually seen as the search for meaning. Abstracted to a 30,000 foot level, seeing argument and rhetoric this way makes it easier to talk about the really stupid ideas that other tribe has. Especially the day after an election, when the rah-rah stuff is particularly annoying.

Meaning must be much the same thing as ethics, morality, values. Basically a bunch of wetware- genes and gene expression, chemicals, and neuron connections. I find that comforting.

We probably couldn’t have mind, self-awareness, mental flexibility ( whatever they are, acknowledged here to be clouds of meaning ) without the emotion and meaning.

Autonomous robots are going to have to mimic this somehow, in a simple way, even though we don’t understand what the process is. Granted, there is also the statistical AI approach to learning, so maybe not.

often-noted, but good for me to remember:

The attachment of emotion has a paleolithic feedback loop and our modern condition is orders of magnitude more complex and abstracted from our origins. Expect dissonance and errors.

The neuro-scientists have been saying that much of the emotional attachment processing happens unconsciously and that should make us wary.

Steven Pinker speculates in The Better Angels of Our Nature on an explanation for the Flynn Effect:

A shorthand abstraction is a hard-won tool of technical analysis that, once grasped, allows people to effortlessly manipulate abstract relationships. Anyone capable of reading this book, even without training in science or philosophy, has probably assimilated hundreds of these abstractions from casual reading, conversation, and exposure to the media, including proportional, percentage, correlation, causation, control group, placebo, representative sample, false positive, empirical, post hoc, statistical, median, variability, circular argument, tradeoff, and cost-benefit analysis. Yet each of them—even a concept as second-nature to us as percentage—at one time trickled down from the academy and other highbrow sources and increased in popularity in printed usage over the course of the 20th century.

Someone just seems to have put “karma mirror” in my toolbox, which on the first glance feels quite useful. I tried it on the thing which probably provoked its invention (ant jingamabob) and went from “Why am I following this disturbing spectacle like some gruelling World Cup game? Which side should I root for – the one just defending because yay underdog, or the one just attacking so the high-pitched whine finally ends? Is there a hidden FIFA we can agree is the real villain?” to “This is a lively, varied example of karma-mirroring. Ideal to teach the concept to young people…”

I was sent to this blog about a month ago via Twitter and my impression after reading a huge amount of backlist and each new post is that I came in time for its most exciting phase yet. Today’s post seems to be the payoff for, let’s say, the last year of blogging, and I’m not sure the author even knew he was doing a year of setup until yesterday…

Scott, thank you for the most exciting text I read yet in 2014. Today’s post and the ones you link to in it would make a great first draft for a very fine book.

Scott has mostly been discussing barriers to rational discussion (amongst other things), but he notes at the end of “Five Case Studies…” that he doesn’t know how to fix the problem of excessive contentiousness resulting from cultural tribalism in our society.

Sandman’s work is relevant to this problem. The piece I posted I think is apropos. I hope you will read it (and perhaps his other work.) I would be happy to offer additional references.

Again. I’m sorry for the link spam. I should have just waited for an active thread.

One of the most laughable examples of badly trying to stick some guilt onto a person/set of policies via association is the frequent media invocation that Putin is Just Like Hitler(!)™. As I pointed out on Jerry Coyne’s blog, this is mere Godwinning. Comparing the (generally just) German annexations of the Rhineland, Sudetenland, and Austria with the reunification of Russia and Krim is acceptable on its own merits. But the comparison wasn’t on its own merits; it was clearly meant to imply that Putin (who, as pointed out by Steve Sailer, is only slightly less pro-Semitic than Yeltsin) would surely annex a bunch of Ukrainians and Poles (for no particular reason), something that would be bizarre from the perspectives of many a Russian nationalist, and (in complete contradiction to his present policies) begin a policy of killing the Jews who lay Russia’s golden eggs (or at least expropriate from them the egg-laying mechanisms).

http://www.theguardian.com/politics/2014/sep/02/david-cameron-warns-appeasing-putin-ukraine-hitler
http://whyevolutionistrue.wordpress.com/2014/05/13/surprise-ukraine-referendum-a-total-farce/#comment-850437

I think the people here understand why such implications are ridiculous.

Comparing the (generally just) German annexations of the Rhineland, Sudetenland, and Austria with the reunification of Russia and Krim is acceptable on its own merits.

You could make a good argument that because they were inhabited by Germans, those territories should have gone to Germany. But that *alone* isn’t the reason for the comparison to Putin; it’s just a part of it. The comparison to Sudetenland was that not only did Germany claim the lands that were inhabited by Germans, Germany also manufactured crises concerning the German inhabitants and then used the manufactured crises as an excuse to take it over. Furthermore, Germany used the seized land as a steppingstone to take over other places that were not inhabited by Germans.

This is exactly how the game Argument Champion works! The entire “argument” is connecting positive ideas to your idea, and negative ideas to your opponent’s idea. For example: “Do you loathe BELL? Then you loathe AUDIO. Everyone knows BELL has a lot to do with SOUND. SOUND logically leads to AUDIO. That’s why I say BELL is closely related to AUDIO.”

I wonder how much the game makers actually believed in their model.

EDIT: In playing the game to get that quote, the opponent ended up arguing that Syria supports audio…
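The game's mechanic, as described in the comment above, amounts to path-finding in a word-association graph: any chain of associations from your concept to a positive one counts as an "argument". Here is a minimal sketch of that idea; the association graph and function names are my own invention, not taken from the actual game.

```python
# Toy sketch of the Argument Champion mechanic: "prove" A relates to B
# by finding any chain of word associations between them. The graph
# below is invented for illustration.

from collections import deque

associations = {
    "bell": ["sound", "church"],
    "sound": ["audio", "music"],
    "church": ["sunday"],
    "audio": [],
    "music": ["audio"],
    "sunday": [],
}

def argue(start, goal):
    """Return the shortest chain of associations from start to goal, or None."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in associations.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(" has a lot to do with ".join(argue("bell", "audio")))
# bell has a lot to do with sound has a lot to do with audio
```

Note that the search never evaluates whether any link is *relevant*, only whether it exists — which is exactly why, as the commenter found, the game can end up "arguing" that Syria supports audio.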

The problem is “friend of Israel” works differently than “friend of Bob.” But we’re still chimps under the skin, and so we act like “Israel” and “feminism” and “the Republican Party” are individuals someone could be a personal friend to.

Do I want to know all that’s good and bad about Bob, before I become a good friend of Bob? Sure.

Do I want to know all that’s good and bad about Israel, before I become a good friend of Israel? Israel is a country, not a person. Using the word “friend” will lead me to incorrect predictions, and needlessly constrain my policy choices.

I think that’s where this class of arguments comes from: our instinct to treat movements (nations, etc.) as if they’re individual people with whom we have personal relationships.

To the extent that this model is correct and Lakoff and Johnson’s Metaphors We Live By is correct, this is as deep as language. We allow transitivity in metaphorical language because experiences and concepts that aren’t immediate are otherwise incomprehensible. So you can talk about argument is war and suddenly lots of features of war apply to arguments that would otherwise be nonsense. And by lots I mean all of them except that in English we frequently use these terms.

So an alternate title to this post might be “When cognitive metaphors go to war” but that construction was already taken.

Re: Your experience of being attacked by some feminists and having to be careful to be rational about feminist arguments in general.

A great and pretty extreme example for this is Céline’s Trifles for a Massacre. It starts with “some Jews didn’t want to stage the ballet I wrote and some Jewish critics really didn’t like my books” and then concludes its way to “Yes, we probably should start killing the Jews”. If you don’t know much about Céline it reads more like a parody of antisemitism than an actual argument for it.

Not that most team red and team blue members have started advocating killing the other side. Fortunately there seems to be something in early 21st century America that makes taking your animosities to their logical extreme less likely than it was in early to mid 20th century Europe. (Hopefully.)

Humans are social animals. It’s important for us to like ourselves and our friends and family. Thinking our own kind are the BESTEST might be necessary for truly getting the benefits of belonging to a community.

I’ve always been frustrated that libertarianism is so short on tribal markers. Liberals are blue and conservatives are red, but what color are we? What is our theme song? What is our animal symbol? What food do we eat? What are our favorite hobbies? What do we wear? [Sadly, the Vibram fad seems to be dying out.] We’re all so fucking *lonely* because we won’t give any distinguishing characteristics to our tribe.

Being Red Tribe or Blue Tribe is perfectly normal and taken for granted. Being energetically communitarian around something completely different gets you branded as “cultish.” I actually think we need *more* tribes, not less tribalism — people identifying strongly with groups small enough that they actually make *sense* for the native human wiring. “Don’t be tribal, be abstractly committed to truth” in practice often seems to come out to “Leave your little tribe and join up with my big tribe!” People who really and truly aren’t tribal do exist, but they are profoundly alienated and you probably wouldn’t want to be one.

I think the lack of tribal markers is an advantage, not a disadvantage, precisely because it makes tribalism more difficult, which makes truth-seeking easier. If anything, we should be less tribal – when a libertarian messes up, we should be the first to criticize them, before Salon and their ilk get there. What clothes do we wear? What food do we eat? Thank God we haven’t gotten that bad – we don’t conform as much, and that’s great. We shouldn’t be encouraging tribalism among ourselves, but destroying tribalism in others, so they too can be better truth-seekers.

On the other hand, it could be said that rejecting tribalism is itself a tribal marker.

We’re all so fucking *lonely* because we won’t give any distinguishing characteristics to our tribe.

We’re lonely because there are so few of us in many social circles, at least compared to the Reds and Blues. I would expect someone obsessed with some obscure hobby to feel similarly, even if they do have identifiers. Once libertarians find other libertarians to interact with, they’re just as happy as anybody else (my guess is that they’re happier, but I may be wrong), despite the lack of tribal markers.

Well, yes, clearly the reason why “our kind” don’t embrace tribalism is that we see its disadvantages and want to be more dispassionate.

But community identification also has *advantages*. Within-community trust and cooperation. Increased motivation. Self-esteem. The kinds of fun and meaning that can be found in group celebrations and traditions.

The problem is, when people realize they want these things, they usually go join a different community, usually the Red or Blue tribe.

We can have group celebrations based on specific commonalities, rather than general tribal markers – e.g. people who like Rand Paul can celebrate when he wins, and people who like Tesla can celebrate when it wins, but there’s no necessary reason for Rand Paul fans to celebrate Tesla and vice versa.

In addition to that, because “Gray Tribe” is an actually-existing cluster, people who have one Gray trait will be more likely to have other Gray traits, and so the more central Gray people would see each other relatively often, because they’d be celebrating the same things. Those commonalities can serve as tribal markers to a certain degree, and there’s nothing wrong with that because they were already there and don’t exist just to mark Grays as Grays. The important thing is to prevent those common traits from becoming normative – “We’re Grays, so we do X, Y, and Z” is the failure mode of “Some of us do some combination of X, Y, and Z, so we’re Grays”.

The problem is, when people realize they want these things, they usually go join a different community, usually the Red or Blue tribe.

Then we’re not doing well enough in encouraging them to take ideas seriously. If they did, then being in Red or Blue spaces would be too uncomfortable for them.

Tribalism has benefits quite apart from feelings of community. In politics we already have a modern* word for tribes: coalitions. Coalitions get things done, which is at least as important to most people as getting to the truth (and just as important to those who have gotten to the truth and want to do something with it). If you want to accomplish anything, you need to join a coalition of some sort, forming mutually beneficial alliances to make changes you both want. You don’t even have to want the same things. Getting what you want while your ally gets a thing you are neutral about is still a win.

So people will naturally gravitate towards tribes, because it works. And they will defend those connected to their tribes because that is essential to actually having a tribe that works; the crazy guy at the fringe of your tribe can return the favor at another time. This is rational behavior. Even ignoring facts harmful to your tribe is rational. This is basically the entire idea behind the Overton Window.

* Curse you, Scott! I’ve now read enough NRX material that I see an implicit Whig view of history in the decision to use the term “tribe,” branding it as something old and therefore bad.

What about when a word is karma-loaded from the worst possible actual sense of the word, and is then used for a much lighter case where it is technically accurate but does not deserve that load? A guy who stole a loaf of bread is accurately described as a criminal, but does not really deserve the karma-load that word carries.

So what happens when character assassination is done while being formally, technically right?

I just want to say something about the tumblr post you linked to in which I commented:

DO YOU KNOW HOW MUCH IT HURTS FAT WOMEN TO SEE THIS?
I mean, you should care that you are hurting fat men but you have decided that they are unworthy of empathy. But maybe you will care that you are hurting fat women.

Because I half-knew what I was doing in that comment. Part of it was “you are being mean to fat people and I am a fat person and you made me sad.” But part of it was also that I know, on tumblr, hurting fat people = oppression = evil, and that if I could get people who make fun of neckbeards associated with oppression, then that would mean making fun of neckbeards is not Approved Behaviour on tumblr. And maybe that was dishonest? But I didn’t say anything untrue.

Interesting it is that you find it necessary to translate “hurting fat women” as “hurting fat people” when explaining your Tumblr comment on SSC. This neckbeard begins to wonder whether “the radical notion that women are people” is dog-whistle for “the radical notion that only women are people”.

The trope namer is Ethnic Tension, but it applies to anything that can be identified as a Vague Concept, or paired opposing Vague Concepts, which you can use emotivist thinking to load with good or bad karma.

Alternative name: Magnetic Affect (Idea: We can turn ordinary metallic objects into magnets, by rubbing them with existing magnets)

Oooh, he comes so close to acknowledging that the threats and harassment are overwhelmingly not coming from people directly associated with the reproductively-viable-ants movement, but he just can’t bring himself to do it, can he? Gotta keep that blue flag flying, lest he be cast out.

(Point of reference – the reproductively viable ants recently tracked down the person responsible for a chain of rape and murder threats against Anita Sarkeesian. He turned out to be a Brazilian games blogger trying to make a bigger name for himself. (He was publicly very anti-GG.) They offered her the info, she ignored it entirely. I’m not fond of nutshells, but that seems like one to me.)

Maybe I’m woefully naive, but if worker ants aren’t harassing anyone, what… are they doing exactly? I’m having a hard time imagining why any group of people would get really worked up if some people they’re not associated with were accused of harassment.

Then again, I seem to live in a really weird bubble where everyone is gesturing toward “the ant situation” as though everyone’s talking about it but no one’s actually talking about it in front of me so I have no idea what’s going on.

Honestly, I think it’s a kind of self-perpetuating positive feedback loop of outrage and persecution at this point. Nerd culture is all about its shared suffering, especially when something that can be described in terms of bullying is involved, and ants are no exception: “we’re being bullied for being ants and doing ant things” is to them what rape threats are to your average feminist.

That’s an outside-view sort of take on it, though. Ask a reproductively viable worker ant and they’ll say it’s all about corruption in the media coverage of ant reproduction and related topics. And that is, to be fair, pretty shit — I just don’t think it’s driving this clusterfuck at this point.

They’re currently:
1) emailing advertisers of some gaming sites and all of Gawker group, trying to get them to drop advertising by pointing to some insulting articles and tweets that seem to advocate bullying (and being a lot more successful at it than I’d have expected),
2) defending against accusations of harassment, misogyny, etc., and trying to get people to listen to their side of the story (not sure I understood you, but they are the ones being accused, while convinced that either third-party trolls or the accusers themselves are the ones responsible),
3) talking/arguing about games, journalism and feminism, and
4) drama.

I… I don’t expect it to make much sense without something of a step by step. Even then it will be only occasionally coherent, and frankly undeserving of all the attention. This getting to the size it got is entirely the work of the opposition (the story they spun *would* be worthy of the attention, if only it were true).

1) Felicia Day talks about reproductively-viable worker ants in a blog post, and specifically about her fear of being “doxxed”.
2) Someone in the comments points out that she was doxxed over a year ago and (perhaps unwisely) links to the info, which is accessible with a Google search.
3) Everyone, including Day, howls how the reproductively-viable worker ants just doxxed Felicia Day.

Never mind that the information has been online for a year, long before RVWA were a thing. Never mind that the information was originally found in publicly-accessible records. Never mind that the comment never actually mentioned ants at all.

In a situation like this, it doesn’t actually matter what the ants are doing at all. They could be curing cancer and building a space elevator; it doesn’t matter, does it? All that matters is that the ants are The Enemy and The Enemy Must Be Stopped.

I don’t know anything about neural nets, so maybe this system isn’t actually a neural net, but whatever it is I’m thinking of, it’s a structure where eventually the three nodes reach some kind of equilibrium. If we start with someone liking Israel and Chomsky, but not Palestine, then either that’s going to shift a little bit towards liking Palestine, or shift a little bit towards disliking Chomsky.

Use a Markov chain!

Basically, imagine a “traveler” on each node. Whenever the traveler is on a node, it randomly travels to another node. The probability of it going from Noam Chomsky to Palestine is Noam Chomsky’s support for Palestine.

Then, your own support for a node is the percentage of time that the traveler spends on it. Or rather, relative support is the ratio of time. If you have a lot of nodes supporting Israel and each other, the traveler will sort of get caught in them and spend a lot of time on Israel.

However, there’s no notion of initial support – the time the traveler spends is completely determined by the links themselves, not your initial biases. And there’s no notion of “inhibition” comparable to what you see in biological neural networks, really.

But there is this concept of an equilibrium emerging from a lot of support-links. The equilibrium is that the average time it spends on a node approaches a defined limit as time approaches infinity. Basically the Law of Large Numbers, despite the lack of probabilistic independence. The time-averages that it converges to take into account all the conflicting support links but are still static, unchanging numbers.

My use of a Markov chain for this is inspired by Google’s PageRank algorithm, where the traveler moves across links between websites, and the fraction of time it spends on a website is its importance, which is balanced with relevance in ranking results. Maybe we can get a concept of inhibition from there, because in PageRank the traveler also has a probability of going to a random, not necessarily linked other page. So there’s always this baseline probability of going to an unsupported node. Maybe “inhibition” can be a probability below baseline of going from Noam Chomsky to Israel?
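A minimal sketch of this traveler model in Python, assuming made-up nodes and support weights purely for illustration, with a PageRank-style damping factor as the baseline jump probability:

```python
import numpy as np

# Hypothetical 3-node belief network. support[i][j] is the probability
# the "traveler" moves from node i to node j (rows sum to 1).
# The numbers are invented for the example.
nodes = ["Chomsky", "Palestine", "Israel"]
support = np.array([
    [0.0, 0.9, 0.1],   # Chomsky mostly supports Palestine
    [0.8, 0.0, 0.2],   # Palestine links back to Chomsky
    [0.5, 0.5, 0.0],   # Israel splits its outgoing links
])

# Damping: with probability (1 - d) the traveler jumps to a uniformly
# random node, giving every node a small baseline chance.
d = 0.85
n = len(nodes)
transition = d * support + (1 - d) / n

# Power iteration: the stationary distribution pi is the long-run
# fraction of time the traveler spends on each node, i.e. "support".
pi = np.full(n, 1 / n)
for _ in range(200):
    pi = pi @ transition
pi /= pi.sum()
```

With these weights the traveler gets caught in the mutually supporting Chomsky–Palestine pair, so Israel ends up with the smallest share of time, which is the clustering effect described above.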

Forget the concrete realization of random walks. It is basically impossible for stationary distributions to divide the world into two clusters. Go abstract by identifying the stationary distribution with the eigenvector of the adjacency matrix with the biggest eigenvalue. Now allow negative edges, which makes no sense for random walks, but still gives a matrix, which still has eigenvectors.

Added: Actually, your suggestion is equivalent, if the baseline connection is strong enough that, when you subtract off the negative connection, it remains positive. And it reorders the top eigenvalues. But if you want more than 2 clusters, you have to take several eigenvalues anyhow.
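A sketch of the signed-matrix version, with invented symmetric weights where negative entries play the role of inhibition; the sign pattern of the leading eigenvector splits the nodes into two clusters of mutual support:

```python
import numpy as np

# Hypothetical signed adjacency matrix: positive entries are support
# links, negative entries are inhibition. Symmetric, so there is no
# random-walk reading, but eigenvectors still exist. Weights invented.
nodes = ["Chomsky", "Palestine", "Israel", "IDF"]
A = np.array([
    [ 0.0,  0.9, -0.5, -0.5],
    [ 0.9,  0.0, -0.7, -0.6],
    [-0.5, -0.7,  0.0,  0.8],
    [-0.5, -0.6,  0.8,  0.0],
])

# eigh returns eigenvalues of a symmetric matrix in ascending order;
# take the eigenvector belonging to the largest eigenvalue.
vals, vecs = np.linalg.eigh(A)
leading = vecs[:, np.argmax(vals)]

# Nodes whose components share a sign sit in the same support cluster.
clusters = np.sign(leading)
```

Here the matrix has a two-block structure (Chomsky/Palestine supporting each other, Israel/IDF supporting each other, inhibition across), so the leading eigenvector comes out with matching signs within each block and opposite signs between them.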
