I feel like one of the most important lessons I’ve had about How the World Works, which has taken quite a bit of time to sink in, is:

In general, neither organizations nor individual people do the thing that their supposed role says they should do. Rather they tend to do the things that align with their incentives (which may sometimes be economic, but even more often they are social and psychological). If you want to really change things, you have to change people’s incentives.

But I feel like I’ve had to gradually piece this together from a variety of places, over a long time; I’ve never read anything that would have laid down the whole picture. I remember that Freakonomics had a few chapters about how incentives cause unexpected behavior, but that was mostly about economic incentives, which are just a small part of the whole picture. And it didn’t really focus on the “nothing in the world works the way you’d naively expect” thing; as I recall, it was presented more as a curiosity.

On the other hand, Robin Hanson has had a lot of stuff about “X is not about Y”, but that has mostly been framed in terms of prestige and signaling, which is the kind of stuff that’s certainly an important part of the whole picture (the psychological kind of incentives), but again just a part of the picture. (However, his upcoming book goes into a lot more detail on why and how the publicly-stated motives for human or organizational behavior aren’t actually the true motives.)

And then in social/evolutionary/moral psychology there’s a bunch of stuff about social-psychological incentives, of how we’re motivated to denounce outgroups and form bonds with our ingroups; and how it can be socially costly to have accurate beliefs about outgroups and defend them to your ingroup, whereas it would be much more rewarding to just spread inaccuracies or outright lies about how terrible the outgroups are, and thus increase your own social standing. And how even well-meaning ideologies will by default get hijacked by these kinds of dynamics and become something quite different from what they claimed to be.

There’s also a relevant strand of this in the psychology of motivation/procrastination/habit-formation, on why people keep putting off things that they claim they want to do. And on how small things can reshape people’s behavior: somebody may end up a much healthier eater just because they don’t happen to have a fast food restaurant conveniently near their route home from work. This isn’t necessarily so much about incentives themselves, but it’s an important building block in understanding why our behavior tends to be so strongly shaped by things that are entirely separate from our consciously-set goals.

“Experiential pica” is a misdirected craving for something that doesn’t actually fulfill the need behind the craving. The term originally comes from a condition where people with a mineral deficiency start eating things like ice, which doesn’t actually help with the deficiency. Recently I’ve been shifting towards the perspective that, to a first approximation, roughly everything that people do is pica for some deeper desire, with that deeper desire being something like social connection, feeling safe and accepted, or having a feeling of autonomy or competence. That is, most of the reasons that people give for why they are doing something will actually miss the mark, and many people are engaging in relatively inefficient ways of achieving their true desires, such as pursuing career success when the real goal is social connection. (This doesn’t mean that the underlying desire never gets fulfilled, just that it gets fulfilled less often than it would if people were aware of their true desires.)


There were several interesting talks at the GoCAS workshop on existential risk to humanity. The one that was maybe the most thought-provoking was the last one, by Seth Baum, who discussed the difficulty of translating the results of academic research into something that actually does save the world.

He gave two examples. First, climate change: in the economics literature, there has been an extended debate about the optimal level of a carbon tax. In the US, however, pinning down exactly the optimal level is somewhat irrelevant, given that there is no carbon tax at all and considerable opposition to creating one. So the practical work that needs to be done is being carried out by various organizations working to build support for politicians who care about climate change. Also valuable are some seemingly unrelated efforts, such as work to stop gerrymandering: without gerrymandering, it would be easier to elect politicians who are willing to implement things like a carbon tax.

His other example was nuclear disarmament. Academia has produced various models of nuclear winter and of how that might be an x-risk; however, in practice this isn’t very relevant, because the people who are in charge of nuclear weapons already know that nuclear war would be terribly bad. For them, the possibility of nuclear winter might make things slightly worse, but the possibility of nuclear war is already so bad that such smaller differences are irrelevant. This is a problem, because nuclear disarmament and reducing the size of the nuclear stockpile could help avert nuclear winter, but the decision-makers are thinking that nuclear war is so bad that we must be sure to prevent it, and one of the ways to prevent it is to have a sufficiently large nuclear arsenal to serve as a deterrent.

His suggestion was that the thing that would actually help with disarmament would be making various countries – particularly Russia – feel geopolitically more secure. The US basically doesn’t need a nuclear arsenal for anything other than deterring a nuclear strike by another power; for any other purpose, its conventional military is already strong enough to prevent attacks. But Russia is a different case: it has a smaller military, smaller population, and smaller economy, and it borders several countries that it doesn’t have good relations with. For Russia, maintaining a nuclear arsenal is an actual guarantee against being invaded. Similarly for Pakistan, and maybe Israel. The key to actually getting these countries to disarm would be to change their conditions so that they would feel safe in doing so.

He emphasized that he wasn’t saying that academic research was useless, just that it should be focused and used in a way that actually helps achieve change. I’ve been thinking about the usefulness of my own x-risk/s-risk research for a while, so this was very thought-provoking, though I don’t yet know what actual updates I should make as a result of the talk.


Some time back, I saw somebody express an opinion that I disagreed with. My mind quickly came up with emotional motives the other person might have for holding such an opinion – motives that would let me justify safely dismissing it.

Now, it’s certainly conceivable that they did have such a reason for holding the opinion. People do often have all kinds of psychological, non-truth-tracking reasons for believing in something. So I don’t know whether this guess was correct or not.

But then I recalled something that has stayed with me: a slide from a presentation that Stuart Armstrong gave several years back, showing how we tend to think of our own opinions as being based on evidence and reasoning. At the same time, we don’t see any of the evidence that caused other people to form their opinions, so instead we think of the opinions of others as being based only on rationalizations and biases.

Yes, it was conceivable that this person I was disagreeing with, held their opinion because of some bias. But given how quickly I was tempted to dismiss their view, it was even more conceivable that I had some similar emotional bias making me want to hold on to my opinion.

And being able to imagine a plausible bias that could explain another person’s position is a Fully General Counterargument. You can dismiss any position that way.

So I asked myself: okay, I have invented a plausible bias that would explain the person’s commitment to this view. Can I invent some plausible bias that would explain my own commitment to my view?

I could think of several, right there on the spot. And almost as soon as I could, I felt my dismissive attitude towards the other person’s view dissolve, letting me consider their arguments on their own merits.

So, I’ll have to remember this. New cognitive trigger-action plan: if I notice myself inventing a bias that would explain someone else’s view, spend a moment to invent a bias that would explain my opposing view, in order to consider both more objectively.


Everyone, it sometimes seems, has their own pet theory of why social media and the Internet so often seem like such unpleasant and toxic places. Let me add one more.

People want to feel respected, loved, appreciated, and so on. When we interact physically, we easily experience subtle forms of these feelings. For instance, even if you just hang out in the same physical space with a bunch of other people and don’t really interact with them, you often get some positive feelings regardless. Just the fact that other people are comfortable having you around is a subtle signal that you belong and are accepted.

Similarly, if you’re physically in the same space with someone, there are a lot of subtle nonverbal things that people can do to signal interest and respect. Meeting each other’s gaze, nodding or making small encouraging noises when somebody is talking, generally giving people your attention. This kind of thing tends to happen automatically when we are in each other’s physical presence.

Online, most of these messages are gone: a thousand people might read your message, but if nobody reacts to it, then you don’t get any signal indicating that you were seen. Even getting a hundred likes and a bunch of comments on a status can feel more abstract and less emotionally salient than a single person nodding at you and giving you an approving look while you’re talking.

So there’s a combination of two things going on. First, many of the signals that make us feel good “in the physical world” are relatively subtle. Second, online interaction mutes the intensity of signals, so that subtle ones barely even register.

Depending on how sensitive you are, and how good you are generally feeling, you may still feel the positive signals online as well. But if your ability to feel good things is already muted, because of something like depression or just being generally in a bad mood, you may not experience the good things online at all. So if you want to consistently feel anything, you may need to ramp up the intensity of the signals.

Anger and outrage are emotional reactions with a very strong intensity, strong enough that you can actually feel them even in online interactions. They are signals that can consistently get similar-minded people rallied on your side. Anger can also cause people to make sufficiently strongly-worded comments supporting your anger that those comments will register emotionally. A shared sense of outrage isn’t the most pleasant way of getting a sense of belonging, but if you otherwise have none, it’s still better than nothing.

And if it’s the only way of getting that belonging, then the habit of getting enraged will keep reinforcing itself, as it will give all of the haters some of what they’re after: pleasant emotions to fill an emotional void.

So to recap:

When interacting physically, we don’t actually need to do or experience much in order to experience positive feelings. Someone nonverbally acknowledging our presence or indicating that they’re listening to us already feels good. And we can earn the liking and respect of others by doing things as small as giving them nonverbal signals of liking and respect.

Online, all of that is gone. While things such as “likes” or positive comments serve some of the same function, they often fail to produce much of a reaction. Only sufficiently strong signals can consistently break through and make us feel like others care about us, and outrage is one of the strongest emotional reactions around, so many people will learn to engage in more and more of it.

Google+ Posts

Kaj Sotala:
I remember being annoyed for a while when I learned in elementary school that 2/3 and 4/6 are equivalent. 4/6 felt like a slightly larger fraction than 2/3 to me, and I felt like it was letting a perfectly good, slightly-bigger-than-two-thirds fraction go to waste if it was actually exactly equal to 2/3.

Kaj Sotala:
Oh wow, this is brilliant: I had never before thought of modern people's over-consumption of sugar as an application of Goodhart's Law. But it's true.

> Goodhart's Law (which is incredibly appropriately named) reads "any measure which becomes a metric ceases to be a good measure." Another way to say this is "proxies are leaky," i.e. the proxy never quite gets you the thing it was intended to get you. If you want to be able to differentiate between promising math students and less-promising ones, you can try out a range of questions and challenges until you cobble together a test that the 100 best students do well on and the following 900 do worse on. But as soon as you make that test the test, it's going to start leaking. In the tenth batch of a thousand students, the 100 best ones will still do quite well, but you'll also get a bunch of people who don't have the generalized math skill, but who did get good at answering the specific, known questions. Your top 100 will no longer be composed only of the 100 actual-best math students.

> This is analogous to what's happened with Western diets and sugar. Prehistoric primates who happened to have a preference for sweet things (fruit) also happened to get a lot more vitamins and minerals, and therefore they survived and thrived at higher rates than those sugar-ambivalent primates who failed to become our ancestors and died out. The process of natural selection turned a measure for nutrition (sweetness) into a metric (having a sweet tooth/implicit hardwired assumption that more sugar → more utility), which was fine until we learned to separate the sugar from the nutrients (teaching to the test) and discovered that our preferences were hardwired to the proxy rather than to the Actual Good Thing.

Goodhart's Imperius
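The test-leakage dynamic in the quote can be sketched as a toy simulation (all parameters here are made up for illustration, not from the original post): each student has a hidden "true skill", the test score tracks skill plus noise, and once the test itself becomes the target, a fraction of students gain a score boost from drilling the known questions, unrelated to their general skill.

```python
import random

random.seed(0)  # deterministic toy run

N, K = 1000, 100  # cohort size, number of students selected


def top_by_score(students, k):
    """Pick the k students with the highest test scores."""
    return sorted(students, key=lambda s: s[1], reverse=True)[:k]


def fraction_truly_best(students, chosen, k):
    """How many of the chosen are among the k highest true skills?"""
    best = set(sorted((skill for skill, _ in students), reverse=True)[:k])
    return sum(1 for skill, _ in chosen if skill in best) / k


# While the test is just a measure: score = skill + noise.
honest = []
for _ in range(N):
    skill = random.gauss(0, 1)
    honest.append((skill, skill + random.gauss(0, 0.3)))

# Once the test becomes the target: ~30% of students drill the known
# questions, gaining a score boost unrelated to general skill.
gamed = [
    (skill, score + (random.gauss(1.5, 0.5) if random.random() < 0.3 else 0))
    for skill, score in honest
]

f_honest = fraction_truly_best(honest, top_by_score(honest, K), K)
f_gamed = fraction_truly_best(gamed, top_by_score(gamed, K), K)

print(f"truly-best fraction, honest test: {f_honest:.2f}")
print(f"truly-best fraction, gamed test:  {f_gamed:.2f}")
```

Under these assumptions, the fraction of genuinely-best students among the selected drops once the test is being gamed: the proxy leaks exactly as the quote describes.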

Kaj Sotala:
This post is about the combined effects of cheap solar energy, batteries, and robocars. Peak oil is coming soon, and will be at least as important as peak whale oil; probably more like peak horse. First I noticed a good article on the future of fossil…

Peak Fossil Fuel

Kaj Sotala:
> I feel like one of the most important lessons I’ve had about How the World Works, which has taken quite a bit of time to sink in, is:

> In general, neither organizations nor individual people do the thing that their supposed role says they should do. Rather they tend to do the things that align with their incentives (which may sometimes be economic, but even more often they are social and psychological). If you want to really change things, you have to change people’s incentives.

Kaj Sotala:
> I've been seeing people linking to [a study on the carbon emissions caused by having more children] to argue that you should have fewer kids [...]

> There are a lot of reasons that this isn't a good way to look at the question of having kids, the biggest of which is that it ignores that people have many other effects other than emitting greenhouse gases. People earn money (that we can tax), people consume services, people create things that benefit others, people pollute in non-warming ways, etc, not to mention that people have their own internal experience that has value. Whether it would be better for people to have more children or fewer children in general is not at all a settled question, and looking only at emissions is an incredibly limited way to try and answer it.

> But even then, let's think about what this 58.6 tCO2e means. [...] we could take the 2016 Giving What We Can estimate for the cost-effectiveness [of an organization that reduces greenhouse emissions] [...] Taken literally, this would be $80/year for [offsetting] 58.6 tCO2e. Now, I'm not that confident in these numbers, since charity evaluation is very hard to do well, but even taking 10x the top of their range gives us just $1,096/year.

> This paper is advocating having fewer children, in a country [2] with a per-capita income of over $40k/year, to avoid somewhere between $80 and $6,000 in yearly emissions! Definitely put thought into whether to have children, and consider what else you could do with the money and time, but emissions should be a very small factor in the decision.

Kids and Global Warming