Tag: rationality

Growing up, my mom used to tell my siblings and me that when we were upset and didn’t want to be, we could choose to be happy instead. The whole concept seemed ridiculous to me. “I can’t just flip my emotions on and off like a light switch,” I remember telling her.

But the problem was, she was right. It’s entirely possible to “flip your emotions on and off like a light switch”. There’s a lot of research backing up that statement, which shouldn’t be surprising: my mom graduated with a Master’s in psychology, so I should have known she didn’t pull this idea out of nowhere. Further, though it took me longer than I would care to admit, I did personally realize the wisdom in her two-word advice, “choose happy”.

So the research says. But I doubted it for years, until I realized the truth of it independently. Today, I’m going to dissect the reasons I doubted it, because I suspect many people have the same doubts when reading articles like this one.

I had two reasons to doubt “choose happy”. The first was that I was afraid people would look at me weird if I went from crying to laughing in the span of less than two minutes. The unaltered procession of human emotions is a slow ebb and flow, and a drastic change would make people ask uncomfortable questions.

They probably would have done that. But I wish someone had told me that there are things much more important in life than seeming strange. Spending a majority of my time feeling depressed and anxious for no reason was dramatically worse than it would have been to have some people think I was odd. I should have weighed the pros and cons of feeling the emotion versus letting it go.

The second reason I doubted the wisdom of “choose happy” was that I thought all emotions were important. I thought that they were always there for a reason, even if I couldn’t find what that reason was. It was a gradual realization that led me to the simple fact that some emotions don’t make sense – they’re the result of hormonal imbalances, meaningless stressors, mental overstimulation, and many other things which don’t need to be dwelled on.

Nowadays, I think about emotions in the context of net utility. Is feeling this emotion useful to me? If I’m feeling embarrassed about a stupid mistake, that feeling can be useful, to prompt me to fix the mistake immediately. But after I’ve done everything I can to fix the mistake, including making the appropriate social reparations, I can let the emotion go, because it’s served its purpose. Continuing to feel embarrassed even when I can no longer do anything about the mistake, including learn from it, is pointless.

And if the emotion didn’t have any purpose to begin with – say, if I’m feeling angry because I’ve had a long difficult day at work, which is not even slightly connected to any particular problem that can be solved – I can analyze the cause, decide it’s pointless, and let go of the emotion.

How do you let go of emotions? After your brain stops intuitively holding on because it thinks they’re important, or that it would be weird to let go, it’s typically as simple as focusing on something else. If just passively thinking about something else doesn’t completely fix it, try smiling, putting on a fun or silly song, deliberately focusing on happy thoughts, or even closing your eyes and imagining a pleasant location to hang out for a while. (I’m deliberately giving advice that doesn’t require getting up, because I personally don’t like advice that says “get up! stretch! jog! sweat!” – it does genuinely work, but it’s always delivered in a very pushy way. That being said, if you haven’t already heard this advice from a hundred thousand people, being outside and/or exercising does in fact make you healthier and happier, so try it if you feel inclined.)

So the list of questions to ask when you feel any emotion is:

What emotion is it? Is that really what I’m feeling? Emotions are frequently very transparent, but they can become tangled. Further, some emotions can mask others: a lot of men have a tendency to express anger when they’re truly sad, for example. If your emotions are unclear, sort them out.

What probably caused this emotion? Go over salient events in your mind and find the proximate cause. It doesn’t have to be anything major and it frequently isn’t. You’re looking for a cause, not a good reason.

Does this emotion have net positive utility? Feeling negative emotions has inherent negative utility, but that may be outweighed by the positive utility of the action it makes you take: learning from a mistake, apologizing for a misdeed, fixing an internal or external problem, etc. Figure out if the emotion is prompting you to do anything useful, and if it isn’t, if you really need to keep it. Compute the net utility.

An important note about these utility evaluations: A common trap I’ve seen many people fall into is where they keep a negative emotion around because they believe it prompts them to do something good which, in fact, they would do anyway. In particular, a lot of high-achievers end up with the misconception that being miserable is what prompts them to achieve things, when in fact, they would achieve more if they were happier. Therefore, strongly doubt any utility evaluation that leads you to the belief that you need to be miserable all the time in order to get things done.

If you determined that the emotion has net positive utility, keep it around, but only as long as it continues to be useful. As soon as you’ve done everything useful that the emotion was prompting you to do, throw it away. There is no reason to be miserable longer than necessary.

If you determined that the emotion has net negative utility, toss it immediately, using any of the tricks described above.
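If it helps to see that bookkeeping spelled out, here’s a toy sketch in Python. It isn’t a validated psychological model, just the embarrassment example from earlier turned into arithmetic; the function and all of the numbers are made up for illustration.

```python
# A toy model (entirely made up for illustration, not real psychology):
# an emotion's net utility is the value of the useful action it still
# prompts, minus the cost of continuing to carry it around.

def net_utility(action_value: float, usefulness_left: float, discomfort: float) -> float:
    """Rough net utility of keeping an emotion.

    action_value:    how valuable the action the emotion pushes you toward is
    usefulness_left: 1.0 if that action is still ahead of you, 0.0 once it's done
    discomfort:      how bad it feels to keep the emotion around
    """
    return action_value * usefulness_left - discomfort

# Embarrassment right after a mistake: it's still prompting an apology, so keep it.
print(net_utility(action_value=5.0, usefulness_left=1.0, discomfort=2.0))  # 3.0 -> keep for now

# The same embarrassment after the apology is made: nothing useful left, so toss it.
print(net_utility(action_value=5.0, usefulness_left=0.0, discomfort=2.0))  # -2.0 -> let it go
```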

A final note about the utility of positive emotions: feeling good is a good thing. I’ve seen people be happy but wonder whether they really should be feeling happy. You can dissect the emotion and what actions it makes you take to figure this out, but don’t decide you need to be unhappy because it’s uncommon to see sane adults who visibly care about anything. Emotions are good to keep if they’re useful, and being happy uniformly makes your life better, so ceteris paribus, happiness is useful, and therefore, happiness is almost always good to keep.

We met in Baltimore
when the hot lights of the dance floor drove us out to the gardens
before the pouring of the rain drove us back in.

We got engaged in Pittsburgh
under the warm yellow glow of artificial lamplight
and I handed him the ring I’d bought with less ceremony than I’d like
though he seemed to love it anyway.

We’ll get married in San Francisco
surrounded by the warm California sun
by new and old friends
and by possibilities for our future spent together forever.

We’ll grow old among the stars
with the distant descendants of humanity at our side
accomplishing feats and forging friendships we can’t even dream of today.

And we’ll die
if in fact we must die
after impossible problems have been solved
after incomprehensible battles have been fought
after amazing spoils have been wrought:
we’ll die knowing that whatever else has come to pass
humanity has won.

I’ve identified as a rationalist for about five years now. The dictionary definitions are a bit off from what I mean, so here’s my definition.

Epistemic rationality: believing, and updating on evidence, so as to systematically improve the correspondence between your map and the territory. The art of obtaining beliefs that correspond to reality as closely as possible. This correspondence is commonly termed “truth” or “accuracy”, and we’re happy to call it that.

Instrumental rationality: achieving your values. Not necessarily “your values” in the sense of being selfish values or unshared values: “your values” means anything you care about. The art of choosing actions that steer the future toward outcomes ranked higher in your preferences. On LW we sometimes refer to this as “winning”.

Of course, these two definitions are really subsets of the same general concept, and they intertwine considerably. It’s somewhat difficult to achieve your values without believing true things, and similarly, it’s difficult (for a human, at least) to search for truth in absence of wanting to actually do anything with it. Still, it’s useful to distinguish the two subsets, since it helps to distinguish the clusters in concept-space.

So if that’s what I mean by rationality, then why am I a rationalist? Because I like believing true things and achieving my values. The better question here would be “why isn’t everyone a rationalist?”, and the answer is that, if it were both easy to do and widely known about, I think everyone would be.

Answering why it isn’t well-known is more complicated than answering why it isn’t easy, so here are a handful of the reasons for the latter. (Written in the first person, because identifying as a rationalist doesn’t make me magically exempt from any of these things; it just means I know what they are and do my best to fix them.)

I’m running on corrupted hardware. Looking at any list of cognitive biases will confirm this. And since I’m not a self-improving agent—I can’t reach into my brain and rearrange my neurons; I can’t rewrite my source code—I can only really make surface-level fixes to these extremely fundamental bugs. This is both difficult and frustrating, and to some extent scary, because it’s incredibly easy to break things irreparably if you go messing around without knowing what you’re doing, and you would be the thing you’re breaking.

I’m running on severely limited computing power. “One of the single greatest puzzles about the human brain,” Eliezer Yudkowsky wrote, “is how the damn thing works at all when most neurons fire 10-20 times per second, or 200Hz tops. […] Can you imagine having to program using 100Hz CPUs, no matter how many of them you had? You’d also need a hundred billion processors just to get anything done in realtime. If you did need to write realtime programs for a hundred billion 100Hz processors, one trick you’d use as heavily as possible is caching. That’s when you store the results of previous operations and look them up next time, instead of recomputing them from scratch. […] It’s a good guess that the actual majority of human cognition consists of cache lookups.” Since most of my thoughts are cached, when I get new information, I need to resist my brain’s tendency to rely on those cached thoughts (which can end up in my head by accident and come from anywhere), and actually recompute my beliefs from scratch. Else, I end up with a lot of junk.
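To make the caching analogy concrete, here’s a minimal Python sketch of the trick being described: do an expensive computation once, store the result, and answer every later request with a lookup instead of recomputing. The Fibonacci function is just a stand-in for any expensive “thought”; the cached-thoughts failure mode is what happens when the cache never gets checked or invalidated.

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # store results of previous calls; look them up next time
def fib(n: int) -> int:
    """A stand-in for any expensive computation."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

# Without the cache this recursion takes exponential time; with it, each value
# is computed once and every repeat is an instant lookup.
print(fib(80))
```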

I can’t see the consequences of the things I believe. Now, on some level, being able to do this (with infinite computing power) would be a superpower: all you’d need is a solid grasp of quantum physics and the rest would just follow from there. But humans don’t just lack the computing power; we can believe, or at least feel like we believe, two inherently contradictory things. In psychology, this is called “cognitive dissonance”.

As a smart human starting from irrationality, I can easily be hurt by knowing more information. Smart humans naturally become very good at clever arguing—arguing for a predetermined position with propositions convoluted enough to confuse and confound any human arguer, even one who is right—and can thus use their intelligence to defeat itself with great efficiency. They argue against the truth convincingly, and can still feel like they’re winning while running away from the goal at top speed. Therefore, in any argument, I have to dissect my own position at least as carefully as I dissect my opponents’, if not more carefully. Otherwise, I come away more secure in my potentially faulty beliefs, and more able to argue those beliefs against the truth.

This is a short and incomplete list of some of the problems that are easiest to explain. It’s by no means the entire list, or the list that would lend the most emotional weight to the statement “it’s incredibly difficult to believe true things”. But I do hope it sheds at least a little light on the problem.

If rationality is really so difficult, then, why bother?

In my case, I say “because my goal is important enough to be worth the hassle”. In general, I think that if you have a goal that’s worth spending thirty years on, that goal is also worth trying to be as rational as humanly possible about. However, I’d go a step further. Even if the goal is worth spending a few years or even months on, it’s still worth being rational about, because not being rational about it won’t just waste those years or months; it may waste your whole career.

Why? Because the universe rarely arrives at your doorstep to speak in grave tones, “this is an Important Decision, make it Wisely”. Instead, small decisions build to larger ones, and if those small decisions are made irrationally, you may never get the chance to make a big mistake; the small ones may have already sealed your doom. Here’s a personal example.

From a very young age, I wanted to go to Stanford. When I was about six, I learned that my parents had met there, and I decided that I was going to go too. Like most decisions made by six-year-olds, this wasn’t based on any meaningful intelligence, let alone the full cost-benefit analysis that such a major life decision should have required. But I was young, and I let myself believe the very convenient thought that following the standard path would work for me. This was not, itself, the problem. The problem was that I kept on thinking this simplified six-year-old thought well into my young adulthood.

As I grew up, I piled all sorts of convincing arguments around that immature thought, rationalizing reasons I didn’t actually have to do anything difficult and change my beliefs. I would make all sorts of great connections with smart, interesting people at Stanford, I thought, as if I couldn’t do the same in the workforce. I would get a prestigious degree that would open up many doors, I thought, as if working for Google weren’t just as prestigious while paying you for the trouble. It will be worth the investment, the cached thoughts of society thought for me, and I didn’t question them.

I continued to fail at questioning them every year after, until the beginning of my senior year. At that point, I was pretty sick of school, so this wasn’t rationality, but a motivated search. But it was a search nonetheless, and I did reject the cached thoughts which I’d built up in my head for so long, and as I took the first step outside my bubble of predetermined cognition, I instantly saw a good number of arguments against attending Stanford. I realized that it had a huge opportunity cost, in both time and money. Four years and hundreds of thousands of dollars should not have been parted with that lightly.

And yet, even after I realized this, I was not done. It would have been incredibly easy to reject the conclusion I’d made because I didn’t want all that work to have been a waste. I was so close: I had a high SAT, I’d gotten good scores on 6 AP tests, including the only two computer science APs (the area I’d been intending to major in), and I’d gotten National Merit Commended Scholar status. All that would have been left was to complete my application, which I’m moderately confident I would have done well on, since I’m a good writer.

That bitterness could have cost me my life. Not in the sense that I would die for it immediately, but in the sense that everyone is dying for anything they spend significant time on, because everyone is dying. And it was here that rationality was my saving grace. I knew about the sunk cost fallacy. I knew that at this point I should scream “OOPS” and give up. I knew that at this point I should lose.

I bit my tongue, and lost.

I don’t know where I would end up if I hadn’t been able to lose here. The optimistic estimate is that I would have wasted four years, but gotten some form of financial aid or scholarship such that the financial cost was lower, and further, that in the process of attending college, I wouldn’t gain any more bad habits, I wouldn’t go stir-crazy from the practical inapplicability of the material (this was most of what had frustrated me about school before), and I would come out the other end with a degree but not too much debt and a non-zero number of gained skills and connections. That’s a very optimistic estimate, though, as you can probably tell given the way I wrote out the details. (Writing out all the details that make the optimistic scenario implausible is one of my favorite ways of combatting the planning fallacy.) There are a lot more pessimistic estimates, and it’s much more likely that one of those would happen.

Just by looking at the decision itself, you wouldn’t think of it as a particularly major one. Go to college, don’t go to college. How bad could it be, you may be tempted to ask. And my answer is, very bad. The universe is not fair. It’s not necessarily going to create a big cause for a big event: World War I was caused by some dude having a pity sandwich. Just because you feel like you’re making a minor life choice doesn’t mean you are, and just because you feel like you should be allowed to make an irrational choice just this once doesn’t mean the universe isn’t allowed to kill you anyway.

I don’t mean to make this excessively dramatic. It’s possible that being irrational here wouldn’t have messed me up. I don’t know, I didn’t live that outcome. But I highly doubt that this was the only opportunity I’ll get to be stupid. Actually, given my goals, I think it’s likely I’ll get a lot more, and that the next ones will have much higher stakes. In the near future, I can see people—possibly including me—making decisions where being stupid sounds like “oops” followed by the dull thuds of seven billion bodies hitting the floor.

This is genuinely the direction the future is headed. We are becoming more and more able to craft our destinies, but we are flawed architects, and we must double- and triple-check our work, else the whole world collapses around us like a house on a poor foundation. If that scares you, irrationality should scare you. It sure terrifies the fuck out of me.

Cluster analysis is the process of quantitatively grouping data in such a way that observations in the same group are more similar to each other than to those in other groups. This image should clear it up.

Whenever you do a cluster analysis, you do it on a specific set of variables: for example, I could cluster a set of customers against the two variables of satisfaction and brand loyalty. In that analysis, I might identify four clusters: (loyalty:high, satisfaction:low), (loyalty:low, satisfaction:low), (loyalty:high, satisfaction:high), and (loyalty:low, satisfaction:high). I might then label these four clusters to identify their characteristics for easy reference: “supporters”, “alienated”, “fans” and “roamers”, respectively.
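If you want to see the mechanics, here’s a small sketch of that customer example using scikit-learn’s KMeans. The data and the choice of exactly four clusters are made up for illustration; in a real analysis you’d have to justify both.

```python
import numpy as np
from sklearn.cluster import KMeans

# Made-up customers, one row each: [loyalty, satisfaction], scaled 0-10.
customers = np.array([
    [9, 2], [8, 1], [9, 3],   # high loyalty, low satisfaction  -> "supporters"
    [1, 2], [2, 1], [1, 3],   # low loyalty,  low satisfaction  -> "alienated"
    [9, 9], [8, 8], [9, 8],   # high loyalty, high satisfaction -> "fans"
    [2, 9], [1, 8], [2, 8],   # low loyalty,  high satisfaction -> "roamers"
])

# Cluster against those two variables, asking for four groups.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(customers)

print(kmeans.labels_)           # which cluster each customer landed in
print(kmeans.cluster_centers_)  # the "typical" member of each cluster
```

With data this clean, the labels recover the four groups named above; with real, messier data, you’d still have to check that the clusters actually deserve the labels you give them.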

What does that have to do with language?

Let’s take a word, “human”. If I define “human” as “featherless biped”, I’m effectively doing three things. One, I’m clustering an n-dimensional “reality-space”, which contains all the things in the universe graphed according to their properties, against the two variables ‘feathered’ and ‘bipedal’. Two, I’m pointing to the cluster of things which are (feathered:false, bipedal:true). Three, I’m labeling that cluster “human”.

This, the Aristotelian definition of “human”, isn’t very specific. It’s only clustering reality-space on two variables, so it ends up including some things that shouldn’t actually belong in the cluster, like apes and plucked chickens. Still, it’s good enough for most practical purposes, and assuming there aren’t any apes or plucked chickens around, it’ll help you to identify humans as separate from other things, like houses, vases, sandwiches, cats, colors, and mathematical theorems.

If we wanted to be more specific with our “human” definition, we could add a few more dimensions to our cluster analysis—add a few more attributes to our definition—and remove those outliers. For example, we might define “human” as “featherless bipedal mammals with red blood and 23 pairs of chromosomes, who reproduce sexually and use syntactical combinatorial language”. Now, we’re clustering reality-space against seven dimensions, instead of just two, and we get a more accurate analysis.

Despite this, we really can’t create a complete list of everything the members of most real categories have in common. Our generalizations are leaky in some way, around the edges: our analyses aren’t perfect. (This is absolutely the case with every other cluster analysis, too.) There are always observations at the edges that might belong to any number of clusters. Take a look at the graph above in this post. Those blue points at the top left edge, should they really be blue, or red or green instead? Are there really three clusters, or would it be more useful to say there are two, or four, or seven?

We make these decisions when we define words, too. Deciding which cluster to place an observation in happens all the time with colors: is it red or orange, blue or green? Splitting one cluster into many happens when we need to split a word in order to convey more specific meaning: for example, “person” trisects into “human”, “alien”, and “AI”. Maybe you could split the “person” cluster even further than that. On the other end, you combine two categories into one when sub-cluster distinctions don’t matter for a certain purpose: the base-level category “table” substitutes for more specific terms like “dining table” and “kotatsu” when the specifics don’t matter.

You can do a cluster analysis objectively wrong. There is math, and if the math says you’re wrong, you’re wrong. If your WCSS is so high that you have a cluster that you can’t label more distinctly than “everything else”, or if it’s so low you’ve segregated your clusters beyond the point of usefulness, then you’ve done it wrong.
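WCSS here is the within-cluster sum of squares, which scikit-learn exposes as inertia_. A quick way to sanity-check the number of clusters is to compute it for several values of k and look for the point where adding clusters stops buying you much (the usual “elbow” heuristic). A minimal sketch, reusing the made-up customer data from the earlier example:

```python
import numpy as np
from sklearn.cluster import KMeans

# Same made-up customer data as before: [loyalty, satisfaction].
customers = np.array([[9, 2], [8, 1], [9, 3], [1, 2], [2, 1], [1, 3],
                      [9, 9], [8, 8], [9, 8], [2, 9], [1, 8], [2, 8]])

# WCSS (sklearn's inertia_) for each candidate number of clusters:
# too few clusters -> huge WCSS, one mushy "everything else" blob;
# too many clusters -> tiny WCSS, but the groups stop meaning anything.
for k in range(1, 8):
    model = KMeans(n_clusters=k, n_init=10, random_state=0).fit(customers)
    print(k, round(model.inertia_, 2))
```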

Many people think “you can define a word any way you like”, but this doesn’t make sense. Words are cluster analyses of reality-space, and if cluster analyses can be wrong, words can also be wrong.