A personal blog on various topics.


I read an abridged version of this at the conclusion of Petrov Day in Boston this year.

This year, Petrov Day is a more solemn occasion than usual. Last week, it became known to us that our holiday’s namesake, Stanislav Petrov, had died. (His death was actually on May 19, but the news went unreported until last week—a testament to the obscurity in which he lived his life.)

We don’t have a eulogy for Petrov, because we didn’t know much about him as a person. There was a documentary about him in 2014, but you do not come to know a man by watching a documentary. And yet, our community has built a shared myth around him, and around the events of that fateful night—a myth of big red buttons and impossibly high-stakes decisions and the salvation of humanity from nuclear annihilation.

Given the well-known danger of myths and stories to rational thought, one might well ask whether this is wise. Why do we do it?

This isn’t a list of saints. Haber, for one, is also remembered as the father of chemical warfare. And for all the importance of their actions, they were still only human, with greater or lesser personal flaws just like anyone else. But they saved the world.

In Petrov’s case, when the universe handed him an encounter with X-risk, he had the expertise and strength of rationality to know what was the right thing to do, and the courage and integrity to actually do it.

We choose to remember Stanislav Petrov so that, if the day should come when one of us is called upon to do the right thing in a situation of truly great import, we might find it within us to do as he did.

Let us take a moment of silence, to honor the memory of a man now lost to us forever, whose life, like any other, was beyond measure—and who, by being the right man in the right place at the right time, saved us all.

This was the Moment of Darkness speech that I gave at Boston Secular Solstice 2016.

Epistemic status: I spent several weeks thinking about this, but wrote it in a couple hours before the ceremony, because writing is aversive and I’m an inveterate procrastinator. Although I believe the claim about power laws to be true in some broad sense, this is based primarily on half-remembered “conventional wisdom” that I suspect I absorbed by cultural osmosis from the works of Nassim Nicholas Taleb. It is nowhere near as well-justified in the speech itself as it ought to be; the two statistics cited were the only ones I could find in the time available.

Four years ago, in New York City, in a ceremony much like this one, Ray Arnold, the creator of Secular Solstice, spoke about this. He said:

We have people in this room, right now, who are working on fixing big problems in the medical industry. We have people in this room who are trying to understand and help fix the criminal justice system. We have people in this room who are dedicating their lives to eradicating global poverty. We have people in this room who are literally working to set in motion plans to optimize everything ever. We have people in this room who are working to make sure that the human race doesn’t destroy itself before we have a chance to become the people we really want to be.

And while they aren’t in this room, there are people we know who would be here if they could, who are doing their part to try and solve this whole death problem once and for all.

And I don’t know whether and how well any of us are going to succeed at any of these things, but…

God damn, people. You people are amazing, and even if only one of you made a dent in some of the problems you’re working on, that… that would just be incredible.

Indeed it would. It would be incredible. And I believe that the same is true of the people in this room today, four years later. We recognize that these things matter, that they matter more than most of society recognizes, more even than any of us can really visualize. The utilitarian significance of even the least of those problems is easily in the millions of lives.

But we are also a community of truth seekers. It’s not enough for the stories we tell ourselves about what we’re going to accomplish to be motivating; they have to be accurate. So, what should we expect when we set out to change the world?

I didn’t have time to find more numbers, but as far as I know, they basically all look roughly like this. According to a study by the Financial Times [PDF], 40% of millennials think they’ll have a global impact. In other words, almost 40% of millennials are extremely miscalibrated.

So we see that truly changing the world is a rare event. But that, too, is part of the standard story; because it’s rare, you’ll have to work very hard and be very resourceful to pull it off. But you can still do it if you try, right?

This is the kind of thing that our brains tell us because of availability bias. As Bruce Schneier put it: “We tend to exaggerate [the probability of] spectacular, strange and rare events, and downplay ordinary, familiar and common ones.” He was talking about things like terrorist attacks, but it’s just as true of things like making a major scientific breakthrough. “Stories engage us at a much more visceral level [than data], especially stories that are vivid, exciting or personally involving.”

Remember Spock from Star Trek? Spock often says something along the lines of, “Captain, if you steer the Enterprise directly into a black hole, our probability of survival is only 2.837%.” Yet nine times out of ten the Enterprise is not destroyed. The people who write this stuff have no idea what scientists mean by “probability”. They suppose that a probability of 99.9% is something like feeling really sure. They suppose that Spock’s statement expresses the challenge of successfully steering the Enterprise through a black hole, like a video game rated five stars for difficulty. What we mean by “probability” is that if you utter the words “two percent probability” on fifty independent occasions, it better not happen more than once.

(At least, not in the limit.)
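That frequency reading can be sketched in a few lines of Python, under an assumed binomial model (independent occasions at a fixed two-percent chance); the numbers here are illustrative, not from the speech:

```python
from math import comb

p, n = 0.02, 50  # stated probability, number of independent occasions

# Frequency reading: the event should occur about p * n times --
# "it better not happen more than once" out of fifty.
expected = p * n

# But for a finite number of occasions there's still a real chance of
# two or more occurrences, which is why the claim only holds in the limit.
p_two_or_more = 1 - sum(
    comb(n, k) * p**k * (1 - p) ** (n - k) for k in (0, 1)
)

print(expected)                 # 1.0
print(round(p_two_or_more, 3))  # 0.264
```

So even a perfectly calibrated two-percent prediction comes true twice or more in fifty tries about a quarter of the time; the point is that nine times out of ten is wildly out of line with the stated number.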

There are fewer than fifty of us in this room, and the probability of an individual achieving the kinds of ambitions we’re talking about here is probably lower than two percent.

So that’s the problem: if you want to change the world, the outside view says it’s probably going to be all for nothing. And any attempt to change it in a way that avoids that problem is unlikely to have a large individual impact.

This problem is so omnipresent in any attempt to do something big that it’s common to just ignore it as background noise. We shouldn’t. We seek the truth because avoiding it inevitably leads to failure when dealing with hard problems. If we’re serious about solving them, we have to face this harsh reality head-on.

Another option is unjustified optimism. This seems to be unfortunately common in our community. Some higher-level rationalists have implemented an advanced form of this where they make their monkey brains believe one thing even as they know the truth to be something else. This might work, if you can pull it off.

A third option is to forcibly suppress your monkey brain’s risk-aversion and submit yourself in obedience to the power laws: spend your life doing the risky thing, knowing that it will probably all be for naught, because it’s the right thing to do. If you’re working on more incremental things like malaria nets, this can work out okay. You can be like the man who throws the starfish back into the sea every day; others may say it doesn’t matter because more will always wash up, but it mattered to that one. If you’re aiming for something more all-or-nothing, like getting AGI right, then this may be psychologically harder.

The second reason is that our community might actually be awesome enough to beat the outside-view odds.

Four years ago, when Ray spoke those words, it didn’t necessarily look that way. That was around the time I found the community online, though I did not meet most of you until later. To be sure, some interesting things had already happened by then, and maybe the future will look back and see some of them as the beginning of the Great Change (or maybe not). But in those past four years, I’ve been lucky enough to watch us grow up, and start accumulating a track record of changing the world in real ways.

Not everything we’ve attempted has yet borne such undeniable fruit, and we’ve had our share of failures. But I think we’ve earned the right to call ourselves a community that’s capable of getting results, and I think there’ll be more soon, including in areas we haven’t even thought of yet.

The law of the universe is that you can’t beat the odds. But conditional on the right things? Yeah, I think you can.

I opened last week’s Secular Solstice by reading this. I haven’t edited it for publication; it reads like a blog post because that’s how I naturally write. (Probably not the optimal format for a speech, but oh well.)

If you haven’t been to a Secular Solstice before, you might be asking yourself: What exactly are we, as secular humanists, doing here, in this building that is ostensibly for religious services, participating in a ritual which includes readings and group singing and other elements that sound suspiciously like a religious service?

If you have been to a Secular Solstice before, then go ahead and ask yourself that question anyway, because I think it turns out to be kind of a deep question.

Religion has been a feature of most human societies for the past 12,000 years or so, though some aspects of it are older. We don’t entirely understand how it began. One story, which you may have heard before if you attended a previous Solstice in New York, is that human brains have evolved to be good at dealing with other humans, so when early humans encountered natural phenomena like the weather, they tended to attribute them to human-like agents—the gods. And so they would start asking these gods for things like a good harvest, and over time these prayers and practices evolved into religion as we know it today.

It’s a neat story, but it’s not the whole story. Why the social aspects of religion? Why pray and sing and have all these rituals in public, with other people? Why isn’t religion just a personal thing?

Perhaps—and I’m not an anthropologist, I don’t know how much agreement there is on this among experts—but perhaps religion helped tribes to coordinate.

Rituals and music served to provide group bonding experiences. Myths and legends helped give the tribe a sense of shared purpose. Religious laws and commandments helped them live harmoniously with one another. Taken together, these factors could help make the members of a tribe more able to trust one another, to cooperate in prisoner’s dilemmas. And perhaps those tribes that could do this were rewarded with a greater ability to survive and reproduce than those that could not.

And sometimes, they were able to coordinate to do more than just that. Sometimes, they were able to do things like spend hundreds of years dragging hundreds of tons of stones over 150 miles to build something like Stonehenge, the ancient archaeological wonder. We don’t fully understand why Stonehenge was built, though it seems to have had some kind of ritual purpose. One theory, which has garnered a fair bit of academic support, is that Stonehenge was a place of healing. These ancient people coordinated to bring this place into existence so that the sick and injured could be ritually healed there, through the power of ringing sounds that were made by hitting the rocks.

Unfortunately, human physiology doesn’t actually work that way, and so these rituals didn’t really heal the sick. That’s the danger of this kind of cultural evolution; it can only get you so far. It might be useful to have beliefs that give you a sense of belonging with your fellow humans, but it’d be even nicer if these beliefs were, y’know, actually true. So that you don’t spend hundreds of years building a healing center that doesn’t actually work. And these sorts of evolved religions didn’t quite get there.

But of course, I’m not telling you anything here that you don’t already know. The more interesting question for secular humanists is: Can we get these benefits of religion—particularly, in the context of Secular Solstice, the benefits of ritual—without giving up the benefits that come from believing things that are actually true?

Some of us think that we can. And for the fifth year now, Secular Solstice has been a test of that hypothesis.

One way to help make this work, of course, is to be explicit about what exactly we’re trying to do. This wouldn’t have worked for the early humans—they didn’t know enough about their own history or how the universe actually functions—but it can work for us. So, instead of praying to the gods and hoping that this incidentally brings us closer together, I can just tell you directly that the theme for this Solstice is coordination—the benefits it can bring us, the challenges in making it work, and ultimately, why we as a species will need to get better at it, if we want to survive and prosper.

Speaking of the gods…perhaps we shouldn’t dispense with them entirely. They may not exist in a literal sense, but they can still be useful as metaphors. An abstract concept can be easier to wrap your head around if you give it a name and a face, as the ancients did. We may have had more practice with high-level abstract reasoning than they did, but this stuff is still complicated and we need all the help we can get in understanding it. Provided, of course, that we remain aware of what we’re doing. So in that sense, you may hear the names of a few gods spoken as part of this ritual tonight.

So that’s why we’re here tonight. We can bring ourselves closer together, and perhaps, in time, build our own Stonehenge—but one based on the truths we’ve learned about the universe.

On Tumblr, Bartlebyshop brings up concerns that she’s had with effective altruist writing that have kept her away from it. I don’t think this is an isolated case; I think it indicates a problem that this community has been having.

Obviously some of this is the unavoidable result of universal human political dynamics, but I do think that, to a certain extent, this is something that we ought to be trying to fix. EAs are disproportionately likely to be the kinds of people who love to argue about things, and it’s important to do so in order to find the most effective things to do, but it can also be tiring when EA spaces have been temporarily taken over by the latest iteration of some perennial argument that you’re not interested in. It may be worth asking ourselves if this is something that could be ameliorated with better social technology.

Although some online EA spaces are devoted to a particular cause or focus area, most of them are fairly general-purpose in terms of what kinds of discussions happen there. With the benefit of hindsight, I think it might have been a good idea to instead set up different spaces for different types of discussions.

In particular, four kinds of discussions come to mind, each of which might do best in its own space:

Philosophical arguing. This would be where people could talk about things like consequentialism/metaethics, the drowning child argument, the moral value of the far future and of animals, the relative importance of EA’s major focus areas, etc. Right now, this is the area that I think most needs to be contained; a lot of people are allergic to these kinds of arguments, both because they often lead to extreme conclusions and because they’ve been going on forever and probably aren’t going to be solved to everyone’s satisfaction anytime soon. At the same time, we want these discussions to be part of EA; they are a major reason why many EAs are EAs, and a major guiding force in many EAs’ donation decisions and life decisions in general.

Comparative analysis of causes and charities. I confess that this is the area that I personally am most interested in, and I sometimes think that the other areas have driven this one largely out of sight, except at a few blogs like GiveWell’s. (Of course, this is probably because many people find this kind of analysis boring, and that’s fine; we want to have something for everyone.) In spaces devoted to this, we’d have discussions of research by organizations like GiveWell and OpenPhil and ACE, and of what interventions and causes are most promising. (We’d want this to focus on analysis that doesn’t depend on, or that explicitly conditions on, extremely deep value judgments of the kind argued over in the “philosophical arguing” section.)

Mutual support. This would be a place for people who’ve allowed EA to shape their lives—by donating a significant percentage of their income, or going vegetarian, or choosing an effectiveness-oriented career path—to talk about their shared life experiences as EAs, and to request and give advice. I think that Giving What We Can has a comparative advantage here, and also that a lot of important discussions in this sphere are happening on blogs like The Unit of Caring on Tumblr.

Community organizing. Here we’ve got things like .impact, discussions among meetup organizers, and other sorts of meta-concerns. To a large extent, these things already happen in their own special-purpose spaces, but there might still be more of them in general-purpose spaces than is ideal.

Of course, since the established EA spaces already exist and we can’t change the past, there isn’t room for a clean partition. However, I do wonder if there’s anything that individuals within EA communities can do to move things in this general direction, and whether doing so is likely to be a good idea.

TL;DR: Iff the potential for people like you to benefit from such an event is a significant causal component of the possibility of that event happening.

Consider the following hypothetical situations:

1. The murder-mystery situation: Your wealthy elderly relative plans to name you as their heir. Should you decline?

2. You’re a geologist studying a particular active volcano. If it erupted, measurements from the eruption would provide extremely important and valuable data for your research, providing a boost to your career; however, it would also cause mass loss of life and property damage. Should you precommit not to publish any papers using data from such an eruption?

3. You’re a worker in a high-income country, and a major issue this election season is a proposed protectionist trade reform that would increase the wages of workers like you in domestic industries, but stifle economic opportunities for those in developing countries. Alternatively, if you don’t buy into the economic assumptions that lead to that scenario being bad, the proposed reform is a trade liberalization that would decrease prices of consumer goods in your country, but lead to exploitation of workers in developing countries. Should you precommit not to take a job in a protected industry/buy foreign-produced goods, or, if you have to do so, to buy an ethics offset in the form of a charitable donation?

In all three of these situations, something bad might happen in the future, but you stand to benefit if it does. You can turn down the benefit, but doing so won’t help the people who were harmed by the bad event in the first place. The question is whether, ethically speaking, you should precommit to turn it down.

It would be useful to have a general principle which serves as an answer to this question. At this point it’d be nice to dramatically reveal one, but I kind of already did that at the top of the page. So instead I’ll discuss the two most obvious alternative answers and why I don’t find them satisfactory.

The first alternative answer is an unconditional yes; if you can anticipate a future situation where you would have the chance to benefit from something bad, you should precommit not to take that opportunity. This answer is bad because it leaves free utility on the ground. In many cases, it will lead to your pointlessly punishing yourself for something that you had no control over, to no one else’s benefit. Obviously this outcome is to be avoided if possible.

The second alternative answer is an unconditional no; it’s always okay to take whatever opportunities come your way as long as you don’t directly cause anyone else to be harmed in the process. This answer is bad because your future action is not the only causal variable in play; other people’s expectation of what you will do in the future may influence their own behavior. If whether or not you stand to benefit from something affects whether that thing will happen—possibly because someone who’s looking out for your interests has some measure of control over it—then it’s ethically obligatory for you to take this into account and, if you determine that the event is bad overall, make sure that you don’t stand to benefit from it.

Even if you personally have little control over the event in question, it’s appropriate to consider not only the precommitment that you’d make as an individual, but the precommitment that you’d make if you were deciding for everyone who is like you in relevant ways—that is, everyone whose position is close enough to yours and who uses a sufficiently similar reasoning process. Otherwise, you end up with the outcome where everyone defects because it seems individually rational for each of them. (At this point I’d normally wave my hands and say “something something timeless decision theory”, but honestly I don’t yet understand the math well enough to know if that’s at all applicable here.)

The answer I propose is a compromise between these two positions. If the potential for people like you to benefit from a bad event is a significant causal component of the possibility of that event happening—that is, if the event would be significantly less likely to happen if that potential were gone—then you should remove that potential for yourself, by precommitting to decline the benefit. Note that it has to be a significant causal component; for instance, if society as a whole has any say in the event happening vs. not happening, then to the extent that society cares positively about you at all (which it probably does, at least a little), there’s at least that much of a causal component there. But if it’s only that tiny amount, and not a situation where the interests of people like you are the primary driver of the possibility, then feel free to pick up that free utility off the ground.
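The rule can be phrased as a small decision procedure. This is my own sketch, not anything from the post: the probability inputs and the “significance” threshold are stand-ins for judgment calls that the principle leaves qualitative.

```python
def should_precommit(p_event, p_event_without_your_benefit,
                     significance_threshold=0.1):
    """Precommit to decline a benefit from a bad event iff removing
    that benefit would significantly reduce the event's probability.

    The 10% default threshold is an arbitrary illustrative stand-in
    for "significant causal component".
    """
    causal_share = (p_event - p_event_without_your_benefit) / p_event
    return causal_share >= significance_threshold

# Volcano-like case: your career benefit has no effect on the eruption,
# so the causal share is zero and you keep the free utility.
print(should_precommit(0.05, 0.05))  # False

# Trade-policy-like case: the proposal exists largely because people
# like you stand to benefit, so the causal share is large.
print(should_precommit(0.6, 0.1))    # True
```

The interesting work, of course, is in estimating how much less likely the event would be without your class’s stake in it; the function just makes explicit that this counterfactual is the quantity the principle turns on.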

So, with this in mind, how would I resolve the situations at the top of this post?

1. If you are, in fact, a character in a murder mystery, then definitely decline; not only is it ethically obligatory, but as an added bonus it also makes you less likely to be suspected! In real life, I think you only need to do this if there’s a significant possibility of the relative being murdered for the inheritance. This is a bit of a gray area, since you never really know, but in general I’d say it’s fine.

2. No need to make a precommitment here; your research and career have absolutely no causal impact whatsoever on whether the volcano erupts.

3. Here I think you should precommit not to benefit. The reason the bad trade policy is being proposed is presumably because workers/consumers in your country want it, and you yourself fall in that class. You’re similar enough to other members of it that you should make the precommitment, because if they all did the same, then the bad proposal would go away (since nobody would have any political incentive to back it) and the people in developing countries would benefit.

Causal modeling is a way of trying to predict the consequences of something that you might do or that might happen, based on cause-and-effect relationships. For instance, if I drop a bowling ball while I’m holding it in front of me, gravity will cause it to accelerate downward until it lands on my foot. This, in turn, will cause me to experience pain. Since I don’t want that, I can infer from this causal model that I should not drop the bowling ball.

People are predictable in certain ways; consequently, other people’s actions and mental states can be included in a causal model. For example, if I bake cookies for my friend who likes cookies, I predict that this will cause her to feel happy, and that will cause her to express her appreciation. Conversely, if I forget her birthday, I predict that this will cause her to feel interpersonally neglected.

All of this is obvious; it’s what makes social science work (not to mention advertising, competitive games, military strategy, and so forth).

But what happens when you include your own actions as effects in a causal model? Or your own mental states as causes in it?

We can come up with trivial examples: If I’m feeling hungry, I predict that this will cause me to go to the kitchen and make a snack. But this doesn’t really tell me anything useful; if that situation comes up, this analysis plays no part in my decision to make a snack. I just do it because I’m hungry, not because I know that my hunger causes me to do it. (Of course, the reason I do it is because I predict that, if I make a snack and eat it, I will stop being hungry; and that causal model does play an important role in my decision. But in that case, my actions are the cause and my mental state is the effect, whereas for the purposes of this post I’m interested in causal models where the reverse is true.)

Here’s a nontrivial example: If I take a higher-paying job that requires me to move to a city where I have no friends (and don’t expect to easily make new ones) in order to donate the extra income to an effective charity, I predict that this will cause me to feel lonely and demoralized, which will cause me to resent the ethical obligations towards charity that I’ve chosen to adopt. This will make me less positively inclined towards effective altruism and less likely to continue donating.

This wasn’t entirely hypothetical (and I did not in fact take the higher-paying job, opting instead to stay in my preferred city). Furthermore, I see effective altruists frequently use this sort of argument as a rationale for not making sacrifices that are larger than they are willing to make. (Such as taking a job you don’t like, or capping your income, or going vegan, or donating a kidney.)

I believe that we ought to be more careful about this than we currently are. Hence the title of this post.

The thing about these predictions is that they can become self-fulfilling prophecies. At the end of the day, you’re the one who decides your actions. If you give yourself an excuse not to ask “okay, but what if I did the thing anyway?” then you’re more likely to end up deciding not to do the thing. Which may have been the desired outcome all along—I really didn’t want to move—but if you’re not honest with yourself about your reasons for doing what you’re doing, that can screw over your future decision-making process. Not to mention that the thing you’re not doing, may, in fact, have been the right thing to do. Maybe you even knew that it was.

(The post linked at the top provides a framework which mitigates this problem a bit in the case of effective altruism, but doesn’t eliminate it. You still have to defend your decision not to increase your total altruism budget in a given category—or overall, if you go with one of the alternative approaches Ozy mentions in passing that involve quantifying and offsetting the value of an action.)

But the other thing about the Dark Arts is that sometimes we need to use them. In the case of causal self-modeling, that’s because sometimes your causal self-model is accurate. If I devoted all my material and mental resources to effective altruism, I probably really would burn out quickly.

The thing about that assessment is that it’s based on the outside view, not on my personal knowledge of my own psyche. This provides a defense against this kind of self-deception.

Similarly, a valuable question to ask is: To what extent is the you who’s making this causal model right now, the same you as the you who’s going to make the relevant decisions? This is how I justify my decision not to go vegan at this time. It’s already difficult for me to find foods that I like, and I predict that if I stopped eating dairy I would eat very poorly and my health would take a turn for the worse. That would be the result of in-the-moment viscerally-driven reactions that I can predict, but not control, from here while making my far-mode decision.

So in the end, we do have to make these kinds of models, and there are ways to protect ourselves from bias while doing so. But we should never forget that it’s a dangerous game.

Following the tradition established by Scott Aaronson of Umeshisms and Malthusianisms, I propose the term “Hofstadterism”.

A Hofstadterism is a principle which claims that you should adjust your thinking, or your analysis of some situation, in a particular direction—and that the principle remains applicable even if you think you’ve already accounted for it.

The concept isn’t particularly closely related to the general philosophy or works of Douglas Hofstadter; I’m just using the term as a generalization of the eponymous law quoted above. In its original context, Hofstadter’s law was a commentary on predictions of artificial intelligence; famously, on more than one occasion in the history of AI, seemingly-promising initial progress led to widespread optimistic projections of future progress that then failed to arrive on schedule. Since then, “Hofstadter’s law” has been used more broadly to refer to the planning fallacy.

Hofstadterisms seem paradoxical. If the correct answer is always to update in the same direction—in the original example, to always make your estimated completion time later, no matter how late it already is—then don’t you end up predicting that it will take an infinite amount of time?

If you apply the principle literally, yes. (Hofstadterisms do not make good machine-learning rules.) However, humans don’t actually do this; even if you really take a Hofstadterism seriously, you’re not actually in real life going to apply it infinitely many times. (Hence the saying, which I unfortunately can’t find a source for on Google: “Any infinite regression is at most three levels deep.” I suppose you could think of this as the anti-Hofstadterism.) In practice, you’re eventually going to arrive at what seems like the position which best balances all the relevant factors that you know. Hofstadterisms are useful when we know, from the outside view, of a tendency for this seemingly-balanced analysis to actually end up being skewed in a particular direction. They offer the opportunity to correct for those biases which remain even after everything appears to be corrected for.
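The divergence is easy to make concrete. Suppose “accounting for” Hofstadter’s law always means inflating your current estimate by some fixed factor; the 20% figure below is an assumption of mine for illustration, not anything from the original law.

```python
def apply_hofstadters_law(estimate_weeks, levels):
    """Apply "it always takes longer, even when you take this law into
    account" a fixed number of times, inflating by an assumed 20% per
    level of accounting."""
    for _ in range(levels):
        estimate_weeks *= 1.2
    return estimate_weeks

# A human applies the correction a few times and stops:
print(round(apply_hofstadters_law(4, 3), 2))  # 6.91

# Applied literally, the estimate grows without bound -- there is no
# finite fixed point, which is why it makes a bad mechanical update rule:
print(apply_hofstadters_law(4, 1000) > 1e70)  # True
```

A multiplicative bump with no stopping condition has no fixed point short of infinity; the human version works only because we stop after a level or three.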

One of the most important kinds of Hofstadterism is the ethical injunction—at least, according to the way such injunctions are used by consequentialists (as opposed to actual deontologists). In theory, consequentialists ought not to have any absolute rules of ethics, other than the fundamental rule of seeking the best possible consequences—which provides no definite constraints over our actions. In practice, we find that certain rules are not merely useful, but essential—to the point where, if you think that it’s right for you to abandon them, you’re wrong. Hence the paradoxical-sounding principle: “For the good of the tribe, do not cheat to seize power even for the good of the tribe.”

One important pitfall to beware of is trying to apply a Hofstadterism to a situation that’s actually a memetic prevalence debate.

To follow the original example from that post: Suppose you believe that our culture demonizes selfishness, and this distresses you because you’re afraid that it makes people psychologically unhealthy. You try to fight this by promoting selfishness as a virtue, giving people in your social group copies of Atlas Shrugged, whatever. Suppose you spread this idea, and it starts to take hold in your social environment, to the point where you hear others espousing it—and yet you still see so many people feeling bad about having needs and wanting to do things for themselves. You might be tempted to think that a Hofstadterism applies here: “Everyone ought to be more selfish, even if they think that they’ve already accounted for the idea that they ought to be more selfish.”

It’s hypothetically possible that you’ve uncovered a deep and insidious bias that pervades human nature. But that’s not the most likely possibility. In this scenario, you should instead consider the possibility that your social environment has become an echo chamber, and that people outside it simply never got the message in the first place.

Hofstadterisms are a powerful tool. Use them wisely.

P.S. Also in the tradition of Scott Aaronson: Anyone have any other ideas for particular Hofstadterisms?

Long ago there lived an old woman who had a wish. She wished more than anything to see for herself the difference between heaven and hell. The monks in the temple agreed to grant her request. They put a blindfold around her eyes, and said, “First you shall see hell.”

When the blindfold was removed, the old woman was standing at the entrance to a great dining hall. The hall was full of round tables, each piled high with the most delicious foods — meats, vegetables, fruits, breads, and desserts of all kinds! The smells that reached her nose were wonderful.

The old woman noticed that, in hell, there were people seated around those round tables. She saw that their bodies were thin, and their faces were gaunt and creased with frustration. Each person held a spoon. The spoons must have been three feet long! They were so long that the people in hell could reach the food on those platters, but they could not get the food back to their mouths. As the old woman watched, she heard their hungry, desperate cries. “I’ve seen enough,” she cried. “Please let me see heaven.”

And so again the blindfold was put around her eyes, and the old woman heard, “Now you shall see heaven.” When the blindfold was removed, the old woman was confused. For there she stood again, at the entrance to a great dining hall, filled with round tables piled high with the same lavish feast. And again, she saw that there were people sitting just out of arm’s reach of the food with those three-foot-long spoons.

But as the old woman looked closer, she noticed that the people in heaven were plump and had rosy, happy faces. As she watched, a joyous sound of laughter filled the air.

And soon the old woman was laughing too, for now she understood the difference between heaven and hell for herself. The people in heaven were using those long spoons to feed each other.

If you found this a little glurgey and of questionable value in delivering an actual moral lesson, well, that makes at least two of us. But even if Wikipedia calls it an allegory, a metaphor can still apply in more than one domain.

My suggestion is that this story can be a metaphor for dealing with cognitive bias.

The idea is that there are some things that we can do for other people more easily than we can do them for ourselves. This isn’t garden-variety comparative advantage; this is the idea that sometimes we have a comparative disadvantage in dealing with something that affects us, specifically because it affects us instead of somebody else. This isn’t the case in most domains, but I think it may be the case in the domain of rationality. We all know that identifying skewed thinking from the inside is really hard, since many biases insidiously warp our thinking in such a way as to prevent us from seeing them.

One thing I’ve noticed is that occasionally, when I’m developing or expressing an opinion on something—particularly questions of political significance, in the “tribal politics” sense, but sometimes in other domains—I have this vague sense that my thought process might not be entirely trustworthy. It feels as though there’s something going on in my brain that shapes my beliefs around tribal affiliation, or some other bias, rather than correct reasoning. Unfortunately, this is where my self-awareness seems to end; pushing harder on this feeling doesn’t reveal any clues as to where the fault might lie.

According to the message that I most often see promoted in the rationality community, you must cultivate the extremely difficult skill of pushing through that feeling, seeing the distortions in your thought process for what they are, and fixing them—and that, while of course you can cultivate this skill alongside others, in the end, you are on your own. In this view, the ultimate goal is complete cognitive self-reliance.

I want to suggest a different, complementary approach: treating rationality as a social process.

If cognitive bias is causing me to say something obviously stupid about a particular topic, then other people are likely to be better at noticing what’s going on than I am; indeed, if this weren’t the case, the rationality community wouldn’t have been able to recognize recurring failure modes in domains like politics. So if that vague feeling comes into my brain, and I suspect that this is in fact what’s going on, might others be able to help me see through it? “Hey, I feel like idea X must be true, because argument Y, but I also feel like I’ve got a blind spot here and am failing to account for something obvious—does any of this sound wrong to you?”

It is better to find one fault in yourself than a thousand in someone else—but if finding a fault in someone else is more than a thousand times easier, then that implies the highest-expected-value thing to do is look for faults in each other.
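For what it’s worth, the expected-value claim in that sentence can be spelled out with toy numbers. Everything below is an illustrative assumption I made up for the sake of the arithmetic, not data from anywhere:

```python
# Back-of-envelope expected-value comparison for the aphorism above.
# All of these numbers are illustrative assumptions, not measurements.

value_of_self_found_fault = 1000   # per the saying: one fault found in yourself...
value_of_other_found_fault = 1     # ...is worth a thousand found in someone else

# Suppose introspection is hard enough that an hour of solo effort finds,
# on average, 0.0005 faults in yourself, while an outside critic spots
# 2 faults in you in the same hour (a far-more-than-1000x ease difference).
faults_found_solo_per_hour = 0.0005
faults_found_by_critic_per_hour = 2

ev_solo = faults_found_solo_per_hour * value_of_self_found_fault

# In a symmetric pair, the faults your partner finds in you are still
# *your* faults being found, so they carry the high self-fault value.
ev_mutual = faults_found_by_critic_per_hour * value_of_self_found_fault

print(ev_solo)    # 0.5
print(ev_mutual)  # 2000
```

Under these (made-up) numbers, an hour of mutual criticism is worth thousands of times more than an hour of solo introspection; the conclusion only flips if the ease ratio falls back below the value ratio.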

(Especially since most of us are never going to be completely cognitively self-reliant no matter how hard we try, as even the most ardent rationality evangelists will acknowledge. And since, taking the outside view, most people who think that they are completely cognitively self-reliant are wrong.)

Of course, there are some obvious failure modes that rationality-as-a-social-process can fall into, and it won’t work in just any social context. In an environment where treating arguments as soldiers is already completely normalized, asking people who disagree with you to tell you how your opinions are biased isn’t going to bring you any closer to the truth. Sometimes you have to defect in the prisoner’s dilemma, so to speak. This implies that, if we care about finding truth, we should work to create spaces where this kind of constructive criticism is normalized, and where participants in the discourse can have an expectation—backed up by social norms which are enforced in the usual ways—that a request for such criticism won’t be taken as an opportunity for an opposing “army” to gain ground without similarly subjecting itself to potential criticism.

The other big issue is trust: this whole process does no good unless I can take the critic’s assessment of my rationality seriously, which means I have to trust their rationality, as well as their good intentions.

Overall, despite the very real pitfalls, I think that the role of feedback from others in rationality is underappreciated, and that we who seek to overcome our biases would do well to rely more heavily on it. Of course, I could be totally wrong about this—but that’s what the comments section is for.

The preceding five posts in this series were about issues in effective altruism where I have a pretty good idea where I stand. In contrast, I’m going to end the series by talking about a problem that I haven’t figured out how to solve.

About seven million children are going to die this year of preventable poverty-related causes. According to GiveWell’s cost-effectiveness estimates, it is possible to save the life of one of those children by donating approximately $3,340 to the Against Malaria Foundation.

So here are two possible options that hypothetically might be available to me:

Do nothing. If I take this option, about seven million kids will die this year.

Donate $3,340 to AMF. If I take this option, about seven million kids will die this year.

From this perspective, the only difference between those two scenarios is that I’m $3,340 poorer in the second one. This does not make the second one look very appealing.

Of course, in reality there’s another major difference: on average, a child’s life is saved in the second scenario. In absolute terms, the difference between a world where that child lives and a world where they die is huge. It’s an entire human life, with all its joys and accomplishments over the course of decades. It is easily worth far, far more than $3,340.

But I’m never going to meet that child, or know anything about them. And when I look at the effects on a larger scale, the fraction of the overall problem that I’ve solved is so minuscule as to not be noticeable. Literally nothing I can do is ever going to make a significant dent in global poverty.

If everybody made a significant personal sacrifice, then we could easily solve global poverty, and far more than that. Everybody would get to feel the satisfaction of knowing that they’d saved not one life, but millions upon millions. I think you could frame this in such a way that most people would see the sacrifice as worth it. But we don’t have any effective ability to coordinate a solution like that, so I can’t rely on everybody else going along with what I do; I have to decide alone.
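As a rough sanity check on the claim that coordinated sacrifice could solve the problem, here is the naive arithmetic, assuming (quite unrealistically) that GiveWell’s $3,340 marginal figure would hold at full scale, and picking an arbitrary illustrative pool of one billion well-off donors:

```python
# Naive scale arithmetic: what would it cost to avert all seven million
# deaths at the quoted marginal rate? (In reality the $3,340 figure is a
# marginal estimate for AMF and would not hold at this scale; this is
# purely an illustration of the coordination point.)

deaths_per_year = 7_000_000
cost_per_life = 3_340          # USD, the GiveWell/AMF estimate cited above

total_cost = deaths_per_year * cost_per_life
print(f"${total_cost:,}")      # $23,380,000,000 -- about $23 billion

# Spread over, say, a billion of the world's richest people (an arbitrary
# illustrative pool), that's on the order of $23 per person per year.
per_capita = total_cost / 1_000_000_000
print(f"${per_capita:.2f}")    # $23.38
```

The point of the toy calculation is only that the total is small relative to what humanity plausibly could mobilize if it coordinated, which is exactly why the lack of a coordination mechanism, rather than the raw cost, is the obstacle.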

The fundamental problem here is that the number of people affected by global poverty is large, and human brains are really, really bad at dealing with large numbers. We can see that visibly saving one person’s life is worth a personal sacrifice. We can see that making a significant dent in the overall problem of global poverty is worth a personal sacrifice. But when the life you save is invisible, and the dent insignificant? That’s harder for our brains to see as actually worth it.

Even though it totally is.

There are a couple of things that I can do for myself to help mitigate this problem. One of them is to remind myself that I’m not relying on the warm glow. If donating feels like throwing money into a bottomless pit, but I know that it saves lives on average and is the right thing to do, then that’s enough for me to get myself to do it. And that’s what actually counts.

Another thing I do is follow the work of organizations that do research, and are transparent enough about it that I feel like I have a good picture of what progress they’re making. GiveWell, with their regular blog posts and research updates, is the paragon of this (with GiveDirectly earning an honorable mention). Reading their material gives me a sense that we are, slowly but surely, making concrete progress towards actually solving global poverty and the other giant problems in this world.

But still, it’s a problem. It’s all in my head, but it’s still real. And I think other effective altruists struggle with it too. If anybody has any effective techniques or ways of looking at the problem that help make dealing with it easier, I’m all ears.