Suppose you have a lot of reasons to believe a thing, and no good reasons to not believe it. Should this make you more or less likely to believe it, relative to a case where the considerations are a bit more mixed?

At first glance, it appears you should believe it more, since things with lots of good reasons for them and no reasons against them tend to be true.

However, often when people have many reasons for a thing and no reasons against it, it is because they have been collecting them, probably unintentionally. Humans seem to do this when they have a belief they care about.

For instance, when I was younger I could have told you fifteen reasons that logging old growth forests in Tasmania was harmful. They were more economically valuable as a tourist attraction, and the logging was perpetuating corruption, and the forests harbored endangered species, and so on. Somehow, coincidentally, almost all of the considerations aligned.

For another instance, vegetarians often think that vegetarianism is very easy, and healthy, and moral, and good for the environment, and more enjoyable. Meat eaters often think that vegetarianism is inconvenient, and unhealthy, and not morally important, and worse for the environment in certain ways, and unpleasant. It is unlikely that all the considerations would truly align, and even less likely that they would align in opposite directions depending on who carries out the supposedly unbiased analysis. I think here both groups are aware of some people on the other side doing this.

This seems common enough that if you find yourself with a collection of considerations all pointing in one direction, you should be somewhat worried.

On the other hand, often you have lots of aligned reasons because there is some fundamental reason behind them all. For instance, if you can’t prove a statement in math, because every different way you can think of to try to prove it fails, this may be because it is false. Here you should expect all of the evidence to point in one direction.

A less clear case is whether exercise is good. It seems there are lots of different good reasons to exercise. But all of them go through you being healthy, so this is not so surprising — you will look better, you will feel better, you will live for longer, you will be more sane and happy.

Some situations are more conducive to evidence all pointing one way, or coming out in different directions. If the question is whether something is on net good, and it has a variety of effects, probably some should point in different directions. If the ‘considerations’ are a number of somewhat noisy measurements of a hidden quantity, then if the first measurement is high, probably the others will be too.

In whatever situation, you should also expect some things to come out with all of the evidence pointing in one direction, by chance. You might also expect all of the considerations to come out one way due to a selection effect in combination with chance. For instance, if you thought of a particular example because it was the case which you think is most overwhelmingly lopsided, then it was selected for being lopsided, and so this is less surprising than if another random belief had this character.

I think if you find yourself with lots of aligned considerations like this, you should consider asking yourself:

Is there some root consideration pushing all the other considerations one way?

Am I motivated to believe this thing?

Is this a kind of situation where I should expect all of the evidence to point in one direction?

***

I wrote recently about considerations in choosing how weird to be. Today let us consider the question from an impersonal perspective: what is the socially optimal allocation of weirdness? Society and weirdness are complicated, so again let us just discuss some considerations.

Social costs of people being judged badly

When individuals avoid being weird, it is often because they want to be judged well in some way. From an impersonal perspective, does it matter if you judge me badly? This seems to depend on the extent to which people judge one another absolutely, versus relatively, and whether people care about the judgement absolutely or relatively.

If you judge me as a relatively bad friend, and then you replace me with a different friend, this seems bad for me, but good for your other friend, so socially neutral. If you judge me as an absolutely bad friend, this might hurt me without providing a compensating benefit to someone else. It will hurt me more if I care about my absolute quality as a friend than if I’m mostly worried about being at least as good as most people. It seems to me that a combination of these things happens in practice. So the private costs of being judged for weirdness partly translate to social costs.

It also matters how much you care about making judgments in a particular way (e.g. correctly). If you actually don’t want to interact with people with the wrong political beliefs for instance, then if I hide my political beliefs and we become friends this will be bad for you. If you merely don’t want to have awkward political discussions, then it is fine if I hide my beliefs.

Signaling race

In some cases, ‘not weird’ is continually and narrowly redefined, to make locating it a reasonable sign of social savvy. For instance, if you are a girl in high school, you might learn that it is weird not to own any Barbie dolls. However once you manage to get a Barbie doll, you may find that it is the wrong Barbie doll, or that Barbie dolls are no longer the normal thing any more and are now the preserve of weird kids like you. This race presumably takes some amount of effort from the weird and the non-weird people alike, which would be averted if people didn’t try to avoid weirdness.

Neutral views

Suppose everyone chooses one topic on which to spend their weirdness budget, and there they think deeply and advocate hard for what they think is right. On all the other topics, they take the most common position. Then virtually every view on every topic will be directed by conformity, and it won’t matter that each person put thought and effort into their own cause. The status quo will reign forever, on almost every issue. If everyone has many more implicit votes than they have weirdness to fund them with, then public opinion is almost completely uninformative. Thus in such a case it seems probably better overall for people to be at least weird enough that public opinion is informed by thought. This can happen for instance if people express a lot more minority views, or if there are multiple non-weird views on every issue.

Economies of scale and congestion

It is good for efficient consumption if people aren’t weird with respect to tastes in information goods like music and TV. For instance, people who don’t enjoy Game of Thrones are just going to miss out on what could have been basically free pleasure. For goods where one person using them means another person cannot, there is more of a trade-off. There are still often economies of scale, so others sharing your tastes makes it easier for you to get what you want (e.g. it is very easy to get Coca-Cola relative to rice milk, which is not because rice is hard to grow). However other people can also get in your way and buy up the things you want, so it can be better for people to be more weird for some tastes. For instance, it’s better if people have different favorite mountains to climb, if everyone likes to climb in peace.

Standards

There are often costs from people using different standards. For instance, when I took the GRE I suffered a cost from having learned to type using the Dvorak keyboard layout, because the GRE computers can only use Qwerty. I and a bunch of French people also suffered costs when I went to France and sat in their train seats and we couldn’t talk about it, and when they closed their restaurants for meal times, unexpectedly.

Variety

Weirdnesses offer variety, which has various benefits. Some people like it for its own sake. It also naturally allows experimentation, which enriches the lives of the non-weird later. For instance, it seems good that some people want to entirely live on synthetic nutrient slurries, because eventually they might find some that are delicious and well tried enough that it becomes a common lifestyle choice.

Variety also produces robustness. That some people like to live in the countryside means an epidemic can’t kill everyone so easily. That some people keep a thousand cans of beans in their basement makes society even safer.

Information

Honesty about weirdness is useful for people who contribute to policy, as a source of information about people’s values. For instance, if almost everyone who was homosexual decided that it wasn’t an optimal place to seem unusual, and avoided mentioning it ever, then nobody would ever have known how important improving the treatment of homosexuals was.

***

In sum, from society’s perspective, it seems pretty unclear how weird it is best for people to be. Several considerations point in different directions. Incidentally, it also seems very unlikely to align with how weird individuals want to be.

***

Slate Star Codex writes about a patient (or patient amalgam) who was suicidal, apparently for want of a few thousand dollars:

…So what bothered me is that psychiatric hospitalization costs about $1,000 a day. Average length of stay for a guy like him might be three to five days. So we were spending $5,000 on his psychiatric hospitalization, which was USELESS, so that we could send him out and he could attempt suicide again…

…Problem is, you don’t have to be an economics PhD to realize that “give $5,000 to anyone who attempts suicide and says they need it” might create some bad incentives.

I have no good solution to this…

I’m curious about solutions to this. However I’m going to talk about a slightly different situation, where the person in question is driven by desperation to be in a drug experiment which will make the rest of their life of neutral value. The drug, Neutrazine, has no social value, and is being trialed for entirely morally neutral reasons.

So we want to be able to give people a few thousand dollars at times when their not taking Neutrazine is worth more than a few thousand dollars to us, and where a few thousand dollars would be enough to keep them away from Neutrazine, without causing them to get into such situations more readily, or to lie to us about whether they are really so badly off that they would take Neutrazine.

This sounds kind of hopeless: if you are willing to rescue people in bad situations, and they know this ahead of time, surely some people will get into bad situations more often and/or lie about it.

This actually seems like a case where moral hazard should be avoidable. The person in question has the option to make their life worth nothing using Neutrazine, from any initial level of value. This is worth a positive amount to them, in the cases where you are hoping to help them. But if you give them just the same positive amount in money, this also makes their life neutral and takes the Neutrazine option off the table (because it would do nothing). So it doesn’t change the expected value from their perspective at all, and thus doesn’t influence their decisions ahead of time. Yet it is much better from your perspective, because you valued their life a lot more than a few thousand dollars.
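To make the structure of this argument concrete, here is a toy numeric sketch (all dollar figures and function names are my invention for illustration; nothing here comes from the original post):

```python
def neutrazine_gain(life_value):
    """What joining the trial is worth to someone who values the rest of
    their life at life_value: the trial resets that value to zero."""
    return 0.0 - life_value

def cash_offer_outcome(life_value):
    """Instead, pay them cash equal to their gain from Neutrazine.
    Their life becomes exactly neutral, so the trial now does nothing,
    and their expected value is unchanged either way."""
    payment = neutrazine_gain(life_value)
    return life_value + payment  # always 0.0

# Someone whose remaining life feels worth -$3,000 gains $3,000 either way:
print(neutrazine_gain(-3000.0))     # 3000.0
print(cash_offer_outcome(-3000.0))  # 0.0, but they are still alive,
# which you valued at far more than the $3,000 you paid
```

The point is only that both options leave the person at the same expected value, so the cash offer cannot distort their earlier decisions.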

This might seem unsatisfactory, in that you got all the gains. However you could give them some gains, without influencing their behavior much. Also, there may be gains to their future self that were discounted more than you would like. And it might be that a person joining a Neutrazine trial will tend to be underestimating their future opportunities (due to the selection effect), so it is better for their life to be neutral according to their expectations than guaranteed to be neutral, on average.

This isn’t a solution, because it requires you to know how valuable things are to the other person. As mentioned earlier, they can just tell you their life is worse than it is. People whose lives are not bad at all can claim they are going on Neutrazine. Partial solutions to this could come from mechanism design, neuroimaging or lie detection. I’ll talk about the mechanism design option.

We have a collection of people whose lives have varying degrees of value to them. We would like to distinguish them, but they all look the same. One obvious difference is their willingness to join a Neutrazine trial. Once we have an action like this, that people with worse lives are more willing to take, we can use it to construct a choice that people will make differently, and which will also differentially help those who need it.

Here is an imperfect one: offer a bundle of $1,000 and a 10% chance of joining a Neutrazine trial. This is of negative value for people whose lives have more than $9,000 of value to them (since if the trial happens, the $1,000 is neutralized along with everything else), and positive for those whose lives are worse than that. This isn’t great, in that you help some people who are less desperate, and you can only help people a small amount, but it seems better than the apparent status quo.
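The $9,000 threshold follows from simple expected-value arithmetic, on the assumption that joining the trial neutralizes everything, the $1,000 payment included. A minimal sketch (the function name and example figures are mine, not the post’s):

```python
def bundle_gain(life_value, payment=1000.0, trial_prob=0.10):
    """Expected gain from accepting the bundle, relative to declining.

    Assumes that if the trial happens (probability trial_prob), the rest
    of your life, the payment included, becomes worth exactly nothing.
    """
    # With probability (1 - trial_prob) you keep your life plus the cash;
    # with probability trial_prob everything is neutralized (worth 0).
    accept = (1 - trial_prob) * (life_value + payment)
    return accept - life_value

print(round(bundle_gain(8000.0), 6))   # 100.0: worth accepting
print(round(bundle_gain(9000.0), 6))   # 0.0: the break-even point
print(round(bundle_gain(12000.0), 6))  # -300.0: a better-off person declines
```

Setting the gain to zero gives the break-even life value of payment × (1 − trial_prob) / trial_prob = $9,000.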

***

Suppose you are in the business of making charity recommendations to others. You have found two good charities which you might recommend: 1) Help Ugly Children, and 2) Help Cute Children. It turns out ugly children are twice as easy to help, so 1) is the more effective place to send your money.

You are about to recommend HUC when it occurs to you that if you ask other people to help ugly children, some large fraction will probably ignore your advice, conclude that this effectiveness road leads to madness, and continue to support 3) Entertain Affluent Adults, which you believe is much less effective than HUC or HCC. On the other hand, if you recommend Help Cute Children, you think everyone will take it up with passion, and much more good will be done directly as a result.

***

It is often said that you should spend your weirdness budget wisely. You should wear a gender-appropriate suit, and follow culture-appropriate sports, and use good grammar, and be non-specifically spiritual, and support moderate policies, and not have any tattoos around either of your eyes. And then on the odd occasion, when it happens to come up, you should gather up your entire weirdness budget and make a short, impassioned speech in favor of invertebrate equality. Or whatever you think is the very most effective use of weirdness. In short: you only get so much weirdness, so don’t use it up dressing like a clown or popularizing alternative sleep schedules.

While I agree the oddball activist will often get less airtime than her unassuming analog, and that weirdness is often a cost, the issue seems more complex. Let us better explore weirdness budgeting.

Model #1: Weirdness is badness

A first simple model is that people don’t like weird things, so if you have any, they will like you less in expectation. Weirdness is a kind of badness. On this model, I suppose the reason you would want to be weird at all is that you just are weird, and it is hard or unpleasant to keep it under control.

Some characteristics are certainly like this. For instance, being shockingly unable to use corkscrews, or tending to fart really loudly. These are just bad characteristics though, and don’t seem like they need to be budgeted differently from other bad but not weird characteristics, like being lazy and stupid. I don’t think this is what people have in mind when they say to spend your weirdness budget wisely.

Model #2: Weirdness is rarity, and rarity is bad

Here is a closely related model. Weird traits are not inherently bad, but they are inherently unusual, and being unusual is inherently bad. On this model, the reason you want to have a weird trait could be that you like the trait, and so you want to make it less unusual.

If many people feel that way, then on this model, weird traits are tragedies of the commons. e.g. If everyone could be naked in the street, the world would be a better place for everyone. But sadly, because nobody does it, anyone who starts is socially punished. So it is only the very altruistic person who will pull off their pants and be ostracized for the common good.

Model #3: Weirdness among the cool kids is bad

This is like the last model, but explains why you would want to budget your weirdness. In it, it doesn’t matter how common a trait is, it matters how common it is among cool people (or perhaps how differentially common it is among cool people). So then you don’t want to help popularize too many weird traits, because the more weird traits you have the less cool you seem, and thus the less your vote in favor of those traits counts.

I think there is a hint of truth to these models so far. Some kinds of unusualness are inherently bad, unusualness in general is often bad, and having traits makes those traits less unusual among people like you. However I highly doubt that people are mostly weird out of altruism, or even altruism combined with inability to control their weirdness. People love being weird. (Often.)

Model #4: Weirdness is divisive

Some weird traits are unambiguously bad. Some are unambiguously good, and empirically, these don’t appear to use up weirdness budget. If you are weirdly hilarious this probably means you can get away with more other weirdness, not less.

Many traits are a bit good and a bit bad: they please some people while scaring off others. If a trait is ‘weird’, probably it displeases most people, and appeals to few. But this isn’t necessarily a bad deal, even from a selfish perspective.

For one thing, it might please the few a lot. Being into 15th Century East Asian architecture will seem merely not that interesting to the vast majority of people, while exceptionally exciting to the few who share your interest.

For another thing, it matters how much you care about different levels of liking. For many circumstances, the big value is in having everyone think you are basically ok. If you are widely considered basically ok, you can be trusted on routine issues, you can have a job, you can have friends, you can be taken seriously. If you are basically ok and have one weird opinion, you can be a datapoint suggesting that weird opinion is ok for basically ok people to have.

However if you want people to buy your book, or change continents to live with you, or fund your experimental research organization, then you need some people to really like you. But luckily, you don’t need that many. And when the bar is high, and you only need to meet it a few times, you want high variance. If you can pick up a trait that 90% of the population dislikes, but the remainder likes, you might take it. Because ten percent of people liking you can be way better than everyone being indifferent. And then you might do it again, and again. Until eventually, you marry the last person and ignore the rest.
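The high-variance logic here can be sketched numerically. Suppose, as a toy assumption, that each person you meet independently loves the divisive trait with probability 0.1, and that what matters is having at least one real fan:

```python
def p_at_least_one_fan(n_people, fan_fraction=0.10):
    """Probability that at least one of n_people independently loves
    a trait that each loves with probability fan_fraction."""
    return 1 - (1 - fan_fraction) ** n_people

# Universal mild approval never clears a high bar, while the divisive
# trait clears it more and more reliably as you meet more people.
for n in (5, 10, 30):
    print(n, round(p_at_least_one_fan(n), 2))
# prints: 5 0.41, then 10 0.65, then 30 0.96
```

Under this (invented) model, indifference from everyone gives you a fan with probability zero, so the divisive trait dominates whenever one strong supporter is what counts.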

Of course, there are also traits that 60% of people are indifferent to, and 40% of people love, and these are a better deal, and you should start there, all things equal. But there are many other reasons to have particular traits, e.g. you already have them, and it would be effort to hide or destroy them. Generally, it is easy for a trait you want to have for other reasons to be positive value on social grounds in spite of being weird and seeming bad to many people.

Causes and policy views tend to fit in this ‘divisive’ category. If you advocate for abolishing the minimum wage, some people will love you more, and some people will hate you more. Causes are often political, which means that who likes you more and who hates you more is correlated across causes. This makes spending a bunch of weirdness an even better deal. Once you have advocated for abolishing the minimum wage, if you mostly care about some people liking you a lot, you may as well go on to support a slew of other free market policies, because the same people who liked you the first time will like you more, instead of you losing half of them at every step.

Model #4.1: Weirdness is divisive, the goal is spreading weird traits

So far we assumed you wanted to be liked or taken seriously a certain amount by other people. What if we suppose you have a set of weird traits you are in favor of, which you may choose to express or not, and your primary goal is to spread them? (As described in #2). For instance, suppose you care a lot about animal suffering, and also the far future, and think cryonics should be much more common, and think public displays of affection should be normal, and that polyphasic sleep is a thing everyone should try.

As described in #4, variance gets you smaller numbers of people who feel more positively toward you, and sometimes this is worth it. For instance, if nobody will take any of your ideas seriously unless they think you are incredibly impressive. There are a couple of important features specific to the ambition of spreading weird traits however.

One is that to spread a weird trait, you generally have to have it, or associate yourself with it somehow. That potentially makes expressing more of your traits better, aside from its effect on how well respected or liked you are. Suppose you want people to agree with you on cryonics and the far future. Then even if talking about both topics reduces how much people are willing to listen to you, it might be worth it because now your small remaining group of admirers think about twice as many topics you want them to think about. This assumes they don’t just reduce their attention to your first topic proportionally.

Note that the incentives here are different for narrowly directed advocacy organizations and their members. You might do best advocating for whales and bad haircuts, but your whale organization would strongly prefer you just stick to the whales.

Another feature of the divisiveness model is that people disliking you is particularly costly when your goal is to spread traits. Often, causing half of humanity to mildly dislike you is not so bad, because it will just mean you don’t interact with them on a personal basis much, and you weren’t that socially ambitious anyway. However when people dislike you they will often associate your particular traits with dislike. It might still be worth trading some people disliking you for others liking you extra, but this consideration makes such trades worse than they would have been.

Model #5: Weirdness is local

It could be that most of what matters is weirdness relative to those around you, and that different groups find different things weird, and that you can change who is around you. This picture seems true for some kinds of traits, such as a weird sense of humor. In this case, you can either explicitly search for your people, or just act as you want to in the long run, scare away those who find it weird, and be left with a suitable group. In this model, being weird in a specific way has a one-time (though perhaps large and drawn out) cost, and then you can do it for free, forever. So in this model the wisest way to spend your so-called weirdness budget might be fast and completely.

Model #6: Weirdness as a signal

If weirdness is just a generic bad sign, or is a sign that you match with some groups of people or others, earlier models will perhaps suffice. But being weird often suggests other specific things about a person.

As soon as being weird is generally a bad option, it also becomes a sign of lack of awareness or self-control. For instance, if someone wears a ripped shirt to a job interview, one probably infers that they are clueless about customs, don’t own a nice shirt, or have some other mysterious agenda that one probably doesn’t want to be involved with. These kinds of signals lead to the basic situation described in model 2, where things that are not intrinsically bad become so by virtue of being weird. However this means that you can be more weird in certain ways without using up weirdness budget, if you counteract the signal on its own. For instance, if you enter a job interview and say ‘I’m sorry that my shirt is torn—I actually got it caught on a shrubbery on my way in here’, then the interviewer will no longer infer that you don’t know about social customs, though they may infer that you were interacting unusually with a shrubbery.

Model #7: Weirdness is honest

The usual consequence of advice to be thrifty with weirdness is that people end up with a collection of views and interests that they keep hidden from the world. Sometimes this might be actively deceptive, for instance when people with unspeakable views claim to have no views. But mostly avoiding being weird is just implicit misrepresentation. This suggests a range of considerations associated with honesty in general. Honesty has virtues and costs.

The costs of honesty as they apply here are I think mostly covered above—if you have traits that are widely acknowledged as bad, or make you seem like someone you don’t want to be seen as, or whatever, it is costly to let them be seen. I think there are some benefits of honesty that haven’t fit under other above models however.

It’s more interesting to know about a relatively complete, ‘authentic’ person than a flat, disconnected one-issue front that an unknown person has chosen to erect. People are usually interested in hearing about people more than ideas, so if you present yourself as a person this will probably interest them more. And a person generally has an array of idiosyncrasies and unusual concerns, including some that are not the most effective thing to be concerned about, and some characteristics that everyone agrees are actively bad.

Relatedly, revealing a relatively full array of your views and interests means people know you better, which tends to improve your relationship with them. I’d guess this is true even for people who observe you from far away on the internet. I think I feel more sympathetic to an author who admits they have characteristics beyond an interest in the subject matter.

Another virtue of honesty is that if people see the larger picture behind the particular view you are espousing, your behavior will make more sense, so you will seem more reasonable and interesting. For instance, if you advocate for developing world aid for a while, and then suddenly change to advocating for space travel, you might seem flakey. Whereas if you say all along that you care about doing the most cost-effective thing, and are open minded about causes, and are considering a bunch of them on an ongoing basis, and explain why you think these different causes are cost-effective, then this might seem consistent instead of actively inconsistent. Relatedly, as your views evolve it seems more natural for those who were interested before to remain interested if they understand the bigger picture of your motives.

Relatedly, particular weird views will often make more sense in the context of your larger set of weird views. If you espouse cryonics on its own, and don’t mention that you also think it will be possible to upload human minds onto computers, the cryonics will seem much more ambitious than it otherwise would.

Then there is just the usual problem that dishonesty is confusing and tangly. Views on some topics strongly suggest views on other topics, so if topics are out of bounds, you have to make sure you don’t imply anything about them. This is probably much easier in practice than it first seems, because people are not great at drawing inferences. I wouldn’t be surprised if using abstract language was enough to successfully hide most controversial statements most of the time. However there are probably other things like this.

If you tell people what you really care about, you can have more useful conversations with them, because they can give feedback and suggestions that actually matter to you. For instance, if I spend most of my time thinking about how to improve my life, but I write as if all I care about is resolving puzzles in social science, then your comments can only help me with puzzles in social science.

It can feel better to be honest. However this might just be down to better relationships and avoiding the mental taxation associated with maintaining an inoffensive front.

This is not an exhaustive account of the virtues of weirdness as honesty. Also note that none of the benefits I mentioned apply strongly all of the time. They are just considerations that sometimes matter, and sometimes make it better to be pretty weird.

***

Ok, those are all of my models of weirdness for now, and of how it is appropriate to splurge/invest in it. I suspect at least many of them have some truth, and apply to varying degrees to various weirdnesses in varying parts of the real world. There are probably other important dynamics I have missed. Overall, I’m still not sure how weird it is good to be in general. It seems plausible that many people should be relatively weird across the board, rather than saving it all up for one issue. I suspect some people are best off being weird while others should be more normal overall, and it is harder to tell what is best on the current margin, where some people are weird and some are normal. My guess is that you should often treat weirdness differently depending on what you want to achieve (basic respectability? Fame? A boyfriend? A good relationship with your audience? A good relationship with your organization?), and the nature of the weirdness in question (How much do some people like it? How much do others not? Does it send specific signals? Is it just bad?).

***

This is Katja Grace’s blog. It is about the idiosyncratic class of things Katja considers to be on the frontier of important and interesting. Empirically, it tends to be about human behavior, social institutions and rules, anthropic reasoning, personal experimentation and improvement, philanthropy, and the prospect of robots replacing humans. Katja is responsible for omissions as well as actions, and aspires to save the world at some point.