Meteuphoric
https://meteuphoric.wordpress.com
Why do all the considerations point the same way?
by Katja Grace | Thu, 19 Mar 2015
https://meteuphoric.wordpress.com/2015/03/18/why-do-all-the-considerations-point-the-same-way/

Suppose you have a lot of reasons to believe a thing, and no good reasons not to believe it. Should this make you more or less likely to believe it, relative to a case where the considerations are a bit more mixed?

At first glance, it appears you should believe it more, since things with lots of good reasons for them and no reasons against them tend to be true.

However, often when people have many reasons for a thing and no reasons against it, it is because they have been collecting them, probably unintentionally. Humans seem to do this when they have a belief they care about.

For instance, when I was younger I could have told you fifteen reasons that logging old growth forests in Tasmania was harmful. They were more economically valuable as a tourism attraction, and the logging was perpetuating corruption, and the forests harbored endangered species, and so on. Somehow, coincidentally, at least almost all the considerations aligned.

For another instance, vegetarians often think that vegetarianism is very easy, and healthy, and moral, and good for the environment, and more enjoyable. Meat eaters often think that vegetarianism is inconvenient, and unhealthy, and not morally important, and worse for the environment in certain ways, and unpleasant. It is even less likely that the considerations genuinely all align, yet in different directions depending on who carries out the supposedly unbiased analysis. I think here both groups are aware of some people on the other side doing this.

This seems common enough that if you find yourself with a collection of considerations all pointing in one direction, you should be somewhat worried.

On the other hand, often you have lots of aligned reasons because there is some fundamental reason behind them all. For instance, if you can't prove a statement in math, because every way you can think of to try to prove it fails, this may be because the statement is false. Here you expect the evidence to point entirely in one direction.

A less clear case is whether exercise is good. It seems there are lots of different good reasons to exercise. But all of them go through you being healthy, so this is not so surprising — you will look better, you will feel better, you will live for longer, you will be more sane and happy.

Some situations are more conducive to evidence all pointing one way, or coming out in different directions. If the question is whether something is on net good, and it has a variety of effects, probably some should point in different directions. If the ‘considerations’ are a number of somewhat noisy measurements of a hidden quantity, then if the first measurement is high, probably the others will be too.
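The contrast between these two situations can be checked with a quick simulation (illustrative numbers only): model ten 'considerations' either as independent coin flips, or as noisy measurements of a single hidden quantity.

```python
import random

random.seed(0)

def all_agree(signs):
    """True if every consideration points the same way."""
    return all(s > 0 for s in signs) or all(s < 0 for s in signs)

def independent_trial(n=10):
    # Each consideration is an unrelated coin flip.
    return all_agree([random.choice([-1, 1]) for _ in range(n)])

def shared_cause_trial(n=10, noise=0.5):
    # Each consideration is a noisy measurement of one hidden quantity.
    hidden = random.gauss(0, 1)
    return all_agree([hidden + random.gauss(0, noise) for _ in range(n)])

TRIALS = 10_000
p_independent = sum(independent_trial() for _ in range(TRIALS)) / TRIALS
p_shared = sum(shared_cause_trial() for _ in range(TRIALS)) / TRIALS

# Unanimity is rare for independent considerations (about 2 * 0.5**10),
# but common when one fundamental reason lies behind them all.
print(p_independent, p_shared)
```

Under these made-up parameters, unanimity shows up in roughly half of the shared-cause trials but only a fraction of a percent of the independent ones, so unanimous considerations are strong evidence of a common cause behind them, whether that cause is a fundamental reason or a motivated collector.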

Whatever the situation, you should also expect some things to come out with all of the evidence pointing in one direction, by chance. You might also expect all of the considerations to come out one way due to a selection effect in combination with chance. For instance, if you thought of a particular example because it is the case you think is most overwhelmingly lopsided, then it was selected for being lopsided, and so its lopsidedness is less surprising than if some other random belief had this character.

I think if you find yourself with lots of aligned beliefs like this, you should consider asking yourself:

Is there some root consideration pushing all the other considerations one way?

Am I motivated to believe this thing?

Is this a kind of situation where I should expect all of the evidence to point in one direction?

Socially optimal weirdness
by Katja Grace | Tue, 17 Mar 2015
https://meteuphoric.wordpress.com/2015/03/16/socially-optimal-weirdness/

I wrote recently about considerations in choosing how weird to be. Today let us consider the question from an impersonal perspective: what is the socially optimal allocation of weirdness? Society and weirdness are complicated, so again let us just discuss some considerations.

Social costs of people being judged badly

When individuals avoid being weird, it is often because they want to be judged well in some way. From an impersonal perspective, does it matter if you judge me badly? This seems to depend on the extent to which people judge one another absolutely, versus relatively, and whether people care about the judgement absolutely or relatively.

If you judge me as a relatively bad friend, and then you replace me with a different friend, this seems bad for me, but good for your other friend, so socially neutral. If you judge me as an absolutely bad friend, this might hurt me without providing a compensating benefit to someone else. It will hurt me more if I care about my absolute quality as a friend than if I’m mostly worried about being at least as good as most people. It seems to me that a combination of these things happens in practice. So the private costs of being judged for weirdness partly translate to social costs.

It also matters how much you care about making judgments in a particular way (e.g. correctly). If you actually don’t want to interact with people with the wrong political beliefs for instance, then if I hide my political beliefs and we become friends this will be bad for you. If you merely don’t want to have awkward political discussions, then it is fine if I hide my beliefs.

Signaling race

In some cases, ‘not weird’ is continually and narrowly redefined, to make locating it a reasonable sign of social savvy. For instance, if you are a girl in high school, you might learn that it is weird to not own any Barbie dolls. However, once you manage to get a Barbie doll, you may find that it is the wrong Barbie doll, or that Barbie dolls are no longer the normal thing and are now the preserve of weird kids like you. This race presumably takes some amount of effort from the weird and the non-weird people alike, which would be averted if people didn't try to avoid weirdness.

Neutral views

Suppose everyone chooses one topic on which to spend their weirdness budget, and there they think deeply and advocate hard for what they think is right. On all the other topics, they take the most common position. Then virtually every view on every topic will be directed by conformity, and it won’t matter that each person put thought and effort into their own cause. The status quo will reign forever, on almost every issue. If everyone has many more implicit votes than they have weirdness to fund them with, then public opinion is almost completely uninformative. Thus in such a case it seems probably better overall for people to be at least weird enough that public opinion is informed by thought. This can happen for instance if people express a lot more minority views, or if there are multiple non-weird views on every issue.
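A toy simulation of this dynamic, under the extreme (invented) assumption that every person privately dissents from the status quo on every topic, but only voices dissent on the single topic they spend their weirdness budget on:

```python
import random

random.seed(1)

PEOPLE, TOPICS = 1000, 50
STATUS_QUO, DISSENT = 0, 1

# Everyone privately dissents on every topic, but publicly expresses
# dissent only on the one topic they spend their weirdness on.
expressed = [[STATUS_QUO] * TOPICS for _ in range(PEOPLE)]
for person in expressed:
    person[random.randrange(TOPICS)] = DISSENT

for topic in range(TOPICS):
    share = sum(person[topic] for person in expressed) / PEOPLE
    assert share < 0.5  # the status quo wins the public vote on every topic

# Despite unanimous private dissent, every public majority favors the
# status quo: public opinion carries almost no information.
```

With a weirdness budget of one topic in fifty, public dissent on any given topic hovers around 2%, so the status quo reigns everywhere even though literally everyone privately disagrees with it.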

Economies of scale and congestion

It is good for efficient consumption if people aren’t weird with respect to tastes in information goods like music and TV. For instance, people who don’t enjoy Game of Thrones are just going to miss out on what could have been basically free pleasure. For goods where one person using them means another person cannot, there is more of a trade-off. There are still often economies of scale, so others sharing your tastes makes it easier for you to get what you want (e.g. it is very easy to get Coca-Cola relative to rice milk, which is not because rice is hard to grow). However other people can also get in your way and buy up the things you want, so it can be better for people to be more weird for some tastes. For instance, it’s better if people have different favorite mountains to climb, if everyone likes to climb in peace.

Standards

There are often costs from people using different standards. For instance, when I took the GRE I suffered a cost from having learned to type using the Dvorak keyboard layout, because the GRE computers can only use Qwerty. I and a bunch of French people also suffered costs when I went to France and sat in their train seats and we couldn’t talk about it, and when they closed their restaurants for meal times, unexpectedly.

Variety

Weirdnesses offer variety, which has various benefits. Some people like it for its own sake. It also naturally allows experimentation, which enriches the lives of the non-weird later. For instance, it seems good that some people want to entirely live on synthetic nutrient slurries, because eventually they might find some that are delicious and well tried enough that it becomes a common lifestyle choice.

Variety also produces robustness. That some people like to live in the countryside means an epidemic can't kill everyone so easily. That some people keep a thousand cans of beans in their basement makes society even safer.

Information

Honesty about weirdness is useful for letting people who contribute to policy learn about others' values. For instance, if almost everyone who was homosexual had decided that it wasn't an optimal place to seem unusual, and avoided ever mentioning it, then nobody would have known how important improving the treatment of homosexuals was.

***

In sum, from society’s perspective, it seems pretty unclear how weird it is best for people to be. Several considerations point in different directions. Incidentally, it also seems very unlikely to align with how weird individuals want to be.

How to pay for lives to be worth living
by Katja Grace | Sat, 14 Mar 2015
https://meteuphoric.wordpress.com/2015/03/14/how-to-pay-for-lives-to-be-worth-living/

Slate Star Codex writes about a patient (or patient amalgam) who was suicidal, apparently for want of a few thousand dollars:

…So what bothered me is that psychiatric hospitalization costs about $1,000 a day. Average length of stay for a guy like him might be three to five days. So we were spending $5,000 on his psychiatric hospitalization, which was USELESS, so that we could send him out and he could attempt suicide again…

…Problem is, you don’t have to be an economics PhD to realize that “give $5,000 to anyone who attempts suicide and says they need it” might create some bad incentives.

I have no good solution to this…

I’m curious about solutions to this. However I’m going to talk about a slightly different situation, where the person in question is driven by desperation to be in a drug experiment which will make the rest of their life of neutral value. The drug, Neutrazine, has no social value, and is being trialed for entirely morally neutral reasons.

So we want to be able to give people a few thousand dollars at times when their not taking Neutrazine is worth more than a few thousand dollars to us, and when a few thousand dollars would be enough to keep them away from Neutrazine, without causing them to get into such situations more readily, or to lie to us about whether they are really so badly off that they would take Neutrazine.

This sounds kind of hopeless: if you are willing to rescue people in bad situations, and they know this ahead of time, surely some people will get into bad situations more and/or lie about it.

This actually seems like a case where moral hazard should be avoidable. The person in question has the option to make their life worth nothing using Neutrazine, from any initial level of value. This is worth a positive amount to them, in the cases where you are hoping to help them. But if you give them just the same positive amount in money, this also makes their life neutral and takes the Neutrazine option off the table (because it would do nothing). So it doesn’t change the expected value from their perspective at all, and thus doesn’t influence their decisions ahead of time. Yet it is much better from your perspective, because you valued their life a lot more than a few thousand dollars.
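A minimal numerical sketch of that argument, with made-up life values: since Neutrazine already lets a person lift any negative outcome to zero, a policy of instead topping negative outcomes up to zero with cash leaves their ex-ante expected value exactly unchanged.

```python
# Possible values of a person's remaining life, equally likely ex ante.
outcomes = [-5000, -2000, 0, 3000, 10000]

def with_neutrazine(v):
    # They take the drug exactly when their life value is negative.
    return max(v, 0)

def with_cash_rescue(v):
    # We pay -v whenever v < 0, which also brings them to zero --
    # and makes the Neutrazine option worthless to them.
    return v + max(-v, 0)

ev_drug = sum(with_neutrazine(v) for v in outcomes) / len(outcomes)
ev_rescue = sum(with_cash_rescue(v) for v in outcomes) / len(outcomes)

# Equal expected values: the rescue policy doesn't change the person's
# ex-ante incentives, but it keeps them off the drug.
print(ev_drug, ev_rescue)  # both 2600.0
```

Because the transfer exactly matches what the Neutrazine option was worth to them, no one gains by steering into bad situations; the whole surplus from their life being worth more than a few thousand dollars to us accrues to us.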

This might seem unsatisfactory, in that you got all the gains. However you could give them some gains, without influencing their behavior much. Also, there may be gains to their future self that were discounted more than you would like. And it might be that a person joining a Neutrazine trial will tend to be underestimating their future opportunities (due to the selection effect), so it is better for their life to be neutral according to their expectations than guaranteed to be neutral, on average.

This isn’t a solution, because it requires you to know how valuable things are to the other person. As mentioned earlier, they can just tell you their life is worse than it is. People whose lives are not bad at all can claim they are going on Neutrazine. Partial solutions to this could come from mechanism design, neuroimaging or lie detection. I’ll talk about the mechanism design option.

We have a collection of people whose lives have varying degrees of value to them. We would like to distinguish them, but they all look the same. One obvious difference is their willingness to join a Neutrazine trial. Once we have an action like this, that people with worse lives are more willing to take, we can use it to construct a choice that people will make differently, and which will also differentially help those who need it.

Here is an imperfect one: offer a bundle of $1,000 and a 10% chance of joining a Neutrazine trial. This is of negative value for people whose lives have more than $9,000 of value to them, and positive for those whose lives are worth less than that. This isn't great, in that you help some people who are less desperate, and you can only help people a small amount, but it seems better than the apparent status quo.
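The arithmetic behind this bundle can be sketched as follows (assuming that joining the trial makes the rest of life, cash included, worth nothing): the gain from accepting is 0.9 × (V + $1,000) − V = $900 − 0.1 V, which crosses zero at a life value of $9,000.

```python
CASH = 1_000     # paid on accepting the bundle
P_TRIAL = 0.10   # chance of having to join the Neutrazine trial

def gain_from_bundle(v):
    # With probability 0.9 you keep your life plus the cash; with
    # probability 0.1 the trial zeroes everything that remains.
    accept = (1 - P_TRIAL) * (v + CASH)
    return accept - v  # = 900 - 0.1 * v

break_even = (1 - P_TRIAL) * CASH / P_TRIAL  # life value of about $9,000

print(gain_from_bundle(5_000))   # positive: accepted by the desperate
print(gain_from_bundle(20_000))  # negative: declined by the well-off
```

The break-even point is what makes the offer self-selecting: only people whose lives are worth less than $9,000 to them find the bundle attractive, so the screening comes from the Neutrazine risk itself rather than from anyone's unverifiable claims.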

Can you design something better?

For whom should recommendations be effective?
by Katja Grace | Fri, 13 Mar 2015
https://meteuphoric.wordpress.com/2015/03/12/for-whom-should-recommendations-be-effective/

Suppose you are in the business of making charity recommendations to others. You have found two good charities which you might recommend: 1) Help Ugly Children, and 2) Help Cute Children. It turns out ugly children are twice as easy to help, so 1) is the more effective place to send your money.

You are about to recommend HUC when it occurs to you that if you ask other people to help ugly children, some large fraction will probably ignore your advice, conclude that this effectiveness road leads to madness, and continue to support 3) Entertain Affluent Adults, which you believe is much less effective than HUC or HCC. On the other hand, if you recommend Help Cute Children, you think everyone will take it up with passion, and much more good will be done directly as a result.

What do you recommend?

The economy of weirdness
by Katja Grace | Sun, 08 Mar 2015
https://meteuphoric.wordpress.com/2015/03/08/the-economy-of-weirdness/

It is often said that you should spend your weirdness budget wisely. You should wear a gender-appropriate suit, and follow culture-appropriate sports, and use good grammar, and be non-specifically spiritual, and support moderate policies, and not have any tattoos around either of your eyes. And then on the odd occasion, when it happens to come up, you should gather up your entire weirdness budget and make a short, impassioned speech in favor of invertebrate equality. Or whatever you think is the very most effective use of weirdness. In short: you only get so much weirdness, so don't use it up dressing like a clown or popularizing alternative sleep schedules.

While I agree the oddball activist will often get less airtime than her unassuming analog, and that weirdness is often a cost, the issue seems more complex. Let us better explore weirdness budgeting.

Model #1: Weirdness is badness

A first simple model is that people don’t like weird things, so if you have any, they will like you less in expectation. Weirdness is a kind of badness. On this model, I suppose the reason you would want to be weird at all is that you just are weird, and it is hard or unpleasant to keep it under control.

Some characteristics are certainly like this. For instance, being shockingly unable to open corkscrews, or tending to fart really loudly. These are just bad characteristics though, and don’t seem like they need to be budgeted differently from other bad but not weird characteristics, like being lazy and stupid. I don’t think this is what people have in mind when they say to spend your weirdness budget wisely.

Model #2: Weirdness is rarity is bad

Here is a closely related model. Weird traits are not inherently bad, but they are inherently unusual, and being unusual is inherently bad. On this model, the reason you want to have a weird trait could be that you like the trait, and so you want to make it less unusual.

If many people feel that way, then on this model, weird traits are tragedies of the commons. For example, if everyone could be naked in the street, the world would be a better place for everyone. But sadly, because nobody does it, anyone who starts is socially punished. So it is only the very altruistic person who will pull off their pants and be ostracized for the common good.

Model #3: Weirdness among the cool kids is bad

This is like the last model, but explains why you would want to budget your weirdness. In it, what matters is not how common a trait is, but how common it is among cool people (or perhaps how differentially common it is among cool people). So then you don't want to help popularize too many weird traits, because the more weird traits you have, the less cool you seem, and thus the less your vote in favor of those traits counts.

I think there is a hint of truth to these models so far. Some kinds of unusualness are inherently bad, unusualness itself is often costly, and having traits makes those traits less unusual among people like you. However, I highly doubt that people are mostly weird out of altruism, or even altruism combined with an inability to control their weirdness. People love being weird. (Often.)

Model #4: Weirdness is divisive

Some weird traits are unambiguously bad. Some are unambiguously good, and empirically, these don’t appear to use up weirdness budget. If you are weirdly hilarious this probably means you can get away with more other weirdness, not less.

Many traits are a bit good and a bit bad: they please some people while scaring off others. If a trait is ‘weird’, it probably displeases most people, and appeals to few. But this isn't necessarily a bad deal, even from a selfish perspective.

For one thing, it might please the few a lot. Being into 15th Century East Asian architecture will seem merely not that interesting to the vast majority of people, while exceptionally exciting to the few who share your interest.

For another thing, it matters how much you care about different levels of liking. For many circumstances, the big value is in having everyone think you are basically ok. If you are widely considered basically ok, you can be trusted on routine issues, you can have a job, you can have friends, you can be taken seriously. If you are basically ok and have one weird opinion, you can be a datapoint suggesting that weird opinion is ok for basically ok people to have.

However if you want people to buy your book, or change continents to live with you, or fund your experimental research organization, then you need some people to really like you. But luckily, you don’t need that many. And when the bar is high, and you only need to meet it a few times, you want high variance. If you can pick up a trait that 90% of the population dislikes, but the remainder likes, you might take it. Because ten percent of people liking you can be way better than everyone being indifferent. And then you might do it again, and again. Until eventually, you marry the last person and ignore the rest.
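A toy illustration with invented ratings of how a divisive trait can beat a bland one: it loses on average reception, but wins on the number of people who clear a high bar of liking you, which is what matters when you only need a few.

```python
# 100 raters. The bland trait leaves everyone indifferent (score 0).
# The divisive trait: 90 people dislike it (-1), 10 people love it (+5).
bland = [0] * 100
divisive = [-1] * 90 + [5] * 10

def mean(scores):
    return sum(scores) / len(scores)

def devoted_fans(scores, bar=3):
    # How many people like you enough to buy the book or fund the lab.
    return sum(1 for s in scores if s >= bar)

print(mean(bland), mean(divisive))                  # bland wins on average
print(devoted_fans(bland), devoted_fans(divisive))  # divisive wins on fans
```

When the payoff comes from the maximum or from clearing a threshold rather than from the average, variance is valuable, which is why divisive traits can pay off for authors, founders, and suitors.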

Of course, there are also traits that 60% of people are indifferent to, and 40% of people love, and these are a better deal, and you should start there, all things equal. But there are many other reasons to have particular traits, e.g. you already have them, and it would be effort to hide or destroy them. Generally, it is easy for a trait you want to have for other reasons to be positive value on social grounds in spite of being weird and seeming bad to many people.

Causes and policy views tend to fit in this ‘divisive’ category. If you advocate for abolishing the minimum wage, some people will love you more, and some people will hate you more. Causes are often political, which means that which people like you more and which people hate you more is correlated between them. This would make spending a bunch of weirdness an even better deal. Once you have advocated for abolishing the minimum wage, if you mostly care about some people liking you a lot, you may as well go on to support a slew of other free market policies, because the same people as liked you the first time will like you more, instead of you losing half of them at every step.

Model #4.1: Weirdness is divisive, the goal is spreading weird traits

So far we assumed you wanted to be liked or taken seriously a certain amount by other people. What if we suppose you have a set of weird traits you are in favor of, which you may choose to express or not, and your primary goal is to spread them? (As described in #2). For instance, suppose you care a lot about animal suffering, and also the far future, and think cryonics should be much more common, and think public displays of affection should be normal, and that polyphasic sleep is a thing everyone should try.

As described in #4, variance gets you smaller numbers of people who feel more positively toward you, and sometimes this is worth it. For instance, if nobody will take any of your ideas seriously unless they think you are incredibly impressive. There are a couple of important features specific to the ambition of spreading weird traits however.

One is that to spread a weird trait, you generally have to have it, or associate yourself with it somehow. That potentially makes expressing more of your traits better, aside from its effect on how well respected or liked you are. Suppose you want people to agree with you on cryonics and the far future. Then even if talking about both topics reduces how much people are willing to listen to you, it might be worth it, because now your small remaining group of admirers think about twice as many topics you want them to think about. This assumes they don't just reduce their attention to your first topic proportionally.

Note that the incentives here are different for narrowly directed advocacy organizations and their members. You might do best advocating for whales and bad haircuts, but your whale organization would strongly prefer you just stick to the whales.

Another feature of the divisiveness model is that people disliking you has particularly negative effects when you are trying to spread traits. Often, causing half of humanity to mildly dislike you is not so bad, because it will just mean you don't interact with them much on a personal basis, and you weren't that socially ambitious anyway. However, when people dislike you they will often come to associate your particular traits with dislike. It might still be worth trading some people disliking you for others liking you extra, but this consideration makes such trades worse than they would otherwise have been.

Model #5: Weirdness is local

It could be that most of what matters is weirdness relative to those around you, and that different groups find different things weird, and that you can change who is around you. This picture seems true for some kinds of traits, such as a weird sense of humor. In this case, you can either explicitly search for your people, or just act as you want to in the long run, scare away those who find it weird, and be left with a suitable group. In this model, being weird in a specific way has a one-time (though perhaps large and drawn out) cost, and then you can do it for free, forever. So in this model the wisest way to spend your so-called weirdness budget might be fast and completely.

Model #6: Weirdness as a signal

If weirdness is just a generic bad sign, or is a sign that you match with some groups of people or others, earlier models will perhaps suffice. But being weird often suggests other specific things about a person.

As soon as being weird is generally a bad option, choosing it also becomes a sign of a lack of awareness or self-control. For instance, if someone wears a ripped shirt to a job interview, one probably infers that they are clueless about customs, don't own a nice shirt, or have some other mysterious agenda that one probably doesn't want to be involved with. These kinds of signals lead to the basic situation described in model 2, where things that are not intrinsically bad become so by virtue of being weird. However, this means that you can be more weird in certain ways without using up weirdness budget, if you counteract the signaling on its own. For instance, if you enter a job interview and say ‘I'm sorry that my shirt is torn—I actually got it caught on a shrubbery on my way in here’, then the interviewer will no longer infer that you don't know about social customs, though they may infer that you were interacting unusually with a shrubbery.

Model #7: Weirdness is honest

The usual consequence of advice to be thrifty with weirdness is that people end up with a collection of views and interests that they keep hidden from the world. Sometimes this might be actively deceptive, for instance when people with unspeakable views claim to have no views. But mostly avoiding being weird is just implicit misrepresentation. This suggests a range of considerations associated with honesty in general. Honesty has virtues and costs.

The costs of honesty as they apply here are I think mostly covered above—if you have traits that are widely acknowledged as bad, or make you seem like someone you don't want to be seen as, or whatever, it is costly to let them be seen. However, I think there are some benefits of honesty that don't fit under the models above.

It’s more interesting to know about a relatively complete, ‘authentic’ person than a flat, disconnected one-issue front that an unknown person has chosen to erect. People are usually interested in hearing about people more than ideas, so if you present yourself as a person this will probably interest them more. And a person generally has an array of idiosyncrasies and unusual concerns, including some that are not the most effective thing to be concerned about, and some characteristics that everyone agrees are actively bad.

Relatedly, revealing a relatively full array of your views and interests means people know you better, which tends to improve your relationship with them. I’d guess this is true even for people who observe you from far away on the internet. I think I feel more sympathetic to an author who admits they have characteristics beyond an interest in the subject matter.

Another virtue of honesty is that if people see the larger picture behind the particular view you are espousing, your behavior will make more sense, so you will seem more reasonable and interesting. For instance, if you advocate for developing world aid for a while, and then suddenly change to advocating for space travel, you might seem flaky. Whereas if you say all along that you care about doing the most cost-effective thing, and are open minded about causes, and are considering a bunch of them on an ongoing basis, and explain why you think these different causes are cost-effective, then this might seem consistent instead of actively inconsistent. Relatedly, as your views evolve it seems more natural for those who were interested before to remain interested if they understand the bigger picture of your motives.

Relatedly, particular weird views will often make more sense in the context of your larger set of weird views. If you espouse cryonics on its own, and don’t mention that you also think it will be possible to upload human minds onto computers, the cryonics will seem much more ambitious than it otherwise would.

Then there is just the usual problem that dishonesty is confusing and tangly. Views on some topics strongly suggest views on other topics, so if topics are out of bounds, you have to make sure you don’t imply anything about them. This is probably much easier in practice than it first seems, because people are not great at drawing inferences. I wouldn’t be surprised if using abstract language was enough to successfully hide most controversial statements most of the time. However there are probably other things like this.

If you tell people what you really care about, you can have more useful conversations with them, because they can give feedback and suggestions that actually matter to you. For instance, if I spend most of my time thinking about how to improve my life, but I write as if all I care about is resolving puzzles in social science, then your comments can only help me with puzzles in social science.

It can feel better to be honest. However this might just be down to better relationships and avoiding the mental taxation associated with maintaining an inoffensive front.

This is not an exhaustive account of the virtues of weirdness as honesty. Also note that none of the benefits I mentioned apply strongly all of the time. They are just considerations that sometimes matter, and sometimes make it better to be pretty weird.

***

Ok, those are all of my models of weirdness for now, and of how it is appropriate to splurge/invest in it. I suspect at least many of them have some truth, and apply to varying degrees to various weirdnesses in varying parts of the real world. There are probably other important dynamics I have missed. Overall, I’m still not sure how weird it is good to be in general. It seems plausible that many people should be relatively weird across the board, rather than saving it all up for one issue. I suspect some people are best off being weird while others should be more normal overall, and it is harder to tell what is best on the current margin, where some people are weird and some are normal. My guess is that you should often treat weirdness differently depending on what you want to achieve (basic respectability? Fame? A boyfriend? A good relationship with your audience? A good relationship with your organization?), and the nature of the weirdness in question (How much do some people like it? How much do others not? Does it send specific signals? Is it just bad?).

AI Impacts
by Katja Grace | Mon, 02 Feb 2015
https://meteuphoric.wordpress.com/2015/02/02/ai-impacts/

I've been working on a thing with Paul Christiano that might interest some of you: the AI Impacts project. The basic idea is to apply the evidence and arguments that are kicking around in the world and in various disconnected discussions, respectively, to the big questions regarding a future with AI. For instance, these questions:

What should we believe about timelines for AI development?

How rapid is AI development likely to be when it is near human-level?

How much advance notice should we expect to have of disruptive change?

What are the likely economic impacts of human-level AI?

Which paths to AI should be considered plausible or likely?

Will human-level AI tend to pursue particular goals, and if so what kinds of goals?

Can we say anything meaningful about the impact of contemporary choices on long-term outcomes?

Today, public discussion on these issues appears to be highly fragmented and of limited credibility. More credible and clearly communicated views on these issues might help improve estimates of the social returns to AI investment, identify neglected research areas, improve policy, or productively channel public interest in AI. The goal of the project is to clearly present and organize the considerations which inform contemporary views on these and related issues, to identify and explore disagreements, and to assemble whatever empirical evidence is relevant. The project is provisionally organized as a collection of posts concerning particular issues or bodies of evidence, describing what is known and attempting to synthesize a reasonable view in light of available evidence. These posts are intended to be continuously revised in light of outstanding disagreements and to make explicit reference to those disagreements.

In the medium run we’d like to provide a good reference on issues relating to the consequences of AI, as well as to improve the state of understanding of these topics. At present, the site addresses only a small fraction of the questions one might be interested in, so it is only suitable for particularly risk-tolerant or topic-neutral reference consumers. However, if you are interested in hearing about (and discussing) such research as it unfolds, you may enjoy our blog. If you take a look and have thoughts, we would love to hear them, either in the comments here or in our feedback form. Cross-posted from Less Wrong.

***

When should an Effective Altruist be vegetarian?
by Katja Grace, Sat, 22 Nov 2014 05:04:07 +0000
https://meteuphoric.wordpress.com/2014/11/21/when-should-an-effective-altruist-be-vegetarian/

I have lately noticed several people wondering why more Effective Altruists are not vegetarians. I am personally not a vegetarian because I don’t think it is an effective way to be altruistic.

As far as I can tell the fact that many EAs are not vegetarians is surprising to some because they think ‘animals are probably morally relevant’ basically implies ‘we shouldn’t eat animals’. To my ear, this sounds about as absurd as if GiveWell’s explanation of their recommendation of SCI stopped after ‘the developing world exists, or at least has a high probability of doing so’.

(By the way, I do get to a calculation at the bottom, after some speculation about why the calculation I think is appropriate is unlike what I take others’ implicit calculations to be. Feel free to just scroll down and look at it).

I think this fairly large difference between my and many vegetarians’ guesses at the value of vegetarianism arises because they think the relevant question is whether the suffering to the animal is worse than the pleasure to themselves at eating the animal. This question sounds superficially plausibly relevant, but I think on closer consideration you will agree that it is the wrong question.

The real question is not whether the cost to you is small, but whether you could do more good for the same small cost.

Similarly, when deciding whether to donate $5 to a random charity, the question is whether you could do more good by donating the money to the most effective charity you know of. Going vegetarian because it relieves the animals more than it hurts you is the equivalent of donating to a random developing world charity because it relieves the suffering of an impoverished child more than foregoing $5 increases your suffering.

Trading with inconvenience and displeasure

My imaginary vegetarian debate partner objects to this on grounds that vegetarianism is different from donating to ineffective charities, because to be a vegetarian you are spending effort and enjoying your life less rather than spending money, and you can’t really reallocate that inconvenience and displeasure to, say, preventing artificial intelligence disaster or feeding the hungry, if you don’t use it on reading food labels and eating tofu. If I were to go ahead and eat the sausage instead – the concern goes – probably I would just go on with the rest of my life exactly the same, and a bunch of farm animals somewhere would be the worse for it, and I scarcely better.

I agree that if the meat eating decision were separated from everything else in this way, then the decision really would be about your welfare vs. the animal’s welfare, and you should probably eat the tofu.

However whether you can trade being vegetarian for more effective sacrifices is largely a question of whether you choose to do so. And if vegetarianism is not the most effective way to inconvenience yourself, then it is clear that you should make that trade. If you eat meat now in exchange for suffering some more effective annoyance at another time, you and the world can both be better off.

Imagine an EA friend says to you that she gives substantial money to whatever random charity has put a tin in whatever shop she is in, because it’s better than the donuts and new dresses she would buy otherwise. She doesn’t see how not giving the money to the random charity would really cause her to give it to a better charity – empirically she would spend it on luxuries. What do you say to this?

If she were my friend, I might point out that the money isn’t meant to magically move somewhere better – she may have to consciously direct it there. She might need to write down how much she was going to give to the random charity, then look at the note later for instance. Or she might do well to decide once and for all how much to give to charity and how much to spend on herself, and then stick to that. As an aside, I might also feel that she was using the term ‘Effective Altruist’ kind of broadly.

I see vegetarianism for the sake of not managing to trade inconveniences as quite similar. And in both cases you risk spending your life doing suboptimal things every time a suboptimal altruistic opportunity has a chance to steal resources from what would be your personal purse. This seems like something that your personal and altruistic values should cooperate in avoiding.

It is likely too expensive to keep track of an elaborate trading system, but you should at least be able to make reasonable long-term arrangements. For instance, if instead of eating vegetarian you ate a bit frugally and saved and donated a few dollars per meal, you would probably do more good (see calculations lower in this post). So if frugal eating were similarly annoying, it would be better. Eating frugally is inconvenient in very similar ways to vegetarianism, so it is a particularly plausible trade if you are skeptical that such trades can be made. I claim you could make very different trades though, for instance sometimes foregoing the pleasure of an extra five minutes’ break and working instead. Or you could decide once and for all how much annoyance to have, and then choose the most worthwhile bits of annoyance, or put a dollar value on your own time and suffering and try to be consistent.

Nebulous life-worsening costs of vegetarianism

There is a separate psychological question which is often mixed up with the above issue. That is, whether making your life marginally less gratifying and more annoying in small ways will make you sufficiently less productive to undermine the good done by your sacrifice. This is not about whether you will do something a bit costly another time for the sake of altruism, but whether just spending your attention and happiness on vegetarianism will harm your other efforts to do good, and cause more harm than good.

I find this plausible in many cases, but I expect it to vary a lot by person. My mother seems to think it’s basically free to eat supplements, whereas to me every additional daily routine seems to encumber my life and require me to spend disproportionately more time thinking about unimportant things. Some people find it hard to concentrate when unhappy, others don’t. Some people struggle to feed themselves adequately at all, while others actively enjoy preparing food.

There are offsetting positives from vegetarianism which also vary across people. For instance there is the pleasure of self-sacrifice, the joy of being part of a proud and moralizing minority, and the absence of the horror of eating other beings. There are also perhaps health benefits, which probably don’t vary that much by people, but people do vary in how big they think the health benefits are.

Another way you might accidentally lose more value than you save is in spending little bits of time which are hard to measure or notice. For instance, vegetarianism means spending a bit more time searching for vegetarian alternatives, researching nutrition, buying supplements, writing emails back to people who invite you to dinner explaining your dietary restrictions, etc. The value of different people’s time varies a lot, as does the extent to which an additional vegetarianism routine would tend to eat their time.

On a less psychological note, the potential drop in IQ (~5 points?!) from missing out on creatine is a particularly terrible example of vegetarianism making people less productive. Now that we know about creatine and can supplement it, creatine itself is not such an issue. An issue does remain though: is this an unlikely one-off failure, or should we worry about more such deficiencies? (This goes for any kind of unusual diet, not just meat-free ones.)

How much is avoiding meat worth?

Here is my own calculation of how much it costs to do the same amount of good as replacing one meat meal with one vegetarian meal. If you would be willing to pay this much extra to eat meat for one meal, then you should eat meat. If not, then you should abstain. For instance, if eating meat does $10 worth of harm, you should eat meat whenever you would hypothetically pay an extra $10 for the privilege.

This is a tentative calculation. I will probably update it if people offer substantially better numbers.

All quantities are in terms of social harm.

Eating 1 non-vegetarian meal

< eating 1 chickeny meal (I am told chickens are particularly bad animals to eat, due to their poor living conditions and large animal:meal ratio. The relatively small size of their brains might offset this, but I will conservatively give all animals the moral weight of humans in this calculation.)

< -$0.08 given to the Humane League (ACE estimates the Humane League spares 3.4 animal lives per dollar). However since the Humane League basically convinces other people to be vegetarians, this may be hypocritical or otherwise dubious.

< causing 12.5 days of chicken life (broiler chickens are slaughtered at between 35-49 days of age)

= causing 12.5 days of chicken suffering (I’m being generous)

< -$0.50 subsidizing free range eggs, (This is a somewhat random example of the cost of more systematic efforts to improve animal welfare, rather than necessarily the best. The cost here is the cost of buying free range eggs and selling them as non-free range eggs. It costs about 2.6 Euro cents in 2004 terms [≈ US 4c in 2014] to pay for an egg to be free range instead of produced in a battery. This corresponds to a bit over one day of chicken life. I’m assuming here that the life of a battery egg-laying chicken is not substantially better than that of a meat chicken, and that free range chickens have lives that are at least neutral. If they are positive, the figure becomes even more favorable to the free range eggs).

< losing 12.5 days of high quality human life (assuming saving one year of human life is at least as good as stopping one year of an animal suffering, which you may disagree with.)

= -$1.94 to -$5.49 spent on GiveWell’s top charities (This was GiveWell’s estimate for AMF if we assume saving a life corresponds to saving 52 years – roughly the life expectancy of children in Malawi. GiveWell doesn’t recommend AMF at the moment, but they recommend charities they considered comparable to AMF when AMF had this value.

GiveWell employees’ median estimate for the cost of ‘saving a life’ through donating to SCI is $5936 [see spreadsheet here]. If we suppose a life is 37 DALYs, as they assume in the spreadsheet, then 12.5 days is worth 5936*12.5/(37*365.25) = $5.49. Elie produced two estimates that were generous to cash and to deworming separately; these gave the highest and lowest estimates of the group for the cost-effectiveness of deworming. They imply a range of $1.40-$45.98 to do as much good via SCI as eating vegetarian for a meal).
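The two dollar figures in the chain above can be checked with a couple of lines of arithmetic. All inputs are the post’s own numbers; this is just a sanity check, not an independent estimate:

```python
# Two quick arithmetic checks, using only numbers from the post.

# (1) Free-range egg subsidy: ~4 US cents (2014) per egg, with one egg
#     buying roughly one day of better hen life, applied to the 12.5 days
#     of chicken life per chickeny meal estimated above.
egg_offset_cost = 12.5 * 0.04
print(round(egg_offset_cost, 2))  # 0.5

# (2) SCI: $5,936 per 'life saved', where a life is counted as 37 DALYs
#     (i.e. 37 * 365.25 healthy days).
cost_per_healthy_day = 5936 / (37 * 365.25)
print(round(12.5 * cost_per_healthy_day, 2))  # 5.49
```

Both results match the figures quoted in the chain (about $0.50 for the eggs and $5.49 for SCI).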

Given this calculation, we get a few cents to a couple of dollars as the cost of doing similar amounts of good to averting a meat meal via other means. We are not finished yet though – there were many factors I didn’t take into account in the calculation, because I wanted to separate relatively straightforward facts for which I have good evidence from guesses. Here are other considerations I can think of, which reduce the relative value of averting meat eating:

Chicken brains are fairly small, suggesting their internal experience is less than that of humans. More generally, in the spectrum of entities between humans and microbes, chickens are at least some of the way to microbes. And you wouldn’t pay much to save a microbe.

Eating a chicken only reduces the number of chickens produced by some fraction. According to Peter Hurford, an extra 0.3 chickens are produced if you demand 1 chicken. I didn’t include this in the above calculation because I am not sure of the time scale of the relevant elasticities (if they are short-run elasticities, they might underestimate the effect of vegetarianism).

Vegetable production may also have negative effects on animals.

GiveWell’s estimates have been checked much more rigorously than most others, and evaluations tend to get worse as you check them. For instance, you might forget to include any of the things in this list in your evaluation of vegetarianism. Probably there are more things I forgot. That is, if you looked into vegetarianism in the same detail as SCI, its evaluation would become more pessimistic, and so it would be cheaper to do as much good with SCI.

It is not at all obvious that meat animal lives are not worth living on average. Relatedly, animals generally want to be alive, which we might want to give some weight to.

Animal welfare in general appears to have negligible predictable effect on the future (very debatably), and there are probably things which can have huge impact on the future. This would make animal altruism worse compared to present-day human interventions, and much worse compared to interventions directed at affecting the far future, such as averting existential risk.

My own quick guesses at factors by which the relative value of avoiding meat should be multiplied, to account for these considerations:

Thus given my estimates, we scale down the above figures by 0.05*0.5*0.9*0.9*0.2*0.1 ≈ 0.0004. This gives us $0.0008-$0.002 to do as much good as averting a meat meal by spending on GiveWell’s top charities. Without the factor for the future (which doesn’t apply to these other animal charities), we only multiply the cost of eating a meat meal by 0.004. This gives us a price of $0.0003 with the Humane League, or $0.002 on improving chicken welfare in other ways. These are not price differences that will change my meal choices very often! I think I would often be willing to pay at least a couple of extra dollars to eat meat, setting aside animal suffering. So if I were to avoid eating meat, then assuming I keep fixed how much of my budget I spend on myself and how much I spend on altruism, I would be trading a couple of dollars of value for less than one thousandth of that.
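The scaling arithmetic can be reproduced in a few lines. The six factor values are the post’s own guesses; their assignment to the individual considerations is not spelled out in the text, so the list below is just the bare numbers:

```python
from functools import reduce
from operator import mul

# The post's six guessed multipliers.
factors = [0.05, 0.5, 0.9, 0.9, 0.2, 0.1]
overall = reduce(mul, factors)               # 0.000405, i.e. ~0.0004

# Scaling the $1.94-$5.49 GiveWell range per meat meal:
low, high = 1.94 * overall, 5.49 * overall   # ~$0.0008 to ~$0.0022

# Without the far-future factor (0.1), for the animal charities:
no_future = overall / 0.1                    # 0.00405, i.e. ~0.004
humane_league = 0.08 * no_future             # ~$0.0003
free_range_eggs = 0.50 * no_future           # ~$0.002

print(overall, low, high, humane_league, free_range_eggs)
```

Rounded, these reproduce the prices quoted in the paragraph above.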

I encourage you to estimate your own numbers for the above factors, and to recalculate the overall price according to your beliefs. If you would happily pay this much (in my case, less than $0.002) to eat meat on many occasions, you probably shouldn’t be a vegetarian. You are better off paying that cost elsewhere. If you would rarely be willing to pay the calculated price, you should perhaps consider being a vegetarian, though note that the calculation was conservative in favor of vegetarianism, so you might want to run it again more carefully. Note that in judging what you would be willing to pay to eat meat, you should take into account everything except the direct cost to animals.

There are many common reasons you might not be willing to eat meat, given these calculations, e.g.:

You don’t enjoy eating meat

You think meat is pretty unhealthy

You belong to a social cluster of vegetarians, and don’t like conflict

You think convincing enough others to be vegetarians is the most cost-effective way to make the world better, and being a vegetarian is a great way to have heaps of conversations about vegetarianism, which you believe makes people feel better about vegetarians overall, to the extent that they are frequently compelled to become vegetarians.

‘For signaling’ is another common explanation I have heard, which I think is meant to be similar to the above, though I’m not actually sure of the details.

You aren’t able to treat costs like these as fungible (as discussed above)

You are completely indifferent to what you eat (in that case, you would probably do better eating as cheaply as possible, but maybe everything is the same price)

You consider the act-omission distinction morally relevant

You are very skeptical of your ability to affect anything, and in particular have substantially greater confidence in the market – to farm some fraction of a pig fewer in expectation if you abstain from pork for long enough – than in nonprofits and complicated schemes. (Though in that case, consider buying free-range eggs and selling them as cage eggs.)

You think the suffering of animals is of extreme importance compared to the suffering of humans or loss of human lives, and don’t trust the figures I have given for improving the lives of egg-laying chickens, and don’t want to be a hypocrite. Actually, even then you probably shouldn’t be vegetarian – the egg-laying chicken number is just an example of a plausible alternative way to help animals. You should really check quite a few such alternatives before settling.

However I think for wannabe effective altruists with the usual array of characteristics, vegetarianism is likely to be quite ineffective.

***

Seán Ó hÉigeartaigh on FHI and CSER
by Katja Grace, Sun, 26 Oct 2014 02:01:00 +0000
https://meteuphoric.wordpress.com/2014/10/25/sean-o-heigeartaigh-on-fhi-and-cser/

This is the last part of the Cause Prioritization Shallow, all parts of which are available here. Previously in this series: conversations with Owen Cotton-Barratt, Paul Christiano, Paul Penley, Gordon Irlam, Alexander Berger, and Robert Wiblin.

Nick Beckstead interviewed Seán Ó hÉigeartaigh on the Future of Humanity Institute (FHI) and the Center for the Study of Existential Risk (CSER). The notes are here.

***

Robert Wiblin on the Copenhagen Consensus Center
by Katja Grace, Thu, 23 Oct 2014 01:50:00 +0000
https://meteuphoric.wordpress.com/2014/10/22/robert-wiblin-on-the-copenhagen-consensus-center/

This post summarizes a conversation which was part of the Cause Prioritization Shallow, all parts of which are available here. Previously in this series, conversations with Owen Cotton-Barratt, Paul Christiano, Paul Penley, Gordon Irlam, and Alexander Berger.

Participants

Summary

Robert talked extensively with the Copenhagen Consensus Center (CCC) while investigating them as a potential Giving What We Can recommendation for funding[1]. This is Katja’s summary of relevant things Robert learned, from a conversation on the 16th of January 2014, supplemented with facts from the CCC website.

Activities and impact

At a high level, CCC’s main activity is prioritizing a broad variety of altruistic interventions based on cost-effectiveness. They do this by commissioning top economists to research and write about good spending opportunities, using a cost-benefit analysis framework. They do secondary research, assembling existing academic evidence into actionable priorities for governments and philanthropists. An important feature of this work is that they squeeze the analysis of a wide range of topics into the same framework, so one can make reasonable comparisons, given a lot of assumptions.

CCC also devotes substantial effort to encouraging people to use this decision support, and in general to prioritize based on good data analysis. In the Millennium Development Goals project, for instance, roughly one third of the money goes to research and two thirds to dissemination of that research.

CCC usually has around one main project at a time, and as one finishes it dovetails into the next. They have around 4-5 core staff, and bring in extra contractors for much of the work. The annual budget is $1-2M, and the cost of core staff is probably only around a couple of hundred thousand dollars annually.

The value from CCC’s work does not usually come from finding unsuspected good interventions. Rather, it comes from linking together evidence to make a strong case for activities that are already believed to be good among experts, but which aren’t widely supported. CCC has for instance highlighted the high value of health interventions relating to nutrition and contagious disease. The notion that these are very good interventions is not unusual among development people, but most of the money is spent elsewhere, so there is a lot of value in making such cases. That CCC usually reaches such plausible conclusions suggests their research method is sensible. Their view on climate change is an exception to this trend; it is quite unusual.

CCC have provided a number of documents on the impact of their work[2]. They have numerous examples of people listening to them and doing things. They can also point to media coverage, and a modest number of cases where they recommended something, talked about it, and soon afterwards someone did something like it. It is hard to establish causation, but this is suggestive evidence.

Very few people do anything similar to CCC. Cause prioritization is rare. Doing comparison work at all is a distinctive selling point, as is asking for quantitative estimates of things that are not often quantified. Talking about why climate change is not the best cause is also a niche activity.

Contributing to CCC

When Robert spoke to them, CCC was looking for funding for their post-2015 (Millennium Development Goals) project[3]. Their website suggests they still are, along with an American Prosperity Consensus 2014 project and a Global Consensus 2016 project. If the post-2015 project is not completely funded there will be less outreach than hoped. They will engage less with the media, and won’t be able to afford some events with officials, where they intend to describe the research and try to persuade them.

Their other recent work includes a book on how much problems have cost the world[4]. Much of the data in it had never been published before, since economic models are seldom run “backwards” – into history.

CCC is currently looking for two summer interns for their back-office in Budapest, Hungary. The desired profile for these positions includes graduate education in an area relevant to CCC’s work (in particular, relating to research project management and outreach) and an interest in and aptitude for digital media (including social media, search, video, web sites). Interns will be assigned tasks and mini-projects within the post-2015 project and the general outreach program, and report to the post-2015 engagement manager and/or the post-2015 project manager. Good mutual match could lead to a permanent position.

***

Alexander Berger on GiveWell Labs
by Katja Grace, Mon, 20 Oct 2014 01:33:00 +0000
https://meteuphoric.wordpress.com/2014/10/19/alexander-berger-on-givewell-labs/

This post summarizes a conversation which was part of the Cause Prioritization Shallow, all parts of which are available here. Previously in this series, conversations with Owen Cotton-Barratt, Paul Christiano, Paul Penley, and Gordon Irlam.

Notes

This is a summary made by Katja of points made by Alexander Berger during a conversation about GiveWell Labs and cause prioritization research on March 5th 2014.

GiveWell Labs

Focus

GiveWell Labs is trying to answer the same basic question as GiveWell: “what’s the best way to spend money?” However GiveWell Labs is answering this question for larger amounts of money, which is less straightforward. Causes are a more useful unit than charities for very large donors. So instead of trying to figure out which charity one should give to this year, they are asking which program areas a foundation should work on.

GiveWell’s relationship with Good Ventures is a substantial reason for focussing on the needs of big donors, and GiveWell Labs research has been done in partnership with Good Ventures. The long-term vision is to have ongoing research into which areas that ought to be covered are not, while providing support for a wide range of foundations working on problems previously identified as important.

Approach to research

GiveWell Labs primarily aggregates information, rather than producing primary research. It also puts a small amount of effort into publicizing its research.

Their research process focuses on answering these questions:

how important is the issue?

how tractable is it?

how crowded is it?

They attempt to answer these questions at increasing levels of depth for a variety of areas. It is not certain that these are the key criteria for determining returns through a program, but they seem intuitively correct.

Most research is done through speaking to experts (rather than e.g. reading research papers). The ‘importance’ question is the only one likely to have academic research on it.

The learning process

GiveWell Labs is prioritizing learning value and diversification at the moment, and not aiming to make decisions about cause priorities once and for all. Alexander would guess that the impact of GiveWell Labs’ current efforts is divided roughly equally between immediate useful research output and the value of trying this project and seeing how it goes.

In the time it has existed, GiveWell Labs has learned a lot. A big question at the moment is how much confidence to have in a cause before making the choice to dive into deeper research on it.

Spending money

Starting to spend money is probably a big part of diving deeper. Spending money is useful for learning more about an area for two reasons. Firstly, it makes you more credible. Secondly, it encourages people to make proposals. People don’t tend to have proposals readily formulated. They respond to the perception of concrete available funding. This means you will get a better sense of the opportunities if you are willing to spend money.

Transferability of learning

Alexander doesn’t know whether methodological insights discovered in one cause prioritization effort are likely to be helpful to others. One relevant factor is that people at GiveWell Labs have priors about what’s likely to be successful that are partly based on what they have learned before starting the process. But if you didn’t share the starting priors, you might not end up with the same current beliefs. This might be true regarding explicit expected value calculations, and how to weigh robustness or reliability against a high upside, in particular. If you don’t share the same prior, the lessons learned may not be very communicable.

Funding cause prioritization

Adding resources to GiveWell

An outside funder donating to GiveWell Labs couldn’t change the distribution of effort between GiveWell’s conventional research and GiveWell Labs. It would also be hard to change the total amount of work done by donating. Donating would mainly change the amount of time GiveWell spends raising other funds, and the extent to which they depend on Good Ventures.

Other cause prioritization efforts

Projects like Katja’s cause prioritization shallow investigation are unlikely to be done by GiveWell.

Katja’s Structured Case on AI project is also unlikely to overlap based on GiveWell’s current plans. If Alexander were working on something like this, he would typically initially try to effectively aggregate the views of credible people, rather than initially forming object level views. For instance, he would like to know what would happen if Eliezer could sit down with highly credentialed AI researchers and try to convince them of his view. The AI Structured Case on the other hand is more directed at detailing object level arguments.

Cause prioritization work can become fairly abstract. GiveWell Labs tries to keep it grounded by looking for concrete funding opportunities. Others may have comparative advantages in more philosophical investigations, which avoids overlap, but is also less likely to be informative to GiveWell Labs. GiveWell is unlikely to focus on prediction markets, though it’s not out of the question.

General considerations for funding such research

If others were going to do more concrete work, it is a hard question whether it would be better at this point for them to overlap with GiveWell Labs to provide a check, or avoid overlapping to provide broader coverage.

Answering high level questions such as ‘how good is economic growth?’ doesn’t seem very decision relevant in most cases. This is largely because these issues are hard to pin down, rather than because they are unlikely to make a large difference to evaluations if we could pin them down, though Alexander is also doubtful that they would make a large difference. For instance, Alexander doesn’t expect indirect effects of interventions to be large relative to immediate effects, while Holden Karnofsky (co-executive director of GiveWell) does, but their views on this do not seem to play a big role in their disagreements over what GiveWell Labs should prioritize.

When deciding what to do on cause prioritization, it is important to keep in mind whether and how it will affect anything: who will pay attention, and what decisions they will change as a result.

Similar projects

Nick Beckstead and Carl Shulman do similar work in their own time.

Alexander’s understanding is that Copenhagen Consensus Center is doing something a bit different, especially around modeling cost effectiveness estimates. They also seem to be less focussed on influencing specific decisions.

Alexander is not aware of any obvious further people one should talk to that Katja has not thought of.