David_Moss

EA Survey Cause Selection data somewhat speaks to this. One difference is that we didn’t do forced ranking on the cause prioritisation scale, e.g. people could rate more than one cause as “near top priority,” but we can still compare the % of people who selected each cause as “near top priority” (the second highest ranking that could be given).

Below I show what % of people selected each cause as “near top” priority among those who selected AI, Poverty or Animal Welfare as “top priority” (I could do this for the other causes on request).
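For concreteness, here’s a minimal sketch of the kind of cross-tabulation involved, in Python/pandas. The file name, column layout and rating labels are assumptions for illustration, not the survey’s actual data format:

```python
import pandas as pd

# Assumed layout: one row per respondent, one column per cause,
# each cell holding a rating such as "Top priority" or "Near top priority".
df = pd.read_csv("ea_survey_cause_ratings.csv")

causes = ["AI", "Poverty", "Animal Welfare", "Biosecurity", "Climate Change"]

def near_top_shares(df, top_cause, causes):
    """Among respondents who rated `top_cause` as "Top priority",
    return the share who rated each other cause "Near top priority"."""
    subset = df[df[top_cause] == "Top priority"]
    return {c: (subset[c] == "Near top priority").mean()
            for c in causes if c != top_cause}

for top_cause in ["AI", "Poverty", "Animal Welfare"]:
    print(top_cause, near_top_shares(df, top_cause, causes))
```

Because the scale isn’t forced-ranked, the shares within a given “top priority” group need not sum to anything in particular; each respondent can rate several causes “near top.”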

As you might expect, people who rate AI as top are more inclined to rate other LTF/x-risk causes as near top priority, and more people who rate Poverty as top rate Climate Change as near top (these tended to follow similar patterns in the analyses in our main report on this). Among people who selected Animal Welfare as top, the largest number selected Poverty as near top priority.

Notably, Biosecurity appears as the cause most selected as “near top” by AI advocates and the second most selected cause for those who rate Poverty top. This is in line with the results discussed in the main post, where Biosecurity received the highest % of “near top” ratings of any cause (slightly higher than Global Poverty) though very low numbers of “top priority” ratings, meaning that it is only middle of the pack (5/11) in terms of “top or near top priority” ratings.

individuals are more likely to exhibit consistency when they focus abstractly on the connection between their initial behavior and their values, whereas they are more likely to exhibit licensing when they think concretely about what they have accomplished with their initial behavior—as long as the second behavior does not blatantly threaten a cherished identity

So broadly speaking, I would expect that acts making public (or just making privately salient to you) a particular moral identity (as a person who acts well) would increase moral consistency effects, whereas acts which emphasise the amount of good you have done would increase licensing effects.

Much discussion of Moral Circle Expansion seems hampered by a lack of conceptual clarity about what the Moral Circle means.

There are a lot of distinctions that need to be drawn, but here are two positions on one dimension:

1. The moral circle merely refers to which (groups or types of) entities are viewed as possible targets of moral regard.

2. The moral circle refers to the amount of actual moral concern granted to such entities.

A lot more distinctions should be drawn on this dimension alone (e.g. for “actual moral concern”, are we interested in abstract attitudes of concern, the actual amount of effort extended, or the actual treatment extended?), but even these suffice for now.

On the first view, which seems somewhat closer to original uses of the term, it does seem like retrenchment of the Moral Circle should be expected to be quite rare, at least once you reach contexts like our own (in WEIRD societies), where there are extremely prevalent memes about at least potentially considering entities as possible moral targets if they might be persons in any sense (or, more generally, in contexts where the conditions for considering the possibility of including some group in the moral circle are as extensive and plural as they are now). In such cases, it seems relatively hard for groups to fall entirely out of the moral circle in the first sense, except in cases like those you mention, where we decide that certain entities don’t exist or aren’t sentient.

With the more expansive second sense of the Moral Circle (which seems to be what people are using), where all that is required for Moral Circle expansion/retraction is an increase or reduction in the moral concern extended (as seems to be implied by examples such as more/less care being granted to the elderly and so on), it seems like the Moral Circle should be expected to be expanding and retracting near constantly on an individual or group basis. This is especially so if we understand degree of moral concern to mean the actual extent to which needs are weighted and help extended (in which case this will, almost necessarily, be pervaded by trade-offs in a near zero-sum fashion), which is why further distinctions within this category are so important.

And this proliferation of arguments is (weak) evidence against their quality: if the conclusions of a field remain the same but the reasons given for holding those conclusions change, that’s a warning sign for motivated cognition (especially when those beliefs are considered socially important).

I’m not sure these considerations should be too concerning in this case, for a couple of reasons.

I agree that it’s concerning where “conclusions… remain the same but the reasons given for holding those conclusions change” in cases where people originally (putatively) believe p because of x, then x is shown to be a weak consideration and so they switch to citing y as a reason to believe p. But from your post it doesn’t seem like that’s necessarily what has happened, rather than a conclusion being overdetermined by multiple lines of evidence. Of course, particular people in the field may have switched between some of these reasons, having decided that some of them are not so compelling, but in the case of many of the reasons cited above, the differences between the positions seem sufficiently subtle that we should expect cases of people clarifying their own understanding by shifting to closely related positions (e.g. it seems plausible someone might reasonably switch from thinking that the main problem is knowing how to precisely describe what we value to thinking that the main problem is not knowing how to make an agent try to do that).

It also seems like a proliferation of arguments in favour of a position is not too concerning where there are plausible reasons why we should expect multiple of the considerations to apply simultaneously. For example, you might think that any kind of powerful agent typically presents a threat in multiple different ways, in which case it wouldn’t be suspicious if people cited multiple distinct considerations as to why such agents were important.

I think you can get a very rough sense of possible changes by comparing the results from different years (as in the first two graphs in the post), but given the difficulties in interpreting these differences I would be wary of presenting these as % changes. Aside from possible differences in the sample across different years, changing categories for causes would also obviously distort things (we start with a fairly strong presumption against changing categories for this reason, but in some cases, the development of Mental Health as a field being one, it’s unavoidable).

Yeh, I certainly think this would be valuable, although it would need to be weighed against the fact that we already have more than 10 causes listed, which may be pushing it. We may be able to accommodate this by splitting out the questions into questions about broader cause areas and then about more specific causes.

Do you have a number for average earnings of non-students who are earning to give? $52,000 is a pretty low number for that category.

The numbers are likely lowered (as they were elsewhere) by a lot of fairly new, lower earning/donating people who are just starting out on that career path. For (non-student) E2G respondents, median donations were $3,000 and median income was $70,000. Only above the 63rd percentile in this category were people earning more than $100,000.
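As a rough illustration of how those summary figures relate, here’s a sketch in pandas; the file and column names (`career_path`, `student`, `income`, `donations`) are hypothetical:

```python
import pandas as pd

df = pd.read_csv("ea_survey_careers.csv")  # hypothetical file
e2g = df[(df["career_path"] == "Earning to give") & (~df["student"])]

print(e2g["donations"].median())  # e.g. 3000
print(e2g["income"].median())     # e.g. 70000

# Share of non-student E2G respondents earning $100,000 or less:
# if this is ~0.63, only those above the 63rd percentile earn more.
print((e2g["income"] <= 100_000).mean())
```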

How did the survey define the difference between “earning to give” and “other”, if at all?

Thanks Greg. These were selected a priori (though informed by our prior analyses of the data).

Due to missing data there was some difficulty doing stepwise elimination with the complete dataset. We’ve added a model including all interactions to the regression document. This had a slightly better AIC (3093 vs 3114).
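For readers who want to see what that comparison amounts to, here is a minimal sketch using a statsmodels-style workflow. The file, formulas and variable names are placeholders, not our actual model specification:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Fit both models on the same complete cases so the AICs are comparable.
df = pd.read_csv("ea_survey_regression.csv").dropna(
    subset=["donations", "income", "age", "tenure"])

main_effects = smf.ols("donations ~ income + age + tenure", data=df).fit()
interactions = smf.ols("donations ~ income * age * tenure", data=df).fit()

# Lower AIC indicates a better trade-off between fit and complexity
# (the post above reports 3093 vs 3114).
print(interactions.aic, main_effects.aic)
```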

The people who selected ‘research’ were disproportionately students compared to the other categories. Excluding all students across categories, 251 people selected research, and median income and donations were still significantly lower.

Thanks Ben. Yeh, this is 3 people in 2009 and 3 people in 2010 (out of 2473 responses to these questions overall). There are a handful of similar errors for Doing Good Better. Every year, there are a few people who seem to get the years wrong in this way (alongside a lot of responses saying explicitly that they don’t remember).

Anecdotally (both in the survey and elsewhere), I find a surprising number of people confuse CEA, 80K and GWWC (not to mention Rethink Charity, its various projects, and Charity Science).

Agreed. As per my reply to you here, we’re still going to talk about the influence of different levels of involvement with regard to cause selection, and in a post addressing your question about levels of involvement and the different routes by which people get involved in EA.