I'm not confident in my response, but when thinking about the movement as a whole, I'd suggest a ratio closer to 50%, if not as high as 80%.

The ratio of donation opportunity generated per staff member at EA orgs suggests a need for a higher earning-to-give ratio.

This consideration is definitely more speculative, but depending on how you look at the numbers, it's possible to justify an earning-to-give ratio above 29% and possibly as high as 99%.

The Size of the Opportunity

The biggest example of generating a lot of donation opportunity per staff member is the Against Malaria Foundation. They have only two full-time staff, yet according to GiveWell their 2015 room for more funding was around $5M. This suggests that a very high-talent EA direct worker could generate as much as $2.5M for earning-to-give people to attempt to fill with donations.

GiveDirectly has 25 staff and room for more funding of somewhere between $1M and $25M (with the potential for much more), which works out to somewhere between $40K and >$1M per staff member.

How Much Do People Earn to Give?

80K found in their 2014 survey that people doing earning-to-give were donating on average $13K/year for 2013, with an expectation of rising to $56K/year within three years if people's plans are taken at face value. A follow-up in 2015 found a new average of $16.5K/year for 2014, though with a more limited sample.

According to the EA Survey (focused on 2013 data), the mean donation in our sample from EAs who met the criteria for earning-to-give (>=$60K annual income and >=10% donations) was $9.5K/year.

Now Let’s Do Math

Using the numbers most favorable to earning to give, imagine that an additional direct worker was like the AMF staff and generated $2.5M in donation opportunity. Now imagine I took the low value of earning to give, $9.5K/year. Using these two numbers, it would take 263 people earning to give to match one person doing direct work, suggesting that 99.6% of people should earn to give.
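To make the arithmetic explicit, here is a small sketch of the calculation, using the AMF-style figures from the text. The function name and structure are just illustrative:

```python
def etg_share(opportunity_per_direct_worker, donation_per_etg_person):
    """Fraction of the movement that should earn to give so that
    donations exactly fill the funding opportunity each direct
    worker creates."""
    # How many earning-to-give people are needed per direct worker
    etg_per_direct = opportunity_per_direct_worker / donation_per_etg_person
    # Convert that ratio into a share of the whole movement
    return etg_per_direct / (1 + etg_per_direct)

# AMF-style scenario: $2.5M of room for more funding per staff member,
# filled by earners-to-give donating $9.5K/year each.
share = etg_share(2_500_000, 9_500)
print(f"{share:.1%}")  # about 99.6%
```

The same helper reproduces the salary-funding scenario discussed below: with $50K/year donations covering $50K/year salaries, `etg_share(50_000, 50_000)` gives 0.5, i.e. the 1:1 ratio.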

If instead I look at GiveDirectly's low number of $40K per staff member compared to a high value of earning to give ($56K/year), I calculate you need 1 person earning to give for every 1.4 people doing direct work, suggesting a ratio of about 42% (1 in 2.4).

The Problem With Funding Salaries

Of course, it does seem clear that marginal direct workers won't be as productive at creating donation opportunity as AMF staff have been, so there will be less to fund per marginal direct worker.

Furthermore, AMF may not be typical of the charities marginal direct workers are moving into. Charities like CEA or MIRI scale not by creating opportunities to fund programs but by hiring staff, which yields a much lower amount of donation opportunity per staff member (roughly equal to their salary).

Still, even if we expect earning-to-give people to be primarily focused on funding salaries, we may need more than 15%. Consider the classic story of the earner-to-give who takes a job in finance to fund two direct workers, doubling their impact. First, if salaries plus overhead per staff member are ~$50K/yr (though they can be much lower), a 15% earning-to-give ratio would require the average earning-to-give donation to be around $280K, which seems unrealistically high, even for the next three years. A more realistic ratio, where the average (not median) earning-to-give person donates $50K/year (still more than is currently happening), would be 1:1, or 50%.

Second, the AMF model should not be ignored when thinking about donation opportunity created per marginal direct worker, and it's quite possible that in some scenarios we may need many earning-to-give people to support the opportunities created. It's an open question what proportion of EA resources should go to GiveWell's scalable health charities, which often generate millions of dollars in donation opportunity per employee, versus other EA orgs where the donation opportunity per employee is far lower. Those who think the majority of EA funding should focus on GiveWell's top charities, or that GiveWell top charities should be the default giving opportunity for the typical donor absent good reason to donate elsewhere, should also be far more inclined to think that more people should earn to give.

EA Orgs Have More Room For More Funding Than People Are Giving Credit For

Recently I've been involved in some EA projects that I've wanted to fundraise for. On three occasions with three separate EAs, the reaction was always "Huh, I'm really surprised that hasn't been funded already." ...And many of those projects still need money.

There is supposed to be an explosion in earning to give, but I don't think it's arrived quite yet. From my point of view, many plausibly important projects still want more funding.

Just to mention a few projects I'm aware of:

According to Jacy's speech at EA Global, Animal Charity Evaluators is quite funding constrained and would hire more researchers if they had the money.

And other projects I'm aware of are on the horizon and may make more public pitches soon.

...Of course, it's possible you may not believe in supporting these projects. If so, depending on what you want to support, it's possible the remaining room for more funding may be quite small. But the belief I've been hearing from some, that all EA projects are getting fully funded all the time and that EA Ventures will simply take care of the rest, is not currently true.

Earning to Give May Have Better Career Capital

Career capital, as 80,000 Hours defines it, is the set of skills, credentials, connections, and other benefits you get from a career that help you have more impact in the future. Since what matters is your total impact and not your current impact, it's quite plausible that young people (including myself) should be focusing much more on career capital than on having an impact now.

Earning-to-give careers usually have good career capital opportunities. The most important skills (80K suggests computer programming, web development, statistics, machine learning, independent work, self-motivation, sales, communication, and management) are typically found in abundance in good for-profit careers that also happen to have good earning-to-give potential.

Yes, you can certainly find this capital in non-ETG careers, and I want to be careful not to fall into a charity vs. for-profit ETG dichotomy when many additional options exist (e.g., research, academia, and politics). But I generally believe that it is easiest to find career capital within earning-to-give careers, and that it is much easier to transition out of earning to give and into other careers than vice versa (with maybe the exception of academia).

Earning to Give Fits More People

Furthermore, psychologically, earning to give seems to me to be a better fit for the average EA than direct work. Many EAs are already working at a company and can simply donate more of their salary, or focus on increasing their salary, rather than quit their job and start a new one. Furthermore, EA direct work is frequently concentrated in certain cities, and the required relocation can be a big commitment.

Another advantage of earning to give is that it's often easier to accomplish for people who have less altruistic motivation or direct work talent. Of course, this certainly misses 80K's point, because they were talking about people on the margin: people who were high-talent and high-motivation and open to many career paths. That is a different question. But in the actual movement, it's important to note that many people are not in the top 0.1% of drive and talent. Yet they still have something valuable to contribute: a share of their income through earning to give. It's often much easier to get, keep, and do an earning-to-give job than it is to do EA direct work.

Lastly, earning to give frequently builds much more career capital than other jobs early in one's career, which would allow people to launch more successful non-ETG careers in the future. It's generally a lot easier to move from earning to give to not earning to give than vice versa, given constraints on personal savings and on entering various industries.

Conclusion

Obviously the choice between earning to give and something else is a personal one that is sensitive to your skills and fit. But I think if you have the ability to enter a particularly high-earning career, there are good reasons to do so, and more people should be considering it than the current wisdom seems to suggest.

There are certainly good reasons to think 80K's argument is correct: there is already a lot of money in EA through Good Ventures and other high-net-worth individuals, and existing earning-to-give EAs may start donating much more soon. I'm excited about how much the EA movement might grow through more direct work. However, I think we are still a ways off from earning-to-give people reliably funding the remaining 85% of the movement.

Until things change, 15% seems quite small to me, and I'd suggest an earning-to-give ratio of 50%, or even as high as 80%, for the movement as a whole.

My numbers are a little different to yours, at around 25-50%. I think the reason is that I envision many EAs going into 'influence' areas like foundations, other NGOs, politics, academia, journalism, and policy. Those that do probably won't earn large sums, but neither will they be drawing substantial funding from the EA community; they are sort of excluded from both sides of the earning-to-givers vs. direct-workers analysis you do above. When I do a similar analysis, I get a similar conclusion of wanting very roughly 1 ETG person for each 1 direct EA org person, but then I envisage 0-50% of EAs doing the orthogonal 'influence' options, hence 25-50% in both ETG and 'direct' work.

Part of the disagreement here is also surely how narrowly you define 'Earning to Give'; I think 80k probably meant something much stronger than ">=$60K annual income and >=10% donations", more like >$100k gross annual income and >=50% donations. I think both definitions have merit; it's just worth clarifying from the outset what you are talking about.

Edit: I would also add that my experience of funding things this year is that we are indeed a few years away from the projected (and, I think, reasonable to expect) earning-to-give explosion. I can't think of a major cause area that doesn't currently have both a meta-charity and a direct charity constrained by funds. Obviously this is subject to potentially rapid change, but this was a significant update for me so I wanted to share.

My numbers are a little different to yours, at around 25-50%. I think the reason is that I envision many EAs going into 'influence' areas like foundations, other NGOs, politics, academia, journalism, and policy. Those that do probably won't earn large sums, but neither will they be drawing substantial funding from the EA community; they are sort of excluded from both sides of the earning-to-givers vs. direct-workers analysis you do above. When I do a similar analysis, I get a similar conclusion of wanting very roughly 1 ETG person for each 1 direct EA org person, but then I envisage 0-50% of EAs doing the orthogonal 'influence' options, hence 25-50% in both ETG and 'direct' work.

This was a significant factor for us. I could easily see a future where it's best for the majority of EAs to go to work in research, international orgs, policy, etc., which already drives the percentage under 50.

I would also add that my experience of funding things this year is that we are indeed a few years away from the projected (and, I think, reasonable to expect) earning-to-give explosion. I can't think of a major cause area that doesn't currently have both a meta-charity and a direct charity constrained by funds. Obviously this is subject to potentially rapid change, but this was a significant update for me so I wanted to share.

What do you think is the best strategy to account for this possibility? How much should we prepare? I figure plenty of people going into earning to give is fine anyway, since I expect these careers are indeed more likely to build career capital, allowing a solid transition out of earning to give and into another career approach in the future if need be. Also, 80,000 Hours now seems to recommend that someone enter a career with additional potential for direct impact and/or outsized opportunity for career capital whenever they recommend earning to give anyway.

EA Ventures is a good start as far as preparation goes. I'm not going to make the argument "likely expansion of EtG means many fewer people should do it", because I think it's quite weak for the reasons you gave. But I do think that people with the ability to create funding opportunities (which I think is actually a relatively small number of people) should be trying to do so. We could do with more founders and a more diverse/complete set of organisations.

I need to think about this issue more, but I think there might be a couple of problems with the estimates.

1) Let's divide problems in the world into 'funding constrained' and 'talent constrained'. What you've done is pick the most funding-constrained causes we know (GiveDirectly and AMF) and then say "wow, these can absorb a lot of funds", which is not surprising, because they were selected for that property.

But there are other causes where it looks like a talented person could make a big difference but where it's not easy for money to buy progress. These are causes that are more constrained by innovation, leadership, coordination, and so on. Some areas that might fall into this category include EA movement building, much of research, green energy, much of policy, and international relations. We asked Holden to speculate on what they might be here:
https://80000hours.org/2014/10/interview-holden-karnofsky-on-cause-selection/

We asked biomedical researchers to estimate how much money they would trade for a researcher with good personal fit, and they often named figures of around $1m per year, more than most people could donate.

Taking talent gaps into account too, it becomes far less clear where the ideal balance lies.

It seems likely the world is more talent constrained than funding constrained, if that question makes sense.

2) The figures for how much the typical EtG person will donate might be a big underestimate. You can't easily infer the long-term average from the 80k surveys, because those are surveys of people very early in their careers; indeed, some of them are still at college. Many EAs have long-term earning potential over $1m, so they will be donating $100-$500k per year, meaning your estimate could be out by a factor of 10.

3) You're comparing the most talented direct workers (Rob Mather) with the typical EtGer. It would be fairer to compare equally talented people. The people with the best fit for earning to give will be able to donate many millions within a couple of years, which is similar to the amount of room for funding created by a staff member at AMF. So that might suggest a 1:1 ratio of EtG to direct work.

And if you think of the typical salaries at an EA org (~$50k per year), one talented EtGer will be able to cover the salaries of ~20 people.

4) The EA movement is pretty small, so it seems very achievable to pull in funds from elsewhere, and there's been a strong track record of doing this: e.g., most of SCI's funding has come from Gates; Thiel funded a bunch of things; CEA has a bunch of external donors.

5) What about value of information? An EA movement where 95% of people do EtG as software engineers while 5% do direct work is going to have very stunted learning opportunities. I'd prefer to see EAs working in a wide variety of causes and sectors, then sharing what they learn with each other. A similar consideration applies to the EA movement building a wide portfolio of skills so it can address big problems in the future.

6) I'm unsure about career capital. I'm tempted to agree that for the median person EtG might normally offer better career capital, but if you're especially talented it may be better just to focus on doing something impressive in an important cause.
https://80000hours.org/2015/07/what-people-miss-about-career-capital-exceptional-achievements/
I also think people underestimate the career capital you get from working at EA orgs. E.g., I think I gained far better career capital from working at 80k than I could have in finance, and I had good options there.

7) I'm unsure EtG fits more people. Bear in mind that the common-sense position is that earning to give is bizarre and no one does it, whereas loads of people want to work in teaching, nonprofits, research, and so on.

Also, if you find it hard to stay altruistically motivated, then it's probably better to be among lots of other altruists rather than being the only person at your company doing EtG.

I want to push back a bit against point #1 ("Let's divide problems into 'funding constrained' and 'talent constrained'"). In my experience recruiting for MIRI, these constraints are tightly intertwined. To hire talent, you need money (and to get money, you often need results, which requires talent).

I think the "are they funding constrained or talent constrained?" model is incorrect, and potentially harmful. In the case of MIRI, imagine we're trying to hire a world-class researcher for $50k/year and can't find one. Are we talent constrained, or funding constrained? (Our actual researcher salaries are higher than this, but they weren't last year, and they still aren't anywhere near competitive with industry rates.)

Furthermore, there are all sorts of things I could be doing to loosen the talent bottleneck, but only if I knew the money was going to be there. I could be setting up a researcher stewardship program, having seminars run at Berkeley and Stanford, and hiring dedicated recruiting-focused researchers who know the technical work very well and spend a lot of time practicing getting people excited. But I can only do this if I know we're going to have the money to sustain that program alongside our core research team, and if I know we're going to have the money to make hires. If we reliably bring in only enough funding to sustain modest growth, I'm going to have a very hard time breaking the talent constraint.

And that's ignoring the opportunity costs of being under-funded, which I think are substantial. For example, at MIRI there are numerous additional programs we could be setting up, such as a visiting professor + postdoc program, or a separate team dedicated to working closely with all the major industry leaders, or a dedicated team taking a different research approach, or any number of other projects that I'd be able to start if I knew the funding would appear. All those things would lead to new and different job openings, letting us draw from a wider pool of talented people (rather than the hyper-narrow pool we currently draw from), and so this too would loosen the talent constraint; but again, only if the funding was there.

Right now, we have more trouble finding top-notch math talent excited about our approach to technical AI alignment problems than we have raising money, but don't let this fool you: the talent constraint would be much, much easier to address with more money, and there are many things we aren't doing (for lack of funding) that I think would be high impact.

Ben, between your comments, these ones I made, and AGB's comments above, I'm thinking of writing not a direct rebuttal to Peter Hurford's estimates of the ideal proportion of ETG to direct work, but a post called "When Should You Go Into Direct Work", which would be a list of heuristic considerations for when someone should go into direct work vs. earning to give. I think it's important to make a visible response to Peter, undoing the potential misconception that earning to give is a better fit than some kind of direct work, rather than leaving a few disparate comments we've made. I especially think your points 3 and 5-7 are important considerations for individual EAs making career choices, significant plan changes, etc.

Would you like to read or comment on a draft of such a post when it's available?

Why didn't you mention GiveDirectly [ETA: in the 'orgs with room for more funding' section], an organization with nigh-boundless room for more funding? It just took $25M from Good Ventures, has a history of extremely rapid growth, and its model should eventually allow it to take many billions of dollars per year.

Also, contrast earning to give with other paths to influencing large quantities of funds, e.g., working at a large foundation, at IARPA, or in a government aid bureaucracy. The average money moved in the relevant roles in those fields looks a lot larger than for earning to give.

One concern I have with working at a foundation is that I don't know how feasible it would be to move large amounts of money to more "out there" causes like x-risk, which are plausibly the most important causes. This would surely be easier at some foundations than others, but I don't know if it would be feasible at any large foundation that isn't already making decisions about cause selection.

First of all, before the bulk of my response: there are global catastrophic or existential risks which will seem less "out there" than others. I think most laypeople will respond better intellectually and emotionally to mitigating the chances of a pandemic, global food insecurity, or the tail risks of climate change than to AI catastrophe or geomagnetic storms. As foundations like the Open Philanthropy Project both find more and better opportunities for grants to mitigate GCRs and normalize granting to these causes in coming years, it may become (much) easier for the marginal effective altruist at a foundation to make grants in this direction. Anyway...

I agree it won't be as feasible to move money to "out there" causes at some foundations, but I think an effective altruist should take what they can get. I mean, we shouldn't literally be that blunt in our decision-making, but I'll give you an example.

Let's imagine a student at Oxford University named Mary has made a significant plan change because of 80,000 Hours, and she intends to go work at a foundation in an effort to influence where the funds go. Because of her strengths and connections, Mary has great fit and opportunity to make it in foundation work. She also studies Philosophy, Politics, and Economics, a major which positions her to do foundation work in the U.K. better than, say, a major in Psychology or Chemistry. However, because of her interaction with effective altruism, Mary has decided her personal priority is mitigating AI risk, even though almost any foundation which would hire her would at best let her make grants to AMF or GiveDirectly. Should Mary still aim to work at a foundation?

I think there's still a case to be made. What seems a dilemma may be two opportunities in disguise. By working at a foundation making grants to AMF, Mary is having the greatest impact she could have through direct work, for not just global poverty but any cause. If we assume she considered either earning to give or direct research on the value alignment problem, and still concluded the best fit for her was grantmaking at foundations, I don't think this new development changes her comparative advantage. So she can take a job at that foundation. Meanwhile, if she earns enough, she can still donate to MIRI on the side, as a form of indirect impact. This combination of choices ensures she's still having the greatest impact she could expect to have at the beginning of her career. As she climbs the ladder, builds career capital, and gains a reputation, Mary puts herself in a position to make grants directly to "out there" causes at another foundation, if that ever becomes a possibility.

Now, I don't think this example can be used to justify the advice given to effective altruists in general. However, when talking about career selection, the inside view can matter as much for an individual effective altruist making choices for themselves as the outside view matters for advising the marginal EA in the abstract. As someone currently agnostic between causes, due as much to my indecisive disposition as to my real uncertainty (an indecisiveness which complicates my career selection as well), I'd be ecstatic at an opportunity like the one I've devised for Mary. To have confidence that I don't need to make an ultimate cause selection between two overwhelming options before I start having a leveraged impact would make an EA career psychologically easier for me. I doubt I'm the only effective altruist who feels this way.

Why didn't you mention GiveDirectly, an organization with nigh-boundless room for more funding? It just took $25M from Good Ventures, has a history of extremely rapid growth, and its model should eventually allow it to take many billions of dollars per year.

I did mention GiveDirectly in the post, but I wrote the draft before the $25M announcement and underestimated the upside.

-

contrast earning to give with other paths to influencing large quantities of funds

Yep, that's a good point. I think my arguments apply more to the balance between money moving (e.g., earning to give, foundations, IARPA, etc.) and direct work (e.g., working at CEA, doing research, etc.), though this is not a perfect dichotomy.

Hey, this is a great discussion to have, so I'm really glad you posted it. You haven't changed my views, I don't have time right now to go into details, and I haven't read the comments yet, but I just wanted to raise a couple of points where you think we disagree but we don't. Note the question we answered:

"At this point in time, and on the margin, what portion of altruistically motivated graduates from a good university, who are open to pursuing any career path, should aim to earn to give in the long term?"

"Long term" in that question is bracketing out the 'career capital' argument for EtG which you discuss above. I believe that a higher proportion of people should EtG short term than should EtG long term, because of the career capital benefits. (And I think I say something similar in the OP.)

"Open to pursuing any career path" is bracketing out the following consideration: "psychologically, earning-to-give seems to me to be a better fit for the average EA than direct work". If we were just asking "what % of the EA community should (in a sense of 'should' that takes into account people's psychologies, etc.) EtG?" and ran the survey among the 80k team again, I suspect the number would be higher than 15%. (And again, I thought I mentioned this in the OP as an argument for non-EtG: there are many people who are going to EtG whatever happens, so if you're happy not doing EtG, that's a reason in favour of not doing EtG.)

So I'm wondering what % you'd give in answer to the question we were asking, given clarifications 1 and 2? I'm worried there's some miscommunication, because you seemed to be answering "what % of the EA community should EtG at any one time?" while we were answering a narrower question. (I don't think we'll have the same view, but it might be closer.)

On the subject of non-disagreements, can I make another ping about the probable large difference in definitions of earning to give? Peter did give a definition: "According to the EA Survey (focused on 2013 data), the mean donation in our sample from EAs who met the criteria for earning-to-give (>=$60K annual income and >=10% donations)...". There isn't one in the OP.

Or to put it more pointedly: would I be right in guessing that if we define everyone earning >$60k and donating >10% as earning to give, you think more than 15% of people open to any career path should be doing that long term?

Peter did give a definition: "According to the EA Survey (focused on 2013 data), the mean donation in our sample from EAs who met the criteria for earning-to-give (>=$60K annual income and >=10% donations)..."

If I were defining it again, I'd further refine it with a third criterion: "and with the intent that the majority of one's impact comes from donations". For example, if one earns $60K at a company working on improving developing-world infrastructure, that sounds like something different from what I'd consider making a big difference through donations.

but I just wanted to raise a couple of points where you think we disagree but we don't.

I wrote my post knowing we'd be talking past each other somewhat; I wanted to emphasize career capital and psychological fit even knowing that they were being bracketed out by your carefully worded question. Sorry that makes things confusing!

-

So I'm wondering what % you'd give in answer to the question we were asking, given clarifications 1 and 2?

It's difficult to make even a rough guess about the "long term" future of EA (say, >5 years), and I don't think such a rough guess is all that valuable when switching out of ETG into something more "direct" is usually pretty easy.

The % is also further complicated by a consideration other people raised that I did consider, but not sufficiently: careers that involve direct impact without requiring funding from EAs (e.g., academics).

On one hand, if more foundations like Good Ventures continue to enter at the current rate, and our current ETG people don't value drift and do have their incomes rise as they expect, ETG will get less valuable. On the other hand, if funding opportunities continue to grow rapidly, especially from the Open Philanthropy Project, ETG will get more valuable. I'm not clear on which of these trends will dominate. I don't even know which trend is currently winning, though I suspect the first one (making ETG less valuable over time).

I agree with your conclusions here. A few weeks ago at Stanford EA we discussed career alternatives to earning to give, and took a somewhat different approach from yours. We threw out a number of ideas about careers we personally could pursue and how valuable we thought they were. We more or less reached a consensus that we could all do more good by earning to give than by doing anything else. This may have been more true for the people present than for EAs in general, but even so, I suspect it's still the case that 50+% of EAs should be earning to give.

As you know, I endorse your position, and think that in the ideal distribution (the one in which all of those not earning to give are doing the most valuable things), even more than 80% of people would be ETG. (More precisely, they'd be doing good primarily by donating, as this is the real issue here, not whether they do ETG in the sense of taking high-paying jobs primarily in order to donate.)

Tom, there is po­ten­tial for effec­tive al­tru­ism to ex­pand in mul­ti­ple ways.

It could grow ex­po­nen­tially in the ab­solute num­ber of peo­ple who join the move­ment and pur­sue the most effec­tive ca­reers they can.

It could grow exponentially in terms of money moved to effective charities, e.g., by Good Ventures, the amount of influence it wields, or the number of projects it’s responsible for initiating.

It’s pos­si­ble there will be a great in­crease in the num­ber of effec­tive giv­ing op­por­tu­ni­ties to ex­ist­ing or yet un­founded or­ga­ni­za­tions and pro­jects. Or, only the amount of money moved to ex­ist­ing effec­tive char­i­ties might sub­stan­tially in­crease, cre­at­ing or ex­ac­er­bat­ing fund­ing con­straints. Or, both. How would your ideal dis­tri­bu­tion of EtG rel­a­tive to other EA work change un­der such sce­nar­ios?

Changes like that would ab­solutely change my ideal dis­tri­bu­tion, in the ways that you’d pre­dict. :) I’m just scep­ti­cal that some of them will in fact hap­pen—e.g. that we’ll de­velop many GiveWell-beat­ing dona­tion tar­gets, able to ab­sorb a lot of money be­fore cap­ping out. I’m one of the peo­ple who Peter men­tioned as favour­ing di­rect poverty re­lief—and there are an awful lot of poor peo­ple out there.

Yeah, I think these changes are unlikely; I was just trying to test your thoughts on the subject. I believe their likelihood is high enough that it should be something in the back of our minds in case we need to quickly change our plans, but not so high that we need to take focus away from what we’re currently doing to make new plans, until we receive real evidence such dramatic changes will indeed happen.

For the record, for all values of “GiveWell-beating donation target”, whether a recommended traditional charity or a narrow funding gap that Open Phil considers an incredible opportunity, I expect most interventions that a consensus of effective altruists would agree beat, e.g., AMF, would only beat AMF for a few months—basically long enough to receive the funding to sustain an experiment testing whether the new initiative works and is scalable. Once they receive seed funding, they wouldn’t be worth funding again at least until results confirm they’re a valuable investment, so they’d hit sharply diminishing marginal returns.

This is assuming we’re judging the value of a cause or intervention with only conventional measures, like the expected or demonstrable number of QALYs, and not other things like the “important/valuable, crowded/neglected, tractable” heuristic that, e.g., Open Phil uses. I personally still don’t know what I would conclude the output of that analysis would be.

As always your posts are very clear, con­struc­tive and go straight for the key points!

Here’s why I don’t agree, and am not much moved from my origi­nal es­ti­mate:

Firstly, as you note, the claim was only about the most motivated/talented people, in the long run. The first point was there to deal with the fact that many people who are less motivated will find it much easier to earn to give than the alternatives. The second is there to deal with the career capital point—that many people should earn to give early on, but transition out later on to have a direct impact. So inasmuch as we are addressing different audiences, we don’t actually disagree as much as it might seem. That so many people who are not as flexible will prefer earning to give is a reason to do direct work if you’re open to both.

The post ne­glects that we can get enor­mous sums of money from out­side pre-ex­ist­ing sources, for ex­am­ple, Good Ven­tures. This could end up cov­er­ing many of the costs for peo­ple do­ing di­rect work, and dra­mat­i­cally re­duce the need for earn­ing to give. So prob­a­bly our es­ti­mates should be a wide range de­pend­ing on how that goes.

Earnings are log-normal, so the average donations per earning-to-giver are much higher than the typical cases you mention. Particularly so as many people are going into entrepreneurship, which allows you either to make a lot of money in your first 10 years or to switch to direct work. (Also note there is something peculiar about the argument that each person who earns to give doesn’t donate much money, so that’s why more people should do it.)

I don’t think AMF or GiveDirectly are likely to con­tinue to be re­garded as the most effec­tive or­gani­sa­tions in the long run, so al­though they have ex­cep­tional spends per staff mem­ber, I an­ti­ci­pate that the places I would want to move money to will have many more staff for each dol­lar they spend.

Lots of promising opportunities won’t require any earning to give to support them—politics, scientific research, academia, working in a foundation, journalism, activism, profitable start-ups that are directly valuable, etc. To me that’s already where I would want at least a third of the people we were talking about to go.

A big thing this seems to be miss­ing is that there are other sources of money than “EAs earn­ing to give”. Philan­thropists and foun­da­tions could eas­ily fill GiveWell’s char­i­ties’ room for more fund­ing.

(I had taken a similar ap­proach a while ago, and no longer think that’s the right com­par­i­son to make.)

You’re right that stacking up the earning-to-give count vs. the direct-opportunity count is an unstable argument. However, it’s important not to just assume foundations are rushing in to fill these funding gaps. (Not saying you are making that assumption, of course.)

Right. It’s not that philan­thropists and foun­da­tions are already spend­ing their money op­ti­mally, but that be­cause it’s already there it makes sense to have peo­ple work­ing on get­ting it spent bet­ter.

Us­ing the best pos­si­ble num­bers for earn­ing-to-give, imag­ine that an ad­di­tional di­rect work per­son was like the AMF staff and gen­er­ated $2.5M in dona­tion op­por­tu­nity. Now imag­ine I took the low value of earn­ing-to-give, $9.5K/​year. Us­ing these two num­bers, it would take 263 peo­ple earn­ing to give to match one per­son do­ing di­rect work, sug­gest­ing that 99.6% of peo­ple should earn-to-give.

I no­ticed that I’m con­fused about this ar­gu­ment be­cause it im­plies that the worse earn­ing to give is, the more peo­ple should do it.

Could you ex­plain more why these are the cor­rect things to com­pare? I get and agree with your sec­ond com­par­i­son where you com­pare salaries.
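For reference, the arithmetic in the quoted passage can be checked with a quick sketch; the $2.5M and $9.5K figures come straight from the post, and the model simply divides one by the other:

```python
# Figures taken from the post: one AMF-like direct worker generates
# ~$2.5M/year of donation opportunity; a low-end earning-to-give (ETG)
# person donates ~$9.5K/year.
opportunity_per_direct_worker = 2_500_000
donation_per_etg_person = 9_500

# ETG people needed to fill the opportunity one direct worker creates.
etg_per_direct = opportunity_per_direct_worker / donation_per_etg_person
print(round(etg_per_direct))  # 263

# Implied share of people who should earn to give under this model:
# 263 ETG people for every 1 direct worker.
etg_share = etg_per_direct / (etg_per_direct + 1)
print(f"{etg_share:.1%}")  # 99.6%
```

The arithmetic itself is not in dispute; the disagreement in this thread is about whether these are the right two quantities to compare.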

“I no­ticed that I’m con­fused about this ar­gu­ment be­cause it im­plies that the worse earn­ing to give is, the more peo­ple should do it.”

**

Some hope­fully more in­tu­itive analo­gies:

When au­toma­tion dra­mat­i­cally ups the pro­duc­tivity of a sin­gle worker in a job, a com­mon re­sult is that fewer peo­ple are needed to do the job (peo­ple get laid off, some­times they strike, etc.)

The huge in­crease in pro­duc­tivity of farm­ers means that whereas at one point prob­a­bly 80%+ of the work­ing pop­u­la­tion was needed in agri­cul­ture to pro­duce enough food, now it’s <10%.

If I’m earn­ing barely enough to live on and then liv­ing costs out­pace wage rises (i.e. my real wage falls), I will prob­a­bly work more hours.

If I’m cook­ing, I will likely add much less of an in­gre­di­ent with a very strong flavour than one with a rel­a­tively weak flavour.

**

Peter is look­ing at a de­liber­ately sim­plified anal­y­sis where all peo­ple are ei­ther money-pro­duc­ers or money-con­sumers. If there are too many pro­duc­ers or con­sumers, the marginal pro­ducer/​con­sumer isn’t ac­tu­ally in­creas­ing the amount of money that gets moved (we are ei­ther op­por­tu­nity-con­strained or fund­ing-con­strained). We want a rough bal­ance, and so the worse the pro­duc­ers are at pro­duc­ing, the more of them we need, and vice-versa.

Thanks, I agree that this is helpful and ex­plains his 2nd ex­am­ple where he is com­par­ing the salary of an AMF per­son to what an E2G per­son donates, but I still don’t un­der­stand the ex­am­ple I quoted where he is com­par­ing the “dona­tion op­por­tu­nity” from AMF staff ver­sus E2G.

(To use the terms of your last para­graph, it doesn’t seem like this is com­par­ing pro­duc­ers and con­sumers but rather 2 differ­ent types of pro­duc­ers.)

I agree it’s not in­tu­itive when you put it that way, but I think it makes sense:

Imag­ine a strange world where the high­est im­pact thing to do is to turn this mag­i­cal crank. Also for­get about meta op­por­tu­ni­ties—there are only two pos­si­ble roles: turn­ing the crank your­self (di­rect work) or fund­ing the salaries of peo­ple who turn the crank (earn­ing to give).

For­get­ting a mo­ment about psy­cholog­i­cal con­straints and think­ing only about pure im­pact, you ought to turn the crank your­self, be­cause the more peo­ple turn­ing the crank the more im­pact there is, and earn­ing money it­self doesn’t turn any cranks.

How­ever, the peo­ple turn­ing the crank need to not starve to death, and they’re re­quest­ing a fru­gal $25K/​year to fund their lifestyles.

Cur­rently there’s $0 in this crank move­ment (two differ­ent puns in­tended), so we need an ETG per­son to fund some salaries.

Now imag­ine that the high­est salary you can get is $50K/​yr—you take $25K/​year for your­self and can fund one per­son full-time turn­ing the crank. So for ev­ery one per­son turn­ing the crank, you need one per­son ETG, or else noth­ing would hap­pen. The next per­son to join the crank move­ment will do ETG, the per­son af­ter that to join will turn the crank di­rectly, and so on.

Now imag­ine that some­one gets a su­per­job and can earn $50M/​yr—they also take $25K/​year for them­selves, but they have $49,975,000 left over for dona­tions; enough to fund 1999 peo­ple to turn the crank. Now the next 1999 peo­ple to join should turn the crank di­rectly, be­cause we don’t need any more money. None of the next 1999 peo­ple should earn to give.

When earning to give was worse, we needed more people doing it. When it got better, we needed far fewer people doing it. Thus, the better earning to give is, the fewer people should do it, generally speaking.
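The crank-world arithmetic above can be written out as a tiny model (a sketch of this comment's hypothetical, using its assumed $25K living cost and the two example salaries):

```python
# Toy "crank world": everyone, direct worker or earner, needs a $25K/year
# salary; an ETG person donates everything they earn above that.
LIVING_COST = 25_000

def crankers_funded(etg_salary):
    """Full-time crank-turners one ETG person's surplus can fund."""
    return (etg_salary - LIVING_COST) // LIVING_COST

# A $50K earner funds exactly one cranker, so the movement needs a
# 1:1 ratio of ETG people to direct workers.
print(crankers_funded(50_000))  # 1

# A $50M "superjob" funds 1999 crankers, so the next 1999 people to
# join should all do direct work.
print(crankers_funded(50_000_000))  # 1999
```

The higher the ETG salary, the fewer ETG people are needed per direct worker, which is the point of the analogy.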

Thanks Peter, I agree that your in­sight about “when it’s worse more peo­ple should do it” is cor­rect and your anal­ogy is helpful.

But what re­ally con­fuses me is the spe­cific ex­am­ple I quoted where you are look­ing at what an AMF staff per­son does ver­sus what an E2G per­son does. It seems like in both cases you are com­par­ing “dona­tion op­por­tu­nity”, yet you are choos­ing the one which is worse.

I think you are agreeing with me? We shouldn’t be comparing the output of the “crank”; instead we should be looking at what it takes to turn the crank. Therefore we shouldn’t compare $2.5M to $9.5K, but should instead compare the salary of someone at AMF to $9.5K.

It seems like this would de­pend a lot on how you define EA. If you mean “peo­ple who at­tend EA Global” or even “peo­ple who read EA fo­rums”, that’s prob­a­bly a larger per­centage who should do di­rect work than “peo­ple whose choices we hope will be in­fluenced by EA philos­o­phy”.

Upvoted. Not to displace Elizabeth, but I hope you don’t mind me taking a crack at this. Note: I’m not trying to make one knockdown argument, but taking a cluster of shots in the dark that might add up to validating Elizabeth’s premise.

At EA Global, the collection of attendees, other passionate/dedicated EAs, etc., were referred to as “early adopters”, relative to the people who will join effective altruism in the future. “EA Global attendees” or “EA Forum participants” are close to the current “core”. This signifies they’re dedicated, and may be more willing to pursue direct work than other effective altruists. Direct work in EA leads to a less conventional career than earning to give does, so EA organizations should capitalize on the willingness of EAs who would pursue direct work.

The haste consideration might dictate that, since it’s better to do all the best direct work sooner rather than later, it’s better to onboard more EAs into direct work as soon as possible, to realize a more leveraged impact.

I read on one of 80,000 Hours’ re­cent blog posts that they find it difficult to hire new tal­ent be­cause most po­ten­tial hires don’t have the mix of “skills, ra­tio­nal in­sight, and deep knowl­edge of effec­tive al­tru­ism” they’re look­ing for. This might be the case for many EA or­ga­ni­za­tions. This might be more so if we con­sider the do­main-spe­cific knowl­edge orgs work­ing on spe­cific causes might re­quire of their em­ploy­ees. Ded­i­cated “early adopters” of effec­tive al­tru­ism are dis­pro­por­tionately likely to be great fits for these speci­fi­ca­tions of EA orgs.

Connections and professional networks constitute an important part of finding talent, learning lessons, and sharing resources in a sector. EA has very different needs than much of the non-profit world, so EA organizations would be better served by easily building a professional network composed of existing EAs, rather than spending more resources and time trying to find the same in the rest of the non-profit sector.

As effective altruism grows, more nuanced and contextualized insight into the movement’s particularities and history will be necessary to maintain its success. For example, take EA Global 2015. Had the organizers known more about last year’s EA Summit, they would’ve been better able to avoid repeat mistakes such as not optimizing the meals served, scheduling the conference on the same weekend as the national A.R. conference, and not checking with various cause representatives about what they thought was an appropriate amount of attention for their cause on the schedule. As time goes on and effective altruism grows, it will be even more crucial for EA to have early adopters at its orgs to pre-empt future problems of this kind.

EA organizations have long-term relationships, such as the relationships between a charity evaluator and its recommended charities. These unique relationships are facilitated by having an especially knowledgeable EA who knows the history of those relationships, rather than hiring a (new) outsider every couple of years or so.

As EA orgs specialize in what they do, it’s easier for a more dedicated EA to transition from a very specialized role in direct work to earning to give as needed than for a fresher EA to transition from earning to give into a very specialized role. For example, GiveWell finds training employees for their work or into management positions must be careful and slow-going to ensure it goes well. This whole process is made easier if more dedicated EAs go into direct work sooner rather than later.

More EAs going into direct work in cause prioritization, movement development, or other metacharity may greatly expand the quantity and spread of effective organizations that could receive funding in the future. If many EAs will go on to earn to give anyway, it’s important for those of us involved now to expand the number of organizations that are prepared to do effective work with these future funds, and to ensure they’ll indeed be effective.

“EA has very differ­ent needs than much of the non-profit world.” In what way?

Effective altruist organizations do work which is uncommon among other non-profit organizations, such as cause prioritization, charity evaluation, and the explicit growth and coordination of a budding social movement. Much of this might require unique skills, or at least ones that are less common among people working at conventional NGOs. So, long-time volunteers for EA organizations who also have tacit knowledge of the dynamics of effective altruism as a community may be quicker and simpler to train than someone who knows nothing of effective altruism. However, if an organization broadened the scope of its talent search beyond conventional non-profits and the existing EA community, to anyone and everyone from the public and for-profit sectors as well, it would likely find unique candidates who fit the bill better than anyone else, effective altruist or not. In the past, it seems finding new hires within what they consider an acceptable timeframe has been difficult enough for small effective altruist organizations that they feel forced to hire from within the community. However, now that the scope of effective altruism is expanding, past experience alone shouldn’t stop EA organizations from looking beyond their own existing circles of influence to find new hires.

I also have to say that there is some­thing very in­sider-y about this anal­y­sis. Much of the ad­vice seems like it boils down to “don’t waste your time with non-EA peo­ple.”

So, I don’t agree with Elizabeth’s original comment. AGB has a well-upvoted comment above this thread, and I agree that the ratio of earning to give to other effective altruist work he puts forth would be ideal, based on the current state of things. I think he is more or less correct for however wide a net one casts to define the population of effective altruism, even if it’s one so small it only includes people who post to forums like this one and attend conferences every year. I don’t think the proportion of “early adopters”, or whatever they’re called, of effective altruism who go into direct work should be much higher than the proportion among the total of whatever couple thousand effective altruists there are.

I was just generating a bunch of possible arguments on the fly for Elizabeth’s hypothesis, so I might have motivated myself to produce ones which on their face seem appealing but contain little substance. Like, I was putting myself in the shoes of an EA organization desperate to hire the most fitting employees for its team as soon as possible; most organizations aren’t in such dire straits. On second thought, I think only three of my above points stand up to scrutiny. There was another thread where Tom Ash answered one of my questions that’s made me more skeptical than I once was of the capacity of effective altruism to generate new superior giving opportunities in the form of new projects or charities. So, there’s likely less capacity for direct work.

If the rest of the effective altruism community holds, and continues to hold, the opinion that it can produce many new projects which beat, e.g., GiveWell’s top charity recommendations in terms of effectiveness, more of those projects should be allowed to fail, as we would rightly expect to happen, and we should not keep funding them: that would be bloat, and it would cut into funding we could provide to more effective organizations.

Great post! Here’s another possible counter-point: The traditional EA interventions have been easy to quantify: bed nets, cash transfers, deworming, online ads, leaflets, etc. As we get better at evaluating interventions we tend more towards harder-to-quantify stuff such as influencing politics. What makes the former interventions easy to quantify? One attribute is the fact that they consist of small things bought in large quantities. These are easy to study with RCTs. Running RCTs in areas where salaries are the thing to be funded is impractical.

So if the trend away from easy-to-quantify areas continues, we can expect to put more of our money into salaries. This yields two reasons we may need more direct work and less EtG: 1) hiring people is a lot less scalable, which means less money is needed per intervention; 2) we may have to create new positions and fill them with EAs (e.g., what x-risk orgs do) or we may have to fill areas with EAs (e.g., politics).

Thanks! To be fair, I do feel like my ar­gu­ment takes into ac­count this counter-point pretty fully, es­pe­cially the sec­tion “The Prob­lem With Fund­ing Salaries”. But you’re right that the more we fund salaries, the weaker this ar­gu­ment be­comes.

Based on how each subsequent election cycle seems to be more expensive than the last in, e.g., the United States and the United Kingdom, I’m terrified by how much it would cost those earning to give to fund a campaign by themselves. Like, thinking about how many lives that money could counterfactually save, and there isn’t even a guarantee an EA-funded candidate would get elected. Depending on how serious EAs interested in politics are, we’d better figure out how to raise funds from outside effective altruism and run successful campaigns before one of us starts running. With its connections to other researchers who could help on such a project, and its current research experience with normative rationality, evidence-based decision-making, and counterfactual reasoning, 80,000 Hours seems best poised among EA orgs to carry out this research.

I think this would vary greatly by cause area—I see global poverty as pri­mar­ily fund­ing con­strained (largely due to the fact that much of it in­volves trans­fer­ring wealth). Un­sure about ex­is­ten­tial risk, but I think an­i­mal causes are more hu­man cap­i­tal con­strained. It’s in­ter­est­ing what Jacy said about ACE—I’m cu­ri­ous if he would ex­tend that to an­i­mal char­i­ties more broadly. It seems to me like the sorts of things that would make a differ­ence for an­i­mals could use more or­ga­niz­ers and charis­matic per­son­al­ities rel­a­tive to money.

That would sur­prise me. While GiveWell has billions (e.g., Good Ven­tures) and AI Risk re­duc­tion has mil­lions (e.g., Elon Musk), EA an­i­mal causes have maybe hun­dreds of thou­sands at most. (Note that this ig­nores PETA, which does have tens of mil­lions, but I’m not sure it’s re­ally go­ing to an­i­mals as EAAs would define the cause area.)

Maybe animal causes are both talent constrained and funding constrained, but I’ve heard more “I wish we had more money to make our salaries more competitive” and “I wish we could hire for position X but we don’t have the money” than “We have $50K lying around for this job offer but can’t find anyone to take it”.

This also makes sense given that I think an­i­mal causes have a good ca­pac­ity to hire from out­side EA—there are lots of mo­ti­vated an­i­mal ac­tivists who haven’t heard of EA yet (though they may be hos­tile to the idea). If I re­call cor­rectly, Jon Bock­man was this kind of hire.

My think­ing was that (and show me the holes in this—it may af­fect ma­jor life de­ci­sions!) an­i­mal causes are more hu­man cap­i­tal con­strained be­cause more peo­ple will­ing to bor­der­line starve would be use­ful. You definitely hear more peo­ple say “I wish we had more money...” than “We have $50K ly­ing around...” but there are two ways to solve that—more money or some­one will­ing to live on less than $50K, and I think the lat­ter is likely to be more im­por­tant. Given the record of the move­ments that seem to me to most re­sem­ble an­i­mal rights, it seems the vast ma­jor­ity of the work will be done by vol­un­teers, so the pri­mary need is more vol­un­teers rather than more money.

Some­one who would take a $25K salary in­stead of a $50K salary is effec­tively “donat­ing” $25K. So if you think you could ETG more than that, you’d be beat­ing that, from that per­spec­tive.

The stronger perspective is that we need more people in the animal rights movement to steward the money we already have, to create new funding opportunities, or to do good work that inspires more respect and thus more funding.

I’m not involved with the animal rights movement outside of its intersection with effective altruism, so I don’t know much about it. However, among other things, I’d think the evaluators at ACE are involved with the AR movement, and would have come out and said at their EA Global talks that the community is just as, if not more, constrained by a lack of volunteers than by a lack of funds. They didn’t prioritize raising awareness of a greater volunteer need over a greater funding need. Of course, they were optimizing for an effective altruism audience. So, maybe the most the average effective altruist can do (one who has or will have a career which is not primarily low-paid or volunteer work for animal liberation, and who is already planning on earning to give or whatever) is donate to, e.g., ACE’s top recommended charities. That’s not necessarily an argument for the rest of the AR movement as it exists, or anyone new who joins it, to mostly go earning to give rather than volunteering.

My think­ing was that (and show me the holes in this—it may af­fect ma­jor life de­ci­sions!)

+1 to this sen­ti­ment. I too would like to know if I’m ig­no­rant or wrong about the fu­ture or pre­sent sta­tus of the an­i­mal rights move­ment.

Mean­while, global poverty also ap­pears to have about as much room for more fund­ing.

An­i­mal causes have rel­a­tively much less room for more fund­ing just be­cause there’s much less in­fras­truc­ture set up right now to spend those funds. I doubt an­i­mal causes could ab­sorb any more than $2M pro­duc­tively right now. But I hope this could change over the next five years...

Of course, each of the cause ar­eas also have a lot of room for ex­cep­tion­ally tal­ented peo­ple to make them bet­ter. I imag­ine some­one who can start a new global poverty char­ity as good as AMF should cer­tainly do that, even if they could get an ETG job at $1M a year oth­er­wise.

The ex­tent to which a cause is fund­ing con­strained doesn’t equal the size of its room for more fund­ing. It’s more to do with how much progress you can gain per unit of money com­pared to a unit of tal­ent.

Global poverty has large room for more fund­ing, but I still sus­pect it may be more tal­ent con­strained than fund­ing con­strained, be­cause a tal­ented per­son can do a lot more through set­ting up new non­prof­its, policy or re­search than etg.

I agree MIRI has a funding gap, but all the other x-risk research groups have a lot of funds, and are concerned they may not find sufficiently good researchers to hire. Moreover, there are major donors (e.g., Open Phil) ready to put more funds into AI risk research, but they don’t think there are enough good people available to hire yet.

Building on what Peter said: Nick Cooney, in addition to Jacy, said that not just ACE, but charities like ACE’s top recommendations, are also funding constrained. If I recall correctly, Mr. Cooney said at EA Global something like:

Building on what Jacy said earlier, I’ve heard a lot of talk this weekend about how some people are concerned there isn’t room for more funding at organizations. Well, that isn’t the case for us. Animal advocacy could definitely use more money.

Note this isn’t a para­phrase, but me at­tempt­ing to di­rectly quote Mr. Cooney as best as I can re­mem­ber. This is how he started his third of the “An­i­mal Ad­vo­cacy Triple Talk”. As se­nior staff at both Mercy For An­i­mals and The Farm Sanc­tu­ary, he would know, and it ap­pears he meant to pri­ori­tize and em­pha­size this prac­ti­cal point.

Givewell has said in the past that find­ing the right tal­ent is a bot­tle­neck prob­lem they can’t just solve by re­ceiv­ing more money. An­i­mal ad­vo­cacy and liber­a­tion seems to have the op­po­site prob­lem, where they need tons of both. More money might help an­i­mal char­i­ties bet­ter search for and/​or at­tract the tal­ent, but I don’t know enough about that. I’m seek­ing an in­ter­view with Nick Cooney for this Fo­rum, but I haven’t heard back from him yet. If or when I do, I will ask him about this.

Yeah, this de­pends greatly on views of the op­ti­mal strat­egy for ap­proach­ing an­i­mal ac­tivism. Nick Cooney definitely fa­vors a more money-in­ten­sive ap­proach where you spend money to con­duct ad cam­paigns pres­sur­ing cor­po­ra­tions and pub­li­ciz­ing var­i­ous videos. Other ac­tivists fa­vor a more grass­roots ap­proach where fund­ing is far less es­sen­tial (though still valuable, to be clear, and of­ten to a greater de­gree than the grass­roots will ad­mit). So I think what he said in­di­cates more about the par­tic­u­lar needs of those or­ga­ni­za­tions than the move­ment as a whole, but I could be wrong.

This may not be the best place to ask, but I’m won­der­ing why “the crite­ria for earn­ing-to-give” in­cludes “>=$60K an­nual in­come”? To me, that seems to be a high min­i­mum that would ex­clude many who are (at least in their own minds) E2G.

The in­come cut­off is ul­ti­mately ar­bi­trary and shouldn’t be thought of as a hard line, where some­one earn­ing $59.99K is definitely not ETG and some­one earn­ing $60.00K definitely is. But I do think there has to be a cut­off some­where, as it’s sup­posed to be about tak­ing a “high earn­ing” job.

I don’t mean to sug­gest that peo­ple who are in, e.g., $45K/​yr jobs du­tifully donat­ing 10% aren’t im­por­tant, of course. The $4.5K/​yr still makes a big differ­ence—prob­a­bly sav­ing at least one life a year!

“Fur­ther­more, psy­cholog­i­cally, earn­ing-to-give seems to me to be a bet­ter fit for the av­er­age EA than di­rect work. Many EAs are already work­ing in a com­pany and can sim­ply move to donate more of their salary or fo­cus on in­creas­ing their salary, rather than quit their job and start a new one.”

I want to expand on what’s in the second sentence here. There are a substantial number of EAs already working at a job that they like, one that makes them enough money to reasonably donate lots of it. But most of them are probably over 25, which covers only half of EAs: according to this survey the median age is 25.

So since prob­a­bly much of 80k’s au­di­ence is still choos­ing their ca­reer from an ear­lier stage, like still-in-uni­ver­sity or fresh-out-and-un­em­ployed or haven’t-cho­sen-a-ma­jor-yet, it makes sense to me that 80k wouldn’t em­pha­size earn to give for these peo­ple.

I’m also not sure the 15% fund­ing the 85% quite holds. CFAR, for ex­am­ple, gets lots of dona­tions but also gets money from peo­ple at­tend­ing work­shops. I don’t know the de­tails, but I’d ex­pect ob­ject-level char­i­ties like AMF to be able to have fairly wide ap­peal and to there­fore get a de­cent amount of money from peo­ple who don’t iden­tify as EAs. I’m not ac­tu­ally con­fi­dent on that point and would wel­come ev­i­dence in any di­rec­tion about it.

I don’t know the de­tails, but I’d ex­pect ob­ject-level char­i­ties like AMF to be able to have fairly wide ap­peal and to there­fore get a de­cent amount of money from peo­ple who don’t iden­tify as EAs.

They can, but the idea with organizations like AMF and GiveDirectly is that they can absorb relatively massive amounts of donations and still be the best bang for anyone’s buck. I.e., even if GiveWell’s top recommended charities can receive lots of money from both within and outside of effective altruism, they’ll still turn out to be the most effective. Of course, this will depend on which cause you prioritize. As Tom Ash commented:

I’m one of the people who Peter mentioned as favouring direct poverty relief—and there are an awful lot of poor people out there.

Therefore, there may be a large opportunity in income redistribution.

I realize this is not a quantitative analysis, partially because “happiness” is so difficult to quantify in a meaningful way. In particular, I don’t know how to relate the various happiness measures in use to something like the QALY (which suggests to me that the QALY is not an ideal utilitarian metric). Also, the correlational analyses could be muddled by confounders, meaning we could decrease inequality and still have a sad population for other reasons. However, I note that distributional issues have been at the center of politics for as long as there have been politics, so it’s something that humans seem to care about a lot.

Previous generations’ answers to the distributional problem have included, e.g., democracy, pensions, Marxism, and universal health care. Advocating earning to give could be seen as an individual-level redistribution strategy. But one could also advocate for political reforms that might address these inequalities; those could have very large upside as well.

This seems as much an argument for growing earning to give in absolute terms, outside of effective altruism as it currently exists, as it is an argument for an increased proportion of existing effective altruists pursuing earning to give.

Yes. But then, shouldn’t all arguments about what is appropriate for EAs to do generalize to what is appropriate for everyone to do? Isn’t that the fundamental claim of the EA philosophy?

I don’t think so. I meant your above argument is one for effective altruism to grow, with that growth primarily driven by people who go into earning to give. That doesn’t mean everyone should earn to give. If effective altruism grew indefinitely, there would be a point at which there are diminishing marginal returns for more earning to give relative to other options people could pursue. Your argument makes the case this is true for the relative proportion of earning to give within effective altruism, but it also seems to me to imply the amount of earning to give in the world should grow in absolute quantity as well. This doesn’t imply, however, that 50% of everyone who could earn to give should, nor that everyone should do what effective altruism prescribes now.

If effective altruism did become a community of, say, tens of millions of people, what it would have the marginal person do would likely look much different from what it recommends people do now. I believe the fundamental claim of the EA philosophy isn’t that its arguments should necessarily generalize to everyone, but that they should generalize to the marginal, i.e., next, person who adopts effective altruism. What this generalization is changes as the number of effective altruists grows. However, effective altruism is very far from the number of people at which it would change all its recommendations to the average or marginal community member.

If I understand you correctly, I think you make two interesting points here:

the potential of EA as a political vehicle for financial charity

the current EA advice has to be the marginal advice

When I wrote “isn’t that the fundamental claim of EA,” I suppose more properly I am referring to the claims that 1) EA is a suitable moral philosophy and 2) the consensus answers in the real, existing EA community correspond to this philosophy. In other words, that EA is, broadly speaking, “right” to do.

I’m going to address both of your above questions with one answer. Effective altruism is sort of a moral philosophy, but it’s not as complete or formalized a system as most religious deontologies, utilitarianism, or other forms of consequentialism or deontology. Virtue ethics is like effective altruism in that it runs on heuristics rather than the principles of deontology or the calculations of utilitarianism. I think virtue ethics and effective altruism are similar in how they output recommendations in a way that attempts to be amenable to human psychology. However, with its own heuristics, virtue ethics has thousands of years of ancient and modern philosophy from every civilization to build upon and learn from. Effective altruism is new.

There are three types of ethics in formal/academic philosophy: normative ethics, the ethics of what people should do generally; practical ethics, the ethics of what people should do in specific and applied scenarios; and meta-ethics, the philosophy and analysis of ethics as a discipline in its own right. When anyone thinks of any one ethical system, or “philosophy,” such as Kant’s categorical imperative, or preference utilitarianism, or Protestant ethics, it’s almost always a system of normative ethics. Because of how different effective altruism is (trying to mimic science in so many ways to figure out how to meet existing goals, and accommodating whatever normative system people used to reach their moral goals, so long as they converge on the same goals), effective altruism seems more like a system of practical rather than normative ethics. This makes it difficult to compare to other moral systems.

The fact that effective altruism seems to be missing a way to determine which moral goals are worth pursuing is a fair criticism lobbed at the philosophy in the past, and one that philosophers like Will MacAskill and Peter Singer are researching how to solve without forcing effective altruism to conform to one normative framework. That seems to be the role of meta-ethics in effective altruism. As it grows, though, effective altruism is becoming less necessarily theoretical or normative in its formulation. It’s a movement started by philosophers which may, in fulfilling its goals, become less philosophical and more pragmatic.

That’s a challenge, and a unique one. Effective altruism seems a suitable moral philosophy to me, for more reasons than the fact it can be made consistent with other ethical worldviews, whether deontological or consequentialist, religious or secular. From a practical perspective, I think effective altruism is “right,” but because it’s so odd among intellectual movements, I’m not sure what to compare it to.

“The fact that effective altruism seems to be missing a way to determine which moral goals are worth pursuing … That seems to be the role of meta-ethics in effective altruism.”

Maybe the answer is not to be found in meta-ethics or in analysis generally, but in politics, that is, the raw realities of what people believe and want any given moment, and how consensus forms or doesn’t.

In other words, I think the answer to “what goals are worth pursuing” is, broadly, ask the people you propose to help what it is they want. Luckily, this happens regularly in all sorts of ways, including global scale surveys. This is part of what the value of “democracy” means to me.

I’m not averse to such an approach. I think the criticism of how effective altruism determines a consensus on what defines or philosophically grounds “the good” comes from philosophers or other scholars who are wary of populist consensus on ethics when it’s in no way formalized. I’m bringing in David Moss to address this point; he’ll know more.

“Maybe the answer is not to be found in meta-ethics or in analysis generally, but in politics, that is, the raw realities of what people believe and want any given moment, and how consensus forms or doesn’t.

In other words, I think the answer to ‘what goals are worth pursuing’ is, broadly, ask the people you propose to help what it is they want. Luckily, this happens regularly in all sorts of ways, including global scale surveys.”

I guess it depends on what you mean by “what people believe and want any given moment.” If you interpret this as the results of a life satisfaction survey, or maximising preferences, or something like that, then the result will look pretty much like standard consequentialist EA.

If you mean something like the output of people’s decisions based on collective deliberation, e.g. what a community decides they want collectively as the result of a political process, then it might be (probably will be) something totally different to what you would get if you were trying to maximise preferences.

I believe one aspect of earning to give which is understudied, and which would have significant impacts on these calculations, is the long-term viability of giving rates at the individual level. The earning-to-give strategy necessarily places altruistic people in the midst of largely non-like-minded individuals for decades at a time. In what world do we not think this will have an effect on the working givers? To not consider defection rates is naive at best and sloppy science at worst.
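To make this concrete, here is a minimal sketch of how a sustained defection rate erodes expected lifetime giving. All inputs are assumptions for illustration: the $16.5K/year average donation figure cited earlier, a hypothetical 40-year career, and an illustrative 5% annual drop-out rate.

```python
# Hypothetical sketch: how an annual "defection" (drop-out) rate shrinks
# the expected lifetime donations of an earning-to-give career.
def expected_lifetime_donations(annual_donation, years, defection_rate):
    """Sum donations over a career, weighting each year by the
    probability the donor is still giving (geometric survival)."""
    return sum(annual_donation * (1 - defection_rate) ** t for t in range(years))

# Assumed inputs: $16.5K/year, a 40-year career, 5% annual drop-out.
no_attrition = expected_lifetime_donations(16_500, 40, 0.0)     # $660,000
with_attrition = expected_lifetime_donations(16_500, 40, 0.05)  # roughly $288K
print(no_attrition, with_attrition)
```

Under these assumed numbers, a 5% annual drop-out rate cuts expected lifetime donations by more than half, which is why ignoring defection rates could badly skew any comparison between direct work and earning to give.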
