“EA” doesn’t have a talent gap. Different causes have different gaps.

There’s been a lot of discussion and disagreement over whether EA has a talent gap or a money gap. Some people have been saying the funding gap is no longer that large and that people should be using their talent directly instead. On the other hand, others have been saying that there definitely still is a funding gap.

I think both parties are right, and the reason for the misunderstanding is that we have been referring to the entire EA movement instead of breaking it down by cause area. In this blog post I do so and demonstrate why we’re like the blind men touching different parts of the elephant, and how, if we put all of it together, we’ll be able to make much better decisions.

I am not extremely confident in all these numbers (particularly the size of the AI talent gap), but I am confident in the broader claim that the gaps differ between cause areas, and that we would all benefit from making that distinction in public discourse. I am happy to update these estimates as people make good arguments for them in the comments. Below I’ll go into further detail on how I came to these estimates.

Poverty talent gap

In my experience, poverty organizations generally hire outside of the EA movement for many roles. There are still small gaps for some poverty organizations hiring management and leadership roles from the EA pool (~4). There are also some gaps in operational talent (~2). Part of this gap also comes from the possibility of founding more effective poverty charities (~4), such as a tobacco taxation or conditional cash transfer charity, like what has been done with Charity Science Health and Fortify Health.

Poverty money gap

The gap for money in poverty is huge, even when looking only at charities significantly stronger than GiveDirectly, whose own gap is very large and arguably virtually unlimited. The gap is close to $100 million after Good Ventures funds its portion. There is also reason to expect this gap to grow, given recent changes in Good Ventures’ funding plans and a strong group of incubation charities in GiveWell’s pipeline. This gap only grows if you think there are strong opportunities in poverty outside of GiveWell’s list. Assuming someone donates 50% of a $100,000 salary, it would easily take 1,720 people doing earning to give to fill this gap. And that is not even including new GiveWell-incubated or recommended charities!
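The back-of-the-envelope arithmetic behind the 1,720-person figure can be sketched as follows. The salary and donation rate come from the text; the ~$86 million gap is an assumption chosen to match the post's stated donor count (the prose rounds it to "close to $100 million"):

```python
# Back-of-the-envelope: donors needed to fill a funding gap via
# earning to give. The ~$86M gap is an assumed figure consistent
# with the post's count of 1,720 donors; other numbers are from the text.
funding_gap = 86_000_000      # remaining gap after Good Ventures' portion (assumed)
salary = 100_000              # assumed annual salary
donation_rate = 0.5           # donating 50% of salary
donors_needed = funding_gap / (salary * donation_rate)
print(round(donors_needed))   # -> 1720
```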

Animal rights talent gap

The talent gap for animal rights is very large. Many AR organizations are hiring and trying to grow as fast as possible. There is also considerable scope for entrepreneurship and founding new and effective animal rights organizations. The animal rights community as a whole is very small, and the number of EAs in the movement is even more limited.

Animal rights money gap

Historically, animal rights has been chronically hampered by insufficient funding across the movement. However, the entrance of Open Phil to the area has created a very different situation. I now categorize the funding gaps as mixed. Funding is fairly centralized, with Open Phil and the AR Fund run by the same person (Lewis), who controls nearly 50% of all funding in AR. If you strongly agree with Lewis about the priorities in the area, I would say the funding gap is small. However, if you have very different views, then the funding gap could be seen as large.

Artificial intelligence talent gap

The talent gap for artificial intelligence is middling, with many organizations in the field in need of researchers, as well as some gaps in meta-organizations focusing on meta-research. There are also significant gaps in operational talent to support the infrastructure of these organizations.

Artificial intelligence money gap

The money gap for AI organizations seems very small, with even large funders being turned away from many projects. Many organizations have very large amounts of funding, and given the recent surge in publicity, much like animal rights, AI went from being chronically underfunded to well funded in almost all areas. Furthermore, due to the fairly wide spread of funders, even people with more unusual perspectives on AI will find it hard to find good gaps.

Meta organizations talent gap

Importantly, in this section I mostly consider meta organizations that do not fall under another cause area. For example, ACE would fall under animal rights, not under meta. The talent gap for these organizations generally seems small, with some posted roles in leadership (~7), operations (~3), research (~3), and other general roles (~3) across organizations. There also seems to be some scope for founding new charities (~4).

Meta organizations money gap

Much like animal rights, there’s a lot of centralization of funding, with a handful of funders controlling a very high percentage of total funding. As in animal rights, one person both runs the EA Fund for meta-organizations and is the lead investigator for Open Phil. Thus I think an EA’s perspective on funding gaps will largely depend on how well their views align with Nick Beckstead’s. This gap can range from very small to moderately sized (low millions) depending on how broadly you define meta-organizations.

Overall, as you can see, the talent and money gaps vary widely depending on the cause. If you think poverty is the highest-impact area, earning to give is a very good choice. On the other hand, if you think animal rights is the best, figuring out how to best apply your talents might be a better way forward. If you agree with Lewis, that is. Regardless of what cause you think is highest priority and what you think the gaps truly are, breaking them down by cause area will help everybody make better decisions.

Thanks for trying to get a clearer handle on this issue by splitting it up by cause area.

One gripe I have with this debate is the focus on EA orgs. Effective Altruism is, or should be, about doing the most good. Organisations which are explicitly labelled Effective Altruist are only a small part of that. Claiming that EA is now more talent constrained than funding constrained implicitly refers to Effective Altruist orgs being more talent than funding constrained.

Whether ‘doing the most good’ in the world is more talent than funding constrained is much harder to prove, but it is the actually important question.

If we focus the debate on EA orgs, and our general vision as a movement on orgs that are labelled EA, the EA community runs the risk of overlooking efforts and opportunities which aren’t branded EA.

Of course fixing global poverty takes more than ten people working on the problem. Filling the funding gap for GiveWell-recommended charities won’t be enough to fix it either. Using EA-branded framing isn’t special to you, but it can make us lose track of the bigger picture of all the problems that still need to be solved, and all the funding that is still needed for that.

If you want to focus on fixing global poverty, just because EA focuses on GW-recommended charities doesn’t mean EtG is the best approach. How about training to be a development economist instead? The world still needs more than ten additional ones of those. (Edit: But it is not obvious to me whether global poverty as a whole is more talent or funding constrained; you’d need to poll leading people who actually work in the field, e.g. leading development economists or development professors.)

“Claiming that EA is now more talent constrained than funding constrained implicitly refers to Effective Altruist orgs being more talent than funding constrained.”

That would be true if it were what was meant, but the speaker might also mean that ‘anything which existing EA donors like Open Phil can be convinced to fund’ will also be(come) talent constrained.

Inasmuch as there are lots of big EA donors willing to change where they give, activities that aren’t branded as EA may still be latently talent constrained, if they can be identified.

The speaker might also think activities branded as EA are more effective than the alternatives, in which case the money/talent balance within those activities will be particularly important.

It was the choice of “Money gap—Large (~$86 million)” in the summary that got me. It just seems immediately odd that, if you think earning to give to some global poverty charities is on a par with other common EA career choices in terms of marginal impact (i.e. assuming you think “poverty” should be on the table at all for us), the size of this funding gap is the equivalent of ~$0.086 per person for the bottom billion. And in fact the linked post gives a funding gap of something more like $400 million for GiveWell’s top charities alone (on top of expected funding from Good Ventures and donors who aren’t influenced by GiveWell), with GiveDirectly able to absorb “over 100 million dollars”. But it’s not so odd if you think that the expected value of donating to GiveWell-recommended charities is several orders of magnitude greater than that of the average global poverty charity. I’m aware that heavy-tailed distributions are probably at play here, but I’m very skeptical that GiveWell has found anywhere near the end of that tail (although I think they’re the best we have).
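The per-person figure quoted above follows from simple division; a minimal check, using the commenter's own numbers:

```python
# Spreading the ~$86M summary figure across the "bottom billion"
# yields the per-person amount the commenter cites.
gap = 86_000_000                # ~$86 million funding gap
bottom_billion = 1_000_000_000  # people
per_person = gap / bottom_billion
print(f"${per_person:.3f} per person")  # -> $0.086 per person
```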

Regardless of what the author meant, I think I see this kind of thinking in EA fairly regularly, and it’s encouraged by giving the “neglectedness” criterion such prominence, perhaps unduly.

And yes, I also want to thank the author for encouraging people to think and talk about this in a more nuanced way.

Here’s what Lewis Bollard had to say about the talent vs. funding issue when asked about it on the 80,000 Hours podcast (in September 2017):

Robert Wiblin: My impression is that the animal welfare organisations, at least the ones that I’m aware of that are associated with Effective Altruism, are often among the most funding constrained. That they often feel like they’re most limited by access to money. Does this suggest that people who are concerned with animal welfare should be more inclined to do earning to give and, perhaps, rather than work in the area, instead make money and give it away?

Lewis Bollard: I don’t think so. I think that was true until two years ago, or until eighteen months ago, when we started grantmaking in this field. I think the situation has dramatically improved in terms of funding, largely because of Open Phil entering this field, but also because there are a number of other very generous donors who’ve either entered the field or significantly increased their giving in the last two years.

Right now I think there is a bigger talent gap than financial gap for farm animal welfare groups. That’s not to say it will always be that way, and I certainly do think that for someone whose aptitude or inclination is heavily toward earning to give, it could still well make sense. If someone has great quantitative skills and enjoys working at a hedge fund, then I would say earn to give. That could still be a really powerful way, and we will need more and more funders over time to continue scaling up the movement. But all things equal, I would encourage someone to focus more on the talent piece now, because I do think that things have really flipped in the last few years, and I’m pretty optimistic that the funding will continue to grow in this space for animal welfare.

Robert Wiblin: What makes you confident about that? You don’t expect to be fired in the next few years?

Lewis Bollard: First, I hope I won’t be fired, but I think there’s a deep commitment from the Open Philanthropy Project to continue strong funding in this space, to continue funding at at least the level we’re funding currently, and hopefully more.

I’ve also just seen a number of new large-ish funders coming online. Just in the last two years, I would say the number of funders giving more than two hundred thousand dollars a year has doubled, and I’ve started to see real interest from some other major potential funders.

I think it’s natural that, as this issue has gained public prominence, a lot of potential donors, people who have great wealth, have realised that this is something important and something where they can make a great difference.

For the animal advocacy space, my anecdata suggest that the talent gap is in large part a product of funding constraints. Most animal charities pay rather poorly, even compared to other nonprofits.

It’s also more precise, and often clearer, to talk about particular types of talent rather than “talent” as a whole, e.g. the AI safety space is highly constrained by a shortage of people with deep expertise in machine learning, and global poverty isn’t.

However, when we say “the landscape seems more talent constrained than funding constrained”, what we typically mean is that, given our view of cause priorities, EA-aligned people can generally have a greater impact through direct work than through earning to give, and I still think that’s the case.

First you need to look for what activities you think are most impactful, and then see what your money can generate vs. your time.

This statement could be interpreted as suggesting that people should use a two-step process: first, choose a problem based on how pressing it is, and second, decide how to contribute to solving that problem.* That two-step approach would be a bad idea, because some people may be able to make a greater impact working on a less pressing problem if they are especially effective at addressing it. Because of this, information about how pressing different problems are relative to each other should not be used to choose a single problem; instead, it should be used as background information when comparing careers across problems.

*I doubt that’s what you actually meant, since you wrote the linked article that discusses personal fit. But I figured some people might be unfamiliar with that article, so I thought it’d be worthwhile to note the issue.

Yes, the reason you need to look at a bunch of activities rather than just one is that your personal fit, both in general and between earning vs. direct work, could materially reorder them.

If the AI safety/alignment community is altogether around 50 people, that’s a large relative gap. Depending on how you count, it might be bigger than 50 people, but the talent gap seems large in relative terms in either case. :)

This is very helpful. I would note that the Global Catastrophic Risk Institute does AI work and is funding constrained. Of course it also does other x-risk work, but I think it would be good to broaden your category to include this or have a separate category.