Effective Altruism Making Waves

Over the last few years, I've noticed how bits and pieces of effective altruism have become mainstream. A couple of weeks ago, while watching a YouTube video on my smartphone, I saw an ad for the Beyond Burger, available at A&W restaurants across Canada. A&W is one of the biggest fast-food franchises in North America, and the Beyond Burger is a product from Beyond Meat, which has received support from the Good Food Institute, which in turn has received funding from the Open Philanthropy Project. This means effective altruism played a role in the development of a consumer product that millions of people will be exposed to.

Artificial intelligence (AI) developments make headlines on a regular basis, especially regarding an age of automation looming in the near future. Concerns about existential risk from transformative AI are distinct from the AI issues most prominent in the public consciousness, but whenever AI comes up in conversation I ask whether people have heard of the AI safety concerns raised by public figures like Elon Musk, Bill Gates, and Stephen Hawking. Most people I ask have heard of them, and have a positive rather than negative attitude toward the idea that AI development should be managed to minimize the chance it poses threats to humanity's safety or security. This is all anecdotal, but in my everyday interactions with people outside EA, I'm surprised by how many have some level of awareness of AI safety. It's been at least a couple dozen people.

I imagine that because charities focused on helping the poor in the developing world are so common, public awareness of the global poverty alleviation efforts EA advocates is probably pretty low relative to other charitable work in the developing world. But among my circles of friends who also participate in social movements or intellectual communities, such as the rationality community or various political and activist movements, most acquaintances and friends I meet locally have already heard of effective altruism, and generally have a positive impression of EA topics like effective giving and of organizations like GiveWell.

While the phrase 'effective altruism' isn't on everyone's lips, it seems like a significant proportion of the population of Canada and the United States is aware of things done to improve the world that effective altruism played an early hand in making happen. Overall, in the last couple of years I've noticed connections to EA in my everyday life, outside EA contexts, much more often. I don't know whether this predicts a spike in growth and awareness of EA among the general public in the near future. But I've been surprised by just how noticeable the early successes of the EA movement are, and by how far and wide the things it has had a hand in have reached. Does anyone else have a similar experience?

While I do think EA has been spreading, I want to caution against generalizing from your personal social network to the broader population. As Scott Alexander put it:

According to Gallup polls, about 46% of Americans are creationists. Not just in the sense of believing God helped guide evolution. I mean they think evolution is a vile atheist lie and God created humans exactly as they exist right now. That's half the country.

And I don't have a single one of those people in my social circle. It's not because I'm deliberately avoiding them; I'm pretty live-and-let-live politically, I wouldn't ostracize someone just for some weird beliefs. And yet, even though I probably know about a hundred fifty people, I am pretty confident that not one of them is creationist. Odds of this happening by chance? 1/2^150 = 1/10^45 = approximately the chance of picking a particular atom if you are randomly selecting among all the atoms on Earth.

About forty percent of Americans want to ban gay marriage. I think if I really stretch it, maybe ten of my top hundred fifty friends might fall into this group. This is less astronomically unlikely; the odds are a mere one to one hundred quintillion against.

People like to talk about social bubbles, but that doesn't even begin to cover one hundred quintillion. The only metaphor that seems really appropriate is the bizarre dark matter world.

I live in a Republican congressional district in a state with a Republican governor. The conservatives are definitely out there. They drive on the same roads as I do, live in the same neighborhoods. But they might as well be made of dark matter. I never meet them.

Filter bubbles are really strong. You are probably astronomically more likely to meet people who might have heard of Effective Altruism than the baseline suggests.
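The back-of-the-envelope number in the quoted passage can be checked directly. A minimal sketch of Alexander's toy model (treating each of ~150 acquaintances as an independent coin flip, which is of course a deliberate oversimplification):

```python
import math

# Toy model from the quote: if each of ~150 acquaintances independently had a
# ~46-50% chance of being a creationist, the chance that none of them is one:
p = 0.5 ** 150

# Express the result as an order of magnitude.
print(f"p is on the order of 10^{math.log10(p):.0f}")  # → 10^-45
```

The exponent lands at about -45, matching the quote's "1/2^150 = 1/10^45" (the quote rounds the true exponent of roughly -45.15).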

In the examples I was talking about, the ads ran in one of the biggest fast food franchises in the country, and the random people I talk to about AI safety are at bus stops and airports, so this isn't just from my social network. Like I said, it's only a lot of people in my social network who have heard the words 'effective altruism' or know what they refer to. I was mostly talking about things EA has impacted, like AI safety and the Beyond Burger, receiving a lot of public attention, even if EA doesn't receive credit. I took those outcomes receiving attention as a sign of steps toward the movement's goals, a good thing regardless of whether people have heard of EA.

Nice. And even more so if you broaden the definition of EAs to include people who would have been EAs now if EA material had been available when they were in college, e.g. older people and mathematically inclined Quakers and Unitarian Universalists.

I agree with Habryka's caution, but I've been starting to see some of the same effects Evan mentions. Specifically, after seeing an EA friend do the same, I set up an IFTTT rule (the link may not work for you; IFTTT restricts sharing) that finds all Tweets using terms like "effective altruism" or "effective altruists".

Each morning, I get an email with the day's Tweets. Many of them are content from EA orgs, but some reveal conversations happening in corners of the internet that seem quite separate from the broader "EA community".

Some of those conversations are negative, but most are positive; there is a slowly growing population of people who heard the term "effective altruism" at some point and now use it in conversations about giving without feeling the need to explain themselves. As our movement grows, this will have a lot of effects, good and bad, and it seems worth thinking about.

(If you decide to set up your own IFTTT rule for Twitter or anywhere else, my personal opinion is that it's better to avoid jumping into random conversations with strangers, especially if your goal is to "correct" a criticism they made. It won't work.)

Depending on the context, there could be many more people reading the conversation than just the person with the misconception. (IIRC, research into lurker-to-participant ratios in online conversations often comes up with numbers like 10:1 or 100:1.) If the misconception goes uncorrected, many more people could acquire it. I think correcting misconceptions online can be a really good use of time.

I've only been doing this for a few weeks, so not yet. I'm archiving all the emails I get, so eventually I should have a reasonable trend estimate. I've set a reminder to check in on this in six months.

I went back and looked at an earlier Feedly feed I had set up, running from 18 November 2015 until 3 March 2016, and there were 2,123 mentions of "effective altruism". That's over 106 days, compared to 130 days in the current example.

I have a suspicion that a few tweets get cut off from my current Feedly feed, which might be one reason the count seems lower; it could also be that there was a bigger media push in 2015/2016.
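Since the two collection windows have different lengths (106 vs. 130 days), comparing raw counts is misleading; normalising to a daily rate is the fairer comparison. A minimal sketch using the 2015-2016 figures above (the current window's total isn't given in this thread, so only the earlier rate can be computed):

```python
# Mentions of "effective altruism" from the earlier Feedly feed
# (figures from the comment above: 2,123 mentions over 106 days).
mentions = 2123
days = 106

rate = mentions / days
print(f"{rate:.1f} mentions per day")  # → 20.0 mentions per day
```

Tracking the per-day rate rather than the raw total would also make the planned six-month check-in comparable to these historical numbers.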

I think with EAA, waves have been made in quite a depoliticised way. We can point to how GFI has supported investment and promoted products, but we can also look at the costs of this general approach. Going "mainstream" often seems to mean adopting and replicating the characteristics of that mainstream and nudging within it (or just aligning with it) rather than challenging it. This has informed much of effective altruism and how donations are made to larger organisations, particularly as issues of rights, anti-speciesism, and veganism have been considered and often pushed aside. For instance, I doubt there are many rights advocates among the ACE top charities, or generally associated with effective altruism. Those perspectives are largely missing throughout EAA, nor are they sought out or particularly welcome as far as I can tell.

The emphasis, as I see it, has been a race to make short-term gains, whilst medium- to longer-term projects have been marginalised or simply not considered, in favour of approaches aligned with dominant ideologies around welfarism and "pragmatism", particularly those associated with Bruce Friedrich, Paul Shapiro, Nick Cooney, and Matt Ball, and favoured by Peter Singer.

Another concern is how effective altruism continues to break issues down between individualism (or atomisation) and corporate campaigning from organisational perspectives, something which overlooks the nature of the broader animal movement. I'm pleased that plant-based burgers are more readily available these days, but this is perhaps not so much due to GFI as to how people have helped promote them generally.

We can find positive things to say about effective altruism, but there is a tendency to overlook some underlying issues which are important to think about in terms of a more complex form of effectiveness, and it is rare to see these types of issues considered and discussed. Perhaps not least because EAA has become somewhat distorted by a mainstream it has attempted to engage and influence.

The emphasis for me has been a race to make short-term gains whilst medium- to longer-term projects have been marginalised or just not considered in favour of approaches aligned with dominant ideologies around welfarism and "pragmatism".

My strong impression is that longer-term projects have become a much greater priority for funding over the last few years, in that EA organizations have focused more on research and community-building (projects with low short-term return) than on collecting donations and trying to appear in the media.

I may have a different idea of what constitutes "short-term gains", especially since I don't see why they would be inherently opposed to pragmatism, and would be curious to hear how you define the term, and what specific events make you think this trend exists.

The emphasis for me has been a race to make short-term gains whilst medium- to longer-term projects have been marginalised or just not considered

ACE recently did an analysis of how resources are allocated in the farmed animal movement. You can see from Figure 7 that ACE funding goes more towards building alliances and capacity (the "long-term" parts of their ontology) than funding in the movement more generally does.

(ACE argues that the amount is still too small. But it seems weird to criticize EAA for that, since ACE is doing better than the rest of the movement, and seems to be planning to do even more.)

In relation to short/medium term, I am saying that short-term gains are geared more toward welfarism and *veg* approaches than toward projects such as rights and anti-speciesism in terms of anti-exploitation. So whilst we could view conventional EAA interventions as part of a bigger picture, we're not exploring how these issues fit together in a broader context, particularly in terms of different moral theories or how different perspectives aim to reduce suffering. In the sense of what is funded and emphasised through effective altruism, there are conflicting overarching ideas which in my view need to be considered and resolved in order to be inclusive and representative.

For most organisations which already fit with "pragmatism", this is a bit of a non-issue. However, those which are more politicised can be marginalised in relation to how funding is allocated and how powerful alliances are constructed around ideology. This, I would argue, has happened with most of the large organisations considered to be EA-aligned. It overlooks how narrow the framework for intervention actually is. To illustrate the point, we can look at where problems have arisen with organisations ACE has considered evaluating, such as A Well Fed World:

"Declined to be reviewed/published for the following reason(s):

They do not support Animal Charity Evaluators' decision to evaluate charities relative to one another."

Despite this outcome, they don't appear a good fit for conventional EAA, because the work they do is difficult to measure, and the groups they support are so small that it is difficult to assess their impact going forward (positive or negative). However, that potential impact (in terms of including different perspectives) is diminished further by favouring conventionally aligned organisations over those not part of the EAA family (which isn't to say the latter don't tacitly accept EA principles); the favoured organisations then grow at a much faster rate, potentially crowding out other ideas and organisations. Among those resourced and largely ideologically aligned, I'm thinking of Animal Equality, The Humane League, the Good Food Institute, Mercy For Animals, ProVeg, the Reducetarian Foundation, the Albert Schweitzer Foundation, Open Cages, and Compassion in World Farming.

What happens here is that EAs tend to point to funding directed toward cat and dog rescues over farmed animal protection, and it is correct to note how egregiously disproportionate that continues to be. However, within the somewhat delicate and nascent space of farmed animal protection, funding a small number of ideologically aligned groups has been disruptive to the movement as a whole (for instance in terms of affordability of conferences, sponsorship, outreach, and so on), and this impact hasn't been factored in (though it remains to be seen whether the new ACE Effective Animal Advocacy project will address some of these issues, perhaps only implicitly). A further issue is that if it doesn't, and if EA Funds doesn't shift beyond Lewis's general considerations, then the new panel for EA Funds will represent a missed opportunity. Lewis might be concerned about whether people would be a good fit and could agree on certain issues, but it seems unfortunate that this conclusion was drawn before an attempt was made to really challenge the foundations of EAA, for instance in relation to normative uncertainty. That said, it depends on what time Lewis would have to oversee such a process, and I suspect not enough to make it a viable possibility, which I think illustrates the reasoning that underpins the new approach.

Traditionally, organisations that are more challenging to the "mainstream" have often struggled for funding (and so, by the lights of many, aren't very successful), and are often too small for Open Philanthropy to consider, or, at least up until now, for EA Funds, because of the time constraints involved (time spent per dollar donated). Indeed, it is challenging to present a case for many such organisations beyond the point that it is important to have multiple perspectives and organisations within a movement, though, as Lewis pointed out in relation to EA Funds, he also worries about discord. But this isn't a reason not to do that more challenging work, and neither are time constraints. If anything, these are fundamental considerations that ought to have been incorporated at the inception of EAA and the Open Philanthropy Animal Welfare Program, but it doesn't appear to me they ever really were. This is partly because EA leaned heavily on the conventional organisational leaders of the larger animal organisations that preceded EAA, and there isn't much evidence those leaders took these types of considerations on board either. In particular, I'm thinking of Paul Shapiro, Wayne Pacelle, Nick Cooney, Bruce Friedrich, and Matt Ball, who largely preferred an agenda and approach grounded in "pragmatism", something which was quite appealing to many utilitarians but not to rights advocates, who became unflatteringly associated with terms such as extremist, fundamentalist, absolutist, puritan, and hardliner in the associated rhetoric. Their value was further diminished because a lack of pragmatism seemed to become equated with a lack of effectiveness.

None of this is to say that it is "wrong" to fund any of the top or standout ACE charities (for instance) from an EA perspective, but taken together it's a stretch even for effective altruism. So from my view the funding is disproportionate, though this presumably also reflects the view of the EAA trust network. If we had a better idea of exactly who that was, including whom the CEA was consulting, then it would be easier to point out where adjustments could be made, so that we might have diversity of viewpoints and representation within an EA framework, or at least consider how it could function differently given a variety of scenarios and counterfactuals. Otherwise we have no real idea of how effective we are being collectively; we are instead looking at things from a fairly conventional EAA view, which from my perspective is loaded toward depoliticised short-term gains associated with "veg" and welfare approaches.