Jamie_Harris

“Both WASR and UF spent a significant amount of time on academic outreach in 2018”

I hadn’t realised this; I thought that Animal Ethics focused more on this, while WASR focused more directly on foundational research. Do you think there will be overlap between WAI and Animal Ethics, or do the organisations have different approaches?

Yes, I had thought about this. There was a question in the survey intended to check whether people thought this was the case so far, and I didn’t see much evidence for it. But I’d guess that those sorts of effects might be less obviously noticeable, or might take longer to become noticeable.

“For example, the difference between assigning a 5% probability and a 50% probability is epistemically vast but arguably practically insignificant. It merely affects the amount of expected value represented by invertebrates by one order of magnitude. There are very roughly 10^18 insects in the world, and this number is still multiple orders of magnitude higher than the number of vertebrate animals.”

Given this point, and the implications of Jacy’s comment, perhaps it would be preferable to conceptualise the impact of this research/career plan in this area as a form of advocacy, rather than as a form of enhancing our knowledge and affecting cause prioritisation?

In some ways, your rough career trajectory might look similar, but it might affect some decisions, e.g. how to split your time between focusing on further research and focusing on giving talks to EA groups, academic settings, etc.

This list has research questions across a number of different themes or categories, e.g. “wider understanding of current animal use” and “evaluations of farmed animal interventions.” To think about which questions are important, I’d suggest categorising the questions, then prioritising the overall categories.

Sentience Institute has summarised foundational questions in effective animal advocacy, and we tend to prioritise research that we think will best help to improve our understanding of these questions (see our research agenda).

Frankly, there is a huge number of research questions that could be useful in some shape or form to effective animal advocacy. I’m not aware of anyone having compiled a comprehensive list, although I think this might be worth doing at some point, especially to coordinate the different organisations and individuals conducting research and to avoid unnecessary duplication of effort.

There are also important considerations about the risk that rainforest preservation efforts might indirectly increase suffering.

Many in the effective altruism community believe that a large proportion of wild animals, especially invertebrates and other r-selected species, have net negative lives. This was also the conclusion of a recent report by Charity Entrepreneurship. If you believe that there is a non-trivial chance that these animals can suffer or have morally relevant experiences, then the short- and medium-term effect of rainforest protection might be a counterfactual increase in wild animal suffering (see here for Brian Tomasik’s discussion of a related question).

More widely, encouraging concern for habitat protection might encourage people to value non-sentient entities even where the interests of these non-sentient entities conflict with the direct interests of individual animals. In general, this seems to be a step in the wrong direction if you agree that moral circle expansion is desirable. It might increase the likelihood of future dystopian scenarios which involve astronomical levels of suffering.

In a sense, by promoting environmentalism via conservation, you might be reducing the chance of a global catastrophe via climate change but increasing the chance of s-risks.

I’m glad to see historical evidence being considered, and also awareness of its limitations.

What do you consider to be the main strategic implications for the EA community?

Is it mainly to update slightly away from strategies which might lead to events similar to the hypothesised causes of the decline of Mohism, and towards those which might lead to events similar to the hypothesised causes of the success of Confucianism? E.g. updating towards being willing to “adapt doctrines to changing social and intellectual circumstances.”

Great write-up. I’m a fan of the systematic thinking and research. It’s interesting to compare how you approached this to how Charity Entrepreneurship are looking into non-profit startup opportunities. I’m interested in how you weighed up the decision criteria; was this just intuitive, based on the rest of the research, or did you have another approach?

One area where I might diverge from your approach here is in how you conceptualise expected social impact. I get the impression (mainly from your use of “Filter #2: Social Impact—Comparing Animal Suffering”) that you primarily conceptualise the impact of a startup in terms of the products that the startup produces and the animal products that it counterfactually replaces. But a broader conceptualisation of the impact of a startup might include its contribution (positive or negative) to the overall eventual success (i.e. market share) of plant-based meat and/or clean meat. In the long term, this could well matter more for total impact.

So a startup which introduces a cellular agriculture product replacing an animal product that causes relatively small amounts of suffering might still be far more impactful than some other startup ideas (e.g. a startup that brings a good clean chicken product to market at a better price point than its competitors) if it helps to bring cellular agriculture products to market in a way that has wider public support. Although each of these examples has a long list of pros and cons, this specific goal might be better achieved by:

2) Focusing on products which are more widely condemned by the public, e.g. foie gras

3) Focusing on marketing the products in countries which are more likely to be supportive, even if the total market is smaller, e.g. Singapore (see here).

In each of these examples, bringing the products to market in those specific contexts might increase consumer acceptance of the higher-priority products, since they (or lots of people in other countries) will already be using cellular agriculture products.

A different approach might be to start a B2B startup which focuses on providing a cheap, but also stable and secure, specific ingredient, e.g. growth media (this one overlaps with some of your suggestions). This might require that the business focuses on selling to a broader customer base, including medical companies and scientific researchers, to ensure that it has a business model that isn’t wholly dependent on the (potentially fluctuating) fortunes of the rest of the clean meat supply chain.

Potentially these strategic concerns matter less for plant-based foods. I can think of ways they would influence decision-making, though, such as focusing heavily on price so that less well-off people can access plant-based foods, reducing the risk that plant-based food becomes confined to well-off people and specific demographics (hippies/hipsters) because of the real barrier that price puts up and/or because of public perceptions and identity issues.

Generally, I’m arguing for taking a long-term “strategic” perspective when thinking about the social impact of startups. J at Sentience Institute has written two technology adoption studies, on nuclear power and GM foods, which I think are helpful for thinking about these sorts of perspectives. He’s currently writing a third, on biofuels; I imagine that it will be similarly useful, and that we’ll start to see trends and patterns occurring across the technology adoption studies as he does more.

1) Can I check I’ve understood: the “Estimated population size” and “Odds of feeling pain” columns are not factored into the “total welfare score” (which is made up of adding together scores from the various criteria, which then end up somewhere between −100 and +100) at all; they are to be used separately?

So if you wanted to work out whether sparing 10 broiler chickens or 20 beef cows from existence was more impactful, you’d have to multiply your result by the odds of feeling pain, etc. E.g. for chickens: 10 * −56 * 0.7 = −392 units of suffering prevented. For beef cows: 20 * −20 * 0.75 = −300 units of suffering prevented. So sparing chickens is slightly better by this metric. (Also: note that people might not agree that the rough estimates from the OPP on consciousness mean the same thing as “odds of feeling pain,” e.g. if you subscribe to consciousness eliminativism, although I haven’t read the OPP report in a while so might be misremembering the specifics.)
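The comparison above can be sketched as a tiny calculation. This is a hypothetical illustration only: the welfare scores, animal counts, and sentience probabilities are the rough figures quoted in this comment, not settled values, and the helper function name is my own.

```python
def expected_welfare_units(n_animals, welfare_score, p_sentience):
    """Expected welfare units affected by sparing n_animals from existence,
    weighting a per-animal welfare score (roughly -100 to +100) by the
    rough probability that the species has morally relevant experiences."""
    return n_animals * welfare_score * p_sentience

# Rough figures from the discussion above (illustrative only):
chickens = expected_welfare_units(10, -56, 0.7)   # roughly -392 units
cows = expected_welfare_units(20, -20, 0.75)      # -300 units

# More negative means more expected suffering prevented by sparing those
# animals, so by this metric sparing the 10 chickens edges out the 20 cows.
print(chickens < cows)
```

The point of separating the three factors is that you can vary the sentience probability alone (e.g. 5% vs 50% for invertebrates) and see directly how it scales the expected-value comparison by one order of magnitude.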

Thanks for the detailed reply. I agree with most of your comments/additions on my comments! Here are some further comments on your comments on my comments:

<< Unfortunately lack of funding constraints doesn’t necessarily mean that it’s easy to build new teams. For instance, the community is very constrained by managers, which makes it hard to both hire junior people and set up new organisations… [local workshops] are already being experimented with by local effective altruism groups… [but are] also quite challenging to run well—often someone able to do this independently can get a full-time job at an existing organisation. >>

Do I take these two comments combined to mean that you believe someone needs managerial experience, or extensive experience, to set these up? I feel there might be a halfway house here, where those at 80K who are more experienced in running career workshops spend the days/weeks/months required to set up clear training resources and infrastructure to make these more easily/systematically run at a local level. At that point, it wouldn’t require managers or hugely experienced people to run them. For example, I would imagine that anyone with teaching experience who spent a few weeks (paid?) making sure that they were sufficiently up to speed on key EA and career-relevant knowledge could then run workshops like this very successfully. In short, I suspect we have different opinions about a) the resources required to set up the initial infrastructure to make these sessions workable, and b) the level of experience and skill needed to run them locally. Intuitively I feel quite strongly about this, but I also have a tendency to underestimate the effort/time required for large projects like this.

<< One-on-one calls seem safer, and funding someone to work independently doing calls all day seems like a reasonable use of funding to me, provided they couldn’t / wouldn’t get a more senior job >>

Similarly to the above point, my current impression is that the EA community has more people who are sufficiently talented to do a role like this well than it has jobs like this for them to fill. This seems like it would be a fairly generalist role, which could be done well by quite a range of people. Again, though, I think I might have a lower bar for the calibre of applicant that I would see as sufficient to make it worth funding someone to work on this full time.

<< Note that we have tried this in the past (e.g. allied health, web design, executive search), but they took a long time to write, never got much attention, and as far as we’re aware haven’t caused any plan changes. >>

Fair enough. However, these metrics assess their usefulness within the context of the current audience and demographics of the EA community / 80K. Part of my understanding of the broader vision of 80K’s role (or of the role of other new organisations stepping in) assumes a broader/changing audience for the EA community.

To my knowledge, SHIC don’t spend much time on careers advice. I am aware that SHIC are working on different programmes/forms of delivery at the moment, but the “core curriculum” only includes one session on careers advice, which was mostly a selection of ideas from 80K.

More broadly, this probably fits into an issue that I think EA might have (understandably, given how new it is) of having one organisation working on one key area: e.g. 80K for careers, SHIC for students, even ACE for evaluating animal charities/interventions, or Sentience Institute for doing social movement research for animal organisations. But none of those organisations do all possible work in those areas (although you could argue that they take up the low-hanging fruit), and they all have particular views about how they should do each of those things that others in the EA community might disagree with.

<< Unfortunately, we have very limited capacity to hire. It seems better that we focus our efforts on people who can help with our main organisational focus, which is the narrow vision. So, like I note, I think these would mainly have to be done by other organisations. >>

My guess would be that it would be worth diverting some time/resources from 80K to actively advocate for the setting up of new organisations, to assist with supporting or selecting the right candidates to fill those roles (e.g. if they are applying for some form of grant), and to advise them, based on your own experiences. Or even to offer grants to set up organisations to fill those gaps?

(P.S. feel free not to reply to these comments; I added them to try and explain/explore why we might disagree on some of these issues despite me accepting most of the points that you just made.)

Given some of the issues raised in this thread, I suggest that either 80K should broaden its role and hire (lots) more staff to make this possible, or new organisations should be set up to fill the gaps.

I’m glad to see the discussion of the “two visions.” I would guess that there is a discrepancy between how 80K thinks of its role (the second vision, focusing on key bottlenecks) and how most people, especially people newer to the EA community or not involved in EA meta orgs, think of 80K’s role (the first vision, focusing on broader social impact career advice).

When I come across someone who cares about making the world a better place / maximising their impact and who is looking for career advice, I either point them towards 80K or discuss ideas with them that have almost entirely come from 80K. It may well be that 80K doesn’t see some of the people that I have conversations with as its intended target audience, but since 80K is the only EA org focusing on careers advice, I default to those recommendations. I would guess that many other people do the same.

A crude summary of some of the ideas here would be that increasing “inclination” is more important than increasing awareness from a long-term perspective. But if 80K is demoralising people new to the movement because it focuses on the second vision of its role over the first, then this probably decreases inclination quite a lot and so has negative long-term implications (even if, in the short term, it has higher impact).

Although I haven’t thoroughly looked at impact or cost-effectiveness metrics for 80K and other meta orgs, there are several factors that make me think the EA community should prioritise devoting more resources to filling the gaps in the area of career advice:

1) Conversations about career decisions happen pretty regularly. Even if the most impactful thing for the handful of individuals working at 80K is indeed to focus on the narrower vision of their role, it seems important that other individuals work on the broader conception, so that these regular conversations that are happening anyway can be relatively well informed.

2) Given that 80K focuses on the narrower vision, there is probably quite a lot of work that could be done relatively easily and be quite impactful if people were working on the broader vision (i.e. low-hanging fruit).

3) We talk about EA movement-building not being funding constrained. If that’s the case, then presumably it’d be possible to create more roles, be that at 80K or at new organisations.

4) If I remember correctly, the EA survey suggests that 80K is an important entry point into EA for lots of people. It’s also a high-fidelity form of communication about EA ideas/research.

5) Generally, there are loads of opportunities for impact that a much larger 80K (or additional organisations also working on the intersection of EA and careers advice/decision-making) could take up, which seem like they would plausibly have higher impact than some other ways that funds have been used for EA movement building:

Research/website like 80K’s current career profile reviews, but including less competitive career paths (perhaps this would need to focus on quantity over quality and “breadth” over depth)

Career coaching calls (available all year round, for anyone focusing on any of the higher-priority EA cause areas)

Regular career workshops, perhaps run by additional employees at local groups who are trained in how to run them, or perhaps run through a single international organisation. This seems like a high-fidelity method of EA outreach; if marketed well, I suspect these would get a lot of take-up. Targeted marketing to groups which are demographically under-represented in EA might also be a good way to start addressing diversity/inclusion/elitism concerns.

Research/website/podcasts etc. like 80K’s current work, but focused on high-school-age students, before they’ve made choices which significantly narrow down their options (like choosing their degree).

In short, 80K does some amazing and important work, but there seems to be lots of space for the EA community to do more in the broad area of the intersection of EA and careers advice or decision-making. So it seems to me that either 80K should prioritise hiring more people to take up some of these opportunities, or EA as a movement should prioritise creating new organisations to take them up.

Michael, apologies for this. I just came back to check this post.
I didn’t ever receive the email because the formatting of the EA Forum removed the underscore from my email address, and I didn’t notice at the time. If you can find the email that you sent in your sent folder in April and forward it to me at jamie@sentienceinstitute.org, that would be great!

I wanted to echo all of Saulius’ points (including the thanks for doing this!).

To clarify your response here: all of the rankings are essentially subjective judgements, based on whatever evidence you have available in that category? So in the example above, if those cortisol tests were somehow your only evidence in the “index of biological markers” category, you would just decide on a score that you felt represented the appropriate level of badness for the wild rat “index of biological markers” score?

I’m also wondering if you’re going to use the method to compare humans to non-human animals? Some of the biological measures we could use fall down when we think about how humans fit in, e.g. neuron count. Including humans in comparative measures seems valuable for reflecting on/testing intuitions we might otherwise have about cross-species comparisons.

Thanks for the reply. Just wanted to note that I agree with ACE’s breadth-over-depth strategy, and that ACE might not be best placed for a fuller review of the social movement impact literature. It’s something I’m considering prioritizing doing personally in my work for Sentience Institute.

Thanks very much for posting this reply. And thanks a lot for all the work ACE does in general.
Some clarifications were useful to have, e.g. “The Relationship Between our Intervention Research and our Charity Reviews”—I had felt confused about this when I first looked through the reviews in depth.

Here are some specific comments:

Reviews of existing literature

I agree that the new intervention reports are much better on this front. I’m especially keen on the clear tables summarising existing literature in the protest report. I suspect that there’s still room for more depth here, especially since the articles summarized are probably just the most relevant parts of much wider debates within the social movement studies literature. For example, I notice a couple of items by S.A. Soule; although I haven’t read the book and analysis you (or whoever wrote the protest report) cite, I have read another article of hers which was partially directed at considering the importance of the “political mediation” and “political opportunity structure” theories for assessing the impact of social movement organizations, and I suspect that some of the works you cite might consider similar issues. I think the protest report goes into an appropriate amount of depth, given limited time and resources, but I’ve recently gained the impression that a literature review of social movement impact theory in a broad sense, or more systematic reviews of some of the more specific sub-areas, is a high priority in EAA research. I’d be keen to hear views about how useful this would be, and I’m happy to share more specific thoughts if that would help.

Unclear sources of figures

With some older intervention reports I agree with John Halstead that there are some confusing, unexplained numbers, although I think he exaggerates the extent of this (perhaps unintentionally), since some of the figures are explained. I don’t think this needs further comment since, as noted, the new intervention report style is much clearer.
My impression was that the Guesstimate models from more recent charity evaluations also had some slightly unexplained figures. E.g. in THL’s Guesstimate model, the “Rough estimate of number of farmed animals spared per dollar THL spent on campaigns” is −52 to 340. Tracking this back through the model takes you to a box which notes: “THL did not provide estimates for the number of animals affected by cage-free campaigns they were involved with. We have roughly based this estimate on estimates from other groups active in promoting cage-free policies and have attempted to take into account the greater amount of resources THL dedicates towards this program area.” I feel like some explanation of this (perhaps a link to an external Google sheet) might have been helpful? I don’t think this is a big issue though. There’s also a chance I’ve just missed something / don’t fully understand Guesstimate yet.

General comment on use of CEEs

ACE does make very clear that it only sees CEEs as one part of a charity evaluation. I’d just suggest that, in spite of these warnings, individuals looking at the reports will naturally gravitate towards the CEEs as one of the more tangible/concrete/easily quotable areas of the report. E.g. when I’ve organised events and created resources for Effective Animal Altruism London, I’ve quoted some of the CEEs for charities (and pretty much nothing else from the report) to make broad points about the rough ballpark for the cost-effectiveness of different groups. Given this, it still makes sense to treat the CEEs as more important than some other parts of the report, and to try to be especially rigorous in these sections.
So doing things like using a single disputed paper by De Mol et al. (2016) (although this example is from the old corporate campaigns intervention report) as a key part of a cost-effectiveness analysis seems inadvisable, if it is avoidable.

(Holly probably knows most of my story, but writing about myself seems fun so I’m going to do it anyway… maybe it’ll be somehow useful for someone too)

When I was 5, I refused to eat meat for emotional reasons (something along the lines of “Mum, that thing you’re cutting up still looks like a real chicken and that is sad, I’m going to cry lots now”).

When I was about 16, my schoolfriend (also a vegetarian) bought me Peter Singer’s Animal Liberation for my birthday. Reading this turned my personal, emotional choice into something which felt like a moral imperative. Despite never having engaged with any philosophy before, Singer’s views felt almost like a manifesto of what I thought I believed in. I’ve been pretty staunchly utilitarian since (although I still haven’t engaged very deeply with much philosophy).

I knew that I wanted to contribute positively to the world through my career. Given that history was my favourite subject, it seemed like the best way to help the world was to become a history teacher. I fixed my career plans upon this, and didn’t really consider any alternatives for years to come...

When I went to university I had hoped to find an animal advocacy student society, but there was none, so I set one up within weeks, alongside a few other people.

It was at uni that I first heard of Effective Altruism. Max Dalton (now at CEA) was at my college at uni and so was in my (extended) friendship group. He was heavily involved in the Oxford GWWC society. I didn’t ever speak to Max about EA whilst I was at uni, but I’d guess that most undergrads in my college had heard of EA because of Max. I also went to hear Peter Singer give talks twice while I was there, and I think one of the talks was about Effective Altruism (before I knew much about it); I don’t remember it well, so it obviously didn’t leave as much of an impression on me at the time as Animal Liberation had. I thought that EA sounded like a great idea, but that I couldn’t engage with it yet, because I wasn’t earning any money, and my understanding was that EA was about donating effectively. So I decided I would donate 10% of my income to effective charities once I started earning, but that there was nothing else I needed to do in the meantime.

After my degree and a one-year teacher training course, I began working as a teacher and immediately began donating 10% of my income. I also started tentatively looking for potential EA-related volunteering opportunities (e.g. ACE), but nothing came of this at the time.

I spoke to some uni friends who were at similar levels of support for EA as I was. They said they had taken the GWWC pledge. I decided to sign up, since I was already donating 10%.

After signing up, David Nash (EA London) sent me an email asking if I’d like to come to EA London events. I said yes and asked how else I could get involved; I ended up taking over the majority of the organising of the Effective Animal Altruism London sub-group, which he had set up with Saulius (another EA based in London) but didn’t have much time to put into organising.

My responsibility for this group (and my general interest) led to a period of deepening involvement in EA: trying to read as much as I could that came out relating to EA and animals, and volunteering for several EAA organisations. At some point I decided that I wanted to change my career to have a greater positive impact; this was why I had chosen teaching in the first place anyway, I just hadn’t thought the implications of this through. After several months of agonising, speaking to various people, and an 80K coaching call, I decided to work towards working directly in the Effective Animal Advocacy community (as opposed to focusing on building more flexible career capital). So I started an EAA blog and continued to focus on reading into the area and on my volunteering.

A few months later, I have just started working full time as a researcher at Sentience Institute.