Michelle_Hutchinson

Let’s Fund has recently been set up to try to get funding for neglected and speculative projects in effective altruism. They seem to particularly focus on research. It could be worth reaching out to them about whether your project is the kind they’d be interested in fundraising for.

It’s always great to see interesting new projects like this to improve the EA community! There might also be lessons for the project from EA Ventures, which tried to coordinate between speculative EA projects and funders.

I didn’t downvote the comment, but it did seem a little harsh to me. I can easily imagine being forwarded a draft article, reading the text the person forwarding it wrote, then looking at the draft, without reading the text in the email they were originally sent (hence missing text saying the draft was supposed to be confidential). Assuming that Will read the part saying it was confidential seemed uncharitable to me (though it turns out to be correct). That seemed in surprising contrast to the understanding attitude taken to Julia’s mistake.

One thing I find important in conversations, particularly if I’m doing them back to back, is writing down action points (eg people I want to introduce them to) as I go. People sometimes think it’s rude to do this on a phone, so having a notebook with you is probably the best approach.

Something I struggle with is building enough rapport with a person quickly enough that they will feel comfortable pushing back on things, and in particular bringing up more socially awkward considerations (eg ‘I’ve heard that effective altruists don’t think it’s particularly impactful to get a job doing x, but I’ve been working towards that goal for years, and hate the idea of never getting to do it’). I’ve found it pretty useful to watch people who are really good at getting on with others meet new people, and to see what they do that makes people feel quickly at ease. Because I know this is a weak spot of mine, I try after some of my 1-1 conversations to think through whether there was anything in particular that went well or badly on this dimension (I waited a while for them to respond after saying y, rather than bulldozing on...; when I pushed back on z I accidentally got into ‘philosophy debate’ mode rather than friendly discussion mode). I also find it useful to read books that get me to think through these kinds of dynamics: I’ve found ‘The Charisma Myth’ useful enough to have read it a couple of times, and right now I’m reading ‘Never Split the Difference’. (A lot of these kinds of books sound like they’ll be about getting your own way and persuading people into things they don’t want to do, but they actually spend most of their time on how to make sure you properly hear and understand the person you’re talking to, and help them feel at ease.)

This seems to be something that varies a lot by field. In academic jobs (and PhD applications), it’s absolutely standard to ask for references in the first round of applications, and to ask for as many as three. It’s a really useful part of the process, and since academics know that, they don’t begrudge writing references fairly frequently.

Writing frequent references in academia might be a bit easier than when people are applying for other types of jobs: a supervisor can simply have a letter on file for a past student saying how good they are at research, and send that out each time they’re asked for a reference. Another thing which might contribute to academia using references more is its being a very competitive field, where large returns are expected from differentiating between the very best candidate and the next best. As an employer, I’ve found references very helpful. So if we expect EA orgs to have competitive hiring rounds where there are large returns on finding the best candidate, it could be worth our spending more time writing/giving references than is typical.

I find it difficult to gauge how off-putting asking for references early would be for the typical candidate. In my last job application, I gave a number of referees, some of whom were contacted at the time of my trial, and I felt fine about that. But that could be because I’m used to academia, or because my referees were in the EA community, and so I knew they would value the org I was applying to making the right hiring decision, rather than experiencing giving a reference as an undue burden.

I would guess the most important thing in asking for references early is being willing to accept not getting a reference from current employers/colleagues, since if you don’t know whether you have a job offer, you’re often not going to want your current employer to know you’re applying for other jobs.

My impression is that while specifically *IQ* tests in hiring are restricted in the US, many of the standard hiring tests used there (eg Wonderlic, https://www.wonderlic.com/) are basically trying to get at GMA. So I wouldn’t say the outside view was that testing for GMA was bad (though I don’t know what proportion of employers use such tests).

I agree with this take on the comment as it’s literally written. I think there’s a chance that Siebe meant ‘written in bad faith’ as something more like ‘written with less attention to detail than it could have been’, which seems like a very reasonable conclusion to come to.

(I just wanted to add a possibly more charitable interpretation, since otherwise the description of why the comment is unhelpful might seem a little harsh.)

I don’t know how others answered this question, but personally I didn’t answer based on how good I thought the last grants were relative to each other (ie, I wasn’t comparing CfAR/MIRI to Founders Pledge) or in expectation of changeover in grant makers. I was thinking about something like whether I preferred funding over the next 5 years to go to organisations which focused on the far future vs community building, knowing that these might or might not converge. I’d expect over that period a bunch of things to come up that we don’t yet know about (in the same way that BERI did a year or so ago).

There do seem to be some strong arguments in favour of having a cause prioritisation journal. I think there are some reasons against too, though, which you don’t mention:

For work people are happy to do in sufficient detail and depth to publish, there are significant downsides to publishing in a new and unknown journal. It will get much less readership and engagement, as well as generally less prestige. That means that if this journal is pulling in pieces which could have been published elsewhere, it will be decreasing the engagement the ideas get from other academics who might have had lots of useful comments, and will be decreasing the extent to which people in general know about and take the ideas seriously.

For early stage work, getting an article to the point of being publishable in a journal is a large amount of work. Simply from how people understand journal publishing to work, there’s a much higher bar for publishing than there is on a blog. So the benefits of having things look more professional are actually quite expensive.

There’s also the sheer amount of work it takes to set up and run a journal, and to do so well enough that cause prioritisation as a field gains rather than loses credibility from it.

It sounds like AEF is doing a fantastic job of ensuring rigour in its messaging!

But we have to realize that when it comes to animal suffering, as far as I know ACE is the only game in town. In my opinion, this is a precarious state of affairs, and we should do our best to protect criticism of ACE, even when it does not come with the highest level of politeness.

I think in cases where there is little primary research, it’s all the more important to ensure that discourse remains not merely polite, but friendly and kind. Research isn’t easy at the best of times, and the animal space has a number of factors making it harder than others like global poverty (eg historic neglect and the difficulty of understanding experiences unlike our own). In cases like this where people are pushing ahead despite difficulty, it is all the more important to make sure that the work is actively appreciated, and at baseline that people do not end up feeling attacked simply for doing it. Criticisms that are framed badly can easily be worse than nothing, both in leading those working in this area to think that their work isn’t useful and they should leave the area, and in dissuading others from joining the area in the first place.

This makes me all the more grateful to John for being so thoughtful in his feedback: suggesting improvements directly to ACE in the first instance, running a public piece by them before publishing, and highlighting reasons for being optimistic as well as potential problems.

I’m Head of Operations for the Global Priorities Institute (GPI) at Oxford University. OpenPhil is GPI’s largest donor, and Nick Beckstead was the program officer who made that grant decision.

I can’t speak for other universities, but I agree with his assessment that Oxford’s regulations make it much more difficult to use donations to get productivity enhancements than it would be at other non-profits. For example, we would not be able to pay for the child care of our employees directly, nor raise their salary in order for them to be able to pay for more child care (since there is a standard pay scale). I therefore believe that the reason he gave for ruling out university-based grantees is the true reason, and one which is justified in at least some cases.

Thanks for writing a summary of your progress and learnings so far; it’s so useful for the EA community to share its findings.

A few com­ments:

You might consider making the website more targeted. It seems best suited to undergraduate theses, so it would be useful to focus in on that. For example, it might be valuable to increase the focus on learning. During your degree, building career capital is likely to be the most impactful thing you can do. Although things like building connections can be valuable for career capital, learning useful skills and researching deeply into a topic are the expected goals of a thesis, and so are what most university courses give you the best opportunity to do. Choosing a topic which gives you the best opportunity for learning could mean, for example, thinking about which people in your department you can learn the most from (whether because they’re the best researchers, or because they’re likely to be the most conscientious supervisors), and what topic is of interest to them, so that they’ll be enthusiastic to work with you on it.

People in academia tend to be sticklers with regard to writing style, so it could be worth getting someone to copy edit your main pages for typos.

Coming up with a topic to research is often a very personal process that happens when reading around an area. So it could be useful to have a page linking to recommended EA research/reading lists, to give people an idea of where they could start if they want to read around in areas where ideas are likely to be particularly useful. For example, you might link to this list of syllabi and reading lists Pablo compiled.

I agree with you that impact is importantly relative to a particular comparison world, and so you can’t straightforwardly sum different people’s impacts. But my impression is that Joey’s argument is actually that it’s important for us to try to work collectively rather than individually. Consider a case of three people:

Anna and Bob each have $600 to donate, and want to donate as effectively as possible. Anna is deciding between donating to TLYCS and AMF, Bob between GWWC and AMF. Casey is currently not planning to donate, but if introduced to EA by TLYCS and convinced of the efficacy of donating by GWWC, would donate $1000 to AMF.

It might be the case that Anna knows that Bob plans to donate to GWWC, and therefore she’s choosing between causing $600 of impact or $1000. I take Joey’s point not to be that you can’t think of Anna’s impact as being $1000, but that it would be better to concentrate on the collective case than the individual case. Rather than considering what her impact would be holding Bob’s actions fixed ($1000 if she donates to TLYCS, $600 if she gives to AMF), Anna should try to coordinate with Bob and think about their collective impact ($1200 if they give to AMF, $1000 if they give to TLYCS/GWWC).
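To make the arithmetic explicit, here’s a minimal sketch in Python of the two perspectives, using only the dollar figures from the example above (the variable names are mine, purely for illustration):

```python
# Dollar figures from the Anna/Bob/Casey example.
ANNA = 600    # Anna's donation budget
BOB = 600     # Bob's donation budget
CASEY = 1000  # Casey gives $1000 to AMF, but only if reached via both TLYCS and GWWC

# Individual view: Anna holds Bob's donation to GWWC fixed.
anna_via_tlycs = CASEY  # her $600 to TLYCS completes the chain, unlocking Casey's $1000
anna_via_amf = ANNA     # or her $600 goes straight to AMF

# Collective view: Anna and Bob decide together.
both_to_amf = ANNA + BOB  # $1200 reaches AMF directly
both_to_meta = CASEY      # $1200 spent on TLYCS + GWWC only unlocks Casey's $1000

print(anna_via_tlycs, anna_via_amf)  # individually, TLYCS looks better: 1000 vs 600
print(both_to_amf, both_to_meta)     # collectively, direct giving wins: 1200 vs 1000
```

The point the numbers make is that the individually optimal choice (each funding a meta org) produces $1000, while coordinating on direct donations produces $1200.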

Given that, I would add ‘increased co-ordination’ to the list of things that could help with the problem. Given the highlighted fact that often multiple steps by different organisations are required to achieve particular impact, we should be thinking not just about how to optimise each step individually but also about the process overall.

If you want to make something to randomise the text suggestions, you might be able to do it pretty quickly and easily with GuidedTrack.

Personally, I think I would find it more helpful to look at the whole list than to be given a random suggestion from it. If you wanted to give people that option without making it publicly available for free, you could put the list on the private and unsearchable Facebook group EA self help, with a request not to share.