Risto_Uuk

If you’re a thoughtful American interested in developing expertise and technical abilities in the domain of AI policy, then this may be one of your highest-impact options, particularly if you have been to or can get into a top grad school in law, policy, international relations, or machine learning. (If you’re not American, working on AI policy may also be a good option, but some of the best long-term positions in the US won’t be open to you.)

What do you think about a similar type of work within the European Union? Could it potentially be a high-impact career path for those who are not Americans?

This post increased my interest in visiting the Boston area. Unfortunately, I cannot come to EAGx this year, but perhaps another time. I’m quite surprised that you’d have the issue of brain drain, as the area seems to be a very impressive place with top universities, lots of people interested in EA, and even a few great EA-aligned organizations. Do you have other ideas besides a full-time paid community builder for improving that?

Nice idea. I wrote my bio in third person like you did, even though on my website I have it in first person: https://ristouuk.com. Usually, I feel weird about the third-person narrative when I’m the one talking about myself, but it feels right for the forum.

As an application of this model, the Global Priorities Project estimates that research into the neglected tropical diseases with the highest global DALY burden (diarrheal diseases) could be 6x more cost-effective, in terms of DALYs per dollar, than the 80,000 Hours recommended top charities.

What are 80,000 Hours’ recommended top charities? I think you mean some other organization here.

It would be nice if someone updated it regularly and had a note at the top of the page about when it was last updated. For example, according to Julia Wise there were 3855 Giving What We Can members at the beginning of 2019, whereas the number here is outdated at 1800+ members.

Let’s face it: long-termism is not very intuitively compelling to most people when they first hear of it. Not only do you have to think in very consequentialist terms, you also have to be extremely committed to acting and prioritizing on the basis of fairly abstract philosophical arguments. In my view, that’s just not very appealing (sometimes even off-putting) if you’ve never thought in terms of cost-effectiveness or total-view consequentialism before.

I agree. Because of this, the 2nd edition of the EA handbook doesn’t seem appealing at all as an EA introduction. I don’t want to hijack this thread, but along these lines, what do you think about the following content as an introduction to effective altruism?:

MacAskill’s conclusion: “What should you do right now?” and “The five key questions of effective altruism” (8 pages)

Addition: Reflect on the stipend

We are about to run our stipend with this content in mind. Compared to your reading list, I feel that the content we have planned is more beginner-level. What do you think? What seems to be missing in terms of EA basics?

Effectiveness: Ambitious in their altruism, with a drive to do as much good as they can. Potential to be aligned with the central tenets of EA.

Potential: Excited to dedicate their career to doing good or to donate a significant portion of their income to charity

Open-mindedness: Open-minded and flexible, eager to update their beliefs in response to persuasive evidence

Enthusiasm: Willing and able to commit ~3-4 hours per week

Fit: How good a fit are they with the fellowship format? Will they be good in discussions? Will they do good work for the Impact Challenge?”

I appreciate that you explicitly listed all the traits you were looking for in the applicants. We have done that more intuitively, but it’s very useful to make them explicit. These traits align well with my intuitions about what we look for in applicants.

This might be slightly off-topic, but you may have some insight into it. If a donor donates money to, for example, global health, s/he can find pretty concrete numbers about impact based on GiveWell’s estimates or information from specific organizations such as AMF. How can someone donating money to Meta justify those donations quantitatively and via concrete indicators?

2. I’m not sure what kind of references you are supposed to add here. Should they be accessible to everyone, or can books, etc. be included as well? If the latter, then I’d add Daniel Kahneman’s book Thinking, Fast and Slow to the list. The book covers these concepts well (e.g. Kindle version location 4220).

3. To me, it seems that the definitions of “inside view” and “outside view” are not clear enough, whereas the examples are very good. https://www.hybridforecasting.com/ had nice slides about this; however, I’m not able to find their material to share here. Anyway, their definitions and explanations are the following:

Inside view: focus on the unique qualities of the case at hand.

Outside view: connect the case at hand to a reference class and rely on base rate information.

Reference classes refer to similar events from the past.

Base rates are relative frequencies of an outcome given a defined set. For example, the chance of selecting a red card from a deck of cards is 50%.
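To make these definitions concrete: a base rate is just the relative frequency of an outcome within a reference class. A minimal sketch using the deck-of-cards example above (the function name is my own, for illustration only):

```python
def base_rate(reference_class, outcome):
    """Relative frequency of `outcome` among the cases in a reference class."""
    return reference_class.count(outcome) / len(reference_class)

# The deck-of-cards example: 26 red cards in a standard 52-card deck.
deck = ["red"] * 26 + ["black"] * 26
print(base_rate(deck, "red"))  # 0.5
```

The outside view then amounts to picking a reference class of similar past cases and using this frequency as a starting estimate before adjusting for the case at hand.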

You didn’t mention anything about (a) the risk of becoming less altruistic in the future, (b) increasing your motivation to learn more about effective giving by giving now, and (c) supporting the development of a culture of effective giving. How much the giver learns over time isn’t the only consideration. I’m referring to this forum post in listing these other considerations: http://effective-altruism.com/ea/4e/giving_now_vs_later_a_summary/.

I feel that the book contains too much fluff, and even these commandments, though they appear useful, lack the specificity to be actionable. Does anyone have other book recommendations or guidelines for improving one’s forecasting and probabilistic thinking? At the end of the day, it’s important to actually practice forecasting and thinking probabilistically, but specific information on how to do that would be useful. E.g. how do you actually determine 40/60 and 45/55 or even 43/57 probabilities?
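One concrete way to practice is to log your probability estimates and score them once outcomes resolve. A minimal sketch using the Brier score, a standard accuracy measure for probabilistic forecasts (this is my own illustration, not something from the book):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between stated probabilities and 0/1 outcomes.
    Lower is better; always answering 0.5 scores exactly 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Three forecasts at 0.4, 0.6, and 0.57, with actual outcomes 0, 1, 1
print(brier_score([0.4, 0.6, 0.57], [0, 1, 1]))  # ~0.168
```

Tracking this score over many forecasts shows whether moving from 40/60 to 43/57 actually reflects real information, or is just noise.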

If someone can’t apply right now due to other commitments, do you expect there to be new roles for generalist research analysts next year as well? What are the best ways one could make oneself a better candidate in the meantime?

Sam Harris did ask Steven Pinker about AI safety. If anybody gets around to listening to that, it starts at 1:34:30 and ends at 2:04, so that’s about 30 minutes on risks from AI. Harris wasn’t at his best in that discussion, and Pinker came off as much more nuanced and evidence- and reason-based.

Do you offer any recommendations for communicating utilitarian ideas based on Everett’s research or someone else’s?

For example, in Everett’s 2016 paper the following is said:

“When communicating that a consequentialist judgment was made with difficulty, negativity toward agents who made these judgments was reduced. And when a harmful action either did not blatantly violate implicit social contracts, or actually served to honor them, there was no preference for a deontologist over a consequentialist.”