At 80,000 Hours we think a significant number of people should build expertise to work on United States (US) policy relevant to the long-term effects of the development and use of artificial intelligence (AI).

In this article we go into more detail on this claim and discuss arguments for and against it. We also briefly outline which specific career paths to aim for, and discuss which sorts of people we think might suit these roles best.

This article is based on multiple conversations with three senior US Government officials, three federal employees working on science and technology issues, three congressional staffers, and several other people who have served as advisors to government from within academia and nonprofits. We also spoke with several research scientists at top AI labs and in academia, as well as relevant experts from foundations and nonprofits.

We have hired Niel Bowerman as our in-house specialist on AI policy careers. If you are a US citizen interested in pursuing a career in AI public policy, please let us know and Niel may be able to work with you to help you enter this career path.

Summary

The US Government is likely to be a key actor in how advanced AI is developed and used in society, whether directly or indirectly.

One of the main ways that AI might fail to yield substantial benefits to society is if there is a race to the bottom on AI safety. Governments are likely to be key actors that could either contribute to an environment leading to such a race, or actively prevent one.

Good scenarios seem more likely if there are more thoughtful people working in government who have expertise in AI development and are concerned about its effects on society over the long term.

This is a high-risk, high-reward career option, and there is a chance that pursuing this career path will result in little social impact over your career. However, we think there are scenarios in which this work is remarkably important, and so the overall value of work on AI policy seems high.

We think there is room for hundreds of people to build expertise and career capital in roles that may one day allow them to work on the most relevant areas of AI policy.

If you’re a thoughtful American interested in developing expertise and technical abilities in the domain of AI policy, then this may be one of your highest-impact options, particularly if you have been to, or can get into, a top grad school in law, policy, international relations, or machine learning. (If you’re not American, working on AI policy may also be a good option, but some of the best long-term positions in the US won’t be open to you.)

What do you think about similar work within the European Union? Could it potentially be a high-impact career path for those who are not American?

I think working on AI policy in an EU context is also likely to be valuable; however, few (if any) of the world’s very top AI companies are based in the EU (except DeepMind, which will soon be outside the EU after Brexit). Nonetheless, I think it would be very helpful to have more AI policy expertise within an EU context, and if you can contribute to that it could be very valuable. It’s worth mentioning that for UK citizens it might be better to focus on British AI policy.