What natural disaster is most likely to kill more than 10 million human beings in the next 20 years?

Terrorism? Famine? An asteroid?

Actually, it’s probably a pandemic: a deadly new disease that spreads out of control. We’ve recently seen the risks with Ebola and swine flu, but they pale in comparison to the Spanish flu, which killed 3% of the world’s population between 1918 and 1920. If a pandemic on that scale happened again today, around 200 million people would die.

Looking back further, the Black Death killed 30 to 60% of Europe’s population – a share that, applied to today’s global population, would come to two to four billion people.
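As a rough sanity check, here’s the back-of-the-envelope arithmetic behind those figures, assuming a world population of roughly 7 billion (a round-number assumption, not a figure stated above):

```latex
% A minimal back-of-the-envelope check, assuming a world population of
% roughly 7 billion (an assumption; the exact figure is not given in the text).
\[
  0.03 \times 7\times10^{9} \approx 2.1\times10^{8}
  \quad\text{deaths in a Spanish-flu-scale pandemic today}
\]
\[
  0.3 \times 7\times10^{9} \approx 2.1\times10^{9},
  \qquad
  0.6 \times 7\times10^{9} \approx 4.2\times10^{9}
  \quad\text{deaths in a Black-Death-scale pandemic}
\]
```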

The world is woefully unprepared to deal with new diseases. Many countries have weak or non-existent health services. Diseases can spread worldwide in days due to air travel. And international efforts to limit the spread of new diseases are slow, if they happen at all.

Even more worryingly, scientific advances are making it easier to create diseases much worse than anything nature could throw at us – whether by accident or deliberately.

In this in-depth interview I speak to Howie Lempel, who spent years studying pandemic preparedness for the Open Philanthropy Project. We spend the first 20 minutes covering his work as a foundation grant-maker, then discuss how bad the pandemic problem is, why it’s probably getting worse, and what can be done about it. In the second half of the interview we go through what you personally could study and where you could work to tackle one of the worst threats facing humanity.

Pictured: OpenAI’s Universe, a software platform for training AIs to play computer games.

Just two years ago OpenAI didn’t exist. It’s now one of the most elite machine learning research groups in the world. They’re trying to make an AI that’s smarter than humans, and they have $1 billion at their disposal.

Even stranger for a Silicon Valley start-up, it’s not a business but a non-profit, founded by Elon Musk and Sam Altman, among others, to ensure the benefits of AI are distributed broadly across society.

I did a long interview with one of its first machine learning researchers, Dr Dario Amodei, to learn about:

OpenAI’s latest plans and research progress.

His paper Concrete Problems in AI Safety, which outlines five specific ways machine learning algorithms can act dangerously without their designers intending it – failure modes OpenAI has to work to avoid.

How listeners can best go about pursuing a career in machine learning and AI development themselves.

We suggest subscribing, so you can listen at leisure on your phone, speed up the conversation if you like, and get notified about future episodes. You can subscribe by searching ‘80,000 Hours’ wherever you get your podcasts (RSS, SoundCloud, iTunes, Stitcher).

If a smarter-than-human AI system were developed, who would decide when it was safe to deploy? How can we discourage organisations from deploying such a technology prematurely to avoid being beaten to the post by a competitor? Should we expect the world’s top militaries to try to use AI systems for strategic advantage – and if so, do we need an international treaty to prevent an arms race?

Questions like these are the domain of AI policy experts.

We recently launched a detailed guide to pursuing careers in AI policy and strategy, put together by Miles Brundage at the University of Oxford’s Future of Humanity Institute.

I interviewed Miles to ask remaining questions I had after he finished his career guide. We discuss the main career paths; what to study; where to apply; how to get started; what topics are most in need of research; and what progress has been made in the field so far.

Many experts believe that there is a significant chance we’ll create artificially intelligent machines with abilities surpassing those of humans – superintelligence – sometime during this century. These advances could lead to extremely positive developments, but could also pose risks due to catastrophic accidents or misuse. The people working on this problem aim to maximise the chance of a positive outcome, while reducing the chance of catastrophe.

Work on the risks posed by superintelligent machines seems mostly neglected, with total funding for this research well under $10 million a year.

The main opportunity to deal with the problem is to conduct research in philosophy, computer science and mathematics aimed at keeping an AI’s actions and goals in alignment with human intentions, even if it were much more intelligent than us.

In the profile we cover:

The main reasons for and against thinking that the future risks posed by artificial intelligence are a highly pressing problem to work on.

How to use your career to reduce the risks posed by artificial intelligence.

Natural pandemics and new scientifically engineered pathogens could potentially kill millions or even billions of people. Moreover, future progress in synthetic biology is likely to increase the risk and severity of pandemics from engineered pathogens.

But there are promising paths to reducing these risks through regulating potentially dangerous research, improving early detection systems and developing better international emergency response plans.

In the profile we cover:

The main reasons for and against thinking that biosecurity is a highly pressing problem.

And that’s really worrying. This confidence interval suggests the author puts significant probability on human-level artificial intelligence (HLAI) occurring within 20 years. A survey of the 100 most-cited AI scientists also gave a 10% chance of HLAI being created within ten years (that was the median estimate; the mean estimate put a 10% probability on the next 20 years).

This is like being told there’s a 10% chance aliens will arrive on Earth within the next 20 years.

Making sure this transition goes well could be the most important priority for the human race in the next century. (To read more, see Nick Bostrom’s book, Superintelligence, and this popular introduction by Wait But Why).

We issued a note about AI risk just over a year ago when Bostrom’s book was released. Since then, the field has heated up dramatically.

In January 2014, Google bought DeepMind for $400m. This triggered a wave of investment into companies focused on building human-level AI. A new AI company seems to launch every week.

This line of argument doesn’t apply so much to preventing the use of nuclear weapons, mitigating climate change, or containing disease pandemics – our potential to act on these today is about the same as it will be in the future.

Have you read the profile and think you want to contribute to artificial intelligence risk research? Fill out this form and we’ll see if we can help.

Many people we coach are interested in doing research into artificial intelligence (AI), in particular how to lower the risk that superintelligent machines do harmful things not intended by their creators – a field usually referred to as ‘AI risk research’. The reasons people believe this is a particularly pressing area of research are outlined in sources such as:

Our goal with this career review was not to assess the cause area of AI risk research – on that we defer to the authors above. Rather we wanted to present some concrete guidance for the growing number of people who want to work on the problem.

We spoke to the leaders in the field, including top academics, the head of MIRI and managers in AI companies, and the key findings are:

The Centre for the Study of Existential Risk (CSER) is hiring for postdoctoral researchers. Existential risk reduction is a high-priority area according to the analyses of the Global Priorities Project and GiveWell. Moreover, CSER report that they have had a successful year in grant-writing and fundraising, so the availability of research talent could become a significant constraint over the coming months. Here is Sean’s announcement:

The Centre for the Study of Existential Risk (University of Cambridge; http://cser.org) is recruiting for postdoctoral researchers to work on the study of extreme risks arising from technological advances. We have several specific projects we are recruiting for: responsible innovation in transformative technologies; horizon-scanning and foresight; ethics and evaluation of extreme technological risks; and policy and governance challenges associated with emerging technologies.

However, we also have the flexibility to hire one or more postdoctoral researchers to work on additional projects relevant to CSER’s broad aims, which include impacts and safety in artificial intelligence and synthetic biology, biosecurity, extreme-tail climate change, geoengineering, and catastrophic biodiversity loss. We welcome proposals from a range of fields. The study of technological x-risk is a young interdisciplinary subfield, still taking shape. We’re looking for brilliant and committed people to help us design it. Deadline: April 24th. Details here, with more information on our website.

If you’ve read the book and are interested in how you can contribute to this cause, we’d like to hear from you. There are pressing needs developing in the field for researchers, project managers, and funding. We can help you work out where you can best contribute, and introduce you to the right people.

If you’re interested, please email ben at 80000hours.org, or apply for our coaching.

Introduction

Continuing our investigation into medical research careers, we interviewed Prof. Andrew McMichael. Andrew is Director of the Weatherall Institute of Molecular Medicine in Oxford, and focuses especially on two areas of special interest to us: HIV and flu vaccines.

Key points made

Andrew would recommend starting in medicine for the increased security, better earnings, broader perspective and greater set of opportunities at the end. The main cost is that it takes about 5 years longer.

In the medicine career track, you qualify as a doctor in 5-6 years, then you work as a junior doctor for 3-5 years, while starting a PhD. During this time, you start to move towards a promising speciality, where you build your career.

In the biology career track, get a good undergraduate degree, then do a PhD. It’s very important to join a top lab and publish early in your career. Then you can start to move towards an interesting area.

The end of your PhD is a good time to reassess. It’s a competitive career, and if you’re not headed towards the top, be prepared to do something else. Public health is a common backup option, and one where you can make a significant contribution; if you’ve studied medicine, that route is open to you. People sometimes get stranded mid-career, and that can be tough.

An outstanding post-doc applicant has a great reference from their PhD supervisor, is good at statistics/maths/programming, and has published in a top journal.

If you qualify in medicine in the UK, you can earn as much as ordinary doctors while doing your research, though you’ll miss out on private practice. In the US, you’ll earn less.

Some exciting areas right now include stem cell research, neuroscience, psychiatry and the HIV vaccine.

To increase your impact, work on good quality basic science, but keep an eye out for applications.

Programming, mathematics and statistics are all valuable skills. Shortages of other skills arise as new technologies are introduced.

Good researchers can normally get funded, and Andrew would probably prefer a good researcher to a half-million-pound grant, though he wasn’t sure.

He doesn’t think that bad methodology or publication bias is a significant problem in basic science, though it might be in clinical trials.

In this post, we apply this method to identify a list of causes that we think represent some particularly promising opportunities for having a social impact in your career (though there are many others we don’t cover!).

We’d like to emphasise that these are just informed guesses over which there’s disagreement. We don’t expect the results to be highly robust. However, you have to choose something to work on, so we think it’ll be useful to share our guesses to give you ideas and so we can get feedback on our reasoning – we’ve certainly had lots of requests to do so. In the future, we’d like more people to independently apply the methodology to a wider range of causes and do more research into the biggest uncertainties.

The following is intended to be a list of some of the most effective causes in general to work on, based on broad human values. Which cause is most effective for an individual to work on also depends on what resources they have (money, skills, experience), their comparative advantages and how motivated they are. This list is just intended as a starting point, which needs to be combined with individual considerations. An individual’s list may also differ due to differences in values. After we present the list, we go over some of the key assumptions we made and how these assumptions affect the rankings.

We intend to update the list significantly over time as more research is done into these issues. Fortunately, more and more cause prioritisation research is being done, so we’re optimistic our answers will become more solid over the next couple of years. This also means we think it’s highly important to stay flexible, build career capital, and keep your options open.

If you’re looking to spend or influence large budgets with the aim of improving the world (or happen to be extremely wealthy!), we recommend taking a look. It also contains brief arguments in favour of five causes.

Introduction

In an earlier post we reviewed the arguments in favour of the idea that we should primarily assess causes in terms of whether they help build a society that’s likely to survive and flourish in the very long term. We think this is a plausible position, but it raises the question: which activities actually help improve the world over the very long term, and of those, which are best? We’ve been asked this question several times in recent case studies.

First, we propose a very broad categorisation of how our actions today might affect the long-run future.

Second, as a first step to prioritising different methods, we compiled a list of approaches to improve the long-run future that are currently popular among the community of people who explicitly believe the long-run future is important.

The list was compiled from our knowledge of the community. Please let us know if you think there are other important types of approach that have been neglected. Further, note that this post is not meant as an endorsement of any particular approach; just an acknowledgement that it has significant support.

Third, we comment on how existing mainstream philanthropy may or may not influence the far future.

At 80,000 Hours, we think it’s really important to find the causes in which you can make the most difference. One important consideration in evaluating causes is how much we should care about their impact on future generations. Important new research by Nick Beckstead, a trustee of CEA (our parent charity), argues that the impact on the long-term direction of future civilisation is likely to be the most important consideration in working out the importance of a cause.

