Elon Musk notably sounded the alarm about potentially catastrophic artificial intelligence. As concern grows, Dustin Moskovitz and Cari Tuna’s funding outfit is also paying attention, with several recent grants focused on the risks of AI, including one for $5.5 million.

Artificial intelligence research is booming, and the field has drawn a ton of private funds as a result, especially surrounding the race to bring autonomous vehicles to market. But a smaller research priority involves heading off the enormous risks at stake as humanity creeps toward advanced artificial intelligence.

To paraphrase the great Dr. Ian Malcolm, more research is being devoted to whether we could than to whether we should. As it looks increasingly likely that humans could develop machine intelligence that outperforms our own, how do we make sure it doesn’t pose a serious threat to humanity?

The latest funder to make this a chief concern is the Open Philanthropy Project, anchored by the wealth of Dustin Moskovitz and Cari Tuna, which this year bumped artificial intelligence risk up to near the top of its priority list. This has led to its biggest grant to the field yet, $5.5 million toward the launch of the Center for Human-Compatible Artificial Intelligence, led by UC Berkeley professor and AI pioneer Stuart Russell.

The center brings together other leading AI researchers from Berkeley, Cornell, and the University of Michigan to explore how we can ensure that the behavior of AI systems will align with human values. Other funders of the center are Elon Musk grantee Future of Life Institute and the UK-based Leverhulme Trust.

The Open Philanthropy Project is kind of a confusing entity, reminding us that transparency doesn’t always translate to clarity. It’s a joint project of Good Ventures and GiveWell—the former is the philanthropy of Moskovitz and Tuna, the latter a nonprofit that evaluates causes for donors and has been sort of a standard-bearer for the effective altruism movement in philanthropy.

The two entities had been working closely for some time, with GiveWell pointing Good Ventures toward funding opportunities, and in 2014 they made it official by naming their joint work the Open Philanthropy Project. One goal is to create a platform that engages other funders in learning and grantmaking. As we've reported, Instagram co-founder Mike Krieger and his wife Kaitlyn are also involved with OPP.

Anyway, not long ago, GiveWell co-founder Holden Karnofsky decided he was fully on board with the issue of AI risk, and the Open Philanthropy Project has given around $7.5 million total to the issue to date. The broader context is OPP's ongoing exploration of global catastrophic risks. It describes that work this way: "Governments and corporations aren’t necessarily incentivized to focus on preparing for potentially globally disruptive events, so we’re seeking opportunities to help civilization become more robust." Biosecurity and pandemic preparedness is the other top issue on OPP's radar.

Karnofsky, like a lot of people, was skeptical at first when it came to viewing AI as a catastrophic threat that should be considered alongside biological pandemics. It had been something of an obsession among the effective altruism community, which is made up of a lot of young tech guys, but gained more mainstream attention when high-profile people like Elon Musk and Stephen Hawking expressed their deep concerns that AI might one day be the end of humanity.

Musk gave $10 million to the Future of Life Institute to research what he’s called humanity’s “biggest existential threat.” The institute boasts many highly respected researchers and has been building critical mass around this issue, hosting events and releasing open letters about risks surrounding artificial intelligence, including one warning against autonomous weapons that has garnered more than 20,000 signatories to date.

The Open Philanthropy Project has also funded the Future of Life Institute, a relationship that helped win GiveWell’s Karnofsky over to the cause and, in turn, prompted this latest grant to create something of a research foothold on the topic.

The new center builds on the research interests of Stuart Russell, who co-wrote the definitive textbook on AI and has been a vocal advocate for making sure the technology is designed to be beneficial. Russell’s concern is that, if machines reach a certain highly advanced level of intelligence, we need to be certain their objectives are aligned with human values. One philosophical example he’s cited imagines telling a machine to make some paperclips, only to have it turn the entire planet into a junkyard of paperclips. A less dramatic example might be telling a machine to clean the bathroom, only to have it use a white dress to do so, not realizing the value of the dress is higher than the cleanliness of the bathroom. Russell suggests designing AI to learn human values by observing our behavior.

Whether you’re an effective altruist or not, there’s a solid argument that this is a worthy opening for private philanthropy. It’s a risky, still-emerging area of research, and one that doesn’t have a lot of other money lined up behind it. It’s also a field that’s evolving very rapidly, and could benefit from philanthropy’s potential flexibility relative to government funding. And even though industry is putting a lot of funds behind AI, profit motives make companies far more likely to race heedlessly to the finish, corporations not being known as champions of caution.

There’s also the fact that, even when supplied by easy targets like tech billionaires, high-profile funding will chip away at the “Terminator Factor,” a term I just made up that describes how everyone who talks about this issue inevitably makes a dumb joke about The Terminator. See, even I just did it. It’s a challenge to get people to take this stuff seriously, and funding can help move it out of the margins.