Elon Musk, the CEO of Tesla and SpaceX, has donated $10 million to fund research into artificial intelligence safety. Musk has said that AI is “potentially more dangerous than nukes” and that a scenario like the one depicted in the movie The Terminator is plausible.

“It’s best to try to prevent a negative circumstance from occurring than to wait for it to occur and then be reactive,” Musk says. “This is a case where the range of negative outcomes, some of them are quite severe. It’s not clear whether we’d be able to recover from some of these negative outcomes. In fact, you can construct scenarios where recovery of human civilization does not occur. When the risk is that severe, it seems like you should be proactive and not reactive.”

The money will be distributed to researchers through grant competitions, with the application process beginning on Monday. Overseeing the distribution is the Future of Life Institute, which describes itself as an organization “working to mitigate existential risks facing humanity.” The institute doesn’t care whether researchers are in academia or at a company — it just wants the money to reach people with what it considers good ideas. “The plan is to award the majority of the grant funds to AI researchers,” the institute explains, “and the remainder to AI-related research involving other fields such as economics, law, ethics, and policy.”

Musk was among the high-profile signatories of an open letter from FLI earlier this week urging scientists to make AI not only more capable, but more beneficial. This is exactly what his money is going toward. While FLI hasn’t specified what it wants to see come out of the research, the letter did include a long list of guidelines and priorities for researchers. Some of these ideas may well filter into the proposals it ultimately accepts and funds with Musk’s donation.

FLI’s suggested research priorities include optimizing AI’s economic impact so that it avoids destroying jobs in a way that increases income inequality, determining how AI should handle ethical questions like those surrounding autonomous vehicle collisions, and guaranteeing human control over something like a weapons system.