You know a topic is trending when the likes of Tesla’s Elon Musk and Facebook’s Mark Zuckerberg publicly bicker about its potential risks and rewards. In this case, Musk says he fears artificial intelligence will lead to World War III because nations will compete for A.I. superiority. Zuckerberg, meanwhile, has called such doomsday scenarios “irresponsible” and says he is optimistic about A.I.

But another tech visionary sees the future as more nuanced. Ray Kurzweil, an author and director of engineering at Google, thinks, in the long run, that A.I. will do far more good than harm. Despite some potential downsides, he welcomes the day that computers surpass human intelligence—a tipping point otherwise known as “the singularity.” That’s partly why, in 2008, he cofounded the aptly named Singularity University, an institute that focuses on world-changing technologies. We caught up with the longtime futurist to get his take on the A.I. debate and, well, to ask what the future holds for us all.

Fortune: Has the rate of change in technology been in line with your predictions?

Kurzweil: Many futurists borrow from the imagination of science-fiction writers, but they don’t have a really good methodology for predicting when things will happen. Early on, I realized that timing is important to everything, from stock investing to romance—you’ve got to be in the right place at the right time. And so I started studying technology trends. If you Google how my predictions have fared, you’ll get a 150-page paper analyzing 147 predictions I made in the late ’90s about the year 2009—86% were correct, 78% were exactly to the year.

What’s one prediction that didn’t come to fruition?

That we’d have self-driving cars by 2009. It’s not completely wrong. There actually were some self-driving cars back then, but they were very experimental.

Why are we so bad at predicting certain things? For example, Donald Trump winning the presidency?

He’s not technology.

Have you tried to build models for predicting politics or world events?

The power and influence of governments are decreasing because of the tremendous power of social networks and economic trends. There’s some problem in the pension funds in Spain, and the whole world feels it. I think these kinds of trends affect us much more than the decisions made in Washington and other capitals. That’s not to say they’re not important, but they actually have no impact on the basic trends I’m talking about. Things that happened in the 20th century like World War I, World War II, the Cold War, and the Great Depression had no effect on these very smooth trajectories for technology.

What do you think about the current debate about artificial intelligence? Elon Musk has said it poses an existential threat to humanity.

Technology has always been a double-edged sword, since fire kept us warm but burned down our houses. It’s very clear that overall human life has gotten better, although technology amplifies both our creative and destructive impulses. A lot of people think things are getting worse, partly because that’s actually an evolutionary adaptation: It’s very important for your survival to be sensitive to bad news. A little rustling in the leaves may be a predator, and you better pay attention to that. All of these technologies are a risk. And the powerful ones—biotechnology, nanotechnology, and A.I.—are potentially existential risks. I think if you look at history, though, we’re being helped more than we’re being hurt.