Something that bothers me about most contemporary discussion of the risks of warfare is how lacking in nuance it tends to be. It polarizes between the two extremes of “no war” versus “complete anomie where everybody dies”, and in my experience ANY drastic polarization of that sort is a gross oversimplification that isn’t very helpful - especially when discussing risk assessment, strategies, etc. Why do people assume that world war is the default, and that war between just a few countries is unlikely or even impossible?

As a whole, that sort of discussion overlaps very little with what I would think of as political or military goals. Admittedly, I am not any kind of expert - but who is? It puzzles me how many people seem drawn to this kind of vaguely catastrophic thinking. Whatever the real risks of any given technology or military problem may be, fear and worry are about the closest thing there is to a guarantee of making bad decisions.

I find Elon Musk’s persistent calls to regulate AI very suspicious: in his position, he must know that the AI we currently use has nothing in common with what he keeps brandishing as a danger to humanity.
It is all based on a misunderstanding: by a loose enough definition, anything with an “if… then…”, a neural network, or a Markov chain could count as AI. But these techniques are nothing new; the applications built on them are rather dumb and easy to fool, and they will never evolve into a self-conscious, autonomous entity.
However, Elon Musk and some of his pals also know that these old technologies are very efficient when it comes to selling us things we didn’t know we wanted…
…unless regulation prevents them from using our data to manipulate us.

Colossus: The Forbin Project (a.k.a. The Forbin Project) is a 1970 American science fiction thriller film from Universal Pictures, produced by Stanley Chase, directed by Joseph Sargent, that stars Eric Braeden, Susan Clark, Gordon Pinsent, and William Schallert. The film is based upon the 1966 science fiction novel Colossus, by Dennis Feltham Jones (as D. F. Jones), about an advanced American defense system, named Colossus, becoming sentient to everyone's pleasant surprise at first.

You just know Elon Musk is the sort of person who never gets tired of his parents and aunties making him retell the story about that one precocious thing he said or did when he was three. “Elon, tell your uncle again about that time you said AI would kill us all! It was the most precious thing!”

If people keep asking him (or Stephen Hawking) for his meaningless opinion on this subject, he will keep giving it, but it won’t become a useful insight through repetition.

Why does Elon Musk think people are going to put AI in charge of our nuclear missiles without human oversight? The idea of giving a computer full control over any important system has freaked people out since the 1960s, at least - to the point that there are three or four different Star Trek episodes, in the original series alone, about why computers shouldn’t be in charge of stuff.

I can see someone putting AI in charge of strategic planning - and even that’s probably not smart at this point - but who the hell does he honestly think is going to let the AI push the button?

If he’s worried the AI will advise a nuclear strike because it’s the most probable path to victory… well, that’s nothing a human hasn’t already advised in the past, so it’s not much different from the current state of affairs. The main difference is that, if you tell the AI what the acceptable loss rate and the probability of retaliation are, it’s probably less likely to make that suggestion, because it’s better at math and doesn’t have any irrational urge to blow people up.