He has called rapidly-evolving AI, with the potential to advance to the point where it spirals out of mere humans' control, the "biggest existential threat" we face, and a "fundamental risk to the existence of civilization."

"In the next five to 10 years, AI is going to deliver so many improvements in the quality of our lives," he said. "AI is already helping us diagnose diseases better, matching drugs with people depending (on) what they're sick (with) so they can get treated better. It's going to help a whole lot of people get treated who wouldn't have had access to it before."

Zuckerberg took issue with the fear-mongering from his fellow Silicon Valley billionaire.

"I don't understand it," he said. "It's really negative and in some ways I think it is pretty irresponsible."

On Twitter today, Musk fired back that that's just the point: Zuckerberg doesn't understand.

"I've talked to Mark about this," he tweeted. "His understanding of the subject is limited."

Zuckerberg isn't the only tech expert to frame Musk's views as overly alarmist. But the co-founder of OpenAI has certainly done some hard thinking about artificial intelligence. With that research company, launched with a $1 billion endowment in 2015, Musk and other engineers aim to develop "safe" artificial intelligence by maintaining influence over the conditions under which it is created.

"Because of AI’s surprising history, it’s hard to predict when human-level AI might come within reach," according to OpenAI's website. "When it does, it’ll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest."

And it's not like Musk is shy about exploring the delicate line between the organic and the mechanical. Earlier this year he launched another company, Neuralink, to develop so-called "neural lace" technology that could enable the human brain to communicate directly with computers.

It was founded as a medical company, but Musk has said such vastly augmented mental processing power could also help prevent humankind from being reduced to "a pet or a house cat" for some future super-intelligent AI.

Whether AI evolves in such a dire direction or turns out to be a net positive for humanity remains an open question. But in the near term, it's proliferating across hospitals and health systems in ways that could hardly have been predicted just a few years ago.

As it does, there are ethical questions that need to be kept in mind as the technology fundamentally changes the way care is delivered.

As healthcare workers are "displaced from their current roles by automation (and) retrained and reskilled to perform new ones, redirecting a significant section of that talent to operate and manage the ethics charge will prove beneficial," according to an Infosys report released in May.