But according to at least one report, and some experts, it doesn’t have to be that way. ICIT – the Institute for Critical Infrastructure Technology – contends in a recent whitepaper that the power of artificial intelligence and machine learning (AI/ML) can “crush the health sector’s ransomware pandemic.”

Which, on its face, might sound like a bit of an oversell, when the mantra in cybersecurity is that there is no such thing as a silver bullet.

James Scott, ICIT senior fellow and author of the report, agrees that AI/ML alone will not make any organization bulletproof. Organizations must "effectively implement fundamental layered cybersecurity defenses and promote cyber-hygiene among personnel," he said.

But, he said, the use of AI/ML can definitely solve the low-hanging-fruit problem. "They will no longer be an attractive target for unsophisticated ransomware and malware threat actors," he said, "so adversaries will dedicate their resources to attacking easier targets – likely in other sectors – that do not have algorithmic defense solutions."

Rob Bathurst, managing director for worldwide health care and life science at Cylance, and an ICIT fellow, agrees that AI/ML are not a silver bullet. “But they are a much better bullet,” he said.

He said they are a major improvement over Security Information and Event Management (SIEM) solutions that, in the words of the report, “are plagued by data overload, false positives, and false negatives.”

It is obvious that the healthcare sector needs better security. One of the reasons it is such a popular target is that, as the report notes, the victims are more likely to pay, since, “every second a critical system remains inaccessible risks the lives of patients and the reputation of the institution. Hospitals whose patients suffer as a result of deficiencies in their cyber-hygiene are subject to immense fines and lawsuits.”

Also, for security solutions to be attractive to healthcare organizations, they have to be both non-intrusive and affordable.

As has been widely reported, healthcare workers are notorious for skirting security protocols because of "friction" – the protocols slow down or inhibit their ability to respond quickly to patient needs.

And, when a hospital or clinic is on a tight budget, security is a lower investment priority than patient care.

That, said Don McLean, chief technologist at DLT and an ICIT fellow, is both understandable and appropriate. “If a hospital administrator has limited funds, and needs to choose a new DLP system to protect data or a new defibrillator to rescue dying patients, they’ll pick the latter every time – and they should,” he said.

Given that, what are the chances that AI/ML will become common enough in the health sector to reverse, if not “crush” the ransomware trend?

On the “non-intrusive” front, it gets high marks. “One of the selling points of AI/ML is that it is not intrusive and works in the background,” said Mike Davis, CTO of CounterTack.

But Davis said it isn’t cheap, and when it comes to the bottom line, administrators may conclude that it is cheaper to pay ransoms than to pay AI/ML vendors.

AI/ML can be three times the cost of anti-virus solutions, he said, “and healthcare organizations are already fighting for every budget dollar they have.

“If the average cost of a ransomware attack is $300 – which was reported by the ICIT in 2016 – why would I spend tens of thousands of dollars more per year to prevent that risk? I’d need 30 or 40 successful attacks before the cost makes sense.”
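Davis's back-of-the-envelope math can be sketched in a few lines. The $300 average ransom figure is the one he cites from ICIT; the annual AI/ML tooling cost below is an illustrative assumption, since he gives only "tens of thousands of dollars."

```python
# Break-even sketch for Davis's argument. The $300 average ransom is
# cited in the article; the annual AI/ML cost is an assumed figure.

avg_ransom = 300            # reported average cost of a ransomware attack (ICIT, 2016)
annual_ai_ml_cost = 10_000  # hypothetical extra yearly spend on an AI/ML solution

# Successful attacks per year needed before prevention pays for itself
break_even_attacks = annual_ai_ml_cost / avg_ransom
print(f"Break-even: about {break_even_attacks:.0f} successful attacks per year")
```

At an assumed $10,000 per year, the break-even point lands at roughly 33 attacks – consistent with Davis's "30 or 40" estimate.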

An even more significant barrier, however, is simply that nothing – not even AI/ML – is a "set it and forget it" security solution. It takes time both to configure it and to maintain it.

Experts, including advocates like Scott, agree that it is a component of a “layered” security posture.

Matt Mellen, security architect, health care, at Palo Alto Networks, said AI and ML are "proving to be very effective at one of the hardest things to get right in security – to identify what's normal versus malicious."

But he, like others, adds the caveat that, “no single capability, like AI or ML, is going to be able to stop all attacks. Hence, it’s important to carefully employ multiple advanced prevention capabilities.”
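The "normal versus malicious" distinction Mellen describes is, at its core, anomaly detection: learn a statistical baseline from normal activity, then flag observations that deviate sharply from it. A minimal sketch of the idea follows, using toy data and a simple standard-deviation test – commercial products use far richer features and models.

```python
import statistics

def build_baseline(samples):
    """Learn a simple baseline (mean and standard deviation) from normal activity."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag an observation that deviates more than `threshold` std devs from normal."""
    mean, stdev = baseline
    return abs(value - mean) > threshold * stdev

# Toy example: files modified per minute on a workstation during normal use
normal_activity = [2, 3, 1, 4, 2, 3, 2, 1, 3, 2]
baseline = build_baseline(normal_activity)

print(is_anomalous(3, baseline))    # typical activity -> False
print(is_anomalous(500, baseline))  # ransomware-style encryption burst -> True
```

A ransomware infection tends to produce exactly this kind of burst – thousands of file modifications in minutes – which is why baselining normal behavior can catch it without a signature.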

Davis has the same warning. “Using AI/ML technologies raises the bar but it does not eliminate the risk. There is a lot more a company has to do to really address the risks discussed in the report,” he said.

“Attackers can simply move to different techniques – for example non-malware attacks that do not use binaries but scripts or macros – which are much harder to train/learn from an AI/ML perspective. Any preventative technology that relies on the classification of good or bad is always susceptible to the arms race,” he said.

Reza Chapman, managing director of cybersecurity in Accenture’s health practice, said maintaining the effectiveness of AI/ML can require significant maintenance. “Detection thresholds need to be adjusted to reach a balance between false alarm rate and missed detection rate,” he said.

“Further, constant tuning is often necessary within the specific operation environment. Overall, this is not a reason to steer away from these technologies. Instead, consider AI and ML as complementary to the personnel in your security program.”
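The threshold tuning Chapman describes is a genuine tradeoff: lowering the detection threshold catches more malicious activity but raises the false alarm rate, while raising it does the reverse. A toy illustration with made-up anomaly scores:

```python
# Illustrating Chapman's tradeoff with invented anomaly scores (0 = benign-
# looking, 1 = clearly malicious). Events scoring at or above the threshold
# are flagged.

benign_scores = [0.1, 0.3, 0.2, 0.6, 0.4]   # scores assigned to benign events
malicious_scores = [0.5, 0.8, 0.9, 0.7]     # scores assigned to malicious events

def rates(threshold):
    """Return (false alarm rate, missed detection rate) at a given threshold."""
    false_alarms = sum(s >= threshold for s in benign_scores) / len(benign_scores)
    missed = sum(s < threshold for s in malicious_scores) / len(malicious_scores)
    return false_alarms, missed

for t in (0.3, 0.5, 0.7):
    fa, miss = rates(t)
    print(f"threshold={t}: false alarm rate={fa:.2f}, missed detection rate={miss:.2f}")
```

Running this shows the tension directly: at a low threshold every attack is caught but most benign events trigger alerts, while at a high threshold alerts go quiet and attacks slip through. Finding the workable middle – and re-finding it as the environment changes – is the "constant tuning" Chapman means.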

And Chapman said he doubts AI/ML will discourage ransomware attackers. While those technologies will certainly raise a barrier, “the payoffs of mounting a ransomware or other malicious campaign are high, and attackers will likely continue to evolve,” he said. “Further, attackers are likely to employ AI and ML techniques for their own efforts.”

Perry Carpenter, chief evangelist and strategy officer at KnowBe4, agreed that, “like any technology, the devil is in the details. These systems need to be implemented, baselined, tuned, and proven effective in an ongoing manner.”

And he added another caveat – that while AI/ML are promising technologies, both for detection of threats and in being “self-healing and self-protecting,” they can still be undermined by negligent humans.

While they can adapt to "nuances of human behavior, it would be a mistake to believe that they can fully account for the unpredictability of humans," he said.

Beyond all that, if a healthcare organization decides to implement an AI/ML solution, that takes some advance due diligence as well.

Scott said that while there are hundreds of companies offering such solutions, "many of these organizations are faux experts and snake-oil salesmen that are using AI and ML as buzzwords and whose products lack any substance.

“A conservative guess would be that at the moment, there are less than a dozen actual, reputable vendors,” he said.

Davis is skeptical in the other direction – he said vendors that exclusively offer AI/ML may be charging a premium for a product that is not necessarily superior. “Many [antivirus] vendors have already moved to AI/ML models, and usually for a price much cheaper than the new ‘ML/AI only’ vendors,” he said.

So, the best advice is to ask around. “Ask for a demonstration,” Scott said. “Seek input from the product’s clients, and examine what technology the solution actually employs, how it deploys that technology, and whether it can deliver on its promised results.”

Finally, Scott acknowledges that attackers will eventually adapt to any new defense, but said he believes it will be five to 10 years before that happens. Meanwhile, “algorithmic solutions are adaptable, so they constantly learn and can be updated and retooled to respond to emerging threats,” he said.

“AI and ML will not become obsolete – they will be the foundation for all future defense-grade cybersecurity solutions.”

That, of course, assumes they are implemented. As Chapman noted, “one thing is clear from the history of many healthcare organizations: Top-tier innovation is focused on patient care, operational efficiency and cost reduction, not necessarily IT and security.”

Copyright 2017 IDG Communications. ABN 14 001 592 650. All rights reserved. Reproduction in whole or in part in any form or medium without express written permission of IDG Communications is prohibited.