Artificial intelligence-powered malware is coming, and it's going to be terrifying

China Photos/Getty Images
Artificial intelligence will open the door to ever-more devastating attacks — but the most effective ones may be the most subtle, Darktrace’s Dave Palmer says.

Imagine you’ve got a meeting with a client, and shortly before you leave, they send you over a confirmation and a map with directions to where you’re planning to meet.

It all looks normal — but the entire message was actually written by a piece of smart malware mimicking the client’s email mannerisms, with a virus attached to the map.

It sounds pretty far out — and it is, for now. But that’s the direction that Dave Palmer, director of technology at cybersecurity firm Darktrace, thinks the arms race between hackers and security firms is heading.

As artificial intelligence becomes more and more sophisticated, Palmer told Business Insider in an interview at the FT Cybersecurity Summit in London in September, it will inevitably find its way into malware — with potentially disastrous results for the businesses and individuals that hackers target.

It’s important to remember that Palmer is in the security business: It’s his job to hype up the threats out there (present and future) and convince customers that Darktrace is the one that can save them. Darktrace is a $500 million (£401 million) British firm with an AI-driven approach to defending networks: it creates an “immune system” for customers that learns how a business normally operates, then monitors for potential irregularities.

But with that in mind, Palmer provides a fascinating insight into how one of the buzziest young companies in the industry thinks cybersecurity is going to evolve.

Smart viruses will hold industrial equipment to ransom

Ransomware is endemic right now. It’s a type of malware that encrypts everything on the victim’s computer or network, then demands a bitcoin ransom to decrypt it. If they don’t pay up in a set timeframe, the data is lost for good.

AI-infused ransomware could turbo-charge the risks these attacks pose — self-organising to inflict maximum damage, and going after new, even more lucrative targets.

“[We’ll] see coordinated action. So imagine ransomware waiting until it’s spread across a number of areas of the network before it suddenly takes action,” Palmer said.

“I’m convinced we’ll see the extortion of assets as well as data. So factory equipment, MRI scanners in hospitals, retail equipment — stuff that you’d pay to have back online because you can’t actually function as a business without it. Data’s one thing and you can back that up, but if your machine stops working then you’re not going to be making any more money.”

Malware will learn to mimic people you know

Mustafa Suleyman/Twitter
Google has taught a neural network to play Go — but the tech could also be used for far more nefarious ends.

Using recurrent neural networks, it’s already possible to teach AI software to mimic writing styles — whether that’s clickbait viral news articles or editorial columns from The Guardian. Palmer suggests that in the future, malware will be able to look through your correspondence, learn how you communicate, and then mimic you in order to infect other targets.
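Recurrent networks are the state of the art for this kind of mimicry, but the core idea — learning which words tend to follow which in someone's writing, then sampling new text with the same statistics — can be sketched with a far simpler word-level Markov chain. Everything below, from the function names to the toy "training" emails, is invented purely for illustration:

```python
import random
from collections import defaultdict

def build_model(text, order=2):
    """Map each word pair to the list of words that follow it in the text."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        model[key].append(words[i + order])
    return model

def generate(model, length=12, seed=None):
    """Emit new text by repeatedly sampling a statistically likely next word."""
    rng = random.Random(seed)
    key = rng.choice(list(model.keys()))
    out = list(key)
    for _ in range(length):
        choices = model.get(tuple(out[-2:]))
        if not choices:
            break  # dead end: no observed continuation for this word pair
        out.append(rng.choice(choices))
    return " ".join(out)

# Toy "correspondence" standing in for a victim's sent mail
sample = ("thanks for the update see you at the office on friday "
          "thanks for the notes see you at the meeting on monday")
model = build_model(sample)
print(generate(model, seed=1))
```

Trained on a real mailbox instead of two sentences, even this crude model picks up signature phrases and greetings; the neural-network approach Palmer describes does the same thing with far more context and fluency.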

“Nadia’s got something on her laptop that can read all her emails, reads her messages, can read her calendar, and then sends people messages in the same communication style she uses with them. So Nadia’s always very rude to me so she’ll send jokey messages … but to you she’ll be extremely polite. So you would receive, maybe, a map of this location of where to meet from Nadia — because it can see in her calendar that we’re due to meet. And you’d open it, because it’d be relevant, it’d be contextual — but that map would have a payload attached to it.”

The worst hacks won’t be the most noticeable ones

In December 2015, part of Ukraine’s power grid was knocked offline by an unprecedented hack. Around 80,000 people lost power as a result, and Russian state-sponsored hackers are believed to be responsible. It’s a spectacular example of how vulnerable the modern world is to cyberattacks — but Palmer thinks the most destructive hacks in the future may be far less visible.

“If you can disable an oil rig, people are going to notice. Everyone’s going to get around to trying to fix it. If you really wanted to try and harm an oil and gas firm, to my mind what you would do is have your self-hunting, self-targeting malware go in there and then start to change the geophysical data on which they decide where they’re going to buy mining rights. And over a long time you can make sure they’re buying drilling rights in the wrong places, those wells are coming up drier than they should be, and do really serious harm to their business in a way they’re much less likely to notice and be able to respond to.”

He added: “You might think, ‘ok, that’s a good idea, we should go and look at our databases, and see if there’s any funny software there.’ But the attacks of the future could just as likely be in their internet of things sensors, their submarines, their scanning equipment that’s collecting [the data] in the first place, and good luck finding those attacks.”

It’s the dark side of the artificial intelligence revolution

We’re in the early days of an artificial intelligence revolution. The technology is being used for everything from self-driving cars to treating cancer, and we’re only just scratching the surface right now. But as it becomes ever more advanced and ever more accessible, it is, inevitably, going to be used for ill.

What’s the timeframe for all of this? “I reckon you could train a neural network in the next 12 months that would be smart enough to [carry out a trust attack] in a rudimentary way,” Palmer said. “And if you look at the progress people like Google DeepMind are making on natural speech and language tools, it’s in the next couple of years.”