Cop & Robber: The two faces of AI in Cybersecurity

In cybersecurity, the ability to adapt to new and complex challenges is critical. Innovation in our field has made our work as cybersecurity professionals easier but has also produced never-before-seen threats.

Take, for example, Artificial Intelligence (AI) and machine learning, technologies that continue to grow at unprecedented rates. They bring many useful applications for cyber defence, such as image analysis and machine translation, which help combat the spread of cybercrime on our devices.

These tools mean security professionals no longer have to spend time on the laborious task of advanced data analysis and pattern recognition. Instead, the technology takes on these duties, leaving us free to focus on other areas of work.

But as with any great tool, there exists the very real danger of it falling into the wrong hands. The same advancements being celebrated in AI could be turned around and used to attack systems maliciously. The more universal AI becomes, the higher this risk.

If AI is deployed for malicious purposes, the resulting rise in theft, spear-phishing attacks and intelligent viruses could be catastrophic. Yet despite the risk it poses, the size of this challenge is still not fully understood.

Fighting the good fight

AI isn’t just a technology of the future – it is already in use across the cybersecurity industry. This is because it has the capacity to simplify the detection and reaction process at scale.

This is particularly true of machine learning, which is used in cybersecurity to predict behaviours. To use Bitdefender as an example, our security solution includes machine learning technologies designed specifically to detect malicious files and distinguish them from benign ones. This technology is constantly on the prowl, learning user behaviour and hunting for anomalies.
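As a rough illustration of the malicious-versus-benign idea (not Bitdefender's actual technology), the sketch below classifies a file's feature vector by whichever class centroid it sits closest to. The feature names and sample values are invented for the example:

```python
from math import dist  # Euclidean distance (Python 3.8+)

def nearest_centroid(sample, malicious, benign):
    """Classify a feature vector by the nearest class centroid.
    Real products use far larger feature sets and richer models;
    this only illustrates distinguishing malicious from benign."""
    centroid = lambda vectors: [sum(axis) / len(vectors) for axis in zip(*vectors)]
    m, b = centroid(malicious), centroid(benign)
    return "malicious" if dist(sample, m) < dist(sample, b) else "benign"

# Toy training features: (file entropy, fraction of suspicious API imports)
malicious = [(7.8, 0.9), (7.5, 0.8), (7.9, 0.7)]
benign    = [(4.2, 0.1), (5.0, 0.0), (3.8, 0.2)]

print(nearest_centroid((7.6, 0.85), malicious, benign))  # near the malicious cluster
print(nearest_centroid((4.5, 0.05), malicious, benign))  # near the benign cluster
```

In practice the value of machine learning here is scale: the same comparison runs over millions of files and feature dimensions far beyond what an analyst could inspect by hand.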

An example of AI in best practice can be seen in the detection of financial fraud. Recently, I purchased a ticket for my partner to accompany me on a pre-arranged work trip, and seconds after my purchase I received a phone call from my bank. The bank noted the one-person ticket was outside of my regular shopping behaviour: so what's the deal? Is it fraud?

The AI systems the bank had in place recognised the purchase as symptomatic of fraudulent behaviour and triggered an immediate warning to the necessary professionals. This is the exciting potential of modern cybersecurity: an environment where the process of recognising, reacting and ultimately preventing fraud is instantaneous.
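A minimal sketch of how such a system can flag an out-of-pattern purchase (the bank's real engine would use far richer features such as merchant, geography and timing) is a simple z-score test against the customer's purchase history. The transaction amounts below are invented:

```python
from statistics import mean, stdev

def is_anomalous(history, amount, threshold=3.0):
    """Flag a transaction whose amount sits more than `threshold`
    standard deviations from the customer's purchase history.
    Illustrative only; production fraud models combine many
    behavioural signals, not just the amount."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > threshold

# Typical weekly spending vs. a sudden large ticket purchase
history = [42.0, 55.5, 38.0, 61.0, 47.5, 52.0, 44.0]
print(is_anomalous(history, 50.0))    # in-pattern purchase
print(is_anomalous(history, 1200.0))  # out-of-pattern purchase
```

The point is the speed: a check like this runs in microseconds per transaction, which is what makes the seconds-later phone call possible.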

Evil intentions

However, amidst this excitement there remains genuine concern. There is the potential for criminals to use these AI-fuelled security solutions as a benchmark against their own creations, strengthening their attacks.

It’s a process cybercriminals have used against new technologies in the past: get a sample of the security solution, check whether it detects the attack, and engage in a process of tweaking and re-tweaking until the security solution fails to detect anything.
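The tweak-and-retest loop can be sketched abstractly. The `detector` and `mutate` functions below are hypothetical stand-ins, a signature matcher and a random byte-flip, shown only to illustrate the iteration, not any real tool or attack:

```python
import random

def evasion_loop(sample, detector, mutate, max_rounds=1000):
    """Illustrative only: repeatedly apply small mutations to a
    sample until the detector no longer flags it (or we give up).
    `detector` stands in for a local copy of a security product;
    `mutate` stands in for the attacker's tweaking step."""
    for _ in range(max_rounds):
        if not detector(sample):
            return sample          # detection fails: evasion succeeded
        sample = mutate(sample)    # tweak and re-test
    return None                    # detector held up

# Toy demo: a "detector" that matches a byte signature, and a
# "mutation" that flips one byte at random.
SIGNATURE = b"EVIL"
detector = lambda s: SIGNATURE in s

def mutate(s):
    i = random.randrange(len(s))
    return s[:i] + bytes([s[i] ^ 0xFF]) + s[i + 1:]

evaded = evasion_loop(b"xxEVILxx", detector, mutate)
```

This is exactly why defenders worry about attackers running security products offline: the loop costs the criminal nothing but compute, and every iteration leaks information about what the detector will and won't catch.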

Recent breakthroughs have also shown some frightening examples of the potential of AI in offensive applications. Researchers at ZeroFox recently showcased a fully automated spear-phishing system that could create tailored tweets on Twitter based simply on a user’s demonstrated interests, generating clicks to malicious files.

The strengthening of spear-phishing attacks is a particularly troubling aspect of AI cybercrime. Spear-phishing often involves an extensive amount of personal research and data collection to pinpoint a victim within specific networks. It is a time-intensive activity: identifying targets and generating contextually specific messaging to commit the fraud. But as the ZeroFox example highlighted, AI allows a criminal to cut through this process of data collection and bombard millions of targets with tailored messages at all times of the day.

Cybersecurity is often described as a game of cat and mouse, but in reality, it is a game of cat and cat. In modern times, our industry has assumed a more reactive role: ready to respond when bad things happen, but never fully having the upper hand. Technologies like machine learning have finally tilted this balance in favour of the good guys, but at a moment’s notice the pendulum can swing back.

It is undeniable AI wields huge power. From my experience, hackers don’t waste time and very rarely lose momentum. Although we don’t have a precise example of malicious AI to point to, the threat is very real. It is imperative we in the industry stay prepared for any developments.

