DeepLocker demonstrates how AI can create a new breed of malware

While artificial intelligence is changing our world for the better, in the wrong hands the same technology can be turned to malicious ends. To stay ahead of the curve, IBM Research has been studying the evolution of AI in order to anticipate new threats that could come from cybercriminals. One outcome of that research is DeepLocker, which was presented at Black Hat USA 2018 this week in Las Vegas.

According to Marc Ph. Stoecklin, principal research scientist at IBM Research, DeepLocker is a “new breed of highly targeted and evasive attack tools powered by AI.”

DeepLocker was designed to improve understanding of how AI models can be combined with malware techniques to create a “new breed of malware,” Stoecklin explained in a post. This new type of malware can conceal its intent until it reaches its intended victim, whom it can identify through attributes such as facial recognition, geolocation, and voice recognition.

“The DeepLocker class of malware stands in stark contrast to existing evasion techniques used by malware seen in the wild. While many malware variants try to hide their presence and malicious intent, none are as effective at doing so as DeepLocker,” Stoecklin wrote.

DeepLocker avoids detection from malware scanners by hiding in normal applications, such as video conference software. It is designed to continue to behave normally until it confirms that it has reached the intended target, Stoecklin wrote.

It uses a deep neural network AI model to ensure that the malicious payload is unlocked only once it reaches the intended target. Because the AI model can combine several attributes to identify that target, the trigger conditions are nearly impossible to reverse engineer: exhaustively testing every possible combination of attributes is simply not practical.
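To make the unlocking idea concrete, here is a minimal conceptual sketch of AI-keyed payload concealment, not IBM's actual implementation: a stand-in "model" function maps target attributes to an output, the decryption key is derived from that output rather than stored in the binary, and only the correct attributes reproduce the key. The model stub, attribute values, and toy XOR cipher are all hypothetical simplifications for illustration.

```python
import hashlib


def model_output(attributes: bytes) -> bytes:
    # Stand-in for a deep neural network: maps target attributes
    # (e.g. a face embedding or geolocation reading) to a stable output.
    # A real system would quantize the DNN's output vector; here we hash.
    return hashlib.sha256(b"model-weights||" + attributes).digest()


def derive_key(attributes: bytes) -> bytes:
    # The key is a function of the model's output, so it never appears
    # in the binary and cannot be recovered by static analysis alone.
    return hashlib.sha256(model_output(attributes)).digest()


def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy stream cipher (XOR against a hash-expanded keystream),
    # purely for illustration; symmetric, so it both locks and unlocks.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))


# Locking: bind a (benign placeholder) payload to the intended target.
target = b"attributes-of-intended-target"  # hypothetical trigger value
payload = b"benign placeholder payload"
locked = xor_cipher(payload, derive_key(target))

# Unlocking: only the matching attributes reproduce the key.
assert xor_cipher(locked, derive_key(target)) == payload
assert xor_cipher(locked, derive_key(b"someone-else")) != payload
```

The point of the sketch is the asymmetry Stoecklin describes: a defender holding the binary sees only the encrypted blob and the model, and would have to guess the exact triggering attributes to recover the key.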

“The security community needs to prepare to face a new level of AI-powered attacks. We can’t, as an industry, simply wait until the attacks are found in the wild to start preparing our defenses. To borrow an analogy from the medical field, we need to examine the virus to create the ‘vaccine,’” Stoecklin wrote.


About Jenna Sargent

Jenna Sargent is an Online and Social Media Editor for SD Times. She covers Microsoft, data, programming languages, and UI frameworks and libraries. She likes reading, tabletop gaming, and playing the guitar. Follow her on Twitter at @jsargey!