The Looming War of Good AI vs. Bad AI

The rise of artificial intelligence, machine learning, hivenets, and next-generation morphic malware is leading to an arms race that enterprises must prepare for now.

The tech industry — and by extension, the global economy — is at a precipice with artificial intelligence (AI) as cybercriminals adopt AI technology to more effectively detect and exploit vulnerabilities, evade detection, adapt to complex network environments, and maximize profitability.

This is the first time that adversaries and white hats will have the same tools. It is leading to an AI arms race that organizations must prepare for now. Here's what the good guys will be up against.

1. A Wave of Machine Learning

Over the past year, our industry has seen cybercriminals weaponize millions of unsecured IoT devices and use them to take out systems and networks. Supervised AI incubators can spend years carefully cultivating an AI to perform specific tasks in a predictable way. Cybercriminals, however, are not willing to go slowly. The unsupervised learning models they are likely to use to develop AI-based attacks, where speed of development is more important than predictability, are especially dangerous — and could potentially be devastating because of their complexity and unpredictability. As attack methodologies become more intelligent, there is the real potential to create swarms of compromised IoT devices that could wreak indiscriminate havoc. Think Africanized bees.
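The supervised/unsupervised distinction above can be made concrete with a toy sketch: a supervised model applies a rule learned in advance from labeled examples, while an unsupervised one groups unlabeled observations on its own, with no curation. All data, names, and thresholds here are invented for illustration.

```python
# Toy contrast between supervised and unsupervised learning on
# per-host connection rates (connections/minute). Hypothetical data.

def supervised_classify(rate, threshold=100):
    """Supervised-style rule: the threshold was 'learned' beforehand
    from labeled examples of benign vs. compromised hosts."""
    return "compromised" if rate > threshold else "benign"

def unsupervised_cluster(rates, iterations=10):
    """Unsupervised 1-D 2-means: splits unlabeled rates into two
    groups, discovering the structure without any labels."""
    lo, hi = min(rates), max(rates)
    for _ in range(iterations):
        a = [r for r in rates if abs(r - lo) <= abs(r - hi)]
        b = [r for r in rates if abs(r - lo) > abs(r - hi)]
        lo, hi = sum(a) / len(a), sum(b) / len(b)
    return a, b

rates = [3, 5, 4, 6, 180, 210, 195]   # most hosts quiet, a few very noisy
quiet, noisy = unsupervised_cluster(rates)
print(supervised_classify(190))        # -> compromised
print(sorted(noisy))                   # -> [180, 195, 210]
```

The supervised rule is predictable because a human chose its training labels; the unsupervised split simply emerges from the data, which is exactly the speed-over-predictability trade-off the article describes.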

If the best and the brightest within the cybersecurity research community are calling for regulation, it is because they see that the cybercriminal community is looking seriously at building these AI-based attacks and is likely to release them unsupervised into the wild.

2. Next-Generation Morphic Malware

If not next year, then soon, we will begin to see malware created completely by machines based on automated vulnerability detection and complex data analysis. Morphic malware is not new, but it is about to take on a new face by leveraging AI to create sophisticated new code that can learn to evade detection through machine-written routines. With the natural evolution of tools that already exist, adversaries will be able to develop the best possible exploit based on the characteristics of each unique weakness. Malware is already able to use learning models to evade security and can produce more than a million virus variations in a day. But so far, this is all just based on an algorithm, and there is very little sophistication or control over the output.
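Why algorithmic variation defeats signature matching is easy to illustrate: even trivial, mechanical mutation changes a file's cryptographic hash, so every variant slips past an exact-signature check. The payload bytes below are a harmless placeholder, and the mutation scheme is a deliberately simple sketch.

```python
import hashlib
import random

# Harmless placeholder standing in for a malicious payload.
PAYLOAD = b"placeholder-payload"

def make_variant(payload, rng):
    """Mechanical 'morphing': append a few random junk bytes.
    A real loader's behavior would be unchanged, but any exact-match
    signature (e.g., a SHA-256 of the file) no longer matches."""
    junk = bytes(rng.randrange(256) for _ in range(8))
    return payload + junk

rng = random.Random(0)  # seeded for reproducibility
variants = [make_variant(PAYLOAD, rng) for _ in range(1000)]
hashes = {hashlib.sha256(v).hexdigest() for v in variants}

print(len(hashes))  # -> 1000 distinct signatures from one payload
```

This is the "just an algorithm" stage the article describes: the mutation is mindless. The step change would be AI that rewrites the code's actual logic rather than its surface bytes.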

3. The Rise of Hivenets and Swarmbots

We have seen the development of predictive software systems programmed using AI techniques. The latest advances in these sorts of tools leverage massive databases of expert knowledge made up of billions of constantly updated bits of data in order to make accurate predictions. This sort of predictive analysis represents the new paradigm for how computing resources will be used to transform our world.

Building on what the industry has already seen, it is likely that cybercriminals will replace botnets with intelligent clusters of compromised devices built around deep learning technology to create more effective attack vectors. Traditional botnets are slaves — they wait for commands from the bot master in order to execute an attack. But what if these nodes were able to make decisions with minimal supervision, or even autonomously, instead of waiting for master commands?

This would become a hivenet, instead of a botnet, that could leverage peer-based self-learning to target vulnerable systems at an unprecedented scale. A hivenet would use swarms of compromised devices, or swarmbots, to identify and tackle different attack vectors all at once, and it could grow exponentially, widening its ability to attack multiple victims simultaneously.
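The botnet-versus-hivenet distinction above can be sketched as a toy simulation: traditional bots all execute one command from the master, while hivenet nodes pool observations peer-to-peer and pick targets autonomously. All node names, targets, and scores are invented for illustration.

```python
# Toy contrast: centrally commanded botnet vs. peer-learning hivenet.

def botnet_round(bots, master_command):
    """Every slave node executes the single command from the bot master."""
    return {bot: master_command for bot in bots}

def hivenet_round(bots, local_observations):
    """Each node pools peers' observations and independently goes after
    whichever target looks most vulnerable -- no master required."""
    pooled = {}
    for obs in local_observations.values():       # peer-to-peer sharing
        for target, score in obs.items():
            pooled.setdefault(target, []).append(score)
    avg = {t: sum(s) / len(s) for t, s in pooled.items()}
    best = max(avg, key=avg.get)
    return {bot: best for bot in bots}

bots = ["node-a", "node-b", "node-c"]
observations = {
    "node-a": {"web-server": 0.2, "iot-camera": 0.9},
    "node-b": {"web-server": 0.3, "iot-camera": 0.8},
    "node-c": {"web-server": 0.1},
}

print(botnet_round(bots, "attack web-server"))
print(hivenet_round(bots, observations))  # all converge on iot-camera
```

Note that no node in the hivenet round ever received a command: the consensus emerges from shared local knowledge, which is what makes such a swarm hard to decapitate by finding its command-and-control server.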

What Lies Ahead: Intelligent Warfare

Protecting networks and services, including things as important as critical infrastructure, will require a systemic approach based on intentionally engineering vulnerabilities out of a network and then applying an adaptive layer of meshed security tools, unlike the separate and isolated security devices most organizations currently have in place. This integration could provide visibility across the distributed network to detect unknown threats, share and correlate threat intelligence in real time, dynamically segment the network and isolate compromised devices and systems, and respond to attacks in a coordinated fashion.
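A minimal sketch of such a meshed fabric, with all class and host names invented: a detection reported by one sensor becomes shared threat intelligence, and every segment correlates it and quarantines matching hosts, rather than each isolated device acting alone.

```python
# Minimal sketch of an integrated security fabric: sensors publish
# indicators to a shared store, and every segment correlates them
# and isolates matching hosts. All names are invented.

class SecurityFabric:
    def __init__(self):
        self.indicators = set()     # shared, real-time threat intel
        self.segments = {}          # segment name -> {host: ip}
        self.quarantined = set()

    def add_segment(self, name, hosts):
        self.segments[name] = hosts

    def report(self, indicator_ip):
        """A sensor anywhere in the mesh shares a new indicator;
        correlation and response happen fabric-wide, not per-device."""
        self.indicators.add(indicator_ip)
        for segment, hosts in self.segments.items():
            for host, ip in hosts.items():
                if ip in self.indicators:
                    self.quarantined.add((segment, host))  # dynamic isolation

fabric = SecurityFabric()
fabric.add_segment("ot", {"plc-1": "10.0.0.5"})
fabric.add_segment("it", {"laptop-9": "10.0.1.7", "laptop-3": "10.0.0.5"})

# One detection isolates the same bad address in every segment.
fabric.report("10.0.0.5")
print(sorted(fabric.quarantined))  # -> [('it', 'laptop-3'), ('ot', 'plc-1')]
```

The point of the sketch is the coordinated response: standalone point products would each need to see the indicator independently, while the fabric reacts everywhere the moment one sensor reports it.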

Artificial intelligence promises incredible benefits to organizations that can harness its power, but it also portends disaster as cybercriminals use it for malicious purposes. Whoever can leverage technologies like machine learning and AI will have the quintessential security defense system to survive the escalating AI war.

Derek Manky formulates security strategy with more than 15 years of cybersecurity experience behind him. His ultimate goal is to make a positive impact in the global war on cybercrime. Manky provides thought leadership to the industry.
