7 Misconceptions of AI, Machine Learning and Cybersecurity

“Artificial intelligence has been seeping into our lives in all sorts of ways, without us noticing,” President Obama said in a recent interview. The truth is, we use intelligent services and products daily. Learning algorithms drive cars and planes, help us find information, fight online fraud and money-laundering and much more.

However, much of the talk about artificial intelligence reveals that people still envision it as a harmful ‘superintelligence’ that will eventually turn against mankind. A plethora of other misconceptions challenges the perception of ‘good AI’. Here is a rundown of the most popular ones, and the truth behind them.

Machine learning is a new technology

Machine learning is anything but new. The earliest learning algorithms date back to the 1950s and, by the 1990s, the science of using algorithms to make predictions was being applied in data mining, adaptive software, web applications and language learning. With the advent of big data and the availability of computing power, advances continued in areas such as supervised and unsupervised learning. In the field of cybersecurity, security companies such as Bitdefender have been using machine learning algorithms for just under a decade.

Artificial intelligence = machine learning

Despite being used interchangeably – as they are closely entwined in many applications on the market – machine learning and AI are subtly different. AI is the subfield of computer science concerned with building intelligent machines, while machine learning is a subset of AI typically associated with statistics, data mining and predictive analytics. Simply put, machine learning is the practical implementation of the methods (algorithms) that make AI possible.

Machine learning is only summarising data

In cybersecurity, this technology helps analysts sift through thousands of malicious files daily to answer the quintessential question quickly and correctly: “Is this file clean or malicious?” For instance, if a million files need to be analysed, the samples can be split into smaller groups (called clusters) in which each file is similar to the others. A security analyst then inspects one file from each cluster and applies the findings to the whole group. Far from merely summarising data, though, the technology’s value shows in its breadth of application: malicious URL detection, APT identification, anomaly detection for network events and spam filtering, for instance.
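The cluster-then-inspect workflow can be sketched in a few lines. This is a toy illustration, not any vendor’s actual pipeline: the feature vectors, the distance threshold and the greedy grouping strategy are all hypothetical stand-ins for a real clustering system.

```python
# Hypothetical sketch: group samples so an analyst only needs to inspect
# one representative per cluster. Features and threshold are illustrative.

def cluster(samples, threshold=2.0):
    """Greedy clustering: put each sample in the first cluster whose
    representative lies within `threshold` (Euclidean distance)."""
    clusters = []  # each cluster: {"rep": vector, "members": [names]}
    for name, vec in samples:
        for c in clusters:
            dist = sum((a - b) ** 2 for a, b in zip(vec, c["rep"])) ** 0.5
            if dist <= threshold:
                c["members"].append(name)
                break
        else:  # no nearby cluster found: this sample starts a new one
            clusters.append({"rep": vec, "members": [name]})
    return clusters

# Toy feature vectors (say, entropy, import count, section count)
samples = [
    ("a.exe", (7.1, 12, 4)),
    ("b.exe", (7.0, 12, 4)),   # very close to a.exe -> same cluster
    ("c.exe", (2.0, 85, 9)),   # far away -> its own cluster
]
groups = cluster(samples)
```

With these toy vectors, a million-file pile collapses into a handful of clusters; the analyst’s verdict on one representative then propagates to every member of its group.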

Machine learning replaces traditional anti-malware technologies

Unfortunately, this scenario is not realistic. No single technology has yet proven efficient against the full range of malware samples. Machine learning algorithms complement each other, as well as traditional heuristic and signature-based malware detection, to ensure the highest possible catch rate. Perceptrons, neural networks, centroid models, binary decision trees and deep learning each play a specific role: some specialise in particular malware families, others focus on new malicious files, and some are built to minimise the number of false positives.
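The layered idea – signatures and heuristics convicting with high confidence, a learned model catching what they miss – can be sketched as below. Every name, threshold and score here is hypothetical; it illustrates the ensemble principle, not any real engine.

```python
# Illustrative layered detection: signature, heuristic and an ML score
# combined. All values and thresholds are made up for the sketch.

KNOWN_BAD_HASHES = {"deadbeef"}  # toy signature database

def signature_check(sample):
    """Exact match against known-bad hashes."""
    return sample["hash"] in KNOWN_BAD_HASHES

def heuristic_check(sample):
    """Toy heuristic: flag executables with suspiciously high entropy."""
    return sample["entropy"] > 7.5

def ml_score(sample):
    """Stand-in for a trained model's probability of maliciousness."""
    return sample["model_score"]

def verdict(sample):
    """Any high-confidence layer can convict; the ML layer catches the rest."""
    if signature_check(sample) or heuristic_check(sample):
        return "malicious"
    return "malicious" if ml_score(sample) >= 0.9 else "clean"

unknown = {"hash": "cafebabe", "entropy": 6.2, "model_score": 0.95}
```

Here the signature and heuristic layers miss the sample, but the model’s score convicts it – the complementary coverage the passage describes.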

“Machine learning isn’t a panacea,” says Dragos Gavrilut, antimalware research manager and machine learning expert at Bitdefender. “A single technology won’t solve all problems, but several technologies, combined, can solve the majority of problems. Working as an ensemble, they are able to provide a high rate of detection of new, unseen threats”.

Machine learning can’t predict unseen events

On the contrary, machine learning can detect zero-day malware with a high degree of accuracy. The fundamental principle of machine learning is to recognise patterns that emerge from past experience and make predictions based on them. This means security solutions can react to new, unseen cyber-threats faster than the automated attack-detection systems in use today. The technology has also been adapted to fight off sophisticated attacks such as APTs, where threat actors take special care to remain undetected for indefinite periods of time.
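The “learn from past experience, predict the unseen” principle can be shown with the simplest possible learner: a one-nearest-neighbour classifier. The feature vectors and labels below are invented for the sketch.

```python
# Minimal sketch of prediction from past examples: label an unseen
# sample with the label of its closest known sample (1-NN).
# Toy features (say, entropy and import count) - purely illustrative.

def predict(known, unseen_vec):
    """Return the label of the known sample nearest to unseen_vec."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = min(known, key=lambda pair: dist(pair[0], unseen_vec))
    return nearest[1]

known = [
    ((7.2, 11), "malicious"),  # past malicious sample
    ((1.1, 90), "clean"),      # past clean sample
]
label = predict(known, (7.0, 12))  # never-before-seen sample
```

The unseen vector sits nearer the past malicious example than the clean one, so the learner flags it – no signature for this exact sample is ever needed.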

AI will automate us out of our jobs

People have long speculated on the implications of computers becoming more intelligent than humans. And the recent media buzz around machines and autonomous technologies has started to raise fears and anxieties about layoffs. In 2013, an Oxford study estimated that 47 percent of jobs in the US were “at risk” of being automated within the next 20 years. This is already happening, albeit on a small scale. Retail companies like Amazon started automating their warehouses long ago, with a reduction in low-skilled jobs as a result.

Nobody needs human security experts anymore

Machine learning is a powerful cyber-weapon, but it cannot shoulder the burden of fighting cyber-threats alone. Machine learning algorithms may yield false positives, and human expertise is needed to retrain those algorithms with properly labelled data.

“We don’t turn the systems on and simply go home,” says Cristina Vatamanu, malware researcher at Bitdefender and PhD student in Machine Learning Theory. “A low number of false positives is crucial to us because detecting a clean file as malware can render programs and operating systems unusable. To achieve the best results, machines and cybersecurity experts need to work together.”




ABOUT IT SECURITY GURU

The IT Security Guru offers a daily news digest of all the best breaking IT security news stories first thing in the morning! Rather than you having to trawl through all the news feeds to find out what’s cooking, you can quickly get everything you need from this site!