News

AI Security: A Huge Concern

A new survey by Webroot shows that 86% of security professionals are concerned that AI and machine learning (ML) technology could be used against them. Their concern is well founded: it is already happening, in the form of fake celebrity videos of an inappropriate nature.

Seventy-five percent of cybersecurity professionals in the US believe that, within the next three years, their company will not be able to safeguard digital assets without AI, and overall, 99 percent believe AI could improve their organization's cybersecurity. Yet AI is a double-edged sword.

Those surveyed noted key uses for AI in time-critical threat detection, such as identifying threats that would otherwise have been missed and reducing false-positive rates.

“There is no doubt about AI being the future of security as the sheer volume of threats is becoming very difficult to track by humans alone,” says Hal Lonas, chief technology officer at Webroot.

AI changes the technology landscape

For the first time in history, AI is reaching the level that science fiction has predicted for decades, and some of the smartest people in the world are working on ways to tap its immense power.

And some bad guys are using it to make fake celebrity videos designed to lure (or phish) users into opening an infected attachment.

Using a face-swap algorithm of his own creation, built from widely available open-source tools such as TensorFlow and Keras, Reddit user "Deepfakes" showed that anyone with a working knowledge of machine learning could combine easily accessible materials and open-source code to create serviceable fakes.

"Deepfakes" has produced videos or GIFs of Gal Gadot (now deleted), Maisie Williams, Taylor Swift, Aubrey Plaza, Emma Watson, and Scarlett Johansson, each with varying levels of success. None are going to fool a discerning viewer, but all are close enough to hint at a terrifying future.

After the algorithm is trained, mostly on YouTube clips and Google Images results, the AI goes to work arranging the pieces on the fly to create a convincing video with the preferred likeness. It could be a celebrity, a co-worker, or an ex-girlfriend. AI researcher Alex Champandard told Motherboard that any decent consumer-grade graphics card could produce these effects in hours. That's terrifying!
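To make the technique concrete, here is a minimal sketch of the shared-encoder, dual-decoder idea behind face-swap deepfakes, written with TensorFlow/Keras since those are the tools the article names. The layer sizes, model names, and working resolution are illustrative assumptions, not the Reddit user's actual code: one encoder learns features common to both faces, each decoder learns to reconstruct one specific face, and swapping happens by decoding person B's encoded frame with person A's decoder.

```python
# Illustrative sketch only: a shared encoder with two face-specific decoders.
# All layer sizes and names are assumptions, not the original deepfake code.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

IMG = 64  # working resolution of the face crops (assumption)

def build_encoder():
    # Shared encoder: compresses any face crop into a 256-d latent vector.
    inp = keras.Input(shape=(IMG, IMG, 3))
    x = layers.Conv2D(32, 5, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(64, 5, strides=2, padding="same", activation="relu")(x)
    x = layers.Flatten()(x)
    latent = layers.Dense(256, activation="relu")(x)
    return keras.Model(inp, latent, name="shared_encoder")

def build_decoder(name):
    # Face-specific decoder: expands a latent vector back into an image.
    inp = keras.Input(shape=(256,))
    x = layers.Dense(16 * 16 * 64, activation="relu")(inp)
    x = layers.Reshape((16, 16, 64))(x)
    x = layers.Conv2DTranspose(32, 5, strides=2, padding="same",
                               activation="relu")(x)
    out = layers.Conv2DTranspose(3, 5, strides=2, padding="same",
                                 activation="sigmoid")(x)
    return keras.Model(inp, out, name=name)

encoder = build_encoder()
decoder_a = build_decoder("decoder_face_a")  # trained on person A's crops
decoder_b = build_decoder("decoder_face_b")  # trained on person B's crops

# During training, autoencoder A sees only face-A images and autoencoder B
# only face-B images, so the encoder learns pose/lighting features shared
# by both, while each decoder learns one identity.
auto_a = keras.Model(encoder.input, decoder_a(encoder.output))
auto_a.compile(optimizer="adam", loss="mae")

# At swap time: encode a frame of face B, then decode with decoder A.
frame = np.random.rand(1, IMG, IMG, 3).astype("float32")
swapped = decoder_a(encoder(frame))  # face B in, face A likeness out
print(tuple(swapped.shape))
```

In practice, this loop runs frame by frame over a whole video, which is why Champandard's point about consumer graphics cards matters: the per-frame work is just one forward pass through these small networks.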

Here’s how it plays out…

Your user gets a spear-phishing email based on their social media "likes and shares", inviting them to see a "private" celebrity video with…their favorite movie star in a compromising scenario. Play this forward, and your user will be able to order fake celeb videos featuring any two (or more) celebrities of their liking and have them delivered within 24 hours for 20 bucks.

A high volume of these video downloads will come with some extra spice: additional malware such as Trojans and keyloggers that give the bad guys full access. Never has it been more important to educate your staff with security awareness training that sends them frequent simulated tests via phishing emails, phone calls, and text messages to their smartphones.

If you need help with security training give EnhancedTECH a call at 714-970-9330 or contact us at [email protected] for a complimentary cybersecurity consultation.

Samantha Keller (AKA Sam) is a published author, tech-blogger, event-planner and mother of three fabulous humans. Samantha has worked in the IT field for the last fifteen years, intertwining a freelance writing career along with technology sales, events and marketing. She began working for EnhancedTECH ten years ago after earning her Bachelor's degree from UCLA and attending Fuller Seminary. She is a lover of kickboxing, extra-strong coffee, and Wolfpack football. Her regular blog columns feature upcoming tech trends, cybersecurity tips, and practical solutions geared towards enhancing your business through technology.