How Hackers Use A.I. to Trick You Into Clicking Sketchy Links

Twitter users might need to be more careful about the links they’re clicking on.

ZeroFox’s Philip Tully and John Seymour revealed at the Black Hat USA 2016 hacker convention on August 4 that they can use machine learning (a branch of artificial intelligence) to trick Twitter users into opening links to malicious websites, with a success rate of between 30 percent and 66 percent.

The duo created SNAP_R, a “recurrent neural network that learns to tweet phishing posts targeting specific users,” to prove that artificial intelligence can aid in “spear phishing” attempts. Spear phishing is a targeted attempt to trick a specific user into clicking a bad link, as opposed to normal phishing, which casts a wide net across many users (like chain or spam emails that ask for login information). SNAP_R essentially finds a target, writes a tweet it thinks will interest them, uses Google’s URL shortener to hide a malicious link, and then tweets at the target in hopes of getting a click.
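The pipeline described above can be sketched in a few lines. This is purely an illustrative mock, not SNAP_R’s code: the real tool uses a recurrent neural network trained on the target’s timeline and live Twitter and URL-shortener APIs, whereas every function below (`pick_topics`, `shorten`, `craft_lure`) is a hypothetical stand-in.

```python
import random

def pick_topics(recent_tweets, k=2):
    """Crude stand-in for timeline analysis: treat the target's most
    frequent long words as their 'interests'."""
    words = [w.strip("#@.,!?").lower()
             for tweet in recent_tweets for w in tweet.split()]
    long_words = [w for w in words if len(w) > 4]
    return sorted(set(long_words), key=long_words.count, reverse=True)[:k]

def shorten(url):
    """Stub for a URL shortener; returns a fake short link that hides
    the real destination, as the goo.gl service did."""
    return "https://goo.gl/" + format(abs(hash(url)) % 16**6, "06x")

def craft_lure(handle, recent_tweets, payload_url, rng=random):
    """Compose a tweet tailored to the target's apparent interests,
    with the disguised link appended."""
    topics = pick_topics(recent_tweets) or ["this"]
    template = rng.choice([
        "@{h} saw this and thought of your posts on {t}: {u}",
        "@{h} big news about {t} today {u}",
    ])
    return template.format(h=handle, t=topics[0], u=shorten(payload_url))

lure = craft_lure(
    "alice",
    ["Loving the new machine learning course!",
     "More machine learning reading tonight."],
    "http://malicious.example/login",
)
```

The point of the sketch is the shape of the attack, not the quality of the text generation: target selection, interest modeling, link obfuscation, and delivery are each cheap to automate once separated out this way.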

Google analytics for a malicious link sent via SNAP_R.

“On tests consisting of 90 users, we found that our automated spear phishing framework had between 30% and 66% success rate,” Tully and Seymour write in a paper about SNAP_R. “This is more successful than the 5-14% previously reported in [large-scale] phishing campaigns, and comparable to the 45% reported for [large-scale] manual spear phishing efforts.”

Artificial intelligence is often used to help protect data, not compromise it. Seymour and Tully wanted to flip that on its head: while hackers have little reason to fear that A.I. will render systems un-hackable, the general public should worry that attackers can turn the same kinds of tools against their targets.

OpenAI, an Elon Musk-backed project studying the way artificial intelligence will affect us in the future, said in July that the technology can pose a risk. “An early use of AI will be to break into computer systems,” OpenAI wrote. “We’d like AI techniques to defend against sophisticated hackers making heavy use of AI methods.”

Revealing how SNAP_R works is meant to “foster greater awareness and understanding of” spear phishing attacks. Discussing the tool’s inner workings could help researchers find ways to protect Twitter users from similar threats, provided those users can curb their curiosity and click more carefully.

“Our approach is predicated on the fact that social media is rapidly emerging as an easy target for phishing and social engineering attacks,” Seymour and Tully write. “We use Twitter as our platform because of its low bar for admissible posts, its community tolerance of convenience services like shortened links, its effective API, and its pervasive culture of overexposing personal information.” Whoops.