In 2018 the California-based company FireEye tipped Facebook and Google off to a network of fake social media accounts from Iran that was conducting campaigns to influence people in the United States.

In response, Google and Facebook, using backend data to determine that a branch of the Iranian government was responsible, removed dozens of YouTube channels, a score of Google+ accounts and a handful of blogs.

Lee Foster, manager of information operations at FireEye, was at the forefront of the firm’s investigation. “Right now, you know something's automated just by the sheer volume of content pushing out,” he says. “It's not possible for a human to do this, so it's clearly not organically created. Often you'll see automated retweeting across some list of accounts that exists just to boost out a message.”
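Foster's volume test translates almost directly into code. Here is a minimal sketch of that heuristic, assuming hypothetical per-account post counts; the threshold and the data are invented for illustration, not drawn from FireEye's actual tooling.

```python
# A minimal sketch of the volume heuristic Foster describes.
# The threshold is an illustrative guess at the upper bound of
# humanly plausible activity, not a real industry cutoff.
HUMANLY_PLAUSIBLE_POSTS_PER_DAY = 300

def flag_by_volume(daily_post_counts):
    """Flag accounts posting more than a human plausibly could."""
    return [
        handle
        for handle, posts in daily_post_counts.items()
        if posts > HUMANLY_PLAUSIBLE_POSTS_PER_DAY
    ]

activity = {"ordinary_user": 14, "tireless_amplifier": 2800}
print(flag_by_volume(activity))  # ['tireless_amplifier']
```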

But the landscape is about to change, he says, as artificial intelligence comes online that can mask its automated roots.

“Imagine having a capability out there that can automate the organic creation of original content effectively enough that it looks real, but you don't even have to have it operate or touch it,” Foster says.


His fears are shared by other analysts. A recent Brookings Institution report outlined some of the changes that are in store. “In the very near term, the evolution of AI and machine learning, combined with the increasing availability of big data, will begin to transform human communication and interaction in the digital space,” the report, The Future of Political Warfare, predicts. “It will become more difficult for humans and social media platforms themselves to detect automated and fake accounts, which will become increasingly sophisticated at mimicking human behavior.”

The days of AI catfishing are fast approaching. A sophisticated AI could gather information about people, determine who is susceptible to a particular message, and tailor the interaction as if the AI were a person. Brookings says AI will “micro-target citizens with deeply personalized messaging. They will be able to exploit human emotions to elicit specific responses. They will be able to do this faster and more effectively than any human actor.”

So what’s the solution? Artificial intelligence that can match the volume of content and perform the analysis it will take to detect manipulated photos, articles and social media messages. It will take an AI to catch an AI, dueling each other to determine what's real.


“I suspect that may well be the case,” Foster says of this future. “The thing that the AI brings to this is sheer volume. You're not going to have enough human talent in place to be able to catch all of that. It's going to have to be a very capable mix of human intelligence and talent, combined with the kind of AI tools that can detect these automated campaigns.”
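In software terms, that mix usually looks like a triage pipeline: an automated model scores accounts at a scale no analyst team could match, and only the highest-scoring accounts reach a human for the final call. A rough sketch, with an invented scoring function standing in for a real detection model:

```python
def automation_score(account):
    """Toy stand-in for a real detection model: combine crude signals
    into a 0-1 score. Real systems would use far richer features."""
    score = 0.0
    if account["posts_per_day"] > 300:
        score += 0.5
    if account["retweet_ratio"] > 0.9:  # almost never posts original content
        score += 0.3
    if account["account_age_days"] < 30:
        score += 0.2
    return min(score, 1.0)

def triage(accounts, threshold=0.7):
    """The AI handles the volume; humans make the final call on the rest."""
    return [a for a in accounts if automation_score(a) >= threshold]

fleet = [
    {"handle": "suspect_01", "posts_per_day": 900,
     "retweet_ratio": 0.97, "account_age_days": 12},
    {"handle": "grandma", "posts_per_day": 3,
     "retweet_ratio": 0.4, "account_age_days": 2000},
]
for account in triage(fleet):
    print("send to human analyst:", account["handle"])
```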

The big data behind social media is a key front in this struggle. Facebook, Google and Twitter use AI to determine what content and ads appear in search results, newsfeeds and timelines. These same bits of information, in nefarious hands, can be used to target people with more sinister messages.

Last week Facebook vowed to use staff and automation to weed out fake news, including video and photos. "The same false claim can appear as an article headline, as text over a photo or as audio in the background of a video,” Facebook product manager Tessa Lyons said in a statement. “In order to fight misinformation, we have to be able to fact-check it across all of these different content types.”
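Lyons' point is that the hard part is linking one claim across formats. A simple sketch of the idea, assuming the text has already been extracted from each format (headline parsing, OCR for photos, speech-to-text for video) and using basic string similarity where a production system would use semantic models:

```python
from difflib import SequenceMatcher

# Claims already debunked by fact-checkers (invented example).
KNOWN_FALSE_CLAIMS = [
    "senator smith secretly sold the national parks",
]

def matches_known_false_claim(text, threshold=0.75):
    """Fuzzy-match extracted text against debunked claims.
    Real systems would use semantic embeddings, not string similarity."""
    text = text.lower()
    return any(
        SequenceMatcher(None, text, claim).ratio() >= threshold
        for claim in KNOWN_FALSE_CLAIMS
    )

# The same false claim surfacing as a headline, photo text, and video audio:
extracted = [
    "Senator Smith secretly sold the National Parks",   # article headline
    "senator smith secretly sold the national parks!",  # OCR from a meme photo
    "senator smith sold the national parks secretly",   # speech-to-text from video
]
for text in extracted:
    print(text, "->", matches_known_false_claim(text))
```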

But while these companies can also use algorithms to detect disinformation, the results may not do much good. “Social media companies can tweak their algorithms to better detect disinformation campaigns or other forms of manipulation (and they have begun to do so), but the underlying systems and revenue models are likely to stay the same,” Brookings says.

To train an AI to spot fake news, we have to define it.

What's worse is that "fake news" can be difficult to distinguish from similar but more broadly accepted online behavior. News sites and blogs that are generally considered legitimate (as well as comedians and pranksters) repurpose and edit photos under the protection of Fair Use. Outlets of all stripes push agendas and perspectives. The matter of teaching an AI what "fake news" is raises the tough question of how exactly to define it.
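The definitional problem surfaces the moment you try to build even a toy classifier, because someone has to assign every training label. A minimal sketch using scikit-learn and a handful of invented, hand-labeled headlines; the model is trivial, and the hard judgment calls all live in the labels:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training data. Every label here is an editorial judgment:
# is a satirical headline "fake"? A misleading-but-true one?
headlines = [
    "City council approves new budget after public hearing",
    "Scientists confirm vaccine passed all safety trials",
    "SHOCKING: miracle cure doctors don't want you to know",
    "Secret memo proves election was decided in advance",
]
labels = [0, 0, 1, 1]  # 0 = legitimate, 1 = fake (by *our* definition)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(headlines, labels)

print(model.predict(["Doctors hate this one weird trick"]))
```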

In this landscape, it may be difficult to train an AI to ignore minor infractions and focus on the more nefarious activity. “It's a tough line to draw and it's difficult to systematize that,” Foster says. He says the AI will have to do what researchers do now: try to divine the motivation behind the media pushes.

“The fact that they're masking these origins and hiding these affiliations is an important part of this,” he says. “We don't write reports on what RT or Sputnik wrote today, because we know they're Russian media arms. We do point out when we see internet accounts pretending to be Americans that are heavily promoting RT or Sputnik articles into U.S. audiences, because there’s more subversive activity going on there. They're masking their true origins and affiliations. That's where we try to draw a line in terms of what constitutes an influence campaign.”
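Unlike "fake news" itself, Foster's rule translates into a reasonably crisp heuristic: flag accounts whose claimed identity contradicts their sharing behavior. A sketch with invented account data; the domain list and threshold are illustrative only:

```python
from urllib.parse import urlparse

STATE_MEDIA_DOMAINS = {"rt.com", "sputniknews.com"}  # openly Russian outlets

def masks_its_origin(account, threshold=0.5):
    """Flag accounts that claim to be American but mostly amplify
    Russian state media. Openly pro-RT accounts are NOT flagged:
    per Foster, the deception is the masking, not the content."""
    if account["claimed_location"] != "United States":
        return False
    domains = [
        urlparse(url).netloc.removeprefix("www.")
        for url in account["shared_links"]
    ]
    if not domains:
        return False
    state_share = sum(d in STATE_MEDIA_DOMAINS for d in domains) / len(domains)
    return state_share >= threshold

patriot_pam = {
    "claimed_location": "United States",
    "shared_links": [
        "https://rt.com/usa/a",
        "https://sputniknews.com/b",
        "https://www.rt.com/c",
    ],
}
print(masks_its_origin(patriot_pam))  # True
```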

Whether it happens during the training of the AI, or after the AI has identified suspicious behavior, humans somewhere in the loop will be making the call. Using AI as a tool to detect these campaigns may be necessary, but the raw computational power of a machine will still depend on the judgment of the human brain to unmask these clandestine campaigns.
