InfoWars for reals —

Facebook no longer wants to be a tool for enlisting "useful idiots."

Facebook Security gave details last week on how the company is fighting nation-state and other groups' efforts to use the social network to amplify false news and to conduct covert propaganda efforts.

Facebook Security has revealed more of how the company has begun to combat the spread of propaganda and "fake news," acknowledging for the first time that the company tracked a campaign that attempted to influence the 2016 US presidential campaign. Facebook began to fight "fake news" posts (sort of) earlier this year when the company introduced a "disputed" label that is now being added to some shared stories of questionable provenance. But the company has also launched a less-visible effort to clamp down on "false amplification" of propaganda efforts on its social media platform.

During the 2016 presidential campaign, Facebook Security team members monitored a number of activities that "we assessed to fit the pattern of information operations," according to a paper published by the company last week. The paper, authored by Facebook Security's Jen Weedon, William Nuland, and Facebook Chief Security Officer Alex Stamos—entitled "Information Operations and Facebook"—acknowledges that Facebook accounts were used as part of a coordinated effort to spread misinformation and influence the shape of political conversations. Facebook did not attempt to attribute the campaign to a specific party.

While acknowledging that activity, the authors also downplayed its scope. "In short," the Facebook team wrote, "while we acknowledge the ongoing challenge of monitoring and guarding against information operations, the reach of known operations during the US election of 2016 was statistically very small compared to overall engagement on political issues." Nevertheless, Facebook reported the activity as part of a growing trend that the company now feels compelled to combat because of its potential poisoning effect on the more organic conversations on social media.

"In brief, we have had to expand our security focus... to include more subtle and insidious forms of misuse, including attempts to manipulate civic discourse and deceive people," Stamos and his team wrote. The white paper is an attempt to bring transparency to how the company is handling organized efforts to exploit Facebook as a vehicle for information warfare.

It does not appear to be a coincidence that Facebook published its paper on the same day a Senate subcommittee was hearing testimony on the impact of the sort of information operations Facebook is now tracking. And if Facebook and other social media companies don't deal with it themselves, they may find an administration obsessed with "fake news" pushing them in directions they would rather not go. One expert suggested in recent testimony that an independent agency might be needed to rate the accuracy of news sources.

The anatomy of an information operation

What Facebook, the government, and the military refer to as "information operations" are the modern instantiation of propaganda and psychological operations (PSYOP)—a form of what the corporate world refers to as "marketing." There are differences in substance between information operations and marketing, but the intent is the same: to get a particular group of people to do a particular thing—whether it be reporting improvised explosive devices or showing support for Donald Trump.

Jonathan Nichols, a cybersecurity expert who once specialized in information operations, explained to Ars how information operations basically work. "You start out with a psychological objective," he said: getting a target audience to like or not like a particular thing, person, or concept. "Then you set up a list of supporting target activities, like making statements in social media—anything that shows the audience is eating the message."

In a purely digital information operation, for example, this could be something as simple as social media posts or "shares" that reinforce the message being delivered. The degree to which these "supporting activities" occur is used to measure the effectiveness of the campaign.

The next step, Nichols said, is to "identify a potential list of target audiences" that can be steered to the desired behavior. For each demographic group, an information operations team would create a list of "conditions" (things that they strongly want to keep) and "vulnerabilities" (the things that they want to change). "This is a lot like the marketing process," Nichols noted.

Understanding the target audience shapes both the content and the form of the message. "Then you develop a list of arguments—lines of persuasion, logical arguments that convince the audience that by doing an action you will either support a condition that you want to maintain or will affect one of their vulnerabilities," Nichols said. "So the argument might be that by supporting Trump, jobs will come back to you, or by supporting American troops, you support stability and your kids will be safe."

With a set of arguments in hand, the next step is to "produce products that are most effective to reach the audience you're targeting," said Nichols. Those "products" are tailored to how the audience is best reached: radio broadcasts, social media posts, news stories, e-mails, pamphlets, or (in hostile territory with no infrastructure) loudspeaker announcements, for example. "If the target audience is less literate, you don't want a lot of text," Nichols explained. "Maybe lots of pictures."

After sending out the message, the information operators then measure the effects of the campaign and adjust it. "Maybe you find out something you thought was a vulnerability isn't, so you refine your product."
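The planning cycle Nichols describes, from objective to audiences to arguments to products to measurement, can be sketched as a simple data model. All the class names, fields, and example values here are hypothetical illustrations, not drawn from any real tool or doctrine document:

```python
from dataclasses import dataclass, field

@dataclass
class Audience:
    """A target demographic, with the things it wants to keep and change."""
    name: str
    conditions: list        # things the audience strongly wants to maintain
    vulnerabilities: list   # things the audience wants changed

@dataclass
class Campaign:
    """One information operation: objective, audiences, arguments, products."""
    objective: str
    audiences: list = field(default_factory=list)
    arguments: dict = field(default_factory=dict)   # audience name -> lines of persuasion
    products: dict = field(default_factory=dict)    # audience name -> delivery formats

    def measure(self, observed_supporting_acts: int, expected: int) -> float:
        """Effectiveness: share of expected 'supporting activities'
        (posts, shares, statements) actually observed."""
        return observed_supporting_acts / expected if expected else 0.0

# Hypothetical example mirroring Nichols' jobs argument
voters = Audience("manufacturing-region voters",
                  conditions=["community stability"],
                  vulnerabilities=["lost manufacturing jobs"])
camp = Campaign(objective="build support for candidate X")
camp.audiences.append(voters)
camp.arguments[voters.name] = ["supporting X will bring jobs back"]
camp.products[voters.name] = ["social media posts", "shares"]

print(camp.measure(observed_supporting_acts=40, expected=100))  # 0.4
```

The final step Nichols mentions, refining the product when a presumed vulnerability turns out not to be one, would correspond to editing `vulnerabilities` and `arguments` and re-running `measure` on the next wave of activity.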

These sorts of campaigns are most effective when they can leverage opinion-makers. "I might be able to get a mullah or a popular Twitter account to agree with me," Nichols explained. "There's always going to be a certain segment of the target audience that will agree with me." These individuals, who agree with the reasoning of the campaign and essentially work against their own best interests to advance it, are often referred to as "useful idiots" (a Soviet-era term used to describe unwitting individuals who spread Russian disinformation).

What really separates information operations from most legitimate marketing campaigns is the range of sources that can be used to spread the message. With most advertising, you know the source of the information—it's overt. "When an American soldier hands you a pamphlet, you know who's behind the message," Nichols said. The same is true, he added, when something is published by RT (formerly Russia Today) or Sputnik—it's clear that the Russian government has sponsored the message. These overt information channels are referred to as "white" information sources. But information operations also use what are referred to as "gray" and "black" sources: material from unknown sources (like an anonymous Twitter account or a billboard without attribution) or from sources that are intentionally deceptive or covert about their identity (such as Guccifer 2.0 and DCLeaks).

A visual of "white," "gray," and "black" information sources used in the alleged Russian information operations during the 2016 US presidential campaign, presented by Clint Watts of the Foreign Policy Research Institute in testimony before the Senate Armed Services Committee on April 27.

Nearly all of these factors were identified by Facebook as part of the information operations campaign they tracked during the election. "One aspect of this [activity] included malicious actors leveraging conventional and social media to share information stolen from other sources, such as e-mail accounts, with the intent of harming the reputation of specific political targets," the Facebook Security team noted. The campaign used a "relatively straightforward yet deliberate series of actions":

Private and/or proprietary information was accessed and stolen from systems and services (outside of Facebook);

Dedicated sites hosting this data were registered;

Fake personas were created on Facebook and elsewhere to point to and amplify awareness of this data;

Social media accounts and pages were created to amplify news accounts of and direct people to the stolen data.

From there, organic proliferation of the messaging and data through authentic peer groups and networks was inevitable.

At the same time, Facebook observed "a separate set of malicious actors engaged in false amplification" of the messages being promoted by the information operations, using fake Facebook accounts "to push narratives and themes that reinforced or expanded on some of the topics exposed from stolen data," the Facebook Security team wrote.

However, Facebook's report downplayed the campaign's impact on the overall election. Citing the company's research into "overall civic engagement during this time on the platform," the Facebook Security team said the reach of the content "shared by false amplifiers was marginal compared to the overall volume of civic content shared during the US election."

That conclusion, however, does not mean that a future information operation could not have a much broader effect on an election. And the same pattern of behavior has played out recently on Facebook and other platforms in other countries. So Facebook has committed to tracking these campaigns and terminating the fraudulent accounts used to amplify them:

We’ve made recent improvements to recognize these inauthentic accounts more easily by identifying patterns of activity — without assessing account contents themselves. For example, our systems may detect repeated posting of the same content, or aberrations in the volume of content creation. In France, for example, as of April 13, these improvements recently enabled us to take action against over 30,000 fake accounts.
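The two signals Facebook's statement names, repeated posting of the same content and aberrations in the volume of content creation, can be illustrated with a toy heuristic. This is a sketch under assumed thresholds, not Facebook's actual detection system; note that, as the quote says, it looks only at activity patterns, never at what the content means:

```python
import hashlib
from collections import Counter
from statistics import mean, pstdev

def repeated_content_score(posts):
    """Fraction of posts that are exact duplicates of an earlier post."""
    digests = [hashlib.sha256(p.encode()).hexdigest() for p in posts]
    counts = Counter(digests)
    duplicates = sum(c - 1 for c in counts.values())
    return duplicates / len(posts) if posts else 0.0

def volume_zscore(daily_counts, today):
    """How far today's posting volume sits from the account's historical mean."""
    mu, sigma = mean(daily_counts), pstdev(daily_counts)
    return 0.0 if sigma == 0 else (today - mu) / sigma

def looks_inauthentic(posts, daily_counts, today,
                      dup_threshold=0.5, z_threshold=3.0):
    """Flag an account on either pattern; thresholds are arbitrary here."""
    return (repeated_content_score(posts) > dup_threshold
            or volume_zscore(daily_counts, today) > z_threshold)

# An account that posts the same text nine times and then spikes in volume
spam = ["Vote now!"] * 9 + ["unique post"]
print(looks_inauthentic(spam, [5, 6, 5, 4], today=60))  # True
```

A real system would have to be far more robust (near-duplicate text, coordinated timing across accounts, evasion), but the basic shape, content-agnostic pattern detection, is what the quote describes.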

When the news breaks, we fix it

Facebook has also committed to doing more to prevent targeted data collection by malicious parties, in an attempt to make it more difficult to create cloned accounts or hijack legitimate ones. But the Facebook Security paper shied away from taking on the topic of "fake news" itself.

That was not the case at the Senate hearing last week. The US Senate's Armed Services Committee heard testimony on "Cyber-enabled Information Operations" from former NSA Deputy Director John C. "Chris" Inglis, RAND Corporation senior information scientist Dr. Rand Waltzman, former acting Undersecretary of Defense for Policy Michael Lumpkin, and Clint Watts of the Foreign Policy Research Institute. Much of the testimony focused on Russian use of social media to "weaponize" information, along with how to counter the dissemination of false "news" and other misinformation.

Watts went as far as to propose an independent agency, separate from the government, to act as a "Consumer Reports" for media, rating sources on their accuracy. That data would then be displayed on social media networks next to posts linking to those sites, "so if the consumer wants to read about aliens invading the US, they can," Watts said. "But they know the accuracy of that [media source] is about 10 percent." In a chart he presented, Watts labeled sites such as InfoWars and ZeroHedge as "gray" sources, potentially used to spread propaganda.

Inglis, while not prescribing any particular solution, told senators:

I do see a role for government both in facilitating the creation of an enduring, values-based framework that will drive technology and attendant procedures to serve society’s interests, and in reconciling that framework to and with like-minded Nations in the world. Conversely, I believe government’s failure to serve in this role will effectively defer leadership to a combination of market forces and the preferences of other nation-states which will drive, unopposed, solutions that we are likely to find far less acceptable.

It's not clear whether these opinions are shared by current administration officials—the Trump administration hasn't yet issued a cohesive "cyber" policy, and a National Security Council spokesperson recently declined to tell Ars whether the NSC had filled positions in the cyber policy directorate. But combined congressional inquiries into the "interference" in the 2016 election could trigger legislation that attempts to address information campaigns.

Sean Gallagher
Sean is Ars Technica's IT and National Security Editor. A former Navy officer, systems administrator, and network systems integrator with 20 years of IT journalism experience, he lives and works in Baltimore, Maryland. Email: sean.gallagher@arstechnica.com // Twitter: @thepacketrat

Clinton blames Comey and WikiLeaks. Based on my reporting, I blame Clinton and the DNC primarily, with Comey and other factors secondary. While "fake" news certainly was present, the real information operation was the amplification and distortion of the WikiLeaks content, and the very neat way in which "e-mails" played into Clinton's pre-existing e-mail scandal.

I'm not going to lie. Homegrown information operations by the Deplorables probably amplified the Russian information ops far beyond what they dreamed of. Reddit is, IMHO, a much more effective PSYOP organization than the Russian intelligence community.

I think a lot of people are focusing on the quick fix and not the long term solution, which is to emphasize critical thinking in our education systems. The problem is that many people don't have the ability to recognize and disregard "fake news", not that Facebook allows it to exist.

Facebook could have avoided all of this when they decided to open up the platform to everyone other than college students. That's when the average political and emotional intelligence level of the user pool took a steep nosedive, and why many people using Facebook at the time (including me) jumped ship as fast as possible.

To be fair, how would they have known something like this would have occurred?

"Fake News" really only became a thing fairly recently.

Fake news has ALWAYS existed, we call them rumors. The difference is the ability to mass publish, disseminate, and polish rumors.

Do people actually think Hillary lost due to fake news when there was so much damning real news?

Damning how? Is she in trouble I have not heard about? Did she have to pay a $25m fraud settlement or something?

More to the point I think you are missing the systemic nature of this. Hillary has been the target of fake news since the GOP started investigating her in the 1990s. All those radio blowhards calling her a criminal constantly led to "lock her up" chants (famously led by General Flynn--actual criminal who actually did damning things and is actually going to be locked up) and so on. Nobody is saying people read a fake news story and it changed their minds--propaganda is not that simple.

I have a strict personal "no Facebook" rule. I don't have an account, I don't go there, and I don't get any information from FB. I tend to think anything that comes from there is probably B.S. to begin with. "Consider the source" is also a rule of mine, lol.

There are differences in substance between information operations and marketing, but the intent is the same: to get a particular group of people to do a particular thing

I find it enlightening and disturbing the various ways in which espionage and marketing are similar. Whether it's on the influencing side as this article discusses, or on the surveillance side as so many other articles have covered, the similarities are many. Why is one so acceptable and the other is not? Why do we decry the 'surveillance state' yet willingly assist the 'surveillance corporation'? Has marketing always embraced the tools of espionage so widely and openly? It's a fascinating subject in my opinion. And one that probably deserves better coverage (at least outside of Ars) because it's not clear to me where it's taking us as a society.

I have a strict personal "no Facebook" rule. I don't have an account, I don't go there, and I don't get any information from FB. I tend to think anything that comes from there is probably B.S. to begin with. "Consider the source" is also a rule of mine, lol.

Oh, you have an account there. You just haven't logged in yet. But trust me, FB has it ready and waiting for you when you do.

Facebook got rid of most of its human-based review team, in favor of an 'algorithmic' approach.

I suspect it was because humans would filter out stories that generated clicks and long online dwell times. Telling the human reviewers "it's obviously false, but it makes us money" would be reported, while it's easy to automatically evaluate the response to stories and promote the ones that make Facebook money.

This is also a consequence of investing so much social capital in corporate platforms with sole owners. The early internet was much more federated. Everybody could run their own site, email, bulletin board, etc. In a federated system, the damage can be more limited because conspiracy nutters go to conspiracy sites, whereas grandma keeping in touch with grandkids used email or something. I can STILL do those things but everybody who comes to My Awesome Site(TM) will still use Facebook too.

Facebook could have avoided all of this when they decided to open up the platform to everyone other than college students. That's when the average political and emotional intelligence level of the user pool took a steep nosedive, and why many people using Facebook at the time (including me) jumped ship as fast as possible.

To be fair, how would they have known something like this would have occurred?

"Fake News" really only became a thing fairly recently.

This post is hard to figure... is he/she joking?

In case you are not joking, you just won an "all expenses" trip to the Bahamas. Click here. LOL.

I have a strict personal "no Facebook" rule. I don't have an account, I don't go there, and I don't get any information from FB. I tend to think anything that comes from there is probably B.S. to begin with. "Consider the source" is also a rule of mine, lol.

I envy you. Unfortunately, it's the only way most of my friends and family communicate, so I'm stuck scrolling through Kermit sipping tea memes to find out about the birth of a cousin.

The answer is to teach and learn critical thinking skills in the whole population.

At broad scale that's true, but that's something FB can do nothing about. And there is no reason why the problem can't be attacked on multiple fronts - both education and technology.

All that said, people currently in power have no reason to improve critical thinking skills of the citizens, as their bullshit would become too obvious and people won't be as easy to manipulate. So the best we can hope for is for companies like FB and Google to try some technical solutions.

2) Facebook is culpable. This phenomenon has been observable (and measurable, in the Russian case) for at least two years. Facebook has chosen to turn a blind eye to propaganda and deception in order to increase their advertising revenue. Their statements on this subject are interesting, but they had a moral responsibility to discuss this publicly prior to the election and did not do so. Folks who are tossing around Zuckerberg as a 2020 presidential candidate would do well to keep this in mind.

Facebook got rid of most of its human-based review team, in favor of an 'algorithmic' approach.

I suspect it was because humans would filter out stories that generated clicks and long online dwell times. Telling the human reviewers "it's obviously false, but it makes us money" would be reported, while it's easy to automatically evaluate the response to stories and promote the ones that make Facebook money.

Or they were just trying to save money, since you don't pay an algorithm. In any case, it looks like they may be going back to using people.

This is also a consequence of investing so much social capital in corporate platforms with sole owners. The early internet was much more federated. Everybody could run their own site, email, bulletin board, etc. In a federated system, the damage can be more limited because conspiracy nutters go to conspiracy sites, whereas grandma keeping in touch with grandkids used email or something. I can STILL do those things but everybody who comes to My Awesome Site(TM) will still use Facebook too.

It's rather more nuanced. Those small conspiracy sites still exist, but the people who run them, and the people who read them, realized that they could take their message to Facebook and reach a much wider audience. At the same time, Facebook had essentially zero controls on what was classified as "news," and thus these conspiracies were given weight that they did not deserve, because people see something tagged as news and believe it.

Then the heavyweights got involved. Pretty much anyone who needed to push an agenda realized that they could push their own version of reality onto FB as news and then sockpuppet amplify it with fake accounts, until it went viral (still hate that term). A whole lot of time, energy, and money was invested in building up the necessary moving parts to manipulate massive numbers of people on FB.

All the while these actors, essentially propagandists, are working very hard to undermine real news services like the AP, Reuters, and anyone whose reporting presents a viewpoint not in alignment with their own.

At this point, you have an entire body of people who think FB is a good source of news, yet don't trust mainstream media organizations because they are furiously demonized by their own news sources.

I don't think FB has any real hope of curbing the relentless onslaught of fake news and propaganda without taking drastic actions, any of which is likely to force large parts of their userbase to abandon the platform. And therein lies the rub: is FB willing to give up a very heavily engaged part of their userbase and risk the loss of lots of ad revenue? We'll see, but I doubt it.

Lastly, calling InfoWars "gray" news while simultaneously refusing to acknowledge that the MSM was totally in the pocket of the DNC/Dems is completely dishonest. You cannot have an MSM that is behind a particular candidate at rates of over 90 percent and then act as if InfoWars is "gray" news.

A lot of you would do well to get outside your bubble and reflexive dismissal of sources you don't like for whatever reason.