Who Shares A Fake Story? Fake Users

How does fake news find its way into our respective social media feeds and how far can it travel?

Filippo Menczer, a professor at the Indiana University School of Informatics and Computing, says "bots" propel fake news into the highest corridors of influence online, even as far as the Twitter feed of the President of the United States.

As Menczer describes it, a bot is a computer program designed to control an online account. Their purposes range from tracking the weather to abstract art projects.

"Some bots are deceptive," Menczer said. "They want to make it look like they are real human users, but are in fact controlled by a person, or an organization, or perhaps a company (that may) control many, many fake accounts, tens, hundreds, thousands, tens of thousands (of accounts)."

On close inspection, these software-controlled social media accounts can be distinguished from actual users. But the sheer number of accounts makes it very difficult for companies such as Twitter and Facebook to screen out and eliminate false profiles.

For celebrities who spend time reading tweets from their followers, this outsized presence of fake accounts can have a measurable impact. As Menczer puts it, an on-demand army of bots can be used to create "a so-called 'astroturf' fake grassroots campaign, making it look like there is support or objection towards an idea, or making it look like someone is popular — a politician, an idea, a law, a bill — or making it look like it’s very unpopular."

Researchers looked at 1.3 million accounts that tweeted regularly about Russian politics, determining that 45 percent, or 585,000 of them, were run by computer programs.

Not only are such political bots staggeringly prevalent, but it’s extremely difficult to trace one to its source.

"You may have an account that looks like a housewife in Nebraska, supporting a particular candidate, and then, six months later, all of their tweets are deleted, and then this person becomes a journalist in Ukraine, tweeting about a different topic," Menczer said. "These attacks are sophisticated, created on a large scale, and it’s very hard to see who’s behind them."

Massive systems of these programs, anonymous and untraceable, are targeting the U.S. president’s Twitter feed, he said. Though we cannot know how the president forms his opinions, Menczer cautions, we do know he uses Twitter, and at least sees some of the fake news channeled his way in links shared by what appear to be hundreds of followers mentioning him on Twitter.

Manipulating online discourse for political ends is fairly easy to imagine, Menczer said.

"I could create a bunch of fake accounts that mention the president, retweeting links to fake news, and so that if the president — or anyone else, by the way — it would not be difficult to make it so that when you open your Twitter app, you get the impression that a lot of people are telling you, 'Hey, look at this. This is important. This is outrageous. You should check it out.'"

Menczer gave the example of a bot-fueled story that targeted Trump’s Twitter feed. The story claimed the Clinton campaign registered millions of immigrants living in the country illegally to vote in the presidential election. Trump repeated this claim in a tweet.

"In the days prior to that (tweet), we have found many bots who have tried to put the link to that fake news article in the newsfeed of the president," Menczer said. The claim has not been verified by The Washington Post or any other news outlet.

With the rise of efficient systematic bot attacks, what recourse can be taken? Menczer hesitated to provide solutions.

"It goes a little too much into the political for my level of comfort," he said. "Certainly we can see that there are orchestrated attacks that try to manipulate opinions, and they do target influential users." But, he said, it is up to "the public and the policymakers, the citizens and the journalists" to look at this information and determine what is to be done.