NSF, Defense-funded research designed to counter misinformation campaigns

May 6, 2014

FOR IMMEDIATE RELEASE

BLOOMINGTON, Ind. -- Complex networks researchers at Indiana University have developed a tool that helps anyone determine whether a Twitter account is operated by a human or an automated software application known as a social bot. The new analysis tool stems from research at the IU Bloomington School of Informatics and Computing funded by the U.S. Department of Defense to counter technology-based misinformation and deception campaigns.

BotOrNot analyzes more than 1,000 features drawn from a user's friendship network, the account's Twitter content and its temporal patterns, all in real time. It then calculates the likelihood that the account is a bot. The National Science Foundation and the U.S. military are funding the research after recognizing that increased information flow -- blogs, social networking sites, media-sharing technology -- along with an accelerated proliferation of mobile technology is changing the way communication, and possibly misinformation campaigns, are conducted.

As network science is applied to the task of uncovering deception, it leverages the structure of social and information diffusion networks, along with linguistic cues, temporal patterns and sentiment data mined from content spreading through social media. Each of these feature classes is analyzed with BotOrNot.
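One of the feature classes above is timing. As an illustration of what a temporal feature might look like (the function and feature names here are hypothetical, not BotOrNot's actual feature set), a very regular posting schedule can be surfaced from the gaps between consecutive tweets:

```python
from datetime import datetime
from statistics import mean, stdev

def temporal_features(timestamps):
    """Compute simple timing features from a chronologically sorted
    list of tweet datetimes. Illustrative only -- BotOrNot's real
    feature set is far larger and not shown here."""
    gaps = [(b - a).total_seconds() for a, b in zip(timestamps, timestamps[1:])]
    return {
        "mean_gap_s": mean(gaps),                        # average time between tweets
        "stdev_gap_s": stdev(gaps) if len(gaps) > 1 else 0.0,  # 0.0 means clockwork regularity
        "min_gap_s": min(gaps),                          # very small values suggest automation
    }

# A suspiciously regular schedule: one tweet exactly every 60 seconds
ts = [datetime(2014, 5, 6, 12, m, 0) for m in range(5)]
print(temporal_features(ts))  # stdev_gap_s of 0.0 flags perfect regularity
```

A human account would typically show a much larger spread in its inter-tweet gaps; features like these feed into the overall classification alongside content and network signals.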

“We have applied a statistical learning framework to analyze Twitter data, but the ‘secret sauce’ is in the set of more than one thousand predictive features able to discriminate between human users and social bots, based on content and timing of their tweets, and the structure of their networks,” said Alessandro Flammini, an associate professor of informatics and principal investigator on the project. “The demo that we’ve made available illustrates some of these features and how they contribute to the overall ‘bot or not’ score of a Twitter account.”

Through use of these features and examples of Twitter bots provided by Texas A&M University professor James Caverlee's infolab, the researchers are able to train statistical models to discriminate between social bots and humans; according to Flammini, the system is quite accurate. By an evaluation measure called AUROC (area under the receiver operating characteristic curve), BotOrNot scores 0.95, where 1.0 represents perfect discrimination between bots and humans.
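AUROC has an intuitive interpretation: it is the probability that a randomly chosen bot receives a higher classifier score than a randomly chosen human. A minimal sketch of that computation (the scores below are invented for illustration, not real BotOrNot output):

```python
def auroc(bot_scores, human_scores):
    """Probability that a randomly chosen bot outscores a randomly
    chosen human, with ties counting half. This pairwise statistic
    equals the area under the ROC curve."""
    wins = 0.0
    for b in bot_scores:
        for h in human_scores:
            if b > h:
                wins += 1.0
            elif b == h:
                wins += 0.5
    return wins / (len(bot_scores) * len(human_scores))

# Hypothetical classifier scores on labeled test accounts
bots = [0.9, 0.8, 0.75, 0.6]
humans = [0.4, 0.3, 0.7, 0.1]
print(auroc(bots, humans))  # -> 0.9375
```

A score of 0.5 would mean the classifier does no better than chance; BotOrNot's reported 0.95 means a bot outscores a human in 95 percent of such random pairings.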

“Part of the motivation of our research is that we don't really know how bad the problem is in quantitative terms,” said Fil Menczer, the informatics and computer science professor who directs IU’s Center for Complex Networks and Systems Research, where the new work is being conducted as part of the information diffusion research project called Truthy. “Are there thousands of social bots? Millions? We know there are lots of bots out there, and many are totally benign. But we also found examples of nasty bots used to mislead, exploit and manipulate discourse with rumors, spam, malware, misinformation, political astroturf and slander.”

Flammini and Menczer said they believe these kinds of social bots could be dangerous for democracy, cause panic during an emergency, affect the stock market, facilitate cybercrime and hinder advancement of public policy. The goal is to support human efforts to counter misinformation with truthful information.

Related Links

Using predictive features that analyze content, timing and network structure, BotOrNot can uncover Twitter accounts like @StanBieberfan that are most likely (in this case, almost 80 percent) to be social bots rather than human-operated accounts. | Photo by truthy.indiana.edu