Research in my group, NaN, focuses on Web science, social media, social networks, social computing, Web search and data mining, distributed and intelligent Web applications, and modeling of complex information networks.

My calendar is a bit crowded. You may schedule an appointment with Tara Holbrook, our center’s administrative assistant. Or you can try your luck by email, phone (+1-812-856-1377), fax (+1-812-855-0600), or in person (Informatics East room 314).

The new Indiana University Network Science Institute (IUNI) unites more than 100 researchers at IU, building on their world-renowned multidisciplinary expertise to deepen scientific understanding of the complex networked systems of our world. By pioneering new approaches to mapping, representing, visualizing, modeling, and analyzing diverse complex networks across levels and disciplines, IUNI keeps track of the ever-changing, interconnected big picture and lays the groundwork for innovative research and discovery in network science.

The project, informally dubbed “Truthy,” makes use of complex computer models to analyze the sharing of information on social media to determine how popular sentiment, user influence, attention, social network structure, and other factors affect the manner in which information is disseminated. Additionally, an important goal of the Truthy project is to better understand how social media can be abused.

Since 25 Aug 2014, when the first misleading article was posted on a conservative blog, the Truthy project has come under criticism from commentators who have misrepresented its goals. Contrary to these claims, the project's target is the study of the structural patterns of information diffusion. For example, an email sent simultaneously to a million addresses is likely spam, even if we have no automatic way to determine whether its content is true or false. The assumption behind the Truthy effort is that an understanding of spreading patterns may facilitate the identification of abuse, independent of the nature or political color of the communication.

While the Truthy platform supports the study of the evolution of communication in all portions of the political spectrum, it is not informed by political partisanship. The machine learning algorithms used to identify suspicious patterns of information diffusion are entirely oblivious to the possible political partisanship of the messages.

Read the facts below for a primer on Truthy. More detailed information can be found on the Truthy website and in our publications.

Updates:

8/28/2014: Despite the clarifications in this post, Fox News and others continued their attacks on our research project and on the PI personally. Their accusations are based on false claims, supported by bits of text and figures selectively extracted from our writings and presented completely out of context, in misleading ways. None of the researchers were contacted for comment before these outlandish conspiracy theories were aired and published. There is a good dose of irony in a research project that studies the diffusion of misinformation becoming the target of such a powerful disinformation machine.

9/3/2014: David Uberti wrote an accurate account of recent events in Columbia Journalism Review.

10/18/2014: Unfortunately, the smear campaign against our research project continues, with misleading information echoed in an op-ed by FCC Commissioner Ajit Pai, who did not contact any of the researchers with questions about the accuracy of his allegations.

10/22/2014: Amid news reports that the chairman of the House Science, Space and Technology Committee initiated an investigation into the NSF grant supporting our project, read our interview in the Washington Post’s Monkey Cage setting the record straight about our research.

10/23/2014: While the House Majority Leader joins the fray, IU releases a statement in support of our work.

11/3/2014: Jeffrey Mervis covers the controversy about this project in Science. We also provided additional information about our research in a slide deck embedded at the bottom of this post.

11/4/2014: Five leading computing societies and associations (CRA, ACM, AAAI, USENIX, and SIAM) wrote a joint letter to the chairman and the committee ranking member of the House Committee on Science, Space, and Technology expressing their concern over mischaracterizations of our research.

11/11/2014: The House Science Committee Chairman sent a letter to the director of the NSF on November 10, stating that our grant “was intended to create standards for online political discussion” and that a web service developed under the grant “targeted conservative social media messages.” These allegations are false, as we have explained in this post, in the slides embedded below, and in our publications, including the one quoted in the Chairman’s letter. On the same day, the Association of American Universities released a statement on the grant inquiries by the House Science Committee.

11/21/2014: False rumors about our research continue to be spread. Some of the questions we have received suggested that our two separate project and demo websites were generating confusion, so we merged them into a redesigned research website with information and highlights about the research project, publications, demos, data, etc.

The project has focused on domains such as news, politics, social movements, scientific results, and trending social media topics. Researchers develop theoretical computer models and validate them by analyzing public data, mainly from the Twitter streaming API.

Social media posts available through public APIs are processed without human intervention or judgment to visualize and study the spread of millions of memes. We aim to build a platform to make these analytic tools easily accessible to social scientists, reporters, and the general public.
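A minimal sketch of this kind of automated processing (our assumption of the general approach, not the actual Truthy codebase; the dict fields are hypothetical): memes such as hashtags are extracted from public posts with no human judgment, then tallied by volume and by the number of distinct accounts adopting them.

```python
from collections import Counter, defaultdict

def tally_memes(tweets):
    """Count meme (hashtag) volume and distinct adopters.

    tweets: iterable of dicts with hypothetical 'user' and
    'hashtags' keys, e.g. parsed from a public streaming API.
    """
    volume = Counter()          # total mentions per meme
    adopters = defaultdict(set) # distinct accounts per meme
    for tw in tweets:
        for tag in tw["hashtags"]:
            meme = tag.lower()  # normalize case so #Gezi == #gezi
            volume[meme] += 1
            adopters[meme].add(tw["user"])
    return volume, {meme: len(users) for meme, users in adopters.items()}
```

Aggregates like these, computed mechanically over millions of posts, are the raw material for the visualizations and diffusion analyses described above.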

An important goal of the project is to help mitigate misuse and abuse of social media by better understanding how social media can be abused: for example, when social bots are used to create the appearance of human-generated communication (hence the name “truthy”). We study whether it is possible to automatically differentiate between organic content and so-called “astroturf.”

On the more theoretical side, we have studied how individuals’ limited attention span affects what information we propagate and what social connections we make, and how the structure of social networks can help predict which memes are likely to become viral.
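The limited-attention idea can be illustrated with a toy agent-based simulation (an assumption-laden sketch of our own, not the published model; all parameters are arbitrary): each user remembers only a few recent memes, and at every step either introduces a new meme or re-shares one from memory. Even when all memes are identical in quality, the finite memory concentrates shares on a few winners.

```python
import random
from collections import Counter, deque

def simulate(n_users=50, memory=5, steps=2000, p_new=0.1, seed=42):
    """Toy limited-attention meme competition.

    Each user's attention is a deque of at most `memory` memes.
    Returns a Counter of share counts per meme id.
    """
    rng = random.Random(seed)
    memories = [deque(maxlen=memory) for _ in range(n_users)]
    shares = Counter()
    next_meme = 0
    for _ in range(steps):
        u = rng.randrange(n_users)
        if rng.random() < p_new or not memories[u]:
            meme = next_meme  # invent a brand-new meme
            next_meme += 1
        else:
            meme = rng.choice(list(memories[u]))  # re-share from memory
        shares[meme] += 1
        # a random subset of users sees the share; new arrivals push
        # old memes out of their bounded memories
        for v in rng.sample(range(n_users), 10):
            memories[v].append(meme)
    return shares
```

Despite every meme being interchangeable, the share distribution that emerges is highly skewed, which is the qualitative point: virality can arise from the interplay of limited attention and network structure rather than from content.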

Hundreds of researchers across the U.S. and the world are studying similar issues based on the same data and with analogous goals — these topics were studied well before the advent of social media. In the U.S. these research efforts are supported not only by the NSF but also by other federal funding agencies such as DoD, DARPA, and IARPA.

The results of our research have been covered widely in the press, published in top peer-reviewed journals, and presented at top conferences worldwide. All papers are publicly available.

Finally, the Truthy research project is not and never was:

a political watchdog

a database to be used by the federal government to monitor the activities of those who oppose its policies

a government probe of social media

an attempt to suppress free speech or limit political speech or develop standards for online political speech

a way to define “misinformation”

a partisan political effort

a system targeting political messages and commentary connected to conservative groups

Congratulations to Onur Varol, Emilio Ferrara, Chris Ogan, Fil Menczer, and Sandro Flammini for winning the ACM Web Science 2014 Best Paper Award with their paper Evolution of online user behavior during a social upheaval (preprint). In the paper, the authors study the pivotal role played by Twitter during the political mobilization of the Gezi Park movement in Turkey. By analyzing over 2.3 million tweets produced during 25 days of protest in 2013, the authors show that similarity in trends of discussion mirrors geographic cues. The analysis also reveals that the conversation becomes more democratic as events unfold, with a redistribution of influence over time in the user population. Finally, the study highlights how real-world events, such as political speeches and police actions, affect social media conversations and trigger changes in individual behavior.

We are excited to announce that the ACM Web Science 2014 Conference will be hosted by our center on the beautiful IUB campus June 23–26, 2014. Web Science studies the vast information network of people, communities, organizations, applications, and policies that shape and are shaped by the Web, the largest artifact constructed by humans in history. Computing, physical, and social sciences come together, complementing each other in understanding how the Web affects our interactions and behaviors. Previous editions of the conference were held in Athens, Raleigh, Koblenz, Evanston, and Paris. The conference is organized on behalf of the Web Science Trust by general co-chairs Fil Menczer, Jim Hendler, and Bill Dutton. Follow us on Twitter and see you in Bloomington!

The DESPIC team at the Center for Complex Systems and Networks Research (CNetS) presented a demo of a new tool named BotOrNot at a DoD meeting held in Arlington, Virginia on April 23-25, 2014. BotOrNot (truthy.indiana.edu/botornot) is a tool to automatically detect whether a given Twitter user is a social bot or a human. Trained on Twitter bots collected by our lab and the infolab at Texas A&M University, BotOrNot analyzes over a thousand features from the user’s friendship network, content, and temporal information in real time and estimates the degree to which the account may be a bot. In addition to the demo, the DESPIC team (including colleagues at the University of Michigan) presented several posters on Scalable Architecture for Social Media Observatory, Meme Clustering in Streaming Data, Persuasion Detection in Social Streams, High-Resolution Anomaly Detection in Social Streams, and Early Detection and Analysis of Rumors. See more coverage of BotOrNot on PCWorld, IDS, BBC, Politico, and MIT Technology Review.
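To give a flavor of feature-based bot detection, here is a highly simplified sketch (the feature names, field names, and weights are all hypothetical; the real BotOrNot system computes over a thousand features and feeds them to a trained classifier rather than a hand-weighted sum):

```python
def account_features(account):
    """Compute a couple of illustrative account-level features.

    account: dict with hypothetical 'followers', 'friends',
    'tweets', and 'account_age_days' fields.
    """
    followers = account["followers"]
    friends = account["friends"]
    return {
        # bots often follow many accounts but attract few followers
        "ff_ratio": friends / max(followers, 1),
        # bots often tweet at inhumanly steady, high rates
        "tweet_rate": account["tweets"] / max(account["account_age_days"], 1),
    }

def bot_score(features, weights):
    """Combine features into a single score.

    In practice a trained model (e.g., a random forest) replaces this
    weighted sum; the weights here are illustrative placeholders.
    """
    return sum(weights[k] * v for k, v in features.items())
```

The point of the sketch is the pipeline shape — extract behavioral features from friendship, content, and temporal signals, then score them with a model trained on known bots — not any particular feature or weight.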