PITTSBURGH—The rise of social media has seemed like a bonanza for behavioral scientists, who have eagerly tapped these networks to quickly and cheaply gather huge amounts of data about what people are thinking and doing. But computer scientists at Carnegie Mellon University and McGill University warn that those massive datasets may be misleading.

In a perspective article published in the Nov. 28 issue of the journal Science, Carnegie Mellon's Juergen Pfeffer and McGill's Derek Ruths contend that scientists need to find ways of correcting for the biases inherent in the information gathered from Twitter and other social media, or to at least acknowledge the shortcomings of that data.

And it's not an insignificant problem. Pfeffer, an assistant research professor in CMU's Institute for Software Research, and Ruths, an assistant professor of computer science at McGill, note that thousands of research papers each year are now based on data gleaned from social media — a source of data that barely existed even five years ago.

"Not everything that can be labeled as 'Big Data' is automatically great," Pfeffer said. He noted that many researchers think — or hope — that if they gather a large enough dataset they can overcome any biases or distortion that might lurk there. "But the old adage of behavioral research still applies: Know Your Data," he maintained.

Still, social media is a source of data that is hard to resist. "People want to say something about what's happening in the world and social media is a quick way to tap into that," Pfeffer said. Following the Boston Marathon bombing in 2013, for instance, Pfeffer collected 25 million related tweets in just two weeks. "You get the behavior of millions of people — for free."

The questions that researchers can now tackle can be compelling. Want to know how people perceive e-cigarettes? How people communicate their anxieties about diabetes? Whether the Arab Spring protests could have been predicted? Social media is a ready source for information about those questions and more.

But despite researchers' attempts to generalize their results to broad populations, social media sites often have substantial population biases, making it difficult to draw the random samples that give surveys their power to accurately reflect attitudes and behavior. Instagram, for instance, has special appeal to adults between the ages of 18 and 29, African-Americans, Latinos, women and urban dwellers, while Pinterest is dominated by women between the ages of 25 and 34 with average household incomes of $100,000. Yet Ruths and Pfeffer said researchers seldom acknowledge, much less correct for, these built-in sampling biases.
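The kind of correction the authors have in mind resembles post-stratification, a standard survey-weighting technique: estimate an outcome within each demographic group, then reweight the groups to match their known shares of the general population rather than their shares of the platform. The age groups, shares, and attitude estimates below are hypothetical, for illustration only.

```python
# Post-stratification sketch: reweight a platform's biased sample so the
# aggregate estimate reflects the general population. All numbers invented.

# Share of each age group in the platform sample vs. the general population
sample_share = {"18-29": 0.55, "30-49": 0.30, "50+": 0.15}
population_share = {"18-29": 0.20, "30-49": 0.35, "50+": 0.45}

# Observed support for some attitude, measured per group within the sample
group_estimate = {"18-29": 0.70, "30-49": 0.50, "50+": 0.30}

# Naive estimate: average weighted by the platform's (skewed) composition
naive = sum(sample_share[g] * group_estimate[g] for g in sample_share)

# Post-stratified estimate: reweight each group to its population share
adjusted = sum(population_share[g] * group_estimate[g] for g in population_share)

print(f"naive: {naive:.2f}, adjusted: {adjusted:.2f}")  # → naive: 0.58, adjusted: 0.45
```

The gap between the two numbers is the distortion a platform's demographics can introduce when results are generalized without reweighting; in practice the population shares would come from census or survey data.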

Other questions about data sampling may never be resolved because social media sites use proprietary algorithms to create or filter their data streams and those algorithms are subject to change without warning. Most researchers are left in the dark, though others with special relationships to the sites may get a look at the site's inner workings. The rise of these "embedded researchers," Ruths and Pfeffer said, in turn is creating a divided social media research community.

As anyone who has used social media can attest, not all "people" on these sites are even people. Some are professional writers or public relations representatives who post on behalf of celebrities or corporations, others are simply phantom accounts. Some "followers" can be bought. The social media sites try to hunt down and eliminate such bogus accounts — half of all Twitter accounts created in 2013 have already been deleted — but a lone researcher may have difficulty detecting those accounts within a dataset.
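A lone researcher screening a dataset for such accounts typically falls back on heuristics. The sketch below shows one crude approach; the field names and thresholds are invented for illustration and are no substitute for the platforms' own detection systems.

```python
# Illustrative (not production-grade) heuristic filter for suspicious
# accounts. Field names and thresholds are hypothetical.

def looks_automated(account: dict) -> bool:
    """Flag accounts whose activity pattern suggests a bot or phantom."""
    followers = account.get("followers", 0)
    following = account.get("following", 0)
    tweets_per_day = account.get("tweets_per_day", 0.0)
    # Mass-following with few followers back, or inhuman posting rates,
    # are common (if crude) red flags.
    if following > 1000 and followers < following / 20:
        return True
    if tweets_per_day > 100:
        return True
    return False

accounts = [
    {"name": "normal_user", "followers": 300, "following": 280, "tweets_per_day": 4.0},
    {"name": "spam_bot", "followers": 12, "following": 5000, "tweets_per_day": 250.0},
]
kept = [a["name"] for a in accounts if not looks_automated(a)]
print(kept)  # → ['normal_user']
```

Heuristics like these inevitably trade false positives against false negatives, which is itself a source of bias the authors argue should be acknowledged.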

"Most people doing real social science are aware of these issues," said Pfeffer who noted that some solutions may come from applying existing techniques already developed in such fields as epidemiology, statistics and machine learning. In other cases, scientists will need to develop new techniques for managing analytic bias.

The Institute for Software Research is part of Carnegie Mellon's School of Computer Science, now celebrating its 25th year.

__________________

A growing number of academic researchers are mining social media data to learn about both online and offline human behaviour. In recent years, studies have claimed the ability to predict everything from summer blockbusters to fluctuations in the stock market.

But mounting evidence of flaws in many of these studies points to a need for researchers to be wary of serious pitfalls that arise when working with huge social media data sets, according to computer scientists at McGill University in Montreal and Carnegie Mellon University in Pittsburgh.

Such erroneous results can have huge implications: thousands of research papers each year are now based on data gleaned from social media. “Many of these papers are used to inform and justify decisions and investments among the public and in industry and government,” says Derek Ruths, an assistant professor in McGill’s School of Computer Science.

In an article published in the Nov. 28 issue of the journal Science, Ruths and Jürgen Pfeffer of Carnegie Mellon’s Institute for Software Research highlight several issues involved in using social media data sets – along with strategies to address them. Among the challenges:

- Different social media platforms attract different users – Pinterest, for example, is dominated by females aged 25-34 – yet researchers rarely correct for the distorted picture these populations can produce.
- Publicly available data feeds used in social media research don't always provide an accurate representation of the platform's overall data – and researchers are generally in the dark about when and how social media providers filter their data streams.
- The design of social media platforms can dictate how users behave and, therefore, what behaviour can be measured. For instance, on Facebook the absence of a "dislike" button makes negative responses to content harder to detect than positive "likes".
- Large numbers of spammers and bots, which masquerade as normal users on social media, get mistakenly incorporated into many measurements and predictions of human behaviour.
- Researchers often report results for groups of easy-to-classify users, topics, and events, making new methods seem more accurate than they actually are. For instance, efforts to infer political orientation of Twitter users achieve barely 65% accuracy for typical users – even though studies (focusing on politically active users) have claimed 90% accuracy.

Many of these problems have well-known solutions from other fields such as epidemiology, statistics, and machine learning, Ruths and Pfeffer write. “The common thread in all these issues is the need for researchers to be more acutely aware of what they’re actually analyzing when working with social media data,” Ruths says.

Social scientists have honed their techniques and standards to deal with this sort of challenge before. "The infamous 'Dewey Defeats Truman' headline of 1948 stemmed from telephone surveys that under-sampled Truman supporters in the general population," Ruths notes. "Rather than permanently discrediting the practice of polling, that glaring error led to today's more sophisticated techniques, higher standards, and more accurate polls. Now, we're poised at a similar technological inflection point. By tackling the issues we face, we'll be able to realize the tremendous potential for good promised by social media-based research."