Internet trolls are made, not born, CIS researchers say

You, too, could become a troll. Not the mythological creature that hides under bridges, but one of those annoying people who post disruptive messages in internet discussion groups, "trolling" for attention, or who throw out off-topic, racist, sexist or politically controversial rants. The term has come to be applied to posters who use offensive language, harass other posters and generally conjure up the image of an ugly, deformed beast.

It has been assumed that trolls are just naturally nasty people being themselves online, but according to Cornell research, what makes a troll is a combination of a bad mood and the bad example of other trolls.

“While prior work suggests that trolling behavior is confined to a vocal and anti-social minority, ordinary people can, under the right circumstances, behave like trolls,” said Cristian Danescu-Niculescu-Mizil, assistant professor of information science. He and his colleagues actually caused that to happen in an online experiment.

They described their research at the 20th ACM Conference on Computer-Supported Cooperative Work and Social Computing, Feb. 25–March 1 in Portland, Oregon, where they received the Best Paper Award. The team included Stanford University computer science professors Michael Bernstein and Jure Leskovec, and their doctoral student Justin Cheng ’12.

To tease out possible causes of trolling, the researchers set up an online experiment. Through the Amazon Mechanical Turk service, where people can be hired to perform online tasks for a small hourly payment, they recruited people to participate in a discussion group about current events.

Participants were first given a quiz consisting of logic, math and word problems, then shown a news item and invited to comment. To compare the effects of positive and negative mood, some participants were given harder questions or were told afterward that they had performed poorly on the quiz. To compare the effects of exposure to other trolls, some were led into discussions already seeded with real troll posts copied from comments on CNN.com. The experiment showed that negative mood and bad example could lead to offensive posting.

Following up, the researchers reviewed 16 million posts on CNN.com, noting which posts were flagged by moderators, and applying computer text analysis and human review of samples to confirm that these qualified as trolling. They found that as the number of flagged posts among the first four posts in a discussion increases, the probability that the fifth post is also flagged increases. Even if only one of the first four posts was flagged, the fifth post was more likely to be flagged. This study was described in a separate paper, “Antisocial Behavior in Online Discussion Communities,” presented at the Ninth International AAAI Conference on Web and Social Media, May 2015 at Oxford University.
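The measurement described above amounts to estimating a conditional rate: given how many of a discussion's first four posts were flagged, how often is the fifth post flagged too? The sketch below shows one minimal way such a rate could be computed; the `contagion_rates` function and the toy thread data are hypothetical illustrations, not the researchers' actual analysis pipeline or the CNN.com corpus.

```python
from collections import defaultdict

def contagion_rates(threads):
    """For each count k of flagged posts among a thread's first four
    posts, estimate the probability that the fifth post is also flagged.
    Each thread is a list of booleans (True = flagged by moderators)."""
    flagged_fifth = defaultdict(int)
    totals = defaultdict(int)
    for thread in threads:
        if len(thread) < 5:
            continue  # need at least five posts to measure contagion
        k = sum(thread[:4])               # flagged posts among the first four
        totals[k] += 1
        flagged_fifth[k] += thread[4]     # True counts as 1
    return {k: flagged_fifth[k] / totals[k] for k in totals}

# Toy data illustrating the reported pattern: threads whose early posts
# were flagged end up with a flagged fifth post more often.
threads = [
    [False, False, False, False, False],
    [False, False, False, False, False],
    [False, False, False, False, True],   # k=0: 1 of 3 fifth posts flagged
    [True, False, False, False, True],
    [True, False, False, False, True],
    [True, False, False, False, False],   # k=1: 2 of 3 fifth posts flagged
]
rates = contagion_rates(threads)
```

On this toy data the estimated rate rises with the number of early flags, which is the shape of the effect the paper reports at much larger scale.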

Using day and time as a stand-in for mood, they found that ordinary posters were more likely to troll late at night, and more likely on Monday than Friday.

It might be possible to build some troll reduction into the design of discussion groups, the researchers propose. A person likely to start trolling could be identified from recent participation in heated debates, and mood might be inferred from keystroke patterns. In these and other cases, a time limit on new postings might allow for cooling off. Moderators could remove troll posts to limit contagion. Allowing users to retract posts may help, they added, as would reducing other sources of user frustration, such as poor interface design or slow loading times.

The point of the work, the researchers conclude, is to show that not all trolling is done by inherently anti-social people; looking at the whole situation may better reflect how trolling actually occurs, and perhaps help us see less of it.