
The problem of fake news and misinformation spread via social media has plagued almost every major political event in the last few years.

Much of the problem has been the work of bots: automated Twitter accounts used to influence opinion on social media, a practice known as “social hacking”. These automated accounts have been found responsible for spreading false information in order to provoke a response from other users on the network.

A new study, published in the Proceedings of the National Academy of Sciences of the United States of America, has tried to uncover the behaviour of these Twitter bots. Although work has been done on identifying bots, this is believed to be one of the first studies to investigate the specific strategies they use.

Earlier studies assumed that Twitter bots were sharing information without specific strategies, but this study suggests that their tactics are actually more sophisticated than that, and that bots can select and pursue a specific target.

Twitter bots target individual users

Researchers from Fondazione Bruno Kessler and the University of Southern California Viterbi School of Engineering reviewed nearly 4 million tweets in an attempt to understand the behaviour of bots, including the type of content they are sharing, and which users are being targeted.

Looking specifically at the referendum on Catalan independence that took place in 2017, the researchers discovered that influencers who supported Catalan independence were specifically targeted by the bots and became over 100 times more likely to engage with them.

Rather than broadcasting the same content to everyone, bots can generate content based on the views of their targets, particularly focusing on highly influential human users to exacerbate social conflict.

Researchers found that bots produced 23.6% of the total number of posts during the referendum on 1 October, and were able to spread certain messages among different groups, accentuating “the exposure to negative, hatred-inspiring, inflammatory content, thus exacerbating social conflict online.”

For example, the study found that during the 2017 referendum, bots generated content with negative connotations targeting the most influential individuals among the Independentists. Content encouraging violence against the government and police was aimed specifically at this group, demonstrating that bots can pursue individuals and tailor content to them.