Twitter steps up its fight against online trolls with a new policy to encourage healthy conversations

by Ali

Twitter has announced a new safety initiative: ‘behavior-based signals’ that identify offensive tweets in order to promote healthier conversations and a better experience.

Twitter Safety introduced a new way to filter out offensive speech in tweets. (Image source: Twitter Safety)

Twitter will automatically show replies that contribute to a healthy discussion and hide those that promote abuse. However, users can still view the full conversation simply by clicking the “Show more replies” option.

The company is tackling problematic behaviors in public conversations. (Image source: Twitter Safety)

On Tuesday, Twitter Safety tweeted about the new way it will filter out offensive tweets, stating that it has found a way to tackle the problem through technology. According to Twitter, the new approach focuses entirely on ‘behavior, not content’.

The San Francisco-based company stated that its new algorithms will ‘improve the health of public conversations’. Twitter will identify these ‘troll tweets’, reported by users, and specifically examine those who repeatedly tweet and mention accounts that don’t follow them.

According to a blog post written by Twitter’s VP of Trust and Safety, Del Harvey, and Director of Product Management, Health, David Gasca, Twitter will use its new approach to curb abusive behavior that does not technically violate its policies.

Regarding online abuse, they noted that only 1% of accounts are reported by other users, yet these accounts have an impact large enough to demand attention. Twitter will use the new behavioral techniques to detect signs of unhealthy conversations without having to wait for users to report them in the first place.

Terming it ‘troll-like behavior’, they explained how such conduct has an undesirable impact without violating Twitter’s abuse policy.

Harvey and Gasca said, “Some troll-like behavior is fun, good and humorous. What we’re talking about today are troll-like behaviors that distort and detract from the public conversation on Twitter.”

They further added, “Some of these accounts and tweets violate our policies, and, in those cases, we take action on them. Others don’t but are behaving in ways that distort the conversation.”

Twitter will use certain signals to detect potential troll-like behavior. Indicators may include an account that has not confirmed its email address, the same person signing up for multiple accounts, or signs of a coordinated attack. In such cases, Twitter will hide the undesirable replies to promote a safe, abuse-free environment.
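To illustrate the idea, here is a minimal, hypothetical sketch of how signals like these could be combined into a score that decides whether a reply is shown by default or collapsed behind “Show more replies”. The signal names, weights, and threshold are invented for illustration; Twitter has not published its actual model.

```python
# Hypothetical sketch of behavior-based signal scoring (not Twitter's real system).
# Each signal from the article contributes a weight; high totals get collapsed.

def troll_score(account):
    """Sum weighted behavior signals for an account (higher = more troll-like)."""
    score = 0.0
    if not account.get("email_confirmed", True):
        score += 0.3  # account has not confirmed its email address
    if account.get("accounts_from_same_signup", 1) > 1:
        score += 0.3  # same person signing up for multiple accounts
    if account.get("mentions_of_non_followers", 0) > 10:
        score += 0.2  # repeatedly mentioning accounts that don't follow back
    if account.get("coordinated_attack_flag", False):
        score += 0.4  # signs of a coordinated attack
    return score

def visible_by_default(account, threshold=0.5):
    """Replies from accounts scoring at or above the threshold are collapsed."""
    return troll_score(account) < threshold

suspicious = {"email_confirmed": False, "accounts_from_same_signup": 3}
print(visible_by_default(suspicious))  # False: collapsed behind "Show more replies"
```

Note that, as the article stresses, such a score is based on behavior rather than the content of the tweets themselves.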

Signing up for multiple accounts and not confirming email addresses are a few of the indicators. (Image source: Pexels)

Twitter says that since beginning its small-scale experiment, it has seen fewer abuse reports. (Image source: Twitter Safety)

Elaborating further on the policy, Twitter has been testing it on a small scale. The results so far are promising: a 4% drop in abuse reports from searches and an 8% drop in abuse reports from conversations.