Twitch aims to squash toxic chat with a new moderation tool

By Sillicur at Tuesday, December 13, 2016 9:32:00 AM

If you’ve ever watched a live stream of a big event on Twitch.tv, such as Dota 2’s Boston Major, a top-tier professional Counter-Strike: Global Offensive (CS:GO) match, or any other popular eSports title, you’ve seen the extremely toxic behaviour in the chat window. It includes racist, sexist and just about any other offensive content you can think of.

Yes, of course you can simply go into full-screen mode or turn off the chat to avoid these comments, but the whole point of a chat window is to discuss the stream you are watching. In several years of watching Dota 2 and CS:GO matches on Twitch, I have not once seen a civil discussion. Twitch aims to change all that with a new moderation tool released today.

AutoMod to the rescue

AutoMod is a new tool that uses “machine learning and natural language processing” to help Twitch streamers keep toxic chat out of their channels. The tool has reportedly been in testing for months, and completed a successful trial run during a US political convention live stream.

The tool doesn’t just block clearly offensive text, such as racist words; it also detects emotes or characters strung together in a certain way by someone trying to evade the chat filter. However, it is important to note that channel owners can choose how strictly the filter works, so you might still find streams full of toxic behaviour if the owner chooses to allow it.

According to Kotaku, Twitch sent out a press release explaining how AutoMod works, which reads:

"AutoMod is a unique moderation tool that does more than filter inappropriate chat. When a user sends a message that AutoMod flags as potentially inappropriate, the message is held in a publishing queue awaiting moderator approval. AutoMod also enables broadcasters to adjust the degree of filtering in the event they are more or less conservative about the type of dialogue they want to see in their chat. Beyond identifying inappropriate words and phrases, AutoMod can detect potentially inappropriate strings of emotes and other characters or symbols that others could use to evade filtering." – Source
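To make the flow described in the press release concrete, here is a minimal sketch of how such a system could work in principle: flagged messages are held in a queue until a moderator approves them, the strictness level is broadcaster-adjustable, and simple character normalisation catches obvious filter evasion. This is purely illustrative; the class, blocklists and method names are invented for this example and bear no relation to Twitch’s actual AutoMod implementation.

```python
import re

# Illustrative sketch only -- NOT Twitch's actual AutoMod implementation.
# Hypothetical blocklists per strictness level; a real system would use
# machine learning rather than plain word lists.
BLOCKLISTS = {
    1: {"slur"},                    # lenient: only the worst terms
    2: {"slur", "insult"},          # moderate
    3: {"slur", "insult", "rude"},  # strict
}

def normalise(message: str) -> str:
    """Collapse simple evasion tricks, e.g. 's l u r' or 's.l.u.r'."""
    return re.sub(r"[\s.*_-]", "", message.lower())

class AutoModSketch:
    def __init__(self, level: int = 2):
        self.level = level  # broadcaster-adjustable strictness
        self.held = []      # publishing queue awaiting moderator review

    def submit(self, user: str, message: str) -> str:
        """Publish a clean message, or hold a flagged one for review."""
        text = normalise(message)
        if any(term in text for term in BLOCKLISTS[self.level]):
            self.held.append((user, message))
            return "held for review"
        return "published"

    def approve(self, index: int):
        """Moderator approves a held message, releasing it to chat."""
        return self.held.pop(index)

mod = AutoModSketch(level=2)
print(mod.submit("viewer1", "great play!"))  # published
print(mod.submit("viewer2", "s l u r"))      # held for review
```

Note how the spaced-out message still gets caught: normalisation strips the separators before the text is checked, which is the same idea as AutoMod detecting strings of characters arranged to slip past a naive filter.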

Of course, no tool will be perfect at launch, but the initiative taken by Twitch to stamp out toxic behaviour is admirable. When chat is so toxic, some people might exclude themselves as they fear ridicule from trolls. According to Venture Beat, Twitch inclusivity boss Anna Prosser Robinson said:

“Inclusivity is something that is important to both our community and our brand. One of the best ways we can help bring about change is to provide tools and education that empower all types of voices to be heard. AutoMod is one of those tools, and we hope it will encourage our users to join us in our continued focus on fostering a positive environment on social media.” - Source

It is without a doubt an excellent move by Twitch. Hopefully, more people will now be able to voice their opinions on Twitch chat without being mercilessly harassed by trolls.

One concern is that AutoMod could flag some innocent comments, but if the tool does snuff out toxic behaviour, a few messages getting lost in the battle against those bent on creating a toxic chat environment isn’t all that big of a deal.

What do you think about the idea of AutoMod and what has your experience been with Twitch.tv chat in the past? Let us know in the comment section below.