Study finds Reddit’s controversial ban of its most toxic subreddits actually worked

It seems like just the other day that Reddit finally banned a handful of its most hateful and deplorable subreddits, including r/coontown and r/fatpeoplehate. The move was derided by some at the time as pointless, akin to shooing criminals away from one neighborhood only to have them trouble another. But a new study shows that, for Reddit at least, it has had lasting positive effects.

The policing of hate speech online has become a flashpoint for many a flamewar, especially these past few months, as white nationalists, neo-Nazis, and others with abhorrent but, strictly speaking, legal viewpoints find themselves repeatedly banned from the internet’s biggest platforms.

The practice has led sites like StormFront to seek shelter in dismal ports: off-brand hosts and small social networks that pitch themselves as tolerant of the kinds of speech “censored” elsewhere. This illustrates a common objection to banning troublesome users or communities: they’ll just go somewhere else, so why bother?

Researchers at the Georgia Institute of Technology took this question seriously: until someone actually investigates whether such bans are helpful, harmful, or some mix thereof, it’s all speculation. So they took a major corpus of Reddit data (compiled by PushShift.io) and examined exactly what happened to the hate speech, and the purveyors thereof, using the two aforementioned subreddits as case studies.

Essentially, they looked at the thousands of users who made up CT and FPH (as they call them) and quantified their hate speech usage. They then compared this pre-ban data to the same users’ activity post-ban: how much hate speech they produced, where they “migrated” to (e.g. duplicate or related subreddits), and whether “invaded” subreddits experienced spikes in hate speech as a result. Control groups were created by observing the activity of similar subreddits that weren’t banned.
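The pre/post comparison described above can be sketched roughly like this. To be clear, this is a toy illustration and not the study’s actual pipeline: the user names, rates, and the idea of measuring hate speech as a per-comment rate are all assumptions for the sake of the example.

```python
# Toy sketch of a pre-ban vs. post-ban comparison (illustrative only;
# the real study's measurement of hate speech is far more involved).
from statistics import mean

# Hypothetical per-user rates of hate-speech terms per comment.
pre_ban = {"user_a": 0.50, "user_b": 0.20, "user_c": 0.30}
post_ban = {"user_a": 0.05, "user_b": 0.02, "user_c": 0.04}

def percent_reduction(pre: dict, post: dict) -> float:
    """Average drop in hate-speech rate across users active in both periods."""
    users = pre.keys() & post.keys()
    before = mean(pre[u] for u in users)
    after = mean(post[u] for u in users)
    return 100 * (before - after) / before

print(round(percent_reduction(pre_ban, post_ban), 1))  # 89.0 for this toy data
```

The same comparison would then be run on the matched control users to separate the ban’s effect from site-wide trends.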

What they found was encouraging for this strategy of reducing unwanted activity on a site like Reddit:

Post-ban, hate speech by the same users was reduced by as much as 80-90 percent.

Members of banned communities left Reddit at significantly higher rates than control groups.

Migration was common, both to similar subreddits (e.g. overtly racist ones) and to tangentially related ones (r/The_Donald).

However, within those communities hate speech did not reliably increase, although there were slight bumps as the invaders encountered and tested new rules and moderators.

All in all, the researchers conclude, the ban was quite effective at what it set out to do:

For the definition of “work” framed by our research questions, the ban worked for Reddit. It succeeded at both a user level and a community level. Through the banning of subreddits which engaged in racism and fat-shaming, Reddit was able to reduce the prevalence of such behavior on the site.

Of course, it’s not so simple as all that. Naturally, many of the users who previously spewed racial slurs at CT just moved over to Gab or Voat, where their behavior is proudly fostered. But the point of the bans at Reddit wasn’t to eliminate racism; it was to discourage it on the platform. To that end, it accomplished its goal (I’ve asked Reddit what it thinks of the study and its conclusions). And similar strategies may work for other platforms.

The question of how to combat racism and hatred at large is really too much for a major platform like Reddit, or even Google or Facebook. The best they can hope to do is strike at it when and where it appears. But as ineffective as that might seem, it worked for Reddit and it may work elsewhere: bigotry is easy, and those who cherish it are lazy. Make it difficult and many may find it more trouble than it’s worth to harass, shame, and otherwise abuse those different from themselves online.