Here’s a bot that wants to make Twitter less awful

Toxicity on Twitter has become an accepted evil. To some, the verbal abuse is even viewed as the cost of having your 140-character soapbox. What are you expecting, decency?

As it turns out, yes. Many people do believe they deserve to engage in social media without the risk of inciting a riot. Twitter has, at best, made vague efforts to address the problem at a glacial pace. Its online abuse page in the Help Center encourages targets to remain passive, since "abusive users often lose interest once they realize that you will not respond." And, as anyone who's ever been bullied can attest, playing dead works every time.

After the atrocious incident with Zelda Williams, where abusive messages about her father's death led her to quit social media altogether, Twitter released another vague promise. Vice President of Trust & Safety Del Harvey assured everyone that Twitter was "in the process of evaluating how we can further improve our policies to better handle tragic situations like this one." However, the results of these alleged internal debates over policy changes have yet to become apparent to the public.

Some people are tired of waiting. A few third-party tools have begun cropping up that take matters into their own hands, through a method called "collaborative blocking." The Block Bot is the most widely used system, combining crowdsourced data and manual labor from the administrators to compile a tiered list of trolls for automatic blocking. When downloaded, the bot "works in the background, fetching the names of those to be blocked from a central server, and discreetly blocking them." Users of Block Bot choose the severity of their blocking needs, from Level 1 ("abusive, stalker, doxxer or faker") through 3 (the more "subjective asshole or annoyance"). The administrators suggest only a Level 2 blockage for general users, which includes the "worst of the worst" as well as "a wider selection of unpleasant people, in the opinion of the blockers." The bot's tagline boasts that it helps "you ignore people from annoyance to biggot [sic] on Twitter." Unsurprisingly, The Block Bot's page is littered with irate Twitter users blocked by the system, most often citing the "you people just can't handle differing opinions [expletive] [expletive] [expletive]" argument.
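The mechanics described above are simple enough to sketch. Here's a minimal illustration of the collaborative-blocking idea, in Python, under assumed details: the shared list format (username plus severity level) and the `block_fn` callback, which stands in for a real Twitter API call, are both hypothetical.

```python
# Hypothetical shared blocklist: (username, level) pairs, where Level 1
# is the most severe and Level 3 the most subjective. The usernames and
# the format are illustrative, not The Block Bot's actual data.
SHARED_BLOCKLIST = [
    ("troll_one", 1),       # "abusive, stalker, doxxer or faker"
    ("troll_two", 2),       # "worst of the worst" and unpleasant people
    ("mild_annoyance", 3),  # "subjective asshole or annoyance"
]

def users_to_block(blocklist, max_level):
    """Return usernames at or below the chosen severity level.

    Level 1 entries are always included; max_level=2 (the
    administrators' suggested default) adds Level 2, and so on.
    """
    return [name for name, level in blocklist if level <= max_level]

def sync_blocks(blocklist, max_level, block_fn):
    """Fetch the central list and discreetly block each matching user.

    block_fn is a stand-in for an actual API call; a real client
    would also skip accounts that are already blocked.
    """
    blocked = []
    for name in users_to_block(blocklist, max_level):
        block_fn(name)  # e.g. issue a block request for this account
        blocked.append(name)
    return blocked
```

So a Level 2 subscriber would have both `troll_one` and `troll_two` blocked in the background, while the merely annoying Level 3 accounts stay visible.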

Of course, the system has its flaws. Because the service is automatic and the list of blocked members is compiled using the administrators' judgment, users often don't know who they're blocking, or whether they'd want to block them in the first place. Moreover, the verbiage of aiming to block out the "anti-feminist obsessives, who viciously harass those who don't support their warped views" makes me doubt that such a list encapsulates the full scope of Twitter's abusive capabilities.

Two other blocking systems, Block Together and Flaminga, hope to provide similar but more comprehensive services. Block Together is in open beta, and hopes to take the basic concept of "collaborative blocking" and add plug-in functionality. Flaminga uses sharable and customizable lists to automatically mute users rather than block them, letting "you enjoy the conversation you want to have without the interruptions." The software also provides smart filters: you can ignore "tweetstorms" by muting all of an abuser's followers, maintain a "grownups only" filter that keeps blocked abusers from reaching you through newly created accounts, and apply a spam filter called "bad manners." Flaminga's creator, Cori Johnson, advertises the system as "the only Twitter client in the universe that will help you rise above the chatter!"
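The distinction between muting and blocking is worth making concrete. Here's a rough Python sketch, assuming muting simply hides a user's tweets client-side; the "tweetstorm" filter is modeled by also muting every follower of a named abuser. The data structures (a follower map, a timeline of `(user, text)` pairs) are illustrative assumptions, not Flaminga's actual internals.

```python
def build_mute_set(mute_list, storm_sources, followers_of):
    """Combine a shareable mute list with a tweetstorm filter.

    followers_of maps a username to the set of accounts following it;
    in a real client this would come from the Twitter API.
    """
    muted = set(mute_list)
    for abuser in storm_sources:
        muted.add(abuser)
        # Tweetstorm filter: mute the abuser's followers too, so a
        # pile-on never reaches the timeline.
        muted |= followers_of.get(abuser, set())
    return muted

def visible_tweets(timeline, muted):
    """Drop tweets from muted accounts; the conversation stays intact.

    Unlike a block, muted users can still see and reply to you; their
    messages just never interrupt your view.
    """
    return [(user, text) for user, text in timeline if user not in muted]
```

The appeal of this design is that it's invisible: a blocked user knows they've been blocked and can retaliate from a new account, while a muted one is simply shouting into the void.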

Toxicity is obviously not restricted to Twitter. And, as an episode of PBS's Game/Show concludes, it shouldn't just be up to software or design to fix the problem. But Twitter's hands-off approach to the kind of damage its product deals on a daily basis is troubling, to say the least. Even Xbox Live, a digital space seemingly synonymous with online verbal harassment, has taken measures to ensure users are held accountable and others are protected.

Though everyone’s experienced something unpleasant on Twitter, the mob-like harassment remains disproportionately targeted at women, LGBT individuals, and non-whites. If Twitter really believes it “helps you create and share ideas and information instantly, without barriers,” then maybe it should stop ignoring the pretty obvious barriers many of its users create for people trying to, you know, create and share ideas and information.