Twitter can now detect some harassment without user reports

Twitter has been slowly getting better at addressing abuse and harassment on its platform, despite a few embarrassing blunders.

Now, the company says it’s finally making headway on one of its most obvious problems: that most harassment isn’t dealt with unless the victim reports it. Twitter is detecting more abusive content proactively, addressing what has long been one of its most frustrating flaws.

“This time last year, 0 percent of potentially abusive content was flagged to our teams for review proactively,” the company wrote in a blog post. “Today, by using technology, 38 percent of abusive content that’s enforced is surfaced proactively for human review instead of relying on reports from people using Twitter. This encompasses a number of policies, such as abusive behavior, hateful conduct, encouraging self-harm, and threats, including those that may be violent.”

While that’s an important step, it also means the majority of harassment still isn’t detected by Twitter’s automated systems. Still, it shows the company is actually capable of making progress on a long-standing weakness.

Twitter has made other improvements of late, including the ability to add “context” to reports, and changes to prevent spammy accounts from spreading.

Twitter also said it will start testing a new feature in June that lets users hide replies to their tweets. The feature, which was spotted earlier this year, wouldn’t block people from replying but would let the tweet’s author hide individual replies from view in the app. The goal, according to Twitter, is to keep conversations “healthy” without users having to rely on blocking and muting.

We didn’t, however, get an update on Jack Dorsey’s conversational health project, in which the company is soliciting proposals to help it figure out how to measure whether a given conversation is healthy. Presumably, that’s still a work in progress, too.