In the wake of the U.S. Election, as Facebook and Google come under fire for the dissemination of fake “news” in their News Feed and search results, Twitter is tackling another area that’s been a flashpoint issue not only recently, but for years: the social media platform today is unveiling some major updates to its safety policy, aimed at helping users weed out abusive Twitter accounts and Tweets.

Abusive or hateful content — defined by Twitter as “specific conduct that targets people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or disease” — can now be reported to Twitter for removal not just by the targets of that abuse, but by bystanders.

On top of this, users can now mute not just accounts, but specific keywords and phrases from any account, as well as conversations they're tagged in but don't want to keep seeing (the Twitter "canoe" conundrum), so that none of these come up in their mentions. Twitter also said that it is retraining its support teams and overhauling its systems to deal with abuse reports more quickly and sensitively.

More detail on how these will work below.

All in all, these changes will come as welcome news to Twitter users who have seen a lot of ugliness and hatred unfold and go viral on the platform. But — given that there were leaks of this update as far back as August — it is also very late, perhaps cripplingly so. Twitter, with 317 million monthly active users, has been facing a lot of growth problems and had been exploring options to sell itself. But at least two potential buyers, Disney and Salesforce, both reportedly backed away in part because of abuse issues on the platform.

Notably, these are the first updates to Twitter’s privacy and abuse policy in a year, a spokesperson told me.

That’s not to say that this hasn’t been something that Twitter has been working on behind the scenes. “Because Twitter happens in public and in real-time, we’ve had some challenges keeping up with and curbing abusive conduct,” the company noted in a blog post published today. “We took a step back to reset and take a new approach, find and focus on the most critical needs, and rapidly improve.”

(Sidenote: Twitter’s trolling and abuse problem may not be getting highlighted at the moment in the same way that Facebook’s fake news problem has been in light of the last U.S. election and the role it may have played — Facebook is significantly bigger, with 229 million MAUs in the U.S. versus only 67 million at Twitter.

But that doesn’t mean the two are unconnected. One of the most talked-about Twitter suspensions this year came when it banned Milo Yiannopoulos’ @nero account after he incited a band of people to Tweet abuse at actor Leslie Jones. Yiannopoulos is the tech editor at conservative blog Breitbart and a very popular and vocal Trump supporter.)

Here is more detail on how Twitter’s new policies will work:

Twitter says that it will update its “mute” feature — which first debuted in 2014 as a way of silencing specific accounts without those account holders knowing you have done so — so that you now can silence in a more granular way in your notifications/mentions.

You can now select keywords and phrases that you would like to weed out of your notifications view, and if you have ever been part of an ongoing conversation by way of your name getting tagged — the infamous Twitter “canoe” — you can now drop out of those, too. Twitter said that this update will be rolled out “in the coming days” and will continue to be updated.

Notably, muting in your notifications won’t get them out of your full Timeline, though. “We’ve seen that abuse is acutely felt in notifications, where the content is sent directly to you and it’s not necessarily something you’re seeking out,” a spokesperson said. “We wanted to solve for that first, especially because there are several ways you can already control what you see in your home timeline (unfollow, block and mute). That said, we’re working on expanding mute to other parts of your Twitter experience too.”

Here’s how the Mute Words feature will look:

The new reporting feature, meanwhile, is still aimed at ferreting out abuse, but the key change is who can do the reporting:

Now, if you are watching an abusive situation unfold, you can report it yourself, even if you are not directly involved (bystanders being a central part of how Twitter’s open platform works).

Twitter uses a team of humans (not algorithms) to process these requests, and my guess is that when a specific incident is reported by multiple people, that will likely ramp up the attention it gets.

Apart from handling incidents that get reported more frequently, the enforcement support team will be sharpening up its act more generally: Twitter said that it has retrained everyone, and it will continue to do so in a “refresher” program that will be complemented by a new set of policies, systems, and internal tools to act faster and more effectively.

“Our goal is a faster and more transparent process,” Twitter writes in a blog post.