Twitter details how it reviews and enforces rules around hate speech, violence and harassment

Twitter has been under fire lately (slash always) for how it handles harassment and abuse on its platform. In an effort to provide some insight into its thinking, Twitter has added new articles to its help center that detail how the company reviews and enforces its rules, as well as the factors it considers in its decision-making process.

In a subtopic on "legitimate public interest," for example, Twitter says it wants to ensure people can see all sides of an issue. With that in mind "there may be the rare occasion when we allow controversial content or behavior which may otherwise violate our Rules to remain on our service because we believe there is a legitimate public interest in its availability."

In determining if a piece of content could be of legitimate interest to the public, Twitter says it looks at the source of the content, its potential impact on the public and the availability of counterpoints.

"If the Tweet does have the potential to impact the lives of large numbers of people, the running of a country (emphasis TC's), and/or it speaks to an important societal issue then we may allow the content to remain on the service," Twitter explains.

Twitter does not explicitly mention President Donald Trump, but my bet is that this is how Trump is able to do essentially whatever he wants on Twitter. The help article goes on to explain that content from some people, groups and organizations "may be considered a topic of legitimate public interest by virtue of their being in the public consciousness."

The explanation on what counts as legitimate public interest lives inside Twitter's new help section article, "Our approach to policy development and enforcement philosophy." In that article, Twitter lays out its policy development process, enforcement philosophy and its range of enforcement options.

When determining whether to take action, for example, Twitter says context matters. It looks at factors like whether the behavior is directed at a person, group or protected category of people, whether the content is a topic of "legitimate public interest" and whether the person has a history of violating Twitter's policies.

Twitter says it starts by assuming that people don't intend to violate its rules, noting that "Unless a violation is so egregious that we must immediately suspend an account, we first try to educate people about our Rules and give them a chance to correct their behavior."

Once Twitter has determined a piece of content violates its rules, it can take a range of actions. It can limit a tweet's visibility, require someone to delete the offending tweet before they can tweet again and hide a tweet until the violator officially deletes it.

At the DM level, Twitter can require the violator to delete the message or block the violator on behalf of the reporter. At the account level, Twitter can put an account in read-only mode, which limits the person's ability to tweet, retweet or like content "until calmer heads prevail."

All of the above and more is now featured in Twitter's help center. The information itself is not new, but it does provide more detail than Twitter has offered in the past. It comes after Twitter earlier this month posted a new version of its rules, with updated sections pertaining to abuse, spam, violence, self-harm and other topics.