
Clarifying The Twitter Rules

Online behavior continues to evolve, and at Twitter, we have to ensure those changes are reflected in our rules in a way that’s easy to understand and adhere to. Today, we're publishing a new version of the Twitter Rules to clarify our policies and how we enforce them. While the fundamentals of our policies and our approach have not changed, this updated version presents our rules with more detail and examples. In the weeks ahead, we’ll launch separate pages for each of our policies to provide even more context about what each policy covers and our rationale for enforcement.

Some of the biggest updates include the following:

Abusive behavior

We are making it clear that context — including whether the behavior is targeted, whether a report has been filed and by whom, and whether the Tweet itself is newsworthy and in the legitimate public interest — is crucial when evaluating abusive behavior and determining appropriate enforcement actions. Expect more detail on how we review and enforce all of our policies, and on the range of enforcement options, in a separate update on November 14.

Self-harm

We’ve always shared resources with people experiencing suicidal or self-harming thoughts when we learn of such behavior, and removed any Tweets that encourage or promote suicide games. Our updated policy on suicide and self-harm clarifies how strictly we enforce this policy, and how we communicate with anyone promoting or encouraging this type of behavior.

Spam and related behaviors

We are more clearly defining spam and how it behaves on Twitter, and sharing the factors we consider when reviewing accounts that may be spam. As part of this update, we’re also clarifying that when we review accounts that demonstrate spam-like behavior, we focus on behavioral signals, not the factual accuracy of the information they share.

Graphic violence and adult content

We’re providing more specific detail around the types of content we consider to be “graphic violence” or “adult content.” We’re also updating our media policy Help Center page so it includes examples that help set expectations around the types of content covered by this policy. Please note that the media policy will be updated again on November 22, to account for hateful imagery.

We have worked on this clarified version of our rules for the past few months to ensure it takes into account the latest trends in online behavior, considers different cultural and social contexts, and properly sets expectations around what’s allowed on Twitter. We incorporated feedback from our global Trust and Safety Council, which provided important guidance about how best to present our policies to the world. On November 22, we will share another version of our rules, which will include new policies around violent groups, hateful imagery, and abusive usernames. We are constantly evaluating our rules and iterating to make them clearer. As always, we appreciate your feedback, and we look forward to continuing to work to make Twitter safer, together.