Should social media companies ban Holocaust denial from their platforms? What about conspiracy theorists who spew hate? Does good corporate citizenship mean platforms should remove offensive speech or tolerate it? The content moderation rules that companies develop to govern speech on their platforms will have significant implications for the future of freedom of expression. Given that the prospects for compelling platforms to respect users’ free speech rights are bleak within the U.S. system, what can be done to protect this important right?

In June 2018, the United Nations’ top expert for freedom of expression called on companies to align their speech codes with standards embodied in international human rights law, particularly the International Covenant on Civil and Political Rights (ICCPR). After the controversy over de-platforming Alex Jones in August 2018, Twitter’s CEO agreed that his company should root its values in international human rights law and Facebook referenced this body of law in discussing its content moderation policies.

This is the first article to explore what companies would need to do to align the substantive restrictions in their speech codes with Article 19 of the ICCPR, the key international standard for protecting freedom of expression. To examine this issue in a concrete way, this Article assesses whether Twitter’s hate speech rules would need to be modified. It also evaluates the potential benefits of, and concerns with, aligning corporate speech codes with this international standard. This Article concludes that it would be both feasible and desirable for companies to ground their speech codes in this standard; however, further multi-stakeholder discussions would be helpful to clarify certain issues that arise in translating international human rights law into a corporate context.