“Public oversight,” writes Caplan, “needs to incorporate not only the context of speech, but the organizational dynamics of platforms, to understand where new rules should be developed (for types of content) and where more resources are necessary.”

This report provides fresh insights for those developing content moderation policy and regulation by illustrating how leading companies’ missions, business models, and team sizes influence their approaches. Along the way, the report teases out the emergent challenges and tensions these actors face in balancing context and consistency.

“Concerns about the rise of hate speech and disinformation have increased the amount of public scrutiny being placed on search engine and social media companies that are responsible for mediating much of the world’s information.” – Robyn Caplan

Caplan identifies three types of content moderation strategies:

Artisanal: These teams operate at a smaller scale. Content moderation is done manually, in-house, by employees, with limited use of automated technologies. Content moderation policy development and enforcement tend to happen in the same space. Platforms such as Vimeo, Medium, Patreon, and Discord operate in this way.

Community-Reliant: These platforms, such as Wikimedia and Reddit, rely on a large volunteer base to enforce the content moderation policies set by a small policy team employed by the larger organization. Subcommittees of volunteers are typically responsible for norm-setting in their own communities on the site.

Industrial: These companies operate at large scale, running global content moderation operations that use automated technologies, operationalize their rules, and maintain a separation between policy development and enforcement teams. This approach is most visible at platforms like Facebook and Google.

Caplan finds that each approach involves different trade-offs between moderating content in context and maintaining consistency at scale, shaped by each organization’s resources and internal dynamics.