Facebook just removed six extremists from its platforms. Here's what should happen next.

Melissa Joskow / Media Matters

Facebook just announced[1] the removal of a notable cross-section of extremists from Facebook and Instagram: neo-Nazi sympathizer Milo Yiannopoulos[2], anti-Muslim bigot Laura Loomer[3], far-right YouTuber Paul Joseph Watson[4], conspiracy theorist Alex Jones[5] (again[6]), white supremacist and failed Republican congressional candidate Paul Nehlen[7], and Nation of Islam Minister Louis Farrakhan, who was removed for his record of anti-Semitic rhetoric. The move is a step in the right direction, opening doors to making Facebook's platforms safer and inspiring some optimism that the tech company might be capable of taking responsibility for the ways its platforms have empowered extremists. But it is clear that there is more to do.

A long record of hate

The newly banned figures owed their influence to the massive reach they were allowed to cultivate through Facebook and Instagram, using their accounts to post content that dehumanized entire communities, promoted hateful conspiracy theories, and radicalized audiences -- all while they profited from directing people to their own websites.

After being banned from most other social media platforms, including YouTube, Twitter, and Facebook itself, Jones found a safe haven[8] on Instagram, where he had continued to post Infowars content that featured hate speech, promoted conspiracy theories, and amplified other extremists.

For his part, Yiannopoulos was banned from Twitter in 2016 for leading a racist harassment campaign against actress Leslie Jones, but the former Breitbart editor went on to use Instagram[11] and Facebook[12] to spread hateful anti-Muslim rhetoric and mock people of color.

What comes next

Facebook's definitive action against some of the most glaring examples of toxicity on its platforms is welcome but long overdue -- especially considering the tech company's record of struggling[22] to enforce policies that effectively curb the reach and influence of extremists. The company's recent attempt to ban white supremacist content from its platforms proved insufficient, as its lack of specificity allowed extremists[23] to continue posting racist content as long as they weren't too explicit.

However, there are still a number of achievable measures that Facebook could take to make users safer and to convince the public of the company's resolve to fight extremism. Shireen Mitchell, who founded[24] Stop Online Violence Against Women[25] and the nonprofit Digital Sisters to promote diversity in the tech industry, has explained[26] how Facebook's moderation policies have been weaponized[27] to harass women of color -- especially those advocating for social change. Speaking to Media Matters, Mitchell said Facebook has banned people of color and activists like herself over posts that mention white people in the context of racism and white supremacy. Her experience is consistent with a Media Matters analysis[19] of Facebook pages, which found that white supremacist content is often treated as equivalent to content from groups that actually fight oppression, such as the Black Lives Matter movement -- seemingly treating white people as a protected group while ignoring the historical context of structural racism.

Some achievable measures that could help curb extremism while protecting users who experience oppression include:

Commit to enforcing standards against coded white nationalism by more effectively pairing automated and human reviews to better identify violating content. Increasing the number of people tasked with platform monitoring, and staffing those positions with culturally competent individuals, would help identify white supremacists' use of the coded extremist rhetoric and insidious false equivalences that artificial intelligence seems to be missing. Doing so would also help curb the uncritical amplification of dangerous content such as video clips of violent hate crimes or the manifestos of their perpetrators.

Proactively limit the visibility of content when its traffic is being directed from known toxic sources like the anonymous message boards 8chan and 4chan. As reported by NBC's Ben Collins, platforms are already able[28] to identify traffic coming from toxic sources. In light of recent crimes in which perpetrators[29] have gone on anonymous message boards to link to their Facebook accounts and broadcast mass shootings as extremist propaganda, the platform should more actively limit the visibility and spread of content that starts receiving a high influx of traffic from extremist sites.

Extend anti-discrimination policies currently applied to ads to include event pages and groups. Event pages and private groups are often useful tools that help extremists organize and mobilize. Existing anti-discrimination policies should also apply to content in these pages and groups.

Reassess the fact-checking partnership with Tucker Carlson's Daily Caller, which has ties to white supremacists and anti-Semites. The Daily Caller has a long history of publishing white supremacists, anti-Semites, and bigots[30]; just yesterday, it was revealed that the managing editor[31] of the affiliated Daily Caller News Foundation (DCNF) had been fired for his connections to white supremacists. DCNF provides significant funding[32] for the Daily Caller's fact-checking operation, Check Your Fact. Daily Caller founder Carlson constantly echoes[33] white nationalist talking points on his Fox News show. And yet Facebook has teamed up[32] with Check Your Fact as a fact-checker.

Pay attention to the cross-platform influence of highly followed users. White nationalists often use platforms like Instagram[11] to sanitize their images with lifestyle content while spreading extremist propaganda on other platforms. As Data & Society research affiliate Becca Lewis told Media Matters, influential extremists on Instagram “will simply mimic fashion, beauty, or fitness influencers, but will espouse white supremacist propaganda elsewhere. In those cases, Instagram acts as a kind of honeypot.” Lewis suggested Facebook emulate Medium’s cross-platform moderation approach, in which users who violate Medium’s content policies on other platforms get banned on Medium.

Increase transparency in metrics for third-party auditors. Experts have warned[34] about the risks of Facebook’s most recent privacy initiatives, which limit[35] researchers' access to the Application Programming Interface (API) -- the tools that allow individuals unaffiliated with Facebook to build software using Facebook data -- hide[36] Instagram metrics, and prioritize[37] groups on Facebook (which would allow[38] propaganda and extremism to propagate unchecked). As BuzzFeed’s Jane Lytvynenko pointed out, these moves make it harder for researchers and experts to audit content and metrics on the platforms. While that might save the tech company some bad press, it hinders outside researchers in their efforts to identify and scrutinize security concerns.

“Overall, FB frames these tools as privacy improvements but it’s not halting the vacuuming of data. Privacy doesn’t amount to platform security,” Lytvynenko wrote. “The changes are going to further hinder research into FB. Fixes suggested by lawmakers and academics, ultimately, weren't embraced.”