“Most of these stories are old or sensationalized or even completely not true. Yet they keep reappearing over and over again,” he said. “There clearly is a big ‘demand’ for such articles if you see how many people are willing to like and share them.”

Extremism researchers say the horrifying attack in New Zealand should be the catalyst for platforms like Facebook to focus more on removing anti-Muslim hate speech. But they aren’t optimistic that will happen.

“Islamophobia happens to be something that made these companies lots and lots of money,” said Whitney Phillips, an assistant professor at Syracuse University whose research includes online harassment. She said this type of content generates engagement, which in turn keeps people on the platform and available to see ads.


In an emailed statement, a Facebook spokesperson said the company has been taking down content specific to the attack — it said it had removed 1.5 million videos of the attack in the first 24 hours — but addressed questions about anti-Muslim hate speech by linking to a blog post from 2017.

“Since the attack happened, teams from across Facebook have been working around the clock to respond to reports and block content, proactively identify content which violates our standards and to support first responders and law enforcement,” the statement said. “We are adding each video we find to an internal database which enables us to detect and automatically remove copies of the videos when uploaded again. We urge people to report all instances to us so our systems can block the video from being shared again.”

Megan Squire is an Elon University computer science professor who has been collecting data about extremist behavior on 15 different platforms since 2016. She told BuzzFeed News that platforms typically move to take down anti-Muslim hate speech only after a reporter asks about a specific group of pages, while the larger structural issues go unaddressed.

“Sometimes, their ultimate decision is a good decision, the problem is that it comes from a place of corporate ass-covering instead of a strong ideological position,” Phillips said.

This is true for anti-Muslim hate speech and other bigoted speech on social media platforms, none of which happens in isolation, Phillips said. When Infowars was de-platformed, it was companies responding to news of the day. The same is happening with anti-vaccination disinformation across Facebook, YouTube, and others.

“The trickiest aspect of this story is how good for business hate is for social media platforms,” said Phillips.

Structural problems in journalism also contribute by focusing on the shooter instead of their victims. “I think that there’s not a lot of sympathetic portrayals of individual Muslim people and so the ideas about Islamophobia get to be these abstract concepts that don’t connect to individual people,” Phillips said.

The Facebook algorithm, for example, recommends related groups that can point people to extremism. Even after the New Zealand attack, the company allowed groups with names like “War against Islam” and “Bikers Against Radical Islam Europe” to exist. They have memberships in the thousands.

Groups are also frequently created with fake identities or through pages, making it difficult to track their origin — and if the groups are "closed" or "secret," only members can see inside them. That also means they're generally poorly moderated — groups are tasked with policing themselves and there's no way on Facebook to report an entire group, only the content within it.

“I believe that because of the changes Facebook made, that platform is one of the most safest places for them to coordinate online,” she said. “They know that by using the social media platforms they can spread their message and they figured out how to do that.”

Squire says she can easily find anti-Muslim groups on Facebook and is currently tracking about 200 of them. Some choose names designed to play into free speech arguments, while others spread anti-Muslim hate speech without fear.

“They’ll name their groups something like ‘Infidels against radical Islam,’” she said. “So they claim that they’re not against all Islam but they’re pumping out the same propaganda.”

Shireen Mitchell, the founder of Stop Online Violence Against Women, researches the impact of social media on its users. She points out that those who spread hate know how to game social media networks, so an algorithmic solution from the companies will not be enough.

“They’re using the tool as the tool was designed,” Mitchell said. “People have to be honest that bots and trolls exist. There’s too much denial. That in itself feeds the trolls.”

In her study of how the Russian Internet Research Agency used social media to target black issues during the 2016 election, she saw that the key was to find a wedge issue and capitalize on the rage. It was about hijacking the conversation. Mitchell said that strategy works because companies are more afraid of censoring voices than keeping their users safe.

“They’re putting censorship up against safety,” Mitchell said. “Safety should be priority, not censorship.”