This episode features tech and policy journalist Timothy Lee, discussing a question that's increasingly in the spotlight: How much should tech companies be actively moderating their users' speech? For example, should Facebook be trying to fight fake news? Should Twitter ban bullying? Should Reddit ban subreddits that it considers hate speech? Timothy and Julia look at the question not just from the legal perspective, but from the moral and strategic perspectives as well.

Reader Comments (14)

One of the main points that I think you two missed was how you define the terms "conservative", "right", and "left".

You both talked about how companies that take down racist and white-supremacist websites could anger "conservatives", and you seemed to imply that this could alienate mainstream conservatives. But I do not see why white supremacists need to be identified with mainstream conservatives.

White supremacists are just as conservative as ISIS is conservative. ISIS believes in racial superiority, they are religious fundamentalists, they believe in traditional gender roles, and they want to return to a nostalgic past.

Yet we don't assume that mainstream American conservatives will be angry when Facebook or Google starts to censor ISIS.

It is also not historically true that traditional "business conservatives" have been associated with racial supremacy. FDR and Woodrow Wilson were "liberal" and "progressive", but they were also more racist than their Republican opponents. They either believed in, or were willing to use for political purposes, racial supremacy and the subjection of people of color.

One of the main tests I think you need to apply is to ask whether a person would be equally angered by the censoring of racial supremacists (especially Nazis or the KKK), ISIS recruiters, and environmentalist terrorist groups (like ELF, the Earth Liberation Front). If someone is angered only when the KKK or Nazis are censored, but encourages the censorship of ISIS and ELF, then they almost certainly support the KKK or the Nazis but are uncomfortable saying so openly. The same is true if someone is offended only by the censoring of ELF or ISIS recruiters. But if someone supports, or opposes, the censorship of all three of these opposing groups alike, then they are simply for or against censorship as a tactic.

We should stop saying that "conservatives" are offended by the censoring of Nazis. There are plenty of conservatives in the US and around the world who are for censoring Nazis. Instead, the people against censoring Nazis are either Nazis themselves, Nazi sympathizers, or opposed to censorship across the board. And that last group exists in all ideological camps. Most of the US conservatives you discussed who are offended by the censorship of Nazis are simply Nazi sympathizers. The way to differentiate between sympathizers and anti-censorship people is to see whether the person who is against censoring Nazis is for or against censoring ELF and ISIS.

The courts that have mandated public access to private malls for campaign and protest purposes have seriously erred. These types of rulings violate private property rights under the 5th and 14th Amendments of the US Constitution. The US Supreme Court should revisit and overrule PruneYard Shopping Center v. Robins (1980).

Government treatment of private social media companies as utilities would ruin the social media companies. The social media companies have ample incentive to regulate the content of their users. If a social media site transforms into a morass of hate speech, bullying, obscenity, and other distasteful discourse, people will move on and use a different social media site. Free Speech and an Open Market provide the correct remedy for offensive content.

I've heard conservatives complain that Ayatollah Khamenei and the Specially Designated Global Terrorist entity Al-Manar are allowed to have Twitter accounts, and that tech companies weren't doing enough to stop ISIS propaganda, though you could argue that aiding ISIS is illegal while racist hate speech is protected. But I doubt that all pro-ISIS tweets are illegal. On the flip side, the SPLC monitors hate speech, and the Middle East Media Research Institute (MEMRI) translates and publishes, among other things, terrorist media "to alert the public to threats and in no way constitutes an endorsement of such activities"; yet its YouTube account keeps being terminated for copyright infringement, and it could also be mistaken for promoting terrorist propaganda. How do you alert the public about terrorist messages without inadvertently promoting them?

@Will, Even as conservatives like Ben Shapiro are harassed by Nazis for criticizing Trump, leftists call Shapiro a "Nazi" for being conservative. So if the tech industry is run by such leftists, conservatives understandably oppose censoring anyone labeled a "Nazi", since they could be next.

Facebook and Instagram ban porn; Twitter and Reddit don't, even though they all have a minimum age requirement of just 13, which seems wrong for a site that allows hardcore porn, but apparently it's legal. If they can decide whether or not to allow legal porn, then they can decide whether or not to allow legal hate speech.

Platforms with minimal moderation like Gab and 8chan get overrun by trolls, Nazis, and pedophiles, no fun for normal people. But excessive censorship is no fun either. Yahoo News censors comments with the words "Nazis," "Republic of the Niger," "Pussy Riot," and "Senator Dick Durbin."

Any search engine or news aggregator has to decide what to display and in what order. It could favor click-bait like a trashy tabloid, or it could favor true info over fake info, though accuracy is harder to quantify than clicks.

"How much should tech companies moderate speech?" is an odd question. They already moderate speech. They moderate it in the extreme. They have black box algorithms, not subject to any kind of review, that decide what you do and don't get to see.

Whether the algorithms filter on truth, bullying, or click-responsiveness, they DO moderate. So the question is: given that these are now the de facto public square, what filtering do we and don't we want?

Your point is well taken, but the proof of the pudding is in the equivalencies.

I'm not sure where my boundary for censorship might lie, were I to wield such power, but if I were to block pro-ISIS Islamists and not American soi-disant Nazis, it wouldn't be out of sympathy for white supremacist ideas; it would be because the two are vastly different in the magnitude of their threat. Marching down the street with torches and swastikas to the jeers of the local populace, memes of Obama as a monkey or Bernie Sanders being fed into an oven, or whatever, are comparable to the everyday antisemitism, homophobia, and misogyny expressed in polls by moderate Muslims worldwide. To find a white-supremacist equivalent to pro-ISIS Islamism, you'd have to go to something like the pre-Holocaust antisemitic violence of the 20s and 30s. Nothing like that is remotely going on. (Police violence against young black men is a serious problem, but it reflects a lot of deep game-theoretic problems in law enforcement that are hard to solve, and is not motivated by the level of evil ideology that Islamism is.)

Of course, you may consider my equivalencies grossly unfair, and conclude that I'm tacitly in favor of the torch-wielding assholes. I hope you don't. At least I can offer in my defense that ELF seems a lot like the contemporary Nazis, a bunch of losers trying to tap into a fleeting feeling of strength, but no great threat to civil society.


Here's an idea that I didn't hear mentioned; it has the advantage of taking power away from governments and large corporations and putting it in the hands of regular people: user-level moderation or filtering.

As background, some sites like Slashdot use something called user moderation: every now and then users get a number of moderator points, which they can use to upvote or downvote articles or comments. Upvoted items are more likely to be seen; downvoted items are less likely to be seen. This can work well when the site's userbase is homogeneous, and it is effective against spammers and other disruptive posters.
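The communal scheme described above could be sketched roughly like this. All the names, the baseline score, and the threshold here are invented for illustration; Slashdot's real system also has karma, score caps, and metamoderation, none of which is modeled.

```python
# Rough sketch of Slashdot-style communal moderation (hypothetical
# names and numbers, not the real implementation).

class Item:
    def __init__(self, text, score=1):
        self.text = text
        self.score = score  # every item starts at a baseline score

def moderate(item, points, delta):
    """Spend one moderator point to upvote (+1) or downvote (-1) an item."""
    if points <= 0:
        raise ValueError("no moderator points left")
    item.score += delta
    return points - 1

def visible(items, threshold=0):
    """Readers browse at a threshold; lower-scored items are hidden."""
    return [i for i in items if i.score >= threshold]

a = Item("insightful comment")
b = Item("spam")
points = 5
points = moderate(a, points, +1)  # a.score is now 2
points = moderate(b, points, -1)  # b.score is now 0
print([i.text for i in visible([a, b], threshold=1)])  # ['insightful comment']
```

The key property is that one shared score per item drives everyone's view, which is exactly what the next idea changes.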

My idea is to extend the user moderation: you can upvote and downvote articles, responses, tweets, posts, or whatever, and *you* see more things like what you upvoted and fewer things like what you downvoted. Others aren't affected by your choices, except that you can subscribe to others' preferences and have them included in your personalized rating/filtering system.

There's even a place for specialized preference lists that target individual topics. For example, I might want to eliminate neo-Nazi propaganda and gun-control discussions, whether pro or anti, but read more about AI and biathlon, without having to upvote or downvote hundreds of items. The lists could have reputation scores (which you could upvote, downvote, or follow), with the idea that you could see how well a given list achieves its stated goal without collateral damage.
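A rough sketch of this personalized scheme, with subscribable topic lists, might look like the following. All class names, item IDs, and the weighting of your own votes versus subscribed lists are assumptions made up for the example.

```python
# Hypothetical sketch of per-user filtering: your own votes affect only
# your feed, and you can fold in other people's preference lists.

class PreferenceList:
    """A named, subscribable set of votes, e.g. a topic-specific blocklist."""
    def __init__(self, name):
        self.name = name
        self.votes = {}  # item_id -> +1 (show more) or -1 (show less)

    def vote(self, item_id, delta):
        self.votes[item_id] = delta

class User:
    def __init__(self):
        self.own = PreferenceList("personal")
        self.subscriptions = []  # PreferenceLists folded into scoring

    def score(self, item_id):
        # Your own votes outweigh subscribed lists; the 2x weight is arbitrary.
        s = 2 * self.own.votes.get(item_id, 0)
        s += sum(lst.votes.get(item_id, 0) for lst in self.subscriptions)
        return s

    def feed(self, item_ids, threshold=0):
        return [i for i in item_ids if self.score(i) >= threshold]

# A topic-targeted list someone else maintains; subscribing to it
# filters those items out of your feed without per-item voting.
no_nazis = PreferenceList("no-neonazi-propaganda")
no_nazis.vote("nazi-meme-42", -1)

alice = User()
alice.subscriptions.append(no_nazis)
alice.own.vote("biathlon-news-7", +1)
print(alice.feed(["nazi-meme-42", "biathlon-news-7", "misc-1"]))
# ['biathlon-news-7', 'misc-1']
```

A list's reputation score could then be computed the same way: users upvote or downvote the list itself, so you can see how well-regarded it is before subscribing.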

They should not censor any type of content. Shutting out "neo-Nazis" is just a dishonest and cowardly way of censoring ideological opponents. All the major social media platforms are controlled and moderated by liberals, so they have an incentive to shut out ideological opponents. "Hate speech" is just a poor excuse for censoring alternative viewpoints. And it is really just an ill-defined, made-up bullshit term.