Diana MacPherson called my attention to this new post by Twitter on conduct that they’re going to block. And they’re starting with religion. Click on the screenshot to read:

Here’s what Twitter says:

We create our rules to keep people safe on Twitter, and they continuously evolve to reflect the realities of the world we operate within. Our primary focus is on addressing the risks of offline harm, and research* [JAC: they give two studies in the article’s footnotes] shows that dehumanizing language increases that risk. As a result, after months of conversations and feedback from the public, external experts and our own teams, we’re expanding our rules against hateful conduct to include language that dehumanizes others on the basis of religion.

Starting today, we will require Tweets like these to be removed from Twitter when they’re reported to us:

Twitter notes that if you’ve already put one of these up, it will be removed but your account won’t be blocked. But after the rule was set (July 9, 2019), accounts may be deleted if they start posting stuff like the above. (But how would you know? Who reads Twitter-policy updates? Shouldn’t you at least get a warning?)

But note that they’re starting not with ethnicity, race, or other common subjects said to attract “hate speech.” They’re starting with religion. Why? Here’s what they say:

Why start with religious groups?

Last year, we asked for feedback to ensure we considered a wide range of perspectives and to hear directly from the different communities and cultures who use Twitter around the globe. In two weeks, we received more than 8,000 responses from people located in more than 30 countries.

Some of the most consistent feedback we received included:

Clearer language — Across languages, people believed the proposed change could be improved by providing more details, examples of violations, and explanations for when and how context is considered. We incorporated this feedback when refining this rule, and also made sure that we provided additional detail and clarity across all our rules.

Narrow down what’s considered — Respondents said that “identifiable groups” was too broad, and they should be allowed to engage with political groups, hate groups, and other non-marginalized groups with this type of language. Many people wanted to “call out hate groups in any way, any time, without fear.” In other instances, people wanted to be able to refer to fans, friends and followers in endearing terms, such as “kittens” and “monsters.”

Consistent enforcement — Many people raised concerns about our ability to enforce our rules fairly and consistently, so we developed a longer, more in-depth training process with our teams to make sure they were better informed when reviewing reports. For this update it was especially important to spend time reviewing examples of what could potentially go against this rule, due to the shift we outlined earlier.

But this doesn’t at all explain why they started with religion. The next bit is said to help explain “why religion first?”, but it doesn’t seem to, either:

Through this feedback, and our discussions with outside experts, we also confirmed that there are additional factors we need to better understand and be able to address before we expand this rule to address language directed at other protected groups, including:

How do we protect conversations people have within marginalized groups, including those using reclaimed terminology?

How do we ensure that our range of enforcement actions take context fully into account, reflect the severity of violations, and are necessary and proportionate?

How can – or should – we factor in considerations as to whether a given protected group has been historically marginalized and/or is currently being targeted into our evaluation of severity of harm?

Well, you could say that delineating “hate tweets” and enforcing rules consistently is easier with religion than, say, gender or race, but I don’t think so. In both cases you have to separate hatred of people from dislike of doctrine or policy (e.g., “Deport all Muslims” vs. “Islamic doctrine is often oppressive”; or “Send blacks back to Africa” vs. “Affirmative action is wrong”). Note that both examples, which involve religion and race, show the potential blurring of lines, for sentiments against affirmative action or against Islamic doctrine can be and have been deemed “hate speech”.

This blurring is why I object to Twitter doing this kind of policing, as drawing lines will be arbitrary. But if they feel they have to draw lines, then the tweets above, which are bigoted against people, are clearly reprehensible. And since Twitter is a private company, they can do what they like. But I want them to hew to the First Amendment as closely as possible, and the tweets above don’t violate that.

Diana felt more strongly than I, and told me this (quoted with permission):

It sounds like a bad idea all around to me. How many times have religious groups had atheists banned from social media just for being atheists? So now if someone criticizes a religion, is that going to be counted as violating their rules? And why religious groups that get special protection? Twitter calls them marginalized – really? Christians are marginalized? It just seems like really faulty thinking all around.

I’ve seen reasonable speech characterized as hate speech too often to immediately get on board with Twitter’s rules. Yes, the examples above are beyond the pale—if you must police speech on a social-media platform. But there will be many other examples where criticism of religion might be either chilled or censored. Many completely innocent pictures in my tweets—like animal pictures that come from my websites—are labeled by Twitter as “sensitive material” that you have to click to see. I think that’s because I tweet Jesus and Mo cartoons, which got me censored in this way.

Although Twitter still allows us to post Jesus and Mo strips, it also acts as an informant when somebody else objects to “sensitive” material, as when Maajid Nawaz tweeted Jesus and Mo as well:

Twitter’s formally informed me Pakistani authorities notified them that the above violates Pakistan’s blasphemy law. Punishment for this in Pakistan is death. I’m Pakistani origin & visit family there. Twitter has a moral duty to tell me who precisely is trying to have me killed pic.twitter.com/OiyZh2hQy4

Does Twitter need to inform Nawaz that his content violates Pakistani law? Shouldn’t Twitter just tell Pakistan to “bugger off”?

Well, at least Twitter doesn’t ban the cartoons in the way that WordPress does to help out the Pakistani government when it accuses me of “Jesus and Mo”-related blasphemy.

The more I ponder this, the more I’m coming around to Diana’s point of view, and thinking that social media should allow anything to be posted that doesn’t violate First Amendment principles of free speech as interpreted by the courts.

Do the rules above seem reasonable, or do you, like me, see a slippery slope?

76 Comments

Yes, it would be nice if media companies acted in the public interest, including protection of free speech.

Nice, but not necessary. And often, not as profitable. America decided some time ago that corporations shouldn’t have any accountability or duty to anyone but shareholders, media included. I don’t agree with that decision, but here we are.

“America decided some time ago that corporations shouldn’t have any accountability or duty to anyone but shareholders, media included.”

But no doubt many if not most U.S. corporations believe that the U.S. government/taxpayer has a duty to support corporate interests across the planet. Do these noble legal fictions believe that the flower of U.S. (“human capital”) have a duty to go in harm’s way on behalf of corporate financial interests?

I think there is a difference between some whacko yelling stupidities over his garden fence and the same whacko yelling the same thing on Twitter. Both are “free speech,” but if there is no enforcement of any kind on social media, then it would tend to devolve into a cesspool of hate where no decent person wants to spend any time.

That said, I don’t expect Twitter to act on the constant stream of Jew hate emanating from the Arab world. So please inform us when you get wind of a group that reports such tweets to Twitter, which subsequently does nothing.

I’d like to see this double standard exposed for what it is – if it happens.

If I were a Christian, I would think the rules made my faith look insecure. They imply that my all-powerful God is unable to deal with hateful speech, and that I am further fearful that others will turn that speech into hateful action.

These are not cases of speech but of emotional belief in causation. Can a song cause someone to commit suicide? Can gay marriage cause bestiality? Does a woman’s revealed face cause insatiable lust in men?

What is extra amusing is that there are reports of individuals being threatened on Twitter, the offensive tweets being reported, and Twitter doing nothing about them. Why? Well, these tweets are often directed at religious groups like the Satanic Temple, or at atheists, and for some reason those targets aren’t deemed worth protecting… so it will be interesting to see how Twitter proceeds now.

Sadly, I think many religious people don’t/can’t think about the problem that deeply. They just assume that everyone knows what a religion is. Just as they assume that there is such a thing as a “true Christian”. If I had a dollar for every liberal Christian who has complained about how Republican Christians aren’t doing it right… (and another dollar for the conservative ones who say the same thing looking the other direction….)

I’m against religious tax exemptions, too, but as long as the First Amendment has Free Exercise and Establishment clauses, the state will of necessity be “in the business of deciding what is and is not a religion.”

As it stands, the courts will generally consider only whether purportedly religious views are sincerely held, and abjure any judgment on the veracity of any religion.

Although they won’t say so expressly, I think that’s because the courts know that the truth claims of religions are (to bowdlerize it a bit) like armpits — everyone has ’em, and all of ’em stink.

In the specific US context, yes, that’s correct. In the more “how could the world work better” sense, I think “freedom of conscience” and a few other things would cover both religion and other sources of ethics, etc.

Yes, and that always bites you in the ass. Not to mention that one can simply say insulting their religion is insulting them. Think of “Islamophobia” being applied to criticism of Islam as if it were criticism of Muslim people. How many times have insults like “bigot” and “nazi” been hurled in screaming matches at anyone who criticizes Islam? Think of Sam Harris and Batman on Real Time.

What if hate speech were defined as that which denies the humanity of the targeted person or group? Calling people/groups “rats”, “viruses”, “filthy animals”, “maggots”, etc., denies their existence as human beings and implicitly makes them targets for extermination (killing) or removal (lock them up, deport them) without explicitly calling for the kind of violence that could bring the poster to the attention of law enforcement. [I would extend this definition to animals as living beings: that is the basis for laws against animal cruelty.]

I think they were saying that’s subjective too when they referenced terms that might sound offensive but are meant to be endearing, like Lady Gaga’s ‘monsters’. There are also examples that could be demeaning or could be lighthearted depending on the context – calling a group of people ‘animals’, for example, could be dehumanizing or it could be lighthearted (“Look at these pictures from last week’s pool party! These guys are animals, lol!”).

I see on Microsoft’s Outlook/Hotmail website, featuring the latest news apparently worth knowing, that Miley Cyrus’s pet pig has died. I reasonably assume that Microsoft has done its market research and has a good handle on what the great mass of U.S. pop culture considers important.

This caught my eye: “they should be allowed to engage with political groups, hate groups, and other non-marginalized groups with this type of language [i.e. hate speech and dehumanizing language]. Many people wanted to ‘call out hate groups in any way, any time, without fear.'” Remember that what they’re talking about is “hateful conduct” and “dehumanizing language”, so they’re talking about protecting hate speech when it’s directed at “hate groups” and “non-marginalized groups”.

It reminds me of how Facebook recently changed their policies to allow death threats and calls for violence against “dangerous individuals” which seems in practice to be defined as right-wing people like Milo Yiannopoulos, not people who are actually dangerous. From Facebook’s policy (newly added text in bold):

“Do not post threats that could lead to death (and other forms of high-severity violence) of any target(s), where threat is defined as any of the following…
* Calls for high-severity violence (unless the target is an organisation or individual covered in the Dangerous Individuals and Organisations Policy)…
* Statements advocating for high-severity violence (unless the target is an organisation or individual covered in the Dangerous Individuals and Organisations Policy)
* Aspirational or conditional statements to commit high-severity violence (unless the target is an organisation or individual covered in the Dangerous Individuals and Organisations Policy)”

I heard they are backtracking after criticism but it’s clear these social media companies want to establish a double standard where hate speech, death threats, etc. are okay against “non-marginalized groups” (e.g. whites, males) and “hate groups” (which only ever seem to include right-wing groups, as hate speech from the left doesn’t mark a group or person as hateful).

What struck me (as a non-Twitterer) was the first sentence of the notice: “We create our rules to keep people safe on Twitter…” The therapeutic phrasing is identical to that used by umpteen college administrations and student groups to justify assorted de-platforming exercises and other forms of censorship. So Twitter is following the current academic superstition that words are the same thing as violence. I bet that the Twittercrat writers of this notice are quite recent college grads, who learned the phrase, and very little else, in their “higher education”.

Just in time, Dear Leader is today hosting a “Social Media Summit” at the White House, with many of the leading lights of the right-wing bl*gosphere invited, like investigative hoaxster James O’Keefe and QAnon conspiracist nonpareil Bill Mitchell. In his tweet about it this morning, the Donald promises he will not let the social media platforms that shun such people “get away with it much longer.”

(In that same tweet, keeping up his tedious line of attack, Trump mocks Democratic presidential hopeful Elizabeth Warren as “Pocahontas (1000/24th)” — demonstrating that “fractions” is on the long list of topics Donald Trump knows fuck-all about.)

In Rwanda, before the genocide, Hutu radio stations regularly referred to Tutsis as “cockroaches,” implying they were fit for extermination. As long as I can still criticize religion, I don’t need to use dehumanizing language to do it. We don’t want atheists demonized as devils or child-eating monsters, or subjected to other such hateful, dehumanizing lies. I think Islam is abhorrent, but I don’t feel the need to call Muslims insects or animals. I think I am OK with this step by Twitter.

Each of the example tweets contains an opinion — “are disgusting”, “are making this country sick”, “should be punished”, “we don’t want any more of them” — followed by an epithet — “rats”, “viruses”, “filthy animals”, “maggots”.

I’ll bet dollars to donuts that the opinions will be censored, with or without accompanying epithets.

But don’t worry: religious groups are still allowed to call atheists cockroaches and, from what I’ve seen in the past, get away with threatening us. Twitter is okay with that. Only religious people will be protected.

What Trump is doing is holding a cry-in for the right-wing wackos who are getting thrown off of Twitter and other platforms. He did not even invite any of the platforms to his summit. It is a joke, as are most of Trump’s doings.

But think about this, folks. What is different about internet media and our regular media? The regular media are highly regulated and always have been. The internet platforms, which are much bigger, have no regulation. That is the big joke, really. These monsters must be regulated, or the damage they can do is huge. Trump, for instance, wants to say all the things and make up all the lies he wants online. Then he also wants to call the regular media FAKE news and throw them out of the White House if he does not like them.

If only people paid attention to the things Mike Pence says. At CPAC and at other times he has talked about people making fun of “our beliefs.” At one point he said that wasn’t going to continue. I turned to my wife and said, “Hmmm… blasphemy laws in our future?” White Christian nationalism!

A friend of mine, who is a lawyer, brought up a good point. Some time ago Congress immunized companies that allow people to blog, etc., from being sued for the content their users post. If the web companies begin to censor—that is, editorialize—then they cease to be platforms and become publishers, and should lose their immunity.

That is a start. I recommend that anyone interested in this subject read Roger McNamee’s book ZUCKED. I found it a great learning experience from someone who knows the platforms, especially Facebook, very well. He also provides many suggestions for fixing the problems created by the platforms.

Probably repeating a point made above, but speech threatening the lives of *people* who subscribe to a worldview is quite different from a negative, even harsh critique of the worldview itself. The former is arguably subject to censorship (even prosecution, seems to me, given the psychological harms involved), while the latter is or should be protected speech.

Making a general threat to a group that doesn’t incite imminent violence is not illegal under the First Amendment, at least under the courts’ interpretation. I agree that that is much more odious than criticizing worldview, but both are equally protected by the Constitution.

That such general threats are not illegal should be revisited, given the real psychological damage to members of these groups (fear, insecurity, isolation) and the incitement to actual later (not imminent) violence, which is what we’ve seen under Trump and other populist leaders. Is the Constitution that impeccable?

“Psychological damage” is not reason enough to ban threatening language, because if it was, it would be sufficient to ban other types of “hate speech”. But be my guest to try to reverse the courts’ interpretation, which will require a lawsuit with you as the injured party.

Not sure why psychological damage isn’t sufficient for banning mortal threats. In any case, how about incitement to actual but non-imminent physical and material damage (as documented recently) as grounds for banning threats against members of groups? Or is the connection between threats to groups and later violence against those groups not clear enough?

Why isn’t psychological damage sufficient for banning any kind of hate speech, threat or not? After all, those who want hate speech eliminated (as adumbrated by people like Christina Hoff Sommers or Heather Mac Donald) always say THEY have been caused psychological damage. Sorry, but psychological damage due to words alone, without those words inciting physical or material damage, hasn’t been sufficient to ban speech.

I, for one, would never try to ban someone who said, “Gas the Jews.” Nobody is going to do that, but is the psychological damage to some Jews enough to render that non-free speech?

“I, for one, would never try to ban someone who said, ‘Gas the Jews.'”

Putting aside the issue of psychological damage, I wonder whether you think well-documented non-imminent physical and material damage to group members might be sufficient grounds to ban mortal threats to them.

In the right context, “gas the Jews” is an incitement to imminent violence. Imagine a Nazi or far right rally in which a speaker calls for Jews to be gassed. Imagine then that the audience start chanting the phrase in response. To me, that could be viewed as an incitement to imminent violence. If I was Jewish, I would certainly be concerned for my safety.

Context is important, which is one reason why rules like the ones Twitter is trying to introduce are never going to work. You can’t make up glib rules that work in every context. For example, if they did say “using the phrase ‘gas the Jews'” is a banning offence, should tweeting this post lead to me being banned even though I only wrote it out because I am talking about it?

Yes but how many times have we already seen accounts of atheists suspended for criticizing religion? The answer is many. I think even Jerry got suspended from FB once and I know Seth Andrews got suspended for putting an image of Jesus bungee jumping off the cross. So you and I may understand the difference between criticizing ideas and being bigoted toward people, but most people, sadly, do not. And thus with these new edicts from Twitter I fear there will be no criticism of religion tolerated.

“…they should be allowed to engage with political groups, hate groups, and other non-marginalized groups with this type of language.”
“How do we protect conversations people have within marginalized groups, including those using reclaimed terminology?”
“… considerations as to whether a given protected group has been historically marginalized and/or is currently being targeted into our evaluation of severity of harm?”

It really sounds like Twitter is trying to find a way to allow this kind of language by certain groups while banning it for others. I’m fine with banning the kinds of tweets highlighted above, as long as they attack people and not ideas, but the rules have to apply to everyone. Arbitrary enforcement of rules is incredibly damaging, eroding trust in the platform and encouraging those who avoid censure to increase their vitriol.

The distinction between people and ideas is tempting, but might not be workable; e.g., “Group X commits four times as many (insert bad actions) as group Y; therefore, (insert negative consequence for group X).”

Assuming the statistic is accurate, is this a legitimate expression of an idea or an illegitimate attack on group X? How confident are you that a busy social media censor will agree with your judgement? What if the consequence were left unstated and left to the reader’s judgement?

I had a FB page called Jesus Jokes (the new URL for which is listed above…we had 11k people…now we have 2. not 2k…just 2.) which FB took down for an image. Although I could never be sure which image bothered them, it was apparently either this one:

or this one:

The first may have made some gay people unhappy, I guess, but the second one is just historical.

They don’t seem to mind so much when an evangelical is telling atheists that they are disgusting, lack morals, need to get out of the country ‘because it was founded on xtian principles’, etc. You can bet I’ll be tagging all that crap as hate speech from now on. And yes, I’ve had my life threatened more than once for my FB page…which somehow lacks that oh, so forgiving quality of xtianity.

Yes, this is just the sort of thing I had in mind with this whole Twitter debacle. It seems fine and dandy for Christians to say vile things about atheists, and both Twitter and FB have always been rather lax about enforcing rules when it comes to atheists being attacked.

I do feel like Twitter mob culture has gotten out of control. And from Twitter’s perspective, I don’t know if they’re beginning to worry about legal liability (if someone commits suicide after a campaign of online Twitter bullying, is it a stretch to think there could be a lawsuit at some point?)

How to best moderate such forums, I don’t know. In the real world, the nature of the spoken (vs. written) word means that different norms apply. I think people are, for the most part, much more moderated in public spaces: if you get on any kind of a soapbox and draw a crowd in a mall, for example, no matter the topic, they can and probably will tell you you’re being disruptive and have security escort you out. There is an expectation that verbal conversations are between small groups of people. Written sentiments seem inherently different in that sense.

It sounds as if they’re test-driving this policy using religion, with the assumption that the parameters for religious groups will be easier to define. I kind of see their logic, but I don’t think they’re going to be able to come up with reliable, consistent rules by defining hateful speech in concrete terms. Too much depends on context when it comes to speech. Even in our legal system, things are run by checks and balances, the ability to appeal, attempts at equal representation amongst those doing the judging, etc. — not by having concrete, black-and-white definitions that apply the same way in every single case. Human judgement can be highly biased, but in some arenas you simply have to invoke it, as there is no substitute.

But I am far more concerned with disinformation and conspiracy theorists peddling politically motivated lies to gullible and/or thirsty audiences. This is doing far more damage than the occasional tweet targeting marginalized people. Trump is currently hosting the elites of these scum at the White House.

The whole idea of marginalized groups can be pretty nebulous and only invoked when desired. For example, it seems strange that Muslims in Pakistan can be considered marginalized, since they make up 93% of the population and have established Islam as the official religion of the country.

… since Twitter is a private company, they can do what they like. But I want them to hew to the First Amendment as closely as possible …

I’ve got ambivalent feelings on this, ones I haven’t completely worked through yet. On the one hand, as a qualified free-speech absolutist (if I may employ such an oxymoronic descriptor), I’m in favor of free-speech principles being observed almost everywhere — especially as regards public platforms where the government encourages (or at least permits) what are essentially monopolistic conditions.

On the other hand, social media are privately held corporations, which are traditionally free to set speech policies of their own as they alone see fit. (And I find it ironic that the rightwingers who now do the most bitching about being banned from these platforms, and who are now looking to pass laws that would in essence nationalize private property (at least in part), are the very same ones who also do the most bitching about “creeping socialism.”) The general rule in a free-enterprise system is, if you don’t like the way a company does business, go start a competitor of your own.

On the other other hand (what, you have only two?), one of the main sources of discord in our society is the way in which factions — political and otherwise, be they right, left, or center — have retreated into their own media bubbles, such that we no longer have much ground for common discourse. The balkanization of social media would exacerbate the problem.

Finally, on the fourth hand, were social media to be prevented from banning any type of speech — and thus essentially become free-speech free-fire zones — in something akin to Gresham’s Law regarding monetary policy, bad speech would tend to drive out good, as we’ve seen happen on fora such as 4chan and reddit, meaning that many worthwhile voices with no taste for blood battle would be silenced.

I agree with pretty much all of your conclusions here. It’s a very convoluted topic. Complicated, I think, by the fact that cyberspace takes “physical space” out of the equation when physical space informs many of our (also convoluted, especially when you factor in both criminal and civil) intuitions about free speech vs. harassment. Making inappropriate comments to a woman in her place of work is sexual harassment, for example; while making inappropriate comments about a woman on one’s own time is free speech. What about Twitter, though? Is the woman ‘there’, being subject to harassment, or ‘not there’, as she’s separated in physical space? What if she has to Tweet as part of her job? The same could be said of disorderly conduct – running through the streets drunkenly screaming insults at someone minding their own business at 3 am in physical space may well get you into trouble for that. Online? Removing that bit of context makes things very different, and besides, there’s no one in your local ‘neighborhood’ to tell you to stop anyhow.

Why religion? Just guessing, but I bet that most of the aggressive complaints came from Muslims. A large minority are hypersensitive to criticism of Islam; hence religion was the first area to be censored by Twitter. Since Islam is often associated with honor cultures, where reputation is very important, this isn’t surprising.

You’re right, Jerry. Slippery slope. Start with religion rather than race/gender “hate” speech because, as any authoritarian regime knows, you start by taking away the easy stuff (those who would balk at wokeness policing on race and gender are less invested in religion) and then chip away from there.

In short, they backdoor “penetrate” their members with a blasphemy punishment! Do those mthrfckrs never consider the single FACT that their miserable stupefied existence (and the vast amount of money they grab) amounts to FREEDOM OF SPEECH within a SECULAR SOCIETY?

Let’s assume Twitter is being pressured by the Christian supermajority to keep its religion from being criticized, satirized, and otherwise denigrated in any way. Let’s be evenhanded, Twitter: consider criticism of atheists as hate speech and ban it too.

It doesn’t make sense to me for Twitter and Facebook to block posts. On both platforms, users can control what they see and look at, and block users they don’t like. There are lots of posts that you never see!

Would someone be able to post these?

We don’t want more [Religious Group] in this country. Enough is enough with those $%&%#%s!

[Religious Group] should be punished. We are not doing enough to rid us of those you-know-whats.

Sentiment is sentiment, and many people will be able to find a way to convey what they mean, while avoiding restrictions on vocabulary.

A very high percentage of hate crimes have been perpetrated against Jews, and Muslims have also been targeted. There isn’t evidence that other groups are being targeted disproportionately the way these minorities are. Trump has said horrible things about Hispanics, but (so far) there have been no mass homicides of Spanish-speakers. If hatred toward religious minorities hadn’t been a precursor to actual murders, I’d be more concerned. The only downside I see is that after a crime has been committed, social media is a method of determining motive and sources of radicalization. Otherwise, I won’t boo-hoo for the dubious rights of the alt-right.