The social network has quietly developed software to suppress posts from appearing in people’s news feeds in specific geographic areas, according to three current and former Facebook employees, who asked for anonymity because the tool is confidential. The feature was created to help Facebook get into China, a market where the social network has been blocked, these people said. Mr. Zuckerberg has supported and defended the effort, the people added.

Facebook has restricted content in other countries before, such as Pakistan, Russia and Turkey, in keeping with the typical practice of American internet companies that generally comply with government requests to block certain content after it is posted. Facebook blocked roughly 55,000 pieces of content in about 20 countries between July 2015 and December 2015, for example. But the new feature takes that a step further by preventing content from appearing in feeds in China in the first place.

Facebook does not intend to suppress the posts itself. Instead, it would offer the software to enable a third party — in this case, most likely a partner Chinese company — to monitor popular stories and topics that bubble up as users share them across the social network, the people said. Facebook’s partner would then have full control to decide whether those posts should show up in users’ feeds.
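The mechanism described above amounts to a geofenced feed filter in which a third party, not Facebook, makes the final visibility call. A minimal sketch of that architecture, purely hypothetical — every name, the region code, and the reviewer callback are my own illustration, not Facebook's actual code:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Post:
    post_id: str
    author: str
    text: str

class GeofencedFeedFilter:
    """Hypothetical sketch: posts destined for users in a flagged region are
    first passed to a third-party reviewer, which has full control over
    visibility. Suppressed posts simply never reach the feed -- the author
    receives no notice."""

    def __init__(self, region: str, reviewer: Callable[[Post], bool]):
        self.region = region      # e.g. "CN"
        self.reviewer = reviewer  # the partner company's decision function

    def build_feed(self, user_region: str, candidates: List[Post]) -> List[Post]:
        if user_region != self.region:
            return list(candidates)  # the filter applies only inside the region
        return [p for p in candidates if self.reviewer(p)]

# Usage: a stand-in reviewer that drops posts containing flagged terms.
blocked_terms = {"protest"}
reviewer = lambda p: not any(t in p.text for t in blocked_terms)
f = GeofencedFeedFilter("CN", reviewer)
feed = f.build_feed("CN", [Post("1", "a", "weather today"),
                           Post("2", "b", "protest downtown")])
# Only the first post survives; the author of the second is never told why.
```

The key design point, and the one the article highlights, is that the suppression happens before delivery rather than after posting, so there is nothing visible for the author to appeal.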

…

The current and former Facebook employees caution that the software is one of many ideas the company has discussed with respect to entering China and, like many experiments inside Facebook, it may never see the light of day. The feature, whose code is visible to engineers inside the company, has so far gone unused, and there is no indication that Facebook has offered it to the authorities in China.

But the project illustrates the extent to which Facebook may be willing to compromise one of its core mission statements, “to make the world more open and connected,” to gain access to a market of 1.4 billion Chinese people.

…

Several employees who were working on the project have left Facebook after expressing misgivings about it, according to the current and former employees.

…

It’s unclear when the suppression tool originated, but the project picked up momentum in the last year, as engineers were plucked from other parts of Facebook to work on the effort, the current and former employees said. The project was led by Vaughan Smith, a vice president for mobile, corporate and business development at Facebook, they said. Like Mr. Zuckerberg, Mr. Smith speaks a smattering of Mandarin.

…

Over the summer, several Facebook employees who were working on the suppression tool left the company, the current and former employees said. So many employees asked about the project and its ambitions on an internal forum that, in July, it became a topic at one of Facebook’s weekly Friday afternoon question-and-answer sessions.

…

“It’s better for Facebook to be a part of enabling conversation, even if it’s not yet the full conversation,” Mr. Zuckerberg said, according to employees.

Yes Fuckerberg, you fucking hypocrite, “all animals are equal, but some animals are more equal than others.” Sound familiar? I sure as fuck hope so.

In my recent post “On the Facebook 30-Day Ban and Censorship,” I had already wondered whether on Facebook “blocking and banning people for no good reason might be a feature instead of a bug.” It looks like it might actually be a “feature” after all. However, this latest “feature” of letting governments “prevent content from appearing in feeds” is far more dangerous than blocking and banning people from the network, because with feed suppression people get no indication at all that they’re being censored. If you don’t know that your content is being blocked from appearing in feeds, you won’t try to do anything about it either. Meanwhile you’re being watched by the government.

And don’t ever think that this censorship “feature” will only be used outside of the USA. In fact, it wouldn’t surprise me one bit if this has already been extensively used, for example during the recent US elections. And before you leave a comment about this, yes, I’ve read in the article that this “feature” has “so far gone unused,” but I wouldn’t take their word for it.

Once such a “feature” has been built and is available, it’s going to be used everywhere, and even most of Facebook’s own engineers won’t know where and when it’s being used. The general public will also never know who has access to it and when they are affected by it.

Publicly, Facebook will also deny that it’s being used, just like the US government always denied that it was spying on its own citizens, until Edward Snowden proved otherwise. Not that I needed the Snowden revelations (June 2013) to know that they were spying on people inside the USA and worldwide; I had warned about it here on my blog well in advance (all the way back in September 2006).

I’m very grateful to those engineers at Facebook who quit their jobs because they didn’t want to work on this censorship “feature” and be a part of making the world a worse place to live in. I would have done the same thing in their place (and I have in the past). Thank you.

Also of concern is that WeChat’s censorship notices have become much more opaque. Users no longer receive any notification that messages are filtered in chat, and the notices shown for URL and public-post blocking are ambiguous. Furthermore, the distinctions between the contexts in which censorship takes place have grown more sophisticated. WeChat turns on censorship only for users registered from mainland China and imposes greater restrictions on the features within the app that allow for the greatest reach. The recognition of the difference between public spaces (the public accounts platform) and semi-public spaces (group chat) suggests a desire to integrate censorship ever more tightly and seamlessly into the app depending on the perceived “threat.”
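The behavior the report describes can be reduced to two switches: filtering keyed to the account’s registration region, and a stricter blocklist in high-reach (public) contexts than in semi-public ones (group chat). A hypothetical sketch of that logic — the blocklists, context names, and function names are all illustrative, not WeChat’s actual implementation:

```python
# Stricter list for public posts and URLs; narrower list for group chat.
PUBLIC_BLOCKLIST = {"sensitive-topic", "banned-url.example"}
GROUP_CHAT_BLOCKLIST = {"sensitive-topic"}

def is_filtered(message: str, context: str, registration_region: str) -> bool:
    """Region-keyed, context-sensitive filter, per the report's description."""
    if registration_region != "CN":
        return False  # censorship is off for accounts registered elsewhere
    blocklist = PUBLIC_BLOCKLIST if context == "public" else GROUP_CHAT_BLOCKLIST
    return any(term in message for term in blocklist)

def deliver(message: str, context: str, registration_region: str):
    """Silently drop filtered messages: the sender sees no error or notice."""
    if is_filtered(message, context, registration_region):
        return None  # dropped without any notification
    return message
```

Note that the same message can pass in group chat yet be blocked on the public platform, which matches the report’s observation that restrictions scale with a feature’s reach.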

While these controls, combined with more opaque censorship, do on the one hand provide a more frictionless user experience for the vast majority of users, they also shield those users from ever encountering the censorship itself. By calibrating censorship in such a way, WeChat may hope to either squeeze out the minority who hope to use WeChat as a source for independent news and commentary (as either a broadcaster or consumer) or limit their audience as much as possible.

As content filtering and other controls become more invisible and seamless to users, documenting how these features work through the type of research demonstrated in this report is increasingly important to ensure users can make more informed decisions about the apps they use and the social media experiences upon which they rely.