We Need to Talk about Online Censorship

Algorithms are writing the internet, both literally (as code) and figuratively (by shaping how we use it), and increasingly they are censoring it too.

Algorithms on social media platforms both track what we say and control what we see. The sheer amount of censored information on these platforms is incredibly harmful to democracy, and to activist movements of all kinds. It’s important to make a distinction before continuing: algorithms are not some dark power in themselves. They’re essential to how computers operate. It’s the powers behind the algorithms, who design them for questionable tasks and mine their data, that we should be concerned about.

The #MeToo movement, which blew up on Western social media, was blocked in China on a prominent social media platform called Weibo after a student caused controversy by sharing a letter detailing her sexual harassment by a college professor. The block didn’t stop these women from banding together and sharing their stories. They opted to fly under the radar with their own version of the hashtag: #RiceBunny, accompanied by emojis of rice and bunnies. The Chinese words for ‘rice’ and ‘bunny’ are ‘mi’ and ‘tu’ respectively, so the tag sends the same message online while slipping past algorithmic detection. In this environment, the movement’s survival depends entirely on its ability to navigate online censorship.
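The evasion works because the simplest censorship filters match exact keywords. The sketch below is a minimal, hypothetical illustration of that idea (the blocklist and function names are invented for this example, not taken from any real platform): a naive filter catches the banned term but misses the homophone substitute.

```python
# Hypothetical sketch of naive keyword-based censorship.
# BLOCKED_TERMS and is_blocked() are invented for illustration only.

BLOCKED_TERMS = {"#metoo"}  # assumed blocklist

def is_blocked(post: str) -> bool:
    """Flag a post if it contains any blocked term (case-insensitive)."""
    text = post.lower()
    return any(term in text for term in BLOCKED_TERMS)

print(is_blocked("My #MeToo story"))      # True: exact keyword matched
print(is_blocked("My #RiceBunny story"))  # False: the homophone evades the filter
```

A real platform’s filters are far more sophisticated than this, but the cat-and-mouse dynamic is the same: any fixed list of terms can be sidestepped by a community that agrees on a substitute.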

While we give kudos to these feminists in China for sneaking under the algorithm, the issue remains. In 2011, a Facebook algorithm blocked a group of environmental activists because their content was mistakenly flagged as spam. Algorithms are highly complex pieces of code with a huge capacity to both monitor and control us, usually under the guise of freedom or convenience. They are far from foolproof, yet they still exercise an alarming amount of control over the information and content we are exposed to, all while monitoring our behaviour.

Take YouTube, for example. The ‘suggested’ videos algorithm is the subject of much criticism. We see what YouTube’s algorithm decides we would like to see, which often prevents us from being exposed to a well-rounded variety of opinions on an issue. It has also glitched in the past and encouraged users to view extremely controversial content they may have had zero interest in, such as Logan Paul’s return to YouTube.

Facebook News operates in a similar way: the news stories we see are tailored to what Facebook’s algorithm thinks we would be interested in, or to what Facebook’s investors want us to see. Facebook, of course, claims its algorithms’ ability to ‘personalise’ information is for the benefit of its users. We’d be incredibly naïve to believe this. Facebook’s algorithm is for the benefit of Facebook and its corporate contacts.

We know that our information online is shared and sold to third parties. Facebook’s business model is built around encouraging us to share as much information on its platform as possible, so it can monetise this data.

This means we may be pushed towards products cleverly designed by marketers to appeal to Facebook’s record of our psychographics, or perhaps even towards the political equivalent, which is where things get dangerous. Many other social media platforms use this structure. We are isolated from opposing views. Sometimes the information appearing on our feeds is even handcrafted to feed our biases, which really isn’t healthy. It’s also been shown that these algorithms can push biases of their own around race, gender and politics onto unknowing users. For example, Google’s algorithms have been caught suggesting arrest records when “black-sounding names” were typed into its search engine.

Because social media is such a prominent part of our lives, we rarely stop to consider the implications of using it – but perhaps we should be thinking twice. The key thing to remember is that if an online service is free, you (and your data) are the product.