The proposed law with the long name aims to rid social networking sites of hate speech. The decision process would be outsourced to private companies, which, in order to avoid draconian fines, would delete any controversial content without having to justify their reasoning. As most hate speech lies in a legal grey zone, Facebook et al. would effectively be able to decide what is morally acceptable online. The proposed law ultimately risks censoring nonconformist opinions, or at least driving them further away from the societal mainstream (or at least from mainstream social networks).

Anyone who has spent time on Facebook, Reddit, YouTube or 4chan will have noticed that many of the comments contain misinformation, obviously false claims, racist statements and even death threats. There clearly is a problem. Since last year’s American elections, this has also become a political issue. In numerous instances, state-sponsored trolls, especially from Russia, have spammed online newspapers’ comment sections and social media to influence public opinion. This can give many readers the impression that not only is non-mainstream thought (i.e. mostly obvious conspiracy theories or malicious statements) quite common, but that it is actually acceptable to engage in blatant hate speech online. In the case of the US, it can even be a contributing factor in winning elections; just think of Trump’s favourite term “fake news”, or of Breitbart and Fox News.

In this context it is quite understandable that the state wants to counter the notion that the internet is some sort of lawless space, where unscrupulous behaviour is the norm and facts don’t matter. Germany’s Minister of Justice, Heiko Maas (SPD), has invested a lot of political capital in making the Netzwerkdurchsetzungsgesetz (NetzDG), a proposed anti-hate-speech law, one of his defining legacies. He plans to push the law through the Bundestag before the summer break and the elections. The problem, however, is that the proposed law stands in conflict with one of the most important freedoms of any democracy: the right to freedom of speech. Maas has repeatedly denied this and called any talk of his new law curbing freedom of speech “pretty grotesque”.

So what does the proposed law with the long name entail? According to the draft version, it will ensure the “peaceful coexistence of a free, open and democratic society” against “hate crime” and “malicious false information (‘fake news’)”. In reality, however, it forces social networks to deal with hate speech themselves; in other words, it outsources the problem to private companies. Social networks would have to delete obvious hate speech, such as threats, libel, revenge porn, and racist, antisemitic or homophobic content, within one day. For more complex cases they would be given a week. This would not apply to social networks with fewer than 2 million German users, to professional networks such as LinkedIn, or to messenger services such as WhatsApp. Failure to comply with the NetzDG could result in fines of up to €50 million, although fines would only be enforced if a social network fails to establish a transparent complaints function.

And how exactly would this curtail freedom of speech? The main issue is that the state would outsource law enforcement to private companies. But it should also be seen as part of a more complex debate over whether social networks are a kind of speakers’ corner 2.0 or more like traditional publishing houses, which are responsible for the content on their websites. It also raises the greater philosophical question of who gets to decide what is morally acceptable and what constitutes a credible threat.

Social media companies have grown enormously in the past decade. Governments have struggled with how to govern the internet (e.g. the net neutrality debate) and even with how to categorise social network companies. Most non-Western countries have decided to censor any critical content and create their own internet services. In Europe, the EU has been the driving force in defining the balance between privacy protection and surveillance, as well as data retention. Some member countries, such as Germany, have leaned towards the former; others, such as the UK, have shown a clear preference for the latter.

Data retention laws have, however, not managed to solve the problem of internet trolls and online hate speech, even though existing laws already cover this phenomenon and the days of netizens being truly anonymous are long over. Yet many internet users still perceive themselves as anonymous, hiding behind generic avatars and fake names while spreading conspiracy theories and hate speech. Maas’ NetzDG aims to show exactly these people that such behaviour is not acceptable. But the wider consequences for all netizens could be quite problematic.

While this is clearly a societal problem, the NetzDG would outsource the issue to social media sites. As private companies, they do what makes economic sense: in this case, use algorithms to filter out any controversial posts and comments, and poorly paid moderators who have to make important decisions in a split second. In order to avoid fines and streamline the process, they would rather err on the side of caution and delete more than necessary. Facebook currently has over 1.8 billion users and YouTube around 1 billion. If social networks were to deal sensibly with hate speech, they would have to employ hundreds of thousands of moderators, which would of course be far too costly.

A recent article in the Guardian reveals how Facebook’s moderators censor content and clearly shows how problematic this is. For example, saying “someone shoot Trump” must be deleted because it is a credible threat against a head of state, whereas “let’s beat up fat kids” is tolerable because it is not regarded as a credible threat. Strangely, “handmade” art containing nudity and sexual activity is acceptable, but digital art showing the same content is not. Moderators often have only a few seconds to decide whether a photo should be deleted.

Recent scandals have also shown that Facebook’s policy of banning all real-life nudity from its site is highly questionable. Last year, a Norwegian writer posted “seven photographs that changed the history of warfare”, one of them the iconic image of Vietnamese children, including a naked girl, running away from a napalm attack. The writer was subsequently suspended from Facebook. When one of the largest Norwegian newspapers shared the post in protest, the company asked it to either remove the image or pixelate it, prompting a large public outcry and forcing Facebook to alter its nudity policy (it is now also acceptable to show “child nudity in the context of the Holocaust”).

Adding to the difficulties of outsourcing the policing of the internet to private companies is the simple fact that many social media sites employ moderators in countries with low labour costs. The Philippines and India are the most popular, as they share many commonalities with Western countries, e.g. language, religion and moral values, and their citizens are thus seen as able to judge these difficult issues. Of course, both are also fairly conservative countries, which doesn’t make their decisions any less controversial. Not to mention the horrific images and written content moderators have to view every day.

YouTube has also been heavily criticised for letting extremist groups, such as Islamic State supporters, earn advertising revenue for posting on its site, prompting widespread outrage among advertisers and civil society. The company responded by excluding any video that might be even slightly controversial from its advertising revenue system. As a result, many of the most popular YouTubers could no longer earn any income on videos containing problematic words, highlighting how extremely difficult it is to filter out extremist or hate-inciting content. Even the company’s most profitable users were given neither a chance to defend themselves nor a transparent account of why their videos were excluded from earning advertising revenue or, in the worst case, deleted.

National laws, such as the NetzDG, also ignore the fact that the internet knows no national borders. Many social networks do not have offices in every country, and national laws do not apply to them there. This has to be dealt with at the EU level, especially considering that only big countries, like Germany, the UK or France, are able to “deal” with Facebook or YouTube; smaller EU member states will have difficulties holding social network companies accountable.

Clearly, social network companies cannot be held accountable to the same degree as publishing houses. They have billions of users whose content can hardly be moderated (in the sense of deleting illegal content) appropriately without severely restricting freedom of speech or employing armies of qualified moderators. Yet most politicians and lawmakers ignore that the underlying issue is the lack of media competence and journalistic values among many social media users. In the past, those able to reach a large public, such as journalists, were formally trained and aware of the responsibilities of publishing content. The huge number of users untrained in the sensitivities of journalism makes it inappropriate to treat social network companies like publishing houses. This doesn’t mean that the internet should be a lawless space, or that social network companies shouldn’t be held responsible for what happens on their sites.

Instead of completely outsourcing the problem to private companies, law enforcement agencies should be better equipped to deal with online crimes, including hate speech. Existing laws already make hate speech a punishable offence. Years of ignoring illegal hate speech (and not prosecuting it) have created the impression that it is somehow acceptable and not a crime. Not to mention that most hate speech falls into a grey zone: it is not clearly illegal (e.g. not a credible threat), or it is merely some ridiculous conspiracy theory. Instead of forcing social network companies to delete these posts, it might be better to encourage society to engage with spiteful netizens (even though that is exactly what encourages many trolls). The proposed law could also drive many so-called “critical citizens” away from mainstream social networking sites (as only sites with more than 2 million German users would be covered by the NetzDG) and even further away from mainstream society.

Furthermore, the biggest threat the NetzDG poses to freedom of speech is that it fails to establish a transparent process for why content is censored or deleted, often resulting in arbitrary decisions. Users are given neither a detailed account of why their content was deleted nor the right to defend themselves. This will ultimately lead to self-censorship on mainstream platforms, even for content that has nothing to do with hate speech. Users will refrain from using any controversial words (or discussing controversial topics) that might result in their posts being deleted or excluded from earning advertising revenue. Since more and more of our lives happen online, private companies (with their indiscriminate algorithms and time-pressured moderators) should not be in the position to define what is acceptable. Maas is right to demand that social network companies create easily accessible complaints sections, but he is wrong to outsource the problem completely to the companies.

Despite protests by some CDU ministers and numerous civil society groups, the proposed law has been approved by Merkel’s cabinet. Maas now hopes to get it approved by the Bundestag before the elections. However, it seems likely that the Federal Constitutional Court (Bundesverfassungsgericht) will prevent the law from being implemented in its current form, possibly citing concerns that it violates the constitutional right to freedom of speech.