Although the overall removal rate of 70.6% is only slightly lower than in the last monitoring exercise (-1.1 percentage points), this result is largely due to Facebook's consistently high removal rate of 84.5% (+0.9 percentage points) and Instagram's improvement to 77.2% (+6.6 percentage points). Twitter's performance remained low at 44.9% (+1.4 percentage points), and YouTube removed only 67.8% of illegal hate content, a major drop of 17.6 percentage points compared to its performance in the previous monitoring.

This monitoring exercise was coordinated and conducted by the International Network Against Cyber Hate (INACH) and its partners in the sCAN project (Platforms, Experts, Tools: Specialised Cyber Activists Network) to check the compliance of social media platforms with the European Commission's Code of Conduct on Countering Illegal Hate Speech. It was the first exercise in which the platforms were not aware of the monitoring.

Between 6 May and 21 June, 12 organisations specialised in dealing with online hate reported 432 cases to the platforms through their public reporting channels. Of these, 90 cases that had been rejected by the platforms were re-reported through the reporting channels available to organisations recognised by the IT companies as "trusted flaggers". With the EU Code of Conduct, the companies have agreed to assess reported hate speech that is illegal under national law or violates their Terms of Service, and to remove it within 24 hours. Yet only Facebook reached a tolerable level in removing reported hate speech within that timeframe (64%); Instagram, Twitter and YouTube remained below 50%.

In addition, the companies' performance in providing feedback was poor: for 42% of reports, the companies provided no feedback at all, and reactions within the required 24 hours were given for less than half of the reports (46%). Again, only Facebook provided timely feedback, responding to 70% of reports, while YouTube remained silent on 97% of reports.

Providing no feedback, late feedback or meaningless feedback is a major issue that the companies need to address as soon as possible. When people report online content that is hateful, discriminatory or incites violence, it is not enough for platforms to send an automated reply stating that they have received the report, or no reply at all. "Users need to know that their efforts in making the internet a friendlier place are taken seriously so they feel encouraged and valued," emphasises Ronald Eissens, General Director of INACH.

Hence, INACH, together with the partners of the sCAN project and the other member organisations that participated in the monitoring, urges the social media companies, especially YouTube, Instagram and Twitter, to further improve their removal practices and to respond meaningfully to all users reporting hateful content.