Updates

Content policy

2019

Facebook announced new measures dedicated to identifying and removing non-consensual intimate images (also known as revenge porn) shared via the social media platform, as well as to supporting the victims of such abuse. The company will use new detection technology, powered by machine learning and artificial intelligence (AI), to 'proactively detect near-nude images or videos that are shared without permission on Facebook and Instagram'. Once identified by the AI tool, the content is reviewed by a member of Facebook's Community Operations team, who decides whether to remove the image or video. In most cases, removal will also be accompanied by the disabling of the account from which the content was shared without permission. Facebook has also launched Not Without My Consent, a victim-support hub where victims of revenge porn can find organisations and resources to support them.

The UN Special Rapporteur on Freedom of Opinion and Expression, David Kaye, speaking about the controversial proposed Article 13 of the European Union Copyright Directive, said: 'Europe has a responsibility to modernise its copyright law to address the challenges of the digital age. ... But this should not be done at the expense of the freedom of expression that Europeans enjoy today.' He particularly criticised pressure to implement pre-publication filtering to monitor and restrict user-generated content in order to prevent copyright infringement.

Companies including Facebook, Google, and Twitter have written a letter to the British government calling for a clear differentiation between illegal and harmful content. The letter, co-ordinated by the trade body Internet Association, has been sent to the culture, health, and home secretaries. It comes after the government demanded, in 2017, that social networks and Internet service providers (ISPs) remove abusive, humiliating, or intimidating content. In addition, a white paper on online harms is expected to be published within the fortnight. According to the BBC, the companies outlined six principles that regulation must follow: (a) 'be targeted at specific harms, using a risk-based approach'; (b) 'provide flexibility to adapt to changing technologies, different services and evolving societal expectations'; (c) 'maintain the internet liability protections that enable the internet to deliver significant benefits for consumers, society and the economy'; (d) 'be technically possible to implement in practice'; (e) 'provide clarity and certainty for consumers, citizens and internet companies'; (f) 'recognize the distinction between public and private communication'.

During the 40th session of the Human Rights Council, the United Nations (UN) Secretary-General, António Guterres, mandated the Special Adviser on the Prevention of Genocide, Adama Dieng, to 'bring together a UN team to scale up our response to hate speech, (to) define a system-wide strategy and present a global plan of action on a fast track basis'. This decision comes in response to the fact that 'hate is moving into the mainstream, in liberal democracies and authoritarian systems alike', spreading antisemitic and Islamophobic content.

In line with the previously announced plan of action against hateful content online, France's Digital Minister, Mounir Mahjoubi, announced that a law strengthening the responsibility of online platforms that fail to take down racist and hateful comments will be presented to the French Parliament before summer 2019. The proposed law was also described as necessary by France's President, Emmanuel Macron, in light of the recent antisemitic episodes that took place in Paris. Speaking on the radio, he declared that such a law would impose obligations both on platforms' means and procedures and on their results. This means that, on the one hand, platforms will be required to take stronger preventive measures against hateful and racist content and to ensure that their internal moderation processes are transparent and human-controlled. On the other hand, online platforms will be punishable by fines if hateful and racist comments are not taken down in a timely manner.

Twitter's Head of Public Policy, Colin Crowell, will appear before a parliamentary panel on information and technology in New Delhi on 25 February. Concerned about 'safeguarding citizens' rights' on social media ahead of national elections in April 2019, the parliamentary panel had already asked Twitter's CEO, Jack Dorsey, to appear before it at the beginning of February, but representatives of Twitter India declined the request, citing the 'short notice of hearing'.

GIP Digital Watch

Submit Content

The GIP Digital Watch observatory reflects on a wide variety of themes and actors involved in global digital policy and Internet governance. We welcome information and documents from your organisation. Submitted content will be reviewed and published by our team of knowledge curators.
You can submit your content to digitalwatch@diplomacy.edu