Freedom of expression

Quick links

Updates

The European Court of Human Rights (ECtHR) ruled that publishers who share hyperlinks on the Internet cannot be held liable for the content those hyperlinks point to. The Court found that a ruling in Hungary violated 444.hu's rights by holding it liable for content reached through a hyperlink in a news article. Online content often uses hyperlinks to substantiate claims or offer further information to readers. This judgement is considered a 'breakthrough for Internet freedom'.

The European Court of Human Rights (ECtHR) decided the case of Magyar Jeti Zrt v. Hungary, concerning liability for hyperlinks under domestic defamation law and its compatibility with freedom of expression. In its decision, the ECtHR stated that hyperlinks cannot be equated with traditional publication, as they only direct users to already available content. The Court noted that the information behind a hyperlink can be changed at any time, and that subjecting publishers to liability for a hyperlink would "have foreseeable negative consequences on the flow of information on the Internet, impelling article authors and publishers to refrain altogether from hyperlinking to material over whose changeable content they have no control. This may have, directly or indirectly, a chilling effect on freedom of expression on the Internet."

Facebook and France have reached an agreement allowing the French government to place its officials in Facebook offices. For six months, starting 1 January 2019, French officials will perform assessments under French jurisdiction in order to establish ‘smart regulation’ of content and of procedures for the removal of hate speech on Facebook.

The United Nations General Assembly Resolution A/RES/68/163 proclaimed 2 November as the International Day to End Impunity for Crimes against Journalists, urging member states to counter the current trend of vilifying journalists. In addition, the resolution condemns all attacks and violence against journalists and media workers and calls on member states to take measures 'to prevent violence against journalists and media workers, to ensure accountability, bring to justice perpetrators of crimes against journalists and media workers, and ensure that victims have access to appropriate remedies'.

In his introduction to the report, Fake news, data collection, and the challenge to democracy, Adrian Shahbaz said 'Events this year have confirmed that the internet can be used to disrupt democracies as surely as it can destabilize dictatorships' [...] 'With or without malign intent, the internet and social media in particular can push citizens into polarized echo chambers and pull at the social fabric of a country, fueling hostility between different communities.'

In a report submitted to the UN General Assembly in August 2018, but made publicly available in October, the UN Special Rapporteur for the promotion and protection of the right to freedom of opinion and expression, David Kaye, analysed the impact of artificial intelligence (AI) on human rights. The report pays particular attention to the right to freedom of opinion, the right to freedom of expression, the right to privacy, the obligation of non-discrimination, and the right to an effective remedy, and how AI could challenge these rights and principles. Among the recommendations outlined in the report, Kaye suggests that states should ensure that AI is developed in keeping with human rights standards, and that any state efforts to develop policies and regulations in the field of AI should ensure consideration of human rights concerns. Companies are also advised that their efforts to formulate guidelines or codes on ethical implications of AI should be grounded in human rights principles. Moreover, human rights impact assessments and public consultations should be carried out during the design and deployment of AI systems, and AI code should be fully auditable.

Several international instruments guarantee the right to freedom of expression. The Universal Declaration of Human Rights affirms that this right includes the freedom to hold opinions without interference and to seek, receive, and impart information and ideas. The Internet, with the opportunity it offers people to express themselves, is seen as an enabler of the exercise of this particular human right. Although these freedoms are guaranteed in global instruments and in national constitutions, freedom of expression is nevertheless curtailed in some countries through online censorship or filtering mechanisms imposed by states, often for political reasons.

Safeguarding freedom of expression

Online freedom of expression has featured high on the diplomatic agenda in the past few years; it is, for example, on the agenda of the UN Human Rights Council, as well as of regional intergovernmental bodies such as the Council of Europe. Freedom of expression on the Internet has also been discussed at numerous international conferences, including in the framework of Internet governance-related processes. The IGF annual meetings have also featured many discussions on issues related to the protection of freedom of expression online.

Online freedom of expression is a contentious policy area. As one of the fundamental human rights, it is usually at the centre of discussions on governmental content control, censorship, and surveillance. It also spans a number of other Internet governance-related issues, such as encryption and anonymity, net neutrality, and intellectual property rights. Some of these aspects have been analysed in reports issued by the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, who has emphasised on numerous occasions that the right to freedom of expression online deserves strong protection. Issues studied by the special rapporteur include protecting against censorship while addressing online gender-based abuse, and the continuing blocking of Internet services around the world. Freedom of expression also appears in broader discussions on human rights and access to the Internet.

Freedom of expression is protected by global instruments, such as the Universal Declaration of Human Rights (Article 29) and the International Covenant on Civil and Political Rights (Article 19), and regional instruments such as the European Convention on Human Rights (Article 10) and the American Convention on Human Rights (Article 13).

In the Universal Declaration of Human Rights, freedom of expression (Article 19) is counterbalanced by the right of the state to limit freedom of expression for the sake of morality, public order, and general welfare (Article 29). Thus, both the discussion and implementation of Article 19 must be put in the context of establishing a proper balance between two needs. This ambiguous situation opens many possibilities for different interpretations of norms and ultimately different implementations. The controversy around the right balance between Articles 19 and 29 in the real world is mirrored in discussions about achieving this balance on the Internet.

The main governance mechanism for addressing online freedom of expression is the UN Human Rights Council Resolution on Protection of Freedom of Expression on the Internet (2012). NGOs such as Human Rights Watch, Amnesty International, and Freedom House have developed numerous mechanisms for discussing and promoting freedom of expression on the Internet. Freedom House evaluates the level of Internet and mobile phone freedom experienced by average users in sample countries around the world. The latest Freedom on the Net study (2017) notes that Internet freedom worldwide has declined for the seventh consecutive year.

Actors

In line with its objective to build strong and democratic parliaments, the IPU assists parliaments in building their capacity to use information and communications technologies (ICT) effectively. In 2005, the IPU, together with UNDESA, established a Global Centre on ICT in Parliament, mainly aimed at promoting the use of ICTs in parliaments as a means of increasing transparency and effectiveness. The IPU has also been mandated by its member states to carry out capacity development programmes for parliamentary bodies tasked with overseeing observance of the right to privacy and individual freedoms in the digital environment.

Privacy and data protection online has been the subject of many UNHRC resolutions. General resolutions on the promotion and protection of human rights on the Internet have underlined the need for states to ensure a balance between cybersecurity measures and the protection of privacy online. The Council has also adopted specific resolutions on the right to privacy in the digital age, emphasising the fact that individuals should not be subjected to arbitrary or unlawful interference with their privacy, either online or offline. The UNHRC has also mandated the Special Rapporteur on the right to privacy to address the issue of online privacy in his reports.

Challenges to the right to privacy in the digital age (such as surveillance and interception) are among the issues covered by the activities of the High Commissioner for Human Rights. At the request of the UN General Assembly, the Commissioner prepared a report on the right to privacy in the digital age, which was presented to the Assembly in December 2014. The Office of the Commissioner also organises discussions and seminars on the promotion and protection of the right to privacy in the online space, and collaborates on such issues with the UN Special Rapporteur on the right to privacy.

The coalition, which is committed to advancing Internet freedom, formed multistakeholder working groups: An Internet Free and Secure; Digital Development and Openness; and Privacy and Transparency Online. While all working groups worked on different aspects of Internet freedom, the Digital Development and Openness group considered human rights online, especially the criminalisation of speech. The mandate of the working groups came to an end in May 2017 and was not renewed. In 2014, the coalition issued a statement on restrictions on access to social media, and in April 2017, another one condemning Internet shutdowns.

Instruments

Conventions

Cybercrime, referred to broadly as crime committed via the Internet and computer systems, is a phenomenon that continues to affect individuals and entities worldwide, and to pose increasing challenges to law enforcement authorities. One of these challenges is related to the transborder nature of cybercrime, which has implications for the efficiency and effectiveness of any measures aimed at combating this phenomenon.

The Convention on Cybercrime (also known as the Budapest Convention), adopted in 2001 within the framework of the Council of Europe, aims to respond to this challenge by setting a minimum common ground for national legislation on cybercrime, as well as a framework for international cooperation among countries. The Convention, initially signed by 31 countries, is currently ratified by 48 countries (40 member states of the Council of Europe, plus Australia, Canada, the Dominican Republic, Japan, Mauritius, Panama, Sri Lanka, and the United States of America).

The first part of the Convention describes the types of acts that countries should deem criminal offences in national legislation; these include: offences against the confidentiality, integrity, and availability of computer data and systems (such as illegal interception and data and system interference); computer-related offences (forgery and fraud); content-related offences (child pornography); and infringements of copyright and related rights. This section also includes provisions on procedural law, outlining the powers and procedures necessary for law enforcement authorities to detect, investigate, and prosecute cybercrime. The second part of the Convention focuses on international cooperation, and outlines a series of general principles for such cooperation, as well as specific provisions on mutual assistance between countries in areas such as access to stored computer data, and the interception, preservation, and disclosure of data. In 2003, a protocol was added to the Convention, criminalising acts of a racist and xenophobic nature committed through computer systems.

The Budapest Convention can be seen as the only de facto international treaty on cybercrime (given that it has also been ratified by countries beyond Europe, and therefore its coverage is not only regional), and it continues to be open to accession by interested countries. The Convention has been used as a basis for much national legislation dealing with cybercrime-related issues, as well as for regional legal instruments in this area (such as the Cybercrime Directive of the Economic Community of West African States and the African Union Convention on Cyber Security and Personal Data Protection). However, some countries have also raised concerns over certain provisions of the Convention (especially those related to international cooperation and mutual assistance), which were seen as violating international law norms and countries' sovereignty.

The implementation of the Convention is facilitated by the Cybercrime Convention Committee, which is composed of representatives of signatory states. The Committee is also tasked with facilitating the exchange of information regarding the use of the Convention, as well as with considering future amendments to the treaty. In June 2013, the Committee adopted a series of guidance notes representing the common understanding of parties regarding the use of the Budapest Convention on the following issues: computer systems, botnets, transborder access, identity theft, DDoS attacks, critical infrastructure attacks, malware, and spam. In addition, the Cybercrime Programme Office, opened by the Council of Europe in 2014, supports countries in building and strengthening their capacities to respond to cybercrime challenges, on the basis of the Budapest Convention.

Resolutions & Declarations

The IPU Resolution (21 October 2015) provides a broad framework for digital policy. It introduces two new developments in global digital policy:

- Operational paragraph 3 underscores that legislation regarding surveillance must be based, inter alia, on the principles of necessity and proportionality. It is the first instance of a global state institution officially adopting these principles.

- Operational paragraph 4 calls for upholding the principle of net neutrality, making it one of the first explicit references to net neutrality by a global state institution.

After the Second World War and the creation of the United Nations, the international community started to pay more attention to the issue of human rights and the need to ensure their protection at an international level. The Universal Declaration of Human Rights, adopted by the UN General Assembly in December 1948 (after a two-year-long drafting and negotiation process), was the first concrete step in this direction.

Following a preamble which underlines, among other things, that UN member states have pledged to achieve ‘the promotion of universal respect for and observance of human rights and fundamental freedoms’, the Declaration emphasises the fact that human rights and fundamental freedoms are equally applicable to all individuals, irrespective of race, colour, sex, language, religion, political or other opinion, national or social origin, property, birth, or other status. The document then describes specific human rights and fundamental freedoms, such as: the right to life and liberty, equality before the law, the right to an effective remedy for acts violating fundamental rights, the right to privacy, the right to property, the right to freedom of thought, conscience, and religion, the right to freedom of opinion and expression, the right to freedom of peaceful assembly and association, and the right to education. The Declaration also notes that any limitation on the exercise of rights and freedoms can be determined only by law and only for the purpose of ‘securing due recognition and respect for the rights and freedoms of others and of meeting the just requirements of morality, public order and the general welfare in a democratic society’.

Although it had been adopted long before the Internet appeared, the Declaration has a number of provisions which are of direct relevance for digital policy and the protection of human rights in the digital environment. The right to privacy and the right to freedom of expression, recognised through the Declaration, are two fundamental rights and freedoms that also have relevance in the online space, and whose protection is therefore equally important in both the offline and online world. To this aim, a number of international bodies, such as the United Nations Human Rights Council, the Council of Europe, and the European Parliament, have affirmed the principle according to which the same rights that people have offline must also be protected online.

The Universal Declaration of Human Rights, while not a treaty itself, constitutes the foundation of international human rights law, as it inspired the elaboration and adoption of a large number of legally binding international, regional, and national legal instruments dealing with the promotion and protection of human rights and fundamental freedoms. Examples in this regard include: the International Covenant on Civil and Political Rights, the International Covenant on Economic, Social and Cultural Rights (these two Covenants, together with the Universal Declaration, are known as the ‘International Bill of Human Rights’), the International Convention on the Elimination of All Forms of Racial Discrimination, and the United Nations Convention on the Rights of the Child.

The promotion and protection of human rights, on the basis of the Declaration and subsequent international human rights law, falls within the responsibility of both nation states and international intergovernmental organisations. Within the UN framework, a number of bodies address human rights issues: the Human Rights Council, the High Commissioner for Human Rights, the General Assembly, and the Economic and Social Council, as well as a number of Special Rapporteurs mandated to look into specific human rights, such as the right to privacy and the right to freedom of expression.

Following a United Nations General Assembly resolution adopted in December 2001 (Resolution 56/183), a World Summit on the Information Society (WSIS) was launched, with the aim of contributing to the development of a unitary global vision of an inclusive and development-oriented information society. The summit was held in two phases: the first took place in Geneva, from 10 to 12 December 2003, and the second in Tunis, from 16 to 18 November 2005. Although a UN summit, WSIS was not limited to governmental participation, but also welcomed representatives of the private sector, the technical community, and civil society.

The Tunis Agenda is one of the two final documents adopted at the conclusion of the second phase of WSIS. The document contains provisions on: financial mechanisms for bridging the digital divide, Internet governance and related issues, and the implementation and follow-up of the WSIS outcomes.

One of the main characteristics of the Agenda is that it deals extensively with the concept of Internet governance (IG). Firstly, it provides a working definition of Internet governance, as proposed by the Working Group on Internet Governance: 'the development and application by governments, the private sector and civil society, in their respective roles, of shared principles, norms, rules, decision-making procedures, and programmes that shape the evolution and use of the Internet’. This definition outlines two key principles that are also separately underlined in the document: that Internet governance encompasses not only technical issues related to the management of the Internet technical resources (names and addresses), but also public policy issues; and that the various stakeholders (private sector, civil society, the academic and technical communities) have roles and responsibilities in Internet governance.

Secondly, the Agenda lays the foundations for the Internet Governance Forum (IGF), created as a forum for multistakeholder dialogue on public policy issues related to key elements of Internet governance (such as cybersecurity, cybercrime, spam, freedom of expression, privacy and data protection, the digital divide, and multilingualism). In addition, it introduces the concept of ‘enhanced cooperation’, aimed at enabling governments to carry out their roles and responsibilities in international public policy issues pertaining to the Internet, and calls for the launch of a ‘process towards enhanced cooperation’.

Although the Tunis Agenda is not a legally binding instrument, it outlines a series of recommendations regarding the implementation of the WSIS objectives and action lines at the national, regional, and international levels. Some of these include: building national e-strategies as part of broader national development plans, using bilateral and multilateral technical assistance programmes, involving UN regional commissions and UN agencies in the implementation process, and the participation of all stakeholders in implementation activities. An overall review of the implementation of WSIS outcomes was also called for in 2015.

The Agenda was endorsed by the UN General Assembly through its Resolution 60/252 from April 2006.

Activities undertaken as part of the follow-up and review of the implementation of WSIS outcomes include: the designation of facilitators and co-facilitators of WSIS action lines (mostly UN agencies), the creation of the UN Group on the Information Society (tasked with facilitating the implementation of the WSIS outcomes), meetings of the action line facilitators, and the WSIS Forums (held annually since 2009).

In December 2015, a UN General Assembly High Level Meeting was held in New York, dedicated to an overall review of the implementation of the WSIS outcomes, as required by the Tunis Agenda. The meeting concluded with the adoption of an intergovernmentally agreed outcome document which, among other things, reaffirmed the commitments set out in the Tunis Agenda, acknowledged the progress made over the previous 10 years, and called for more efforts in bridging the digital divide and strengthening the information society. A new high-level meeting on the overall review of the implementation of the WSIS outcomes is planned for 2025, and aims to take stock of progress and identify both areas of continued focus and remaining challenges.

This paper argues that mobile is both a medium and a media delivery platform. Changes in handset devices and levels of literacy will affect who has access to what content and there are key equity issues to be addressed.

Publications

The latest edition of the glossary, compiled by DiploFoundation, contains explanations of over 130 acronyms, initialisms, and abbreviations used in IG parlance. In addition to the complete term, most entries include a concise explanation and a link for further information.

The booklet looks at legal restrictions on and informal obstacles to personal use of encryption for communication and the exercise of anonymous speech online in four countries: Morocco, Pakistan, South Korea, and the United Kingdom.

The book, now in its sixth edition, provides a comprehensive overview of the main issues and actors in the field of Internet governance and digital policy through a practical framework for analysis, discussion, and resolution of significant issues. It has been translated into many languages.

Reports

The report, prepared by the Global Commission on Internet Governance, outlines a series of recommendations to policy makers, private industry, the technical community and other stakeholders on modalities for maintaining a ‘healthy Internet’. It tackles aspects such as: the promotion of a safe, open and secure Internet, human rights for digital citizens, the responsibilities of the private sector, safeguarding the stability and resiliency of the Internet’s core infrastructure, and improving multistakeholder Internet governance.

The report explores the connection between encryption and human rights, arguing that individuals should be able to encrypt their communications and personal data as an essential protection of their rights to privacy and freedom of expression.

This report measures Internet freedom in countries around the world. The categories used are 'obstacles to access', 'limits on content', and 'violations of user rights'.

The report is a compilation of essays from journalists, academics and organisations in relation to online threats against female journalists. It features recommendations by the OSCE Representative on Freedom of the Media, on how participating States, media organisations and intermediaries can assist in ensuring that female journalists and media actors can work without fear and exercise their human right to freedom of expression.

Beginning with an overview of knowledge societies, the report discusses building infrastructure, using social media, and fostering open data. It also looks at gender sensitivity, environmental sustainability, and ethical considerations.

GIP event reports

This session discussed issues surrounding freedom of expression (FoE), as an element at the centre of every democratic society.

The speakers talked about the limits of, and control over individuals expressing themselves freely online, and they tackled the question of copyright reform, and whether it restricts the freedom of expression, or enables the original content creators to enjoy it. Mr Cristian Urse (Head of Council of Europe Office, Georgia) moderated the session.

Mr Giacomo Mazzone (Head of Institutional Relations and Members Relations South, EBU-UER European Broadcasting Union) reported from Plenary 2 on fake news. He said that the session clearly pointed to important issues that should be discussed in more depth. The keynote speech noted that fake news and misinformation have a big impact on democracy and on the lives of citizens. The European Commission is looking for effective remedies that would not hurt crucial European values. The question of who controls the controllers was raised. In 22 out of 28 countries, the media are the least trusted institutions, and in this regard, improvements should be made. During Plenary 2, Mazzone explained, Google’s representative gave a list of possible technical solutions to the problem. They looked into an algorithmic solution or technical implementation that could solve the issue of fake news; however, not everyone in the session was in agreement with their proposals.

Ms Irina Drexler (National Campaign Coordinator, No Hate Speech Movement) spoke about the ‘No Hate Speech Movement’ in Romania, which was initiated by the Ministry of Youth and Sports together with nine other NGOs that work actively on human rights issues. She mentioned ActiveWatch, an organisation which monitors the press in Romania while advocating for the following three pillars: human rights and FoE, anti-discrimination, and media education. She stated that the media in Romania has greatly contributed to the polarisation of society. Drexler then spoke about the polarisation which exists among the press itself, between the ‘good press’ and the ‘bad press’. When it comes to social networks, several courts have ruled that media outlets need to take down articles that are critical of the ruling party, a clear violation of journalists' FoE. On a positive note, Drexler said that ‘there is a more active role that the national council on audio-visual has been taking in 2017’. She underlined the fact that investigative journalists and their families are often attacked.

Ms Natalia Mileszyk (Author, Communia Association and Public Policy Expert, Centrum Cyfrowe Foundation) discussed whether copyright is a tool to restrict FoE in the light of the ongoing EU reform. She explained why it is important to talk about copyright in a FoE session: in her view, copyright is a way of restricting and limiting someone’s ability to publish materials online. Article 13 of the proposed EU directive aims to introduce a filtering obligation for platforms that host user-generated content. ‘In the European Union (EU) there is an agreement that there should be something like a notice and take-down if there is a copyright infringement.’ Around eighty organisations sent a letter to EU officials, arguing that having a private entity ‘to monitor everything we do online’ was harmful for FoE. In the EU, copyright is perceived as part of the digital single market agenda rather than as a matter related to human rights regulation. This shows the importance of ‘always be[ing] aware and conscious about human rights’.

Mr Wolfgang Benedek (Professor of International Law, University of Graz) spoke about the challenges for intermediaries in respecting freedom of expression. He said that nowadays, companies act more as gatekeepers and bear the responsibility and accountability of intermediaries. He spoke about the right to be forgotten, or the right to erasure, and noted that the existing information on how platforms implement this in practice is not clear. The Google transparency report showed that 686 000 requests had been made, referring to 2.5 million URLs, of which 44% had been de-listed, which gives a clear idea of the high de-listing rate. The criteria are often applied in an intuitive way. Benedek wondered how to organise self-regulation in a way that would not have a chilling effect on FoE, and that would make the process foreseeable and legal, in the sense of taking human rights into account and providing an effective remedy. He concluded by saying that standards and practices on content moderation should be publicly available and users informed. Restrictions should be based on principles.

Mr Pearse O'Donohue (Director for Future Networks, Directorate-General for Communications Networks, Content and Technology, European Commission) discussed how to tackle illegal content while ensuring FoE and access to information. He said that the European Commission had issued a recommendation on measures to tackle illegal content online, such as material promoting terrorism, child pornography, etc. ‘We have a set of policies, but now we set out a set of overarching principles and objectives in line with the Charter.’ O’Donohue also mentioned that the Commission could not put itself in a position to decide what disinformation and fake news are. It is important to be able to adjust rapidly when mistakes are noticed, but it is also important to balance the implications of online disinformation for democracy against FoE. He noted the potential tension between the ability to express oneself and the rights of copyright holders. When it comes to the press, he noted that quality journalism is essential for maintaining democracy.

Mr Giorgi Gvimradze (Head, News and Current Affairs Department, Georgia Broadcasting Corporation) spoke about religious feelings versus FoE. Speaking as a media representative, he admitted that there is no public space for a qualitative and academic discussion of this issue. While it is ‘more or less possible to define’ what FoE is, there is no clear understanding of what religious feelings are. When it comes to FoE, the question is not ‘who is guilty?’, but rather the violation itself: ‘who is violating what, what is a work of art?’. He said that while criticism of some religious beliefs, practices, and theology is accepted, the same criticism of minority religions is labelled Islamophobia or antisemitism and is condemned. The traditional media has more resources and should take advantage of digital platforms and bring responsibility to them, he concluded.

Ms Claudia Luciani (Director of Democratic Governance and Anti-Discrimination, Council of Europe) opened the session by presenting the big picture about human rights and the challenges that new technologies are bringing.

She explained how the Council of Europe is dealing with these challenges in key areas: democracy, rule of law, and human rights. She referred to numerous mechanisms such as the programme for digital citizenship education, which aims to enable pupils to make informed choices. The Budapest Convention on Cybercrime is also an important mechanism that preserves the rule of law, by enabling trans-border access to data between authorities, while respecting data protection principles. Moreover, with its recommendations, guidelines, and Convention 108, the Council is addressing the new challenges that could jeopardise human rights, such as algorithms.

Ms Nadia Tjahja (Steering Committee Member, Youth Coalition of Internet Governance), as moderator, emphasised that the focus of this session should be on information disorder, which goes beyond fake news. Here we should speak about misinformation, mal-information, and disinformation, how we can address these phenomena, and whether there are effective remedies for doing so.

Mr Paolo Cesarini (Head of the Unit, Media Convergence and Social Media, Directorate General for Communications Networks, Content and Technology, European Commission (EC)) reflected on the EC’s view of information disorder as a somewhat narrow category, focusing on disinformation. The key element in this view is the harm that can be caused to democracies as well as to public policies. There is also the question of how to avoid information pollution when the nature of the Internet is such that everybody should be able to express their views. Since this phenomenon is evolving and promotion mechanisms will change, transparency should be one of the most important mechanisms. This transparency should cover the flow of money that finances this information, political advertising, human versus automated interaction, as well as the sources of this information, in order to enable users to recognise trustworthy sources. To cope with this phenomenon, resources will be required to promote media literacy, to support grassroots organisations, and for fact checkers to develop best practices for all groups of citizens.

Moving forward, Ms Ana Kakalashvili (Analyst, Institute for the Development of Freedom of Information) addressed the regulatory attempts of governments to cope with fake news. Highlighting examples from several countries, she raised the question of whether strict regulation is the right answer. Regulations in the area of free speech that are not carefully designed are even more dangerous than the very problem they are meant to solve. This is especially the case in young democracies. In the end, she drew a distinction between propaganda and fake news, stating that propaganda requires different treatment and official recognition as such.

Mr Patrick Penninckx (Head of Department, Information Society, Council of Europe) started with the question of trust and trusted information. According to the Edelman Trust Barometer, the media is the least trusted institution. People are not able to recognise the difference between rumours and real journalism, and the number of people who trust social media as a source of information is growing. Along with that, there is a current trend of people reading less and less. Even though the media has specific roles in this process, especially in providing high-quality journalism, it is important that disinformation is debunked not just by the media, but by all other stakeholders.

Ms Tamar Kintsurashvili (Executive Director, Media Development Foundation), who teaches propaganda research methods at Ilia State University, stated that the problem is much wider than fake news or information disorder, because propaganda is part of hybrid warfare. This problem is present not only in countries where democracy is undermined, but also in democratic countries (for example, during elections). In order to properly address the issue, we need a collaborative approach, but also to regain trust and preserve the quality of the media.

Ms Clara Sommier (Public Policy Analyst, Google) explained how Google and YouTube are trying to address disinformation issues through five different mechanisms designed so far. First, they are trying to promote quality and trustworthy content. Second, it is important to cut money flows, by disabling the advertising option for platforms that mislead users. Third, they work on empowering users via fact-checking tags. Fourth, they have dedicated funds to supporting an industry that is working on innovative solutions to tackle this problem, such as via machine learning. Finally, they support the press via a subscription option.

At the end of the session, Ms Jutta Croll (Project Manager and Chairwoman, Digital Opportunities Foundation) concluded with a brief message in which she reflected on the concept of information disorder, its motives to harm democracy, and questioned whether regulatory mechanisms are an effective remedy.

This session took place on 3 May 2018 at the United Nations in Geneva, Switzerland, and was moderated by Ms Alessandra Vellucci (Director, UN Information Service, Geneva). The event commemorated the 25th anniversary of the United Nations’ World Press Freedom Day. The moderator expressed her regret about the journalists killed while exercising their profession in war-torn countries, and used this as a reminder of the importance of protecting and promoting a free press.

Mr António Guterres (UN Secretary-General) gave the welcoming remarks by noting the importance of an uncensored free press for democratic participation. Mr Abdulaziz Almuzaini (Director of the United Nations Educational, Scientific and Cultural Organisation (UNESCO) Geneva Liaison Office) commented on the media’s role in increasing transparency and restraining power in the public and private sectors.

Mr Noel Curran (Director-General of the European Broadcasting Union) said that this event serves as a reminder for governments to uphold the freedom of expression in the name of democracy, citing Article 19 of the UN’s Universal Declaration of Human Rights. He continued by criticising the fact that many modern politicians have begun to publicly discredit the modern media, labelling journalists as ‘enemies of the people.’ To Curran, resisting this type of censorship means educating the public about episodes of media abuse, avoiding knee-jerk reactions against ‘fake news,’ and calling on media companies to protect their reporters.

Ms Elizabeth Laurin (Permanent Representative of France to the UN in Geneva) spoke next. She connected the freedom of the press to UN sustainable development goal (SDG) 16, which aims to promote peaceful and inclusive societies for sustainable development. She praised digitalisation for democratising access to information, but cautioned against the speed at which fake news can travel through digital outlets. She added that educating the youth on how to seek verifiable news sources is important in counteracting the power of fake news.

Ms Nina Larson (President of the Association of the Accredited Correspondents to the UN) opened by noting that more than 30 journalists have been killed since the beginning of 2018. Larson encouraged the private sector to take steps to minimise the time required to correct the spread of misinformation online. Furthermore, she detailed the dangers of advanced audiovisual editing software that can be used to distort past events to benefit a particular agenda.

Mr Walid Doudech (Permanent Representative of Tunisia to the UN Office in Geneva) acknowledged that Tunisia has made great leaps forward in terms of press freedom since the 2011 Arab Spring, but noted that his country still needs to establish institutions that coordinate and monitor freedom of expression. In particular, Doudech called for bodies tasked with regulating access to information and personal data, in order to avoid their economic and political misuse. He also posed the question of whether there should be an international convention aimed at protecting journalists.

Ms Nathalie Prouvez (Chief of the Rule of Law and Democracy Section of the Office of the UN High Commissioner for Human Rights (OHCHR)) spoke specifically about measures states can undertake to guarantee freedom of the press. She noted that governments cannot wait for the development of new ‘instruments’ for protecting press freedom, but rather must act now with the present set of ‘tools’. To that end, Prouvez noted that the OHCHR is developing a way of measuring violations committed against journalists, which will help to target SDG 16. She encouraged governments to use this tool and their judicial systems to ensure that all cases of media censorship are duly investigated and prosecuted.

The session was then opened to questions and comments. Among the points raised were a question about the particular challenges that female journalists face, and a comment that social media has played a particularly important role in disseminating fake news, to which Curran responded that social media platforms contribute to an unhealthy singularity of perspective in news coverage.

Larson rejoined the conversation by encouraging the audience to voice opposition to media attacks and to embrace education as the key to deconstructing censorship. Moreover, she noted that new algorithms designed to flag and hide clearly inaccurate news are under development in the digital world.

The session concluded as Doudech stressed the importance of involving journalists’ perspectives in the efforts to protect the sanctity of accurate media worldwide.

The exploratory workshop on the right to privacy in the digital age took place on 19–20 February 2018 at the Office of the High Commissioner for Human Rights (OHCHR) in Geneva, Switzerland. The workshop was held following Human Rights Council (HRC) Resolution 34/7 of 23 March 2017, and it had ‘the purpose of identifying and clarifying principles, standards and best practices regarding the promotion and protection of the right to privacy in the digital age, including the responsibility of business enterprises in this regard’.

The session was opened by Ms Peggy Hicks, director of the Thematic Engagement, Special Procedures and Right to Development Division of the Office of the High Commissioner for Human Rights (OHCHR), who reflected upon the fact that the right to privacy lies at the intersection of the human rights discourse and digital technology. Although data-driven technology offers multiple opportunities for society at large, it also poses challenges, especially because ensuring the right to privacy is linked to the enjoyment of other rights such as the freedoms of expression and association.

In particular, Hicks affirmed that there is currently an urgent need to strengthen the protection of human rights vis-à-vis:

Internet of Things (IoT): ‘smart’ devices will continue to create new sources of data thus posing new threats for individuals and groups.

FIRST PANEL – Setting the scene: the role of the right to privacy within the human rights framework and for civic protection

The first session focused on the position occupied by the right to privacy within the existing human rights legal framework.

Ms Anja Seibert-Fohr, UN Human Rights Committee member, affirmed that the challenges to the right to privacy posed by social media can be effectively tackled by existing provisions, such as the International Covenant on Civil and Political Rights (ICCPR), in particular Article 17: ‘1. No one shall be subjected to arbitrary or unlawful interference with his privacy, family, home or correspondence, nor to unlawful attacks on his honour and reputation. 2. Everyone has the right to the protection of the law against such interference or attacks.’ She considered that effectively guaranteeing the right to privacy is a problem that affects multiple countries. The main issues under consideration concern states’ access to data collected and/or administered by private parties, and whether such data can be transferred outside the country. The actual scope of application of Article 17 is rather broad: ‘interference’ includes electronic surveillance, online surveillance, wiretapping, and metadata collection. Moreover, the claim that the ICCPR does not apply extraterritorially has to be rejected as well.

The interpretation of Article 17 by the UN Human Rights Committee concerns the existence of substantive standards protecting the right to privacy: generally speaking, the Covenant does not establish a total ban on surveillance, but it ensures that when interference occurs, it is not arbitrary and is regulated by statute. In short, interference is possible only if authorised by law and restricted to the circumstances of the case.

Mr Joe Cannataci, UN special rapporteur on the right to privacy, considered the legal protection granted by the Council of Europe (CoE) Convention 108. He said that the right to privacy is a fundamental and universal human right; however, it is neither absolute nor self-executing. Its main purpose is to ensure the protection of individuals, because citizens need to have safeguards and judicial remedies available regardless of whether the threat is national or international. The basic principles applied are usually necessity and proportionality. At the national level, effective guarantees are not always granted. For example, all the legislation concerning the oversight of domestic intelligence within the UN’s member states requires amendment and reinforcement. Moreover, 75% of the UN’s member states have no system of detailed safeguards in place when it comes to the surveillance of their citizens by other states. He concluded by remarking that although developed within a European framework, Convention 108 is becoming the de facto global convention on the protection of personal data, because there are also non-European states that have ratified it or are waiting to do so. Moreover, with regard to surveillance, UN member states need to develop a new regulatory framework because the existing Convention 108 is not sufficient.

Ms Anita Ramasastry, chair of the UN Working Group on Business and Human Rights, focused on the relevance of the UN guiding principles on Business and Human Rights, which were introduced in 2011 after tech companies in the US were accused of human rights abuses for providing user data to the government. She illustrated the three main pillars upon which the guiding principles rest.

States’ existing obligations to respect, protect, and fulfil human rights and fundamental freedoms when companies transfer data. For example, the European Parliament has a data export restriction, i.e. it prohibits tech companies from transferring data to countries where the right to privacy is not ensured.

Corporate respect for human rights and applicable laws: for example when considering the principle of due diligence and the ‘cause contribution’ and ‘direct link’ test.

Access to remedy: both businesses and states have an obligation to provide effective remedies to breaches, both via judicial and non-judicial means to ensure reparation.

Mr David Kaye, UN special rapporteur on the promotion and protection of the right to freedom of opinion and expression, mainly addressed the interlinkage between the right to privacy and other human rights. In particular, he maintained that ‘privacy is under threat in ways that neither we nor the law are familiar with’. Legally speaking, under Article 2 of the ICCPR, state parties are bound to ensure respect for the rights included in the Covenant. Article 17 not only ensures the right to privacy, but also protects against discrimination and interference in the enjoyment of such rights. Moreover, the restrictions that may be imposed by states under Article 19(3) do not apply to the right to hold an opinion (Art. 19(1)) and, most importantly, they need to pass the ‘necessary and proportionate’ test. He concluded his presentation by affirming that privacy is seriously at risk, and that encryption and anonymity in digital communication enable individuals to exercise their rights to freedom of opinion and expression in the digital age and, as such, deserve strong protection (as highlighted in the UN 2015 Report on ‘Encryption, anonymity and human rights framework’).

SECOND PANEL – Surveillance and communications interception

Ms Lorna McGregor, director of the Human Rights Centre, Essex Law School, moderated the second panel on the interlinkage of surveillance and the right to privacy. She referred to the 2014 OHCHR report on the ‘Right to privacy in the digital age’ and reflected on the lack of transparency regarding the implementation of surveillance techniques and the application of the principle of proportionality.

A speaker on Intellectual Property and Information Technology Law at Strathmore University in Kenya considered that since the publication of the aforementioned report, two important developments have taken place. First, in 2015, Prof. Joe Cannataci was appointed the first ever special rapporteur on the right to privacy. Second, during 2016, the discussion regarding the right to privacy started to also focus on the role of companies. He stressed the importance of including companies in the discussion because they are often found to be involved in censorship together with governments. He also warned about new forms of surveillance, such as ‘social media intelligence’, which uses various tools to monitor data in the public domain, for example on social media. He concluded by stating that there are three main dimensions to consider when addressing security: individual, business, and societal, and that security is not only a matter of technology, but also a matter of policy reach.

Ms Sarah McKune, senior researcher at Citizen Lab, University of Toronto, Canada, focused on cyber-threats targeting civil society and international cybersecurity initiatives. She explained that with current technological developments, new forms of targeted surveillance are now possible (such as phishing operations and network injections). She specified that, legally speaking, there is no justification (under the principles of legality, necessity, and proportionality) for the use of such surveillance by governments. She concluded by considering that what needs to be done is to ensure that a system designed for lawful interception is not used by governments for unlawful purposes against civil society leaders.

Ms Gail Kent, global security expert at Facebook, United Kingdom, considered Facebook’s policy regarding data sharing. She specified that Facebook, being a US-based company, abides by US national laws, which means that it provides content and data to governments according to the law. This means that Facebook is allowed under certain circumstances to provide basic subscriber information (username, IP addresses, phone number, and last log-in) but not to share lifetime content and metadata. Moreover, she explained that when a government requests access to data from Facebook, there is a three-step approach in place. First, the government is required to use its national legal framework allowing for data access; then the jurisdiction of the individual whose data is requested is considered; and lastly, the request is checked for compliance with human rights standards.

THIRD PANEL – Securing and protecting online confidentiality

The session, moderated by Mr David Kaye, UN special rapporteur on the promotion and protection of the right to freedom of opinion and expression, discussed the variety of ways in which online confidentiality is under threat and measures to better protect it.

Mr ‘Gbenga Sesan, executive director of the Paradigm Initiative, Nigeria, talked about the differences in the meaning of confidentiality among various groups. In a study on different African countries, the trend observed – especially in electoral contexts – is that of using information as an opportunity to gain increased control. Sesan noted that the power dynamic is not tilted in favour of citizens; rather, confidentiality is linked to the sense of privilege attached to senior authority. While the tensions between security and privacy remain relevant, this conversation needs to be held together with civil society rather than in silos. Progress has been made in the area of encryption, in particular at the level of awareness among citizens and deployment in services used on a daily basis. In conclusion, Sesan stressed the importance of consent and choice in the use of information: ‘confidentiality is a choice – the fact that I have put my personal information online is a choice – it does not mean I waived my entire right to privacy for something else’.

Ms Fanny Hidvégi, European policy manager at Access Now, Brussels, highlighted the actions taken by states. She started with the Hungarian example: a bill that was introduced in 2015 to ban encrypted services and demand mandatory backdoors, but did not pass into law. There is a need for a public debate over encryption; the zero-sum framing of the discussion is counterproductive. ‘We have a right to both security and privacy. Necessary cannot be just useful, reasonable and desirable, we need to search for evidence in that field’, she pointed out. EU policy work on encryption is currently taking place in three areas: 1) direct co-operation with law enforcement agencies (undermining the MLAT reform in favour of such direct co-operation), 2) regulation of vulnerabilities and dual-use export technologies, and 3) e-privacy reform. According to Hidvégi, the latter is the most important law on the confidentiality of information in the EU, and is related to economic aspects and increasing trust in the Digital Single Market.

Shifting perspective to the national level, Mr Eduardo Bertoni, director of the National Access to Public Information Agency (NAPI) of Argentina, offered the example of his own country in complying with international standards. Argentina passed its first access to information law in 2016 and created NAPI as an oversight body. There was a deliberate decision to include the oversight of the data protection law in the mandate of NAPI, alongside the monitoring of access to information. In practice, there is a possible conflict between access to information and freedom of expression, on the one hand, and data protection on the other; therefore, NAPI has two directorates at the same level: one for access to information and one for data protection. In cases in which both rights are concerned, NAPI internal regulations say that a transparent decision needs to be taken after hearing both directorates. Since 2016, work has been ongoing in Argentina to change the data protection law: the draft that will go before congress shortly includes specific provisions on privacy by design. Currently, Argentina is in the process of becoming a party to Convention 108 of the Council of Europe.

The ensuing discussion with the participants touched on the secrecy of telecommunications, the example of the Dutch law on encryption, the difficulty of finding common ground among different ministries if work is done in silos, responsibility at different levels, the ease of use of encryption tools, and the evolution of international law. Two main take-aways were put forward by the moderator: first, the simplicity of tools is key, but the flipside is that the convenience and efficiency of the digital age all have privacy implications; second, challenges in protecting online confidentiality exist not just for governments or civil society, but for everyone.

The moderator, Ms Malavika Jayaram, executive director of the Digital Asia Hub, Hong Kong, provided a background for the discussion, highlighting recent developments that raise concerns: India’s Aadhaar system is privacy-invasive, but is often presented as a great example of e-governance; the Chinese social credit system provides inspiration for many legislators to beta-test it in environments with weak rights protections; the social contract between citizens and states is evolving as the collection and processing of data is increasingly performed by private actors; and among the youth, there is an invasion of privacy by peers. The panellists were given seven minutes to discuss the work of their organisations and the challenges faced in addressing personal data protection in this complex environment.

Ms Nighat Dad, executive director of the Digital Rights Foundation, Pakistan, discussed the absence of a privacy law or data protection legislation in her country. In her opinion, telecommunication operators starting to work in Pakistan benefit from the fact that there are no local laws to protect users, and in practice these companies do not observe the same procedures as they do elsewhere, where legislation is in force.

The world’s biggest biometric database is currently being developed in Pakistan; it will register more than 200 million people through biometric procedures. Dad also discussed the system of mass surveillance in cities, sometimes in the ‘safe city’ projects that are being rolled out. There is no transparency about the collection, processing, and distribution of data in ‘safe cities’. Work is also currently underway on a new cybercrime law, which features several provisions that give mass-surveillance powers to different bodies of the government without any accountability measures in place.

Ms Sophie Kwasny, head of the Data Protection Unit, Council of Europe, discussed the modernisation of the Council of Europe (CoE) Convention 108. There is no equivalent at the UN level on data protection, thus Convention 108 is unique. It was opened for signature in 1981 to all countries and, beyond the membership of the CoE (47 countries), 4 more have acceded. Nearly 70 countries are observers.

A shift that Kwasny observed in the field of data protection is that governments have started calling for regulation and for an increase in the level of protection afforded. Modernisation of Convention 108 goes hand in hand with that, as the responsibility angle was stressed in the work undertaken over the last seven years with the involvement of over 50 countries. The changes pertain to increasing the rights of the data subjects, increased transparency and accountability, assessment of the likely impact on human rights prior to data processing, and the introduction of data breach notification obligations. In conclusion, Kwasny highlighted the horizontal scope of the convention (processing by law enforcement, private sector, etc.).

Mr Chawki Gaddes, professor of constitutional law and president of the National Authority for the Protection of Personal Data of Tunisia, presented the experience of his country as a precursor for data protection in the region, starting with a related provision in the 2002 constitution. In 2014, the new constitution enlarged the scope of protection to private life more generally. In 2017, Tunisia became the 51st member of Convention 108 and is currently planning a new data protection law in accordance with the GDPR. In Gaddes’s view, for the culture of privacy and data protection to be internalised on a mass scale, greater efforts need to be put into awareness raising and education. In transitional contexts, the role of a data protection authority is not to sanction, but to support the development of a rights-respecting culture. Their activities include conferences and public events for judges and legal personnel. A recent success he noted was the withdrawal of the draft law on biometric cards, which was declared unconstitutional.

The discussions that followed touched on incentives for data protection safeguards at the national level, the challenges faced by the Global South in protecting rights, as well as space for multistakeholder dialogue. An example cited was the strengthening of co-operation between the CoE and the private sector: in a recent exchange of letters, major Internet companies and associations committed to work together with the pan-European organisation on issues such as child online protection, freedom of expression or cybercrime.

SECOND DAY

FIRST PANEL – New and emerging issues

Mr Danilo Doneda, independent consultant and professor at the State University of Rio de Janeiro, Brazil, considered that new technologies are posing different challenges vis-à-vis data protection regulation. The main issue under consideration is that technological developments and automation have created a privacy imbalance. This is because data protection principles are based upon the idea that individuals have control over, and knowledge of, how their data is used. He suggested that in order to improve data protection, preventive measures need to be taken, and the focus should not rest only upon the individual; rather, protection should be embedded in the technology itself (i.e. privacy by design) and take into consideration the social implications of data usage. He concluded by reaffirming that new uses of data demand new regulatory frameworks. However, the risk of bureaucratisation and excessive technicality can create significant imbalances of power and information, making it difficult (if not impossible) for citizens to be conscious ‘agents’ in the data protection and privacy discussion.

Ms Malavika Jayaram, executive director of the Digital Asia Hub, Hong Kong, drew attention to the fact that transparency and accountability are lacking when it comes to relying on algorithms and Artificial Intelligence (AI). In particular, she considered whether AI and computation can reduce discrimination when the data they are built upon expresses discrimination. For example, in the case of the house-sharing application Airbnb, the software’s design favours discrimination as it asks users to provide personal data (i.e. names) in order to secure a booking. In a study conducted a year earlier, bookings with foreign names or names that ‘sounded black’ had a 30% higher chance of being rejected. She concluded that AI can be an empowering instrument in allowing people to overcome local physical and language barriers; however, as long as it remains a ‘black box’ lacking transparency, it also poses serious challenges.

Mr Alessandro Mantelero, associate professor of Law, Polytechnic University of Turin, Italy, told the audience not to forget the collective dimension and the impact that AI has on society at large. In particular, he considered that the forms and categories of discrimination created by algorithms differ from the ‘traditional’ categories and groups that are objects of discrimination. This is because such categories are defined artificially by computation, which means that they can change within seconds as an individual’s behaviour changes, without the individual being aware of it. He maintained that the debate regarding AI should be an ethical discussion comprising specific rights (e.g. the right to privacy) and consumer protection, while taking into consideration the existing human rights legal framework. Although AI poses concrete legal issues, the discussion is ethical in nature because the mechanism by which AI operates is not consistent with current societal values. He concluded his presentation by stressing the necessity to:

Consider the collective dimension when addressing data protection rather than focusing only on the individual level.

Criticise the ‘purpose’ argument: the problem with AI is that the purpose driving the computations is unknown; in the case of machine learning, the purpose is evident although it is not clear how it is reached.

Move away from traditional approaches that consider data protection only through the lens of impact assessment. Rather, the discussion needs to also include societal and ethical values.

SECOND PANEL – Safeguards, oversight and remedies

The final session was opened by Mr Thorsten Wetzling, project director at the Privacy Project, Stiftung Neue Verantwortung, Germany, who explained the rationale for a multistakeholder approach to upholding the rule of law and providing effective oversight of, and remedies for, various data collection processes. The evolving practice of data collection (including via phishing, network exploitation, etc.) requires new governance mechanisms, as technological innovations continue to challenge the legal system. There is no shortage of guiding principles promoting effective intelligence oversight, yet it remains an ambitious and unattained benchmark. Modern security and intelligence agencies use a wide range of digital tools, so effective checks and balances are imperative to monitor and sanction potential abuses of power. Importantly, oversight is not a fixed concept but a work in progress, and a broader set of perspectives is therefore needed.

According to Ms Katitza Rodriguez, international rights director at the Electronic Frontier Foundation (EFF), cross-border access to data by law enforcement agencies is a key priority right now, as many institutions and governments are exploring new cross-border data paths that threaten user privacy and data protection. Among these, she referred to the CLOUD Act proposal in the US, the European e-evidence debates, the negotiation of the second additional protocol to the Budapest Convention, and provisions in bilateral agreements. Her intervention focused on the safeguards currently missing, as advocated by the EFF. They include: an individual notice requirement (users should be notified as early as possible so they can challenge the decision or seek remedy), judicially authorised access, a strong factual basis for surveillance, dual privacy protection norms, and the protection of content and metadata.

Mr Eduardo Bertoni, director of the National Access to Public Information Agency of Argentina (NAPI), focused on the data processing practices of corporations and drew on his experience heading an oversight body in Argentina. Jurisdiction, in his view, is the key problem in protecting cross-border data processed by global corporations. According to its mandate, NAPI must oversee the implementation and application of data protection law for people in Argentina. For an Argentinian citizen, though, it is difficult to exercise the right to access, suppress, or correct information held in a global database belonging to an Internet company. This issue is taken into account in the draft of the new data protection law, whose provisions are similar to the GDPR in this regard: the processing of the data of Argentinian citizens needs to comply with national law. In Bertoni’s view, there are still many grey areas when it comes to jurisdiction, and clarity is needed as to when national law applies.

The perspective of Internet service providers (ISPs) was provided by Mr Mike Silber, head of the legal and commercial department at Liquid Telecom, South Africa. Among the key pillars of the ecosystem in which ISPs operate are cybercrime and privacy legislation, as well as lawful interception and customer protection. These cannot be considered in isolation, since each affects the others: the impact of cybercrime laws on privacy must be clearly understood at the drafting stage in order to provide a consistent legal framework. Operators ask for the right balance to be embedded in the law, rather than being required to make subjective decisions and policy determinations on the fly. Conventions are useful, provided they are well thought through, but he cautioned against over-reliance on conventions that are not well defined. Silber also called attention to the work of technical bodies and the move towards standardisation, such as the Internet Engineering Task Force (IETF) integrating security and privacy considerations into its requests for comments.

Lastly, Ms Lorna McGregor, director of the Human Rights Centre, Essex Law School, talked about the right to remedy. What we have done in the digital era is interpret and apply the existing framework of rights, which remains valid. By challenging practices, laws, and policies in court and before other dispute resolution mechanisms, we can push the boundaries, as in the case of the right to be forgotten or data protection. ‘The right to remedy continues to be poorly discussed, because we are still learning the practices’, stressed McGregor. There is confusion over remedy in relation to algorithms, since it is understood as a way to fix a problem in a technology, rather than as a right under international human rights law. Under the latter, states have an obligation to ensure effective access to justice where there is a claim; this is also reflected in the Ruggie framework on business and human rights. However, remedies are generally thought of retrospectively, whereas it would be important to build them into our norms from the start. Even if remedy without borders is a key principle of international human rights, implementation remains a challenge. The three main obstacles McGregor identified to the effective implementation of the right to remedy are:

The lack of regulation in certain areas of digital space, such as intelligence sharing between states or the Internet of Things.

The lack of transparency and notification obligations towards the user (if one does not know that their rights are violated, one cannot assert a claim in this regard).

The use of evidence and the possibility of challenging it, which is further complicated by the use of predictive technologies and algorithms (credit scoring systems, etc.).

The discussion concluded that oversight is critical in the digital age and should be built both into mechanisms of internal review and into the design of platforms, so that issues can be clarified as they emerge. There is a need to strengthen oversight co-operation, for example between local data protection agencies and security bodies, but also at the international level. A call was made for multistakeholder participation in establishing oversight mechanisms. To seek remedy, notification is necessary but not sufficient: there is a risk of perpetuating a digital divide by establishing different regimes for access to data and protection of rights. For geopolitical reasons, some forms of protection are better than others in the area of privacy and data protection. To address these issues, a mapping of the different types of harm and the possibilities to redress them would be a useful starting point.

12:50 - 13:00 Concluding remarks

The summary of the discussion was provided by the Office of the High Commissioner, UN Human Rights, the host of the event. The main take-aways from the sessions were:

more work and further guidance are needed to unpack the available legal framework for the protection of privacy

in addition to developing the principles, greater effort is needed to ensure adequate implementation

there is still a lack of adequate legal and procedural guidance at the national level, and innovative institutional set-ups can be the way forward; it is critical to give all human rights sufficient weight in the design of new institutions

there is an increased reliance on extraterritoriality and demands for access to data stored abroad, and many attacks were noted on encryption and individual rights

there is an emergence of powerful data-driven technology that brings both risks and opportunities for consent and anonymity

the protection of children’s rights in digital space was a new discussion point and it needs to continue

learning from each other is fundamental and there is a need to work across silos

Written contributions that add to the discussion can be submitted via email (to privacyworkshop@ohchr.org) in the next month for inclusion in the report.

Mr Lukas Heckerdorn Urscheler (Deputy Director, Swiss Institute of Comparative Law (ISDC)) started by welcoming everyone to the event, which he described as part of the multistakeholder approach inherent in the work of the ISDC.

Session I

Dr Stephan Husy (Ambassador-at-Large for International Counter-Terrorism) pointed out that the online presence of so-called extremist groups is used for propaganda, financing, and recruiting. The Swiss government has adopted a whole-of-government approach, together with strong international public-private partnerships, to combatting extremist content online. He concluded that while measures to combat online and offline extremism are mutually dependent, the scale and speed of the Internet make it unique, in addition to the issues of attribution and jurisdiction it raises.

Prof. Maura Conway (International Security, Dublin City University, and Coordinator, VOX-Pol Project) outlined that while videos are the most visible, a significant part of ISIS’s online presence consists of galleries of still photos together with written publications. These materials are well made and available in multiple languages, and aim to portray ISIS as a utopia through non-violent material, proliferated through individual ‘crowd-sourcing’ methods and including a variety of sub-communities based on gender and ethnicity. She concluded by stating that with the fall of the physical ISIS, the direction of its online presence remains unclear for now.

Dr Johanna Fournier (Legal Advisor, ISDC) presented a comparative legal study of five European states, examining the extent of their legal regulations for combatting terrorism online. Focussing on the issues of training, travelling, and financing, she concluded that states differ in these areas in terms of how explicitly they criminalise these terrorist activities online.

Session II

Mr Marc Porret (Officer, United Nations Counter-Terrorism Executive Directorate (UN CTED)) started by stating that the UN Security Council has adopted resolutions to combat terrorism, usually in response to a major terrorist incident. Besides regulatory and legal measures, the Security Council relies on providing a positive narrative. Moreover, offering assistance to member states, together with the assessment of various implementation measures, is a significant part of the strategy. He concluded by noting the important role of other international organisations, together with regional ones.

Dr Jerome Blomsma (European Commission, Directorate-General for Migration and Home Affairs) presented EU Directive 2017/541, which seeks to prevent attacks by providing legal tools based on sanctioning and victim support. Online measures seek to criminalise the proliferation of violent extremist material while enabling member states to take down such material. The content in question has to show intent or incitement. He briefly presented the EU Internet Forum, which brings together actors from the private and public sectors. Blomsma concluded that differences remain among companies when it comes to the rate and speed of compliance in taking down extremist online material, as well as in implementation at the national level.

Ms Maryam El Hajbi (Specialist, EU Internet Referral Unit (IRU), European Counter Terrorism Centre, Europol) started by pointing out that the Internet can be seen as a central tool for extremist groups, as it can be used to spread propaganda and, through cryptocurrencies, for financial purposes. The IRU informs private companies when violent extremist material needs to be voluntarily taken down. It monitors open source information while providing a platform for private companies and member states. El Hajbi concluded by saying that companies differ in their capability to act on recommendations.

Mr Adam Hadley (Director, Tech against Terrorism) stated that by relying on the ‘emerging normative framework’, they are seeking to build the capacities of small and medium sized enterprises (SMEs). When addressing harmful usage and content, they encourage transparent actions taken through terms of services, which respect end-users’ rights. In addition to platforms for propaganda and recruitment, it is important to address operational use. As part of their work, they are providing a ‘Trustmark’ certification to build trust. He concluded by adding that technology is also disruptive to the legislative processes.

Ms Lucie Krahulcova (Policy Analyst, AccessNow) outlined how confidence-building and governance mechanisms are essential to ensuring individuals’ rights of access. She pointed out that taking down online sources, together with increasing the role of law enforcement, can embolden extremist actors. The latter is also evident in the spill-over of ‘terrorism’ into policy and regulatory texts. This occurs simultaneously with decreasing access for civil society actors, while the criminalisation of hacking, together with automated mechanisms for taking down webpages, challenges individual freedoms.

Session III

Mr Lukas Heckerdorn Urscheler (Deputy Director, ISDC) provided an overview of the approach of Council of Europe countries, whose regulations range from general rules to provisions addressing specific issues. This affects the degree of national implementation. In any case, the application of such measures must have a legal basis, pursue legitimate aims, and be necessary in a democratic society to achieve security aims. After presenting various country approaches, the speaker concluded that approaches to critical issues remain diverse, for example regarding the role of state action, redress measures, and safeguards.

Dr Nikolas Guggenberger (Assistant Professor, University of Münster, School of Law) presented the new German network legislation, which aims to fight ‘fake news’ and is indirectly applicable to combatting terrorism online. Because of the size threshold required to qualify as a social network, it applies only to larger social network companies. To enforce the removal of material, the legislation provides for regulatory fines for failure to do so. The fine is mostly based on the platform’s processes and set-up. He finished by stating that the legislation is an unconstitutional and inappropriate privatisation of law, and that instead more resources should go to law enforcement and the judiciary.

Mr Tobias Bolliger (Senior Advisor, Swiss Federal Police Cyber Unit) outlined how the Swiss police maintain an information and reporting system which incorporates information from individuals together with existing international co-operation. Because almost all cases originate outside Switzerland, the federal police emphasise the need for international private-public co-operation.

Mr Oliver Leroux (Juge d’instruction and Maître de conférences, University of Namur) outlined how terrorist criminal offences are the subject of broad legislation in Belgium. He pointed out that in Belgium the police can conduct online ‘patrolling’. While the police can undertake cyber-measures, these are subject to strict legislative conditions. Court rulings oblige Internet service providers (ISPs) to block access and enforce sanctions. He concluded that a legal framework exists, but in practice the matter remains complicated.

Dr Sharri R. Clark (Senior Advisor for Cyber and Countering Violent Extremism, US Department of State) stated that the US does not request ISPs or platforms to take down material unless it is tied to child pornography or drugs. The US government focusses on counter-narratives while co-operating with the private sector and multilateral forums on a voluntary and self-regulatory basis. Even though the US supports the use of automated tools for combatting terrorism online, these measures must respect individual rights and must not violate US law. In conclusion, she said that the resilience of the youth remains essential.

Dr Kyungho Choi (Research Fellow, Korea Legislation Research Institute) highlighted that South Korea does not have a specific anti-terrorism bill, owing to concerns about the protection of individuals’ rights and the implications for wider public security; cybercrime and terrorism are largely addressed through general legal provisions.

Session IV

Moderator Professor Bertil Cottier (University of Lugano) started by introducing the panel.

Professor Solange Ghernaouti (University of Lausanne; Associate Fellow, Geneva Centre for Security Policy (GCSP)) pointed out that blocking webpages is not a long-term solution, because technical means exist to circumvent the blocks and access the material.

Mr Kristian Bartholin (Deputy Head of the Counter-Terrorism Division and Co-Secretary to CODEXTER, Council of Europe) pointed out that there are multiple views of the notion of ‘freedom of expression’ as provided in the European Convention on Human Rights. However, if material can be seen as a threat to democratic society and the rule of law, it can be blocked. Internet Referral Units are a pragmatic solution, but their actions and referrals should be accountable. He pointed out that instead of defining terrorism, certain actors or actions could be labelled ‘terrorist in nature’.

Dr Christina Schori Liang (Senior Programme Advisor and Senior Fellow, Emerging Security Challenges Programme, Terrorism and Organized Crime Cluster Leader, GCSP) highlighted how organised crime and terrorist activities are increasing in volume due to state fragility, and are becoming increasingly present online. Nonetheless, a key to preventing online radicalisation is greater inclusion of the youth in decision-making and in intellectual counter-narratives. Despite the challenges, young people across the world are increasingly pushing back against cybercrime and terrorism.

Ms Maryam El Hajbi and Ms Krahulcova pointed out that counter-narratives alone are not sufficient, but need to work in conjunction with other mechanisms in a broader context.

Dr Richard Hill (Vice Chairman, external affairs, Swiss Chapter of the Internet Society) emphasised that terrorists seek to spread fear, and that the best counter-action is to put things into context. However, such measures are ineffective and counter-productive when pop-up advertising exists. He concluded by stating that it would be important to have privacy rights codified at an international level.

Professor Bertil Cottier concluded by stating that due to the fragmentation and privatisation of international legal frameworks, the effectiveness of measures and co-operation remains limited.

This session, moderated by Ms Karmen Turk (Attorney-at-Law, TRINITI) featured discussions on the role of intermediaries in enforcing human rights in the online world. As an introduction, Turk emphasised that current debates on Internet governance often address the issue of human rights, but usually fail to define what the necessary framework would be to effectively enforce them.

Mr Wolfgang Kleinwächter (Professor Emeritus, University of Aarhus) argued that today ‘human rights are in the centre of the Internet governance debate’, but emphasised that discussions have evolved since the early 2000s. Indeed, there used to be a debate on whether we would need to rewrite human rights for the digital age or instead address emerging challenges within the existing legal instruments. The choice that was finally made, reflected in the outcome of the 2005 World Summit on the Information Society (WSIS) in Tunis, is that existing rights are sufficient. Instead of inventing new rights, current debates have thus shifted towards the question of the implementation of human rights in cyberspace. Kleinwächter emphasised the recent extension of the Human Rights Council’s mandate to address these issues, in particular through reports, fact-finding missions, and the appointment of special rapporteurs on freedom of expression and privacy. These each constitute important instruments for ensuring that governments and companies alike protect and respect human rights online.

Mr Markus Kummer (Board Member, Internet Corporation for Assigned Names and Numbers (ICANN)) dealt with the question of the implementation of human rights online from the perspective of ICANN. Given its contribution to the good functioning of the Internet, ICANN promotes human rights, and in particular freedom of expression. In the past, however, human rights were rarely addressed in ICANN’s internal discussions. This situation progressively changed after non-commercial stakeholders and certain ICANN Board members pushed to introduce human rights onto the agenda. This process has led ICANN to work on a framework for the implementation of human rights, currently under discussion. This framework intends to reaffirm ICANN’s existing obligations within its core values.

Ms Charlotte Altenhoener-Dion (Administrator, Council of Europe) addressed the enforcement of human rights online based on recent developments at the level of the Council of Europe (CoE). From a legal perspective, the doctrine of the European Court of Human Rights (ECHR) is characterised by its dynamic interpretation of human rights, and thus follows the evolution of society by taking into account ongoing technological developments. In addition to the ECHR’s jurisprudence, the CoE is also active at the policy level to ensure the enforcement of human rights online. It formulates non-binding soft-law standards in order to help governments and other stakeholders apply human rights online. These standards offer guidelines on specific issues, such as media pluralism and the public service value of the Internet. In this context, the standards set by the CoE usually distinguish the obligations of states in terms of human rights from the responsibilities of intermediaries to respect human rights in the online space.

Mr Matthias C. Kettemann (Researcher, University of Frankfurt) then turned more specifically to the CoE’s work on human rights enforcement online by presenting its recommendation on Internet intermediaries. Still in draft form, this recommendation aims to help intermediaries realise human rights online on the basis of existing law. For Kettemann, there is no need to reinvent the law to ensure that intermediaries adequately deal with human rights challenges. Intermediaries need to increase transparency and accountability, and conduct due diligence assessments with regard to human rights. For instance, terms of service must be in line with human rights standards, which is not systematically the case. Respecting the rights of users requires avoiding general, untargeted filtering of users’ content and adopting the least restrictive means possible to control content. Finally, effective remedies need to be accessible to users, and intermediaries should engage in age- and gender-sensitive efforts to promote users’ awareness of their rights and freedoms online. At the level of states, it is of great importance that the principle of legality is applied, meaning that demands addressed to intermediaries must be prescribed by law.

The 2017 United Nations Office at Geneva (UNOG) and Geneva Centre for the Democratic Control of Armed Forces (DCAF) seminar discussed the topic of Violent Extremism Online – A Challenge to Peace and Security. The three-hour session started with an introduction by Mr Michael Møller, Director-General of UNOG, on the importance of eradicating violent extremism online as a challenge for peace and security. As he indicated, the risk of further violence arises, and the Internet needs to be protected from terrorist attacks. He also mentioned the crucial role of the next Internet Governance Forum (IGF), to be held in Geneva in December 2017, in the fight against violent extremism online, which would be, as he stated, ‘a major opportunity to tackle the issue in the International Geneva’.

Mr Adam Deen, Senior Researcher and Head of Outreach at the Quilliam Foundation, the first speaker of the session, focused his presentation on the ideology and the underlying reasons that led to the creation of the Islamic State (ISIS). Himself a former member of an Islamist extremist organisation that used universities for recruitment, he perceives the creation of ISIS as the logical result of 20 years of hidden groupings all over the world, which today broadly use the Internet for recruitment. He also considers the use of the Internet for recruitment purposes a strong advantage for terrorists, given its anonymity, its interactivity (which spreads contagious ideas faster), its accessibility, and, most importantly, its low cost.

Deen underlined the strong power of online interactivity, which helps terrorists easily provide their own religious instruction, reports from battles, interpersonal communications, threats against Western countries, and pictures of the daily life of a terrorist, with the aim of normalising these and creating a sense of belonging and camaraderie. According to research carried out by the Quilliam Foundation, approximately 1000 pieces of media content are produced by ISIS each month. He added that most of the content focuses on mercy, redemption, and camaraderie, notions that are already strongly present within the Muslim community and are exploited by ISIS through personal grievances used to manipulate recruits and increase the sense of belonging. He regretted that this interactivity also contributes to a form of clustered discourse that leads to extremism, since no time is given for debate and for ideas to evolve.

One of the main highlights of Deen’s speech concerned the dehumanisation of the victims which, as he stated, is also part of the ideology supported by ISIS. He explained that the ideology as such creates a barrier between believers and non-believers and rationalises the violence. In his opinion, this facilitates the preparation of attacks and eradicates a possible mutual coexistence between believers and non-believers since the recruits do not see themselves as part of a society as a whole but as part of a transnational community that stands out from the rest of the world.

Deen’s speech also focused on the concept of pre-propaganda, which in his opinion forms the root of the extremism we face today and the main reason behind the creation of ISIS. In his own words, ‘ISIS did not create extremism, extremism created ISIS.’ He said we cannot count on the disappearance of ISIS to put an end to the ideology. In his opinion, the ideology as such needs to be made irrelevant or obsolete.

For the second part of the session, the panel on Violent Extremism Online was moderated by Ms Anne-Marie Buzatu, Deputy Head of Public-Private Partnerships Division at DCAF, who underlined the importance of practical solutions to put an end to the development of ISIS and violent extremism online.

Ambassador Kok Jwee Foo from the Permanent Mission of Singapore to Geneva stated that we live in a fragmented world, which also allows sophisticated and violent transnational communities such as ISIS to establish themselves, propagate a message, and pursue a political goal. He added that Singapore has also been confronted with recruits willing to join ISIS, and underlined that the battle against ISIS concerns everyone and needs to be addressed by multiple stakeholders. Part of his speech focused on the diversity of Singapore and the need to establish concrete policies to preserve the common space and to ensure openness to all religions. He stressed that efforts at deepening multi-racial and multi-religious harmony are a never-ending endeavour.

In an effort to ensure inclusion and counter extremism, two policies have been established in Singapore. The Religious Rehabilitation Group (RRG) was launched in April 2003 by the Muslim community and academics to combat, through media content, the misinterpretations promoted by self-radicalised individuals and supporters of ISIS. SG Secure is an initiative put in place by the Ministry of Home Affairs to promote community vigilance, cohesion, and resilience against the rise of global terrorism, and to apply concrete measures. One of these measures consists of visiting every single home in Singapore to raise awareness of security and to encourage families to participate in the programme. Ambassador Foo concluded by underlining the importance of such policies and the need to find the right balance between security, freedom of expression, and international cohesion.

The second panellist, Mr Adam Hadley, project researcher and associate at the ICT4Peace Foundation, presented an overview of the foundation’s activities, findings, and recommendations on counter-terrorism. As part of its activities in 2016, phase one analysed threats related to the use of technology by terrorists and scoped out practical measures. Three global workshops were organised to include various stakeholders from the private and public sectors. The outcome report, published in December 2016 and entitled Private Sector Engagement in Responding to the Use of the Internet and ICT for Terrorist Purposes, provides an overview of current threat assessments, emerging or potential threats, and responses from technology companies involved in several initiatives, such as the Global Network Initiative (GNI), based on United Nations and human rights principles. The initiative targets four areas in particular: the development of guidance systems, the building of training capability and legal teams, co-operation with Internet referral units (IRUs), and investment in counter-narratives to support civil society.

Another important point in Hadley’s speech concerned the active role of technology companies such as Facebook, Microsoft, and Twitter which publish transparency reports and deliver information about requests for the takedown of online content from governments all around the world. He also stressed the urgent need to create frameworks respecting human rights and mentioned some concerns about the legitimacy of the private sector and the capacity of small companies to develop policies to challenge the use of the Internet by terrorists.

The ICT4Peace Foundation has made several recommendations, including building on existing initiatives, supporting dialogue on a normative framework through a multistakeholder approach, encouraging co-ordination, establishing global knowledge sharing and a capacity-building platform focused on policy and practice, building the capacity of small tech companies, supporting data-driven research on effectiveness, and promoting digital literacy. The conclusion of the speech focused on the foundation’s plans for 2017, which foresee the inclusion of more stakeholders in the fight against violent extremism online and the establishment of a platform to share global knowledge on the emerging practices, norms, standards, and policies developed on the subject.

The final speaker, Mr Mark Stephens, International Human Rights Advocate, CBE, and Independent Chair of the Board of Directors of the GNI, presented the work of the GNI, which brings together ICT companies and investors willing to forge a common approach to freedom of expression online. The GNI focuses on two fundamental human rights, freedom of expression and the right to privacy, with principles designed to protect citizens and to prevent serious consequences of a breach of these rights. Stephens added that one of the GNI’s main concerns is the impact of laws that inadequately protect freedom of expression. This concern led the GNI to develop various recommendations on consistency with human rights norms that governments should respect, including that restrictions on human rights should be established in clear and precise law and be proportionate and necessary. He added that governments should not impose liability on intermediaries.

In the second part of his speech, Stephens stressed the role of ICT companies and the fact that most of them are more restrictive and efficient in their policies than parliaments are in their laws. He concluded by stating that the true challenge is that the issue at stake is larger than companies or governments; this also underlines a need for international cooperation between stakeholders in the protection of essential rights such as freedom of expression and the right to privacy.

The panel discussion was followed by a Q&A on the proper use of terms such as ‘Islamic’ which can be misused, the role of different stakeholders in the fight against ISIS, and the importance of tackling the issue with concrete measures to promote tolerance and coexistence between religions.

Other resources

PEN America’s Online Harassment Field Manual was created to give people resources, tools, and tips to help them respond safely and effectively to incidents of online harassment and hateful speech, and to encourage them to stay online, to keep speaking out, and to keep writing. It features first-person interviews with writers and journalists who have been targeted online and who refuse to be silenced, along with comprehensive information about how to enhance cybersecurity, establish supportive online communities, confront online abuse head-on, practice self-care, and engage law enforcement during severe episodes of harassment. The Field Manual also offers best practices for allies of writers and journalists, as well as for the institutions that employ them, and PEN America invites readers to explore it and to share it widely with their social and professional networks.

The Internet Legislation Atlas identifies the main legal instruments related to Internet rights in seven countries: Egypt, Iran, Iraq, Jordan, Lebanon, Syria, and Tunisia. The atlas also indicates how human rights are protected, or neglected, in the Internet environment in those countries.

Freedom of expression was a cross-cutting topic across various sessions focusing on human rights. The Freedom Online Coalition Open Forum discussed, among other issues, the negative consequences that activities such as cyber surveillance and the illegal interception of communications have on the right to freedom of expression. The implications of online extremism, and whether there should be limits to the right to freedom of expression to tackle this challenge, were also discussed; it was underlined that there is a need to promote a counter-narrative strategy to prevent radicalisation and extremism online (Free Expression & Extremism: An Internet Governance Challenge - WS96).

The impact of content control policies on freedom of expression and other human rights was also discussed (Sex and Freedom of Expression Online - WS164). As was underlined in several sessions, delicate balances need to be achieved between protecting the public interest (a concept whose understanding varies across cultures) and preserving the right to freedom of expression.

Cooperation among different stakeholders was emphasised as crucial to a strategy for implementing Action Line C9 (Media). One of the Moderated High Level Policy Sessions (session 223) also looked into the problem of freedom online; it was said that in many countries civil society representatives have less freedom to do their work, since their calls are intercepted without their knowledge, and journalists cannot trust that their sources will be protected.

IGF 2015 Report

Freedom of expression is a recurrent issue at IGFs, and this year’s IGF also served to revisit well-known challenges. Yet, the discussion has evolved over the years: what was previously a debate in favour of declaring online freedom of speech a right, has become a discussion on how to ensure that the right is truly respected – both online and offline.

The UN resolution proclaiming that the same rights people have offline must also be protected online marked a turning point in the debate. It was preceded by another important instrument: former Special Rapporteur Frank La Rue’s 2012 report and its three-part cumulative test, which has become a litmus test for the protection of freedom of speech.

Several workshops made reference to La Rue’s report, with discussions on various aspects related to implementing his recommendations. The workshop on Freedom of Expression online: Gaps in policy and practice (WS 153), for example, brought together groups to share their countries’ experiences of the implementation (or otherwise) of the cumulative tests and indicators mentioned in La Rue’s report.

Yet various challenges persist. While technology has expanded users’ freedom of expression, challenges remain in adopting the globally applicable framework for freedom of expression that many are calling for. And in the open microphone and taking stock session, the need to prevent the Internet from becoming a tool of repression was once again emphasised.

In the main session on Internet Economy and Sustainable Development, access to information was discussed in the context of the Sustainable Development Goals (SDGs). In referring to Goal 16.10, to ‘ensure public access to information and protect fundamental freedoms, in accordance with national legislation and international agreements’, it was agreed that in order to achieve a holistic approach to the importance of ICTs and the Internet in reaching the SDGs, the social, cultural, and educational components must also be addressed.
