Other human rights

Quick links

Updates

In view of the pervasiveness of algorithmic techniques and automated data processing in all aspects of contemporary life, the Committee of Ministers of the CoE has drafted a recommendation to member states to evaluate the impacts of the application of algorithmic systems in the public and private spheres on the exercise of human rights and fundamental freedoms. The document outlines that the misuse of algorithmic systems can jeopardise the rights to privacy, freedom of expression, and the prohibition of discrimination provided by the European Convention for the Protection of Human Rights and Fundamental Freedoms. Although public and private sector initiatives to develop ethical guidelines for the design, development, and deployment of algorithmic systems are welcome, they do not substitute for the duty of member states to guarantee that human rights obligations are embedded into all steps of their algorithmic operations. In addition, member states should ensure appropriate regulatory frameworks to promote human rights-respecting technological innovation by all actors. The guidelines for states on actions to address the use of algorithmic systems include data quality and modelling standards; principles of transparency and contestability; the provision of effective judicial and non-judicial remedies to review algorithmic decisions; the implementation of precautionary measures to maintain control over the use of algorithmic systems; and empowerment through research and public awareness. The document also sets out responsibilities for private actors that member states should ensure, including guidelines on data quality and modelling.

The Committee of Ministers drafted a Declaration to draw the attention of member states to the right of all human beings to take decisions and form opinions independently of automated systems. The document underlines the risks of using massive amounts of personal and non-personal data to sort and micro-target people, to identify vulnerabilities, and to reshape social environments to achieve specific goals and vested interests. The draft encourages member states (1) to consider additional protective frameworks to address the impacts of the targeted use of data on the exercise of human rights; (2) to initiate inclusive public debates on permissible forms of persuasion and unacceptable manipulation; and (3) to empower users by promoting digital literacy on how much data are generated and used for commercial purposes.

In his introduction to the report, Fake news, data collection, and the challenge to democracy, Adrian Shahbaz said 'Events this year have confirmed that the internet can be used to disrupt democracies as surely as it can destabilize dictatorships' [...] 'With or without malign intent, the internet and social media in particular can push citizens into polarized echo chambers and pull at the social fabric of a country, fueling hostility between different communities.'

Pat Didomenco asks 'Is illegal bias lurking in your online job ad?' when writing about bias in online employment ads, highlighting the recent American Civil Liberties Union (ACLU) complaint to the Equal Employment Opportunity Commission against Facebook and 10 employers that post ads on Facebook. The complaint alleges that Facebook's ad-targeting features were used to show online ads for police officers, construction workers, truck drivers, and sales staff to men, while not showing them to women. Didomenco also points out age-discrimination practices, and how to identify bias.

The European Court of Human Rights (ECHR) in Strasbourg, in the case of ‘Big Brother Watch and Others v. The United Kingdom’, ruled that the UK’s programme of mass surveillance, the so-called Tempora, revealed by whistleblower Edward Snowden, violated the right to privacy of those targeted. Three aspects of digital surveillance were considered by the judges: the bulk interception of communications, intelligence sharing, and the obtaining of communications data from communications service providers. By a majority of five votes to two, the judges found that the UK Government Communications Headquarters’ (GCHQ) bulk interception regime violated article 8 of the European Convention on Human Rights. However, the court found that GCHQ’s regime for sharing digital intelligence with foreign governments did not violate article 8 or article 10, and was therefore not illegal. This means that sharing with foreign governments violated neither the right to a private and family life, nor the right to free speech. The judgment also says that not enough protection was given to journalistic sources and that the bulk interception violated the right to freedom of information. The claims were brought by a coalition of 14 human rights groups, privacy organisations, and journalists, including Amnesty International, Liberty, Privacy International, and Big Brother Watch, following the revelations made by Snowden in 2013, which showed GCHQ’s Tempora programme secretly intercepting, processing, and storing data about millions of people’s private communications, including those of people who were of no intelligence interest.
The judgment cites: ‘The United Kingdom authorities have neither confirmed nor denied the existence of an operation codenamed Tempora.’ The director of the Big Brother Watch, Silkie Carlo, stated: ‘This landmark judgment confirming that the UK’s mass spying breached fundamental rights vindicates Mr Snowden’s courageous whistleblowing and the tireless work of Big Brother Watch and others in our pursuit for justice. This judgment is a vital step towards protecting millions of law-abiding citizens from unjustified intrusion. However, since the new Investigatory Powers Act arguably poses an ever greater threat to civil liberties, our work is far from over.’

The principle that the same rights people have offline must also be protected online underlies human rights on the Internet, and has been firmly established by UN General Assembly and UN Human Rights Council resolutions.

In addition to the main instruments on human rights (see each issue for a list of related instruments), the Internet Rights and Principles Dynamic Coalition's Internet Rights & Principles Charter and the APC Internet Rights Charter address human rights specifically as they relate to the Internet. Other human rights documents and statements are listed under 'Instruments'.

All human rights issues are cross-cutting and interdependent. For example, the freedom of expression and information is related to access to the Internet and net neutrality. Protection of minority rights is influenced by multilingualism and promotion of cultural diversity. Children’s rights have a strong security element. Ensuring the protection of privacy is important in dealing with cybersecurity.

Bringing human rights into focus, the Snowden revelations of mass surveillance triggered the diplomatic process on online privacy within the UN General Assembly and the UN Human Rights Council, and probably influenced the decision to appoint a UN Special Rapporteur on the Right to Privacy in the Digital Age. In 2015, the Nobel Peace Prize was awarded to the Tunisian National Dialogue Quartet 'for its decisive contribution to the building of a pluralistic democracy in Tunisia in the wake of the Jasmine Revolution of 2011' [part of the Arab Spring], in which social media and online communication played an important role; this, too, highlights the importance being given to human rights on the world stage. These developments underline a trend to recognise human rights as a priority for global digital policy. Freedom of expression, content policy, and other human rights issues are now appearing on digital agendas, and will continue to gain in importance.

Children’s digital rights

When it comes to promoting the benefits of technology for children while at the same time fostering a safe and secure online environment, stakeholders need to strike a careful balance between the need to safeguard children against inappropriate content and risky behaviour, and the need to respect children’s digital rights, including the right to access information and freedom of speech.

Child online protection tends to focus on the protective aspect of children's use of technology. In fact, many argue that the Internet and technology have increased the risks for children, and therefore, children can reap the benefits only if the risks are mitigated. However, policies which focus exclusively on online risks can sideline the Internet's potential to empower children.

A rights-based approach, based on children’s rights as enshrined in legal instruments such as the United Nations Convention on the Rights of the Child, aims at maximising the opportunities of the digital world for children and young people while protecting them from risks. Since this approach strikes a more careful balance between children’s digital rights and their need for protection, it is increasingly favoured by experts.

UNESCO facilitates global advocacy and discussions on freedom of expression and related issues, including privacy, at the WSIS and the Internet Governance Forum. It explores freedom of expression online in depth through its flagship Internet Freedom publication series. UNESCO also defines key indicators to help stakeholders assess the local situation. The Media Development Indicators are an analytic tool designed to assess the state of the media and measure the impact of media development programmes. The Internet Universality Indicators aim to build a framework of indicators through which to assess levels of achievement, in individual countries and internationally, on four fundamental principles: human rights, openness, accessibility, and multistakeholderism.

Access Now is well known for its campaign against Internet shutdowns, #KeepItOn. The campaign raises awareness of instances of Internet shutdowns and the actions being taken against them. Access Now also organises the annual RightsCon Summit, which brings together digital rights activists from around the world. In 2017, RightsCon had a track dedicated to Internet shutdowns, in which participants learnt and shared knowledge about different aspects of the problem, including the role of telecommunication companies. The organisation also engages with UN mechanisms, such as the Human Rights Council.

Privacy and data protection online have been the subject of many UNHRC resolutions. General resolutions on the promotion and protection of human rights on the Internet have underlined the need for states to ensure a balance between cybersecurity measures and the protection of privacy online. The Council has also adopted specific resolutions on the right to privacy in the digital age, emphasising that individuals should not be subjected to arbitrary or unlawful interference with their privacy, either online or offline. The UNHRC has also mandated the Special Rapporteur on the right to privacy to address the issue of online privacy in his reports.

The coalition, which is committed to advancing Internet freedom, formed three multistakeholder working groups: An Internet Free and Secure; Digital Development and Openness; and Privacy and Transparency Online. While all the working groups addressed different aspects of Internet freedom, the Digital Development and Openness group considered human rights online, especially the criminalisation of speech. The mandate of the working groups came to an end in May 2017 and was not renewed. In 2014, the coalition issued a statement on restrictions on access to social media, and in April 2017, another condemning Internet shutdowns.

Conventions

Judgements

In December 2008, Romanian national Mr Bogdan Mihai Bărbulescu lodged a complaint (Application no. 61496/08) with the European Court of Human Rights, alleging that his former employer’s decision to terminate his contract had been based on a breach of his right to respect for his private life and correspondence. He also claimed that the domestic courts had failed to protect his right.

The case arose shortly after Mr Bărbulescu’s dismissal from work, as an engineer in charge of sales with a private company, in 2007. Contrary to internal regulations, which strictly prohibited employees from using the company’s computers and resources for personal purposes, the employee had used a Yahoo Messenger account not only for responding to clients’ enquiries but also for personal purposes. His employer had monitored the private communications and presented transcripts in court. The Bucharest County Court dismissed the employee’s complaint; the Bucharest Court of Appeal dismissed his appeal.

The ECHR had to determine mainly whether the employer acted in breach of Article 8 of the Convention for the Protection of Human Rights and Fundamental Freedoms (on the right to privacy). Could the employee retain a reasonable expectation that his communications would not be monitored, in view of the general prohibition imposed by the employer? And did the State, in the context of its positive obligations under Article 8, strike a fair balance between the employee’s right to respect for his private life and correspondence, and his employer’s interests? Was the employee given prior notice that his communications could have been monitored?

By six votes to one, the ECHR concluded that:

The employer acted within its disciplinary powers since, as the domestic courts found, it had accessed the Yahoo Messenger account on the assumption that the information in question had been related to professional activities and that such access had therefore been legitimate.

It does not consider it unreasonable for an employer to want to verify that the employees are completing their professional tasks during working hours.

The domestic courts relied on the transcript only to the extent that it proved the applicant’s disciplinary breach, namely that he had used the company’s computer for personal purposes during working hours. The content of the communications was not a decisive element in the domestic courts’ findings.

There was nothing to indicate that the domestic authorities failed to strike a fair balance, within their margin of appreciation, between the applicant’s right to respect for his private life under Article 8 and his employer’s interests.

Therefore, there was no violation of Article 8 of the Convention.

In a partly dissenting opinion, Judge Pinto de Albuquerque noted that the ECHR overlooked crucial features of the case, such as clearly establishing whether an Internet surveillance policy was duly implemented and enforced by the employer, and whether the employee explicitly agreed to such a policy. Even if these were confirmed, the termination of employment was not justified. It was also unjustified for the employer to access and make transcripts of communications clearly marked as personal, against the applicant’s explicit will and without a court order. The Judge opined that the employer’s interference went far beyond what was necessary.

Internet surveillance in the workplace is not at the employer’s discretionary power, even where there exist suspicions of cyberslacking, damage to the employer’s IT systems, disclosure of the employer’s trade secrets, etc. Strict limits should apply, and any interference must be justified by the protection of certain specific interests covered by the Convention.

A comprehensive Internet usage policy in the workplace must be put in place, and employees must consent to it explicitly. In the event of alleged breaches, employees should be given the opportunity to respond to such claims in a fair procedure, with judicial oversight.

The Judge also stated that ‘a blanket ban on personal use of the Internet by employees is inadmissible.’ In view of rulings by the Court’s Grand Chamber, and the French Constitutional Council, the Judge concluded that States have a positive obligation to promote and facilitate universal Internet access, including the creation of the infrastructure necessary for Internet connectivity.

The implications of this judgment are far-reaching. Insofar as digital policy is concerned, they affect: (a) the extent to which employers may surveil private communications; (b) the elements which an Internet usage policy must include (or not include); and (c) the manner in which a usage policy is enforced. The fact that the Court considers it reasonable for an employer to want to verify that the employees are completing their tasks, without delineating the extent to which communications, clearly marked as personal, can be surveilled, may shift the balance between the employee’s right to respect for his private life and correspondence, and his employer’s interests.

In 2010, Spaniard Mario Costeja González asked the national Data Protection Authority (Agencia Española de Protección de Datos) to order the removal of content from a website, and the removal of links from Google search results. While the first request was turned down, the second was upheld.

Proceedings were initiated by Google. During the proceedings, the opinion of the Advocate General was sought as to whether the company could be considered a data controller; this clarification would eventually form the basis for remedy. AG Niilo Jääskinen opined that despite the EU’s Data Protection Directive (95/46/EC) being applicable to Google, the company could not be considered a data controller.

In reaching its judgment, the Court of Justice of the European Union took into consideration three issues (in addition to the AG’s opinion and to those of the governments of Austria, Greece, Italy, Spain, and Poland):

Applicability. Was the Data Protection Directive applicable to this particular case, i.e., could Google Inc. be regarded as being subject to the European Union’s acquis? The Court decided that the directive was applicable to the company, agreeing with the AG’s opinion.

Definitions. What was Google’s capacity in this particular case: the controller, the processor, or the recipient? Disagreeing with the AG’s opinion, the Court decided that Google was a controller.

Remedy. If the directive was applicable to Google, and if under said directive Google was considered a controller, what should Google have done? The Court concluded that search engines, including Google, should remove search results, under specific guidelines set out in the judgment.

Subject to certain exceptions, a person’s right to be forgotten is said to override not only the economic interest of the operator of the search engine, but also the interest of the general public in having access to that information as a result of a search relating to that person.

The implications of this judgment were unparalleled. Search engines were required to facilitate requests for removal of results, and currently contend with thousands of requests every month. In parallel, the judgment triggered heated debates as to whether the right to be forgotten curtailed freedom of expression, freedom of speech, and freedom of the press.

Global developments have taken place at a fast pace, despite criticism by Google and other companies such as Facebook. Early in 2015, Google’s attempt to limit the applicability of the judgment to European websites failed to gain broader support. A few months later, the French data regulator (Commission Nationale de l’Informatique et des Libertés) rejected Google’s appeal against the global enforcement of the right to be forgotten, requesting Google to comply with the formal notice with immediate effect. In the UK, the Information Commissioner ordered Google to remove links to news stories about content which had originally been removed under the right to be forgotten.

The IPU Resolution (21 October 2015) provides a broad framework for digital policy. It introduces two new developments in global digital policy:

- Operational paragraph 3 underscores that legislation regarding surveillance must be based, inter alia, on the principles of necessity and proportionality. This is the first instance of a global inter-state institution officially adopting these principles.

- Operational paragraph 4 calls for upholding the principle of net neutrality, in one of the first explicit references to net neutrality by a global inter-state institution.

Resources

Multimedia

DiploFoundation’s comic book (second edition) presents an illustrated short story of a child who feels distressed as he accesses a harassing video online, and the reactions and roles of his parents, civil society, and other stakeholders.

Publications

The latest edition of the glossary, compiled by DiploFoundation, contains explanations of over 130 acronyms, initialisms, and abbreviations used in IG parlance. In addition to the complete term, most entries include a concise explanation and a link for further information.

The book, now in its sixth edition, provides a comprehensive overview of the main issues and actors in the field of Internet governance and digital policy through a practical framework for analysis, discussion, and resolution of significant issues. It has been translated into many languages.

Reports

The meeting 'Rule of Law and Democracy in the Digital Society: Challenges and Opportunities for Europe' was chaired by Mr Daniel Gros (Director, Centre for European Policy Studies).

Ms Věra Jourová (European Commissioner for Justice, Consumers and Gender Equality) took the opportunity to discuss key challenges and possible solutions for addressing the impact of the digital revolution on the rule of law and democracy.

Jourová said that policymakers and companies must place people at the heart of the digital revolution. Though it appears to be a ‘fantastic sphere for direct democracy’, according to her, the Internet has become highly vulnerable and a highway for hate speech. Citing Jürgen Habermas, Jourová argued that if social media can destabilise authoritarian countries, it also has the capacity to erode public spheres in democracies, through the phenomenon of algorithmic bubbles and the polarisation of political expression.

She insisted that a debate on the liability and accountability of tech companies was needed, while remaining very cautious in assessing the impact of regulations on innovation and values. Now that large tech companies have gathered unprecedented power, they also need to accept their responsibilities.

Recognising that, for EU institutions and member states, communication through these platforms is unavoidable and has become necessary, Jourová reiterated that the mantra of the European Commission remains that ‘what is illegal offline must be illegal online’. Recent measures and policies initiated by the European Commission demonstrate its ambition to address these challenges with both soft and more legally binding tools, including the 2016 Code of Conduct for IT companies on illegal hate speech and its 2018 proposal for the rapid detection and removal of terrorist content online. Having been accused repeatedly in the media of censoring the Internet as part of her fight against hate speech and fake news, Jourová reminded the audience that she had lived in a regime (Czechoslovakia) where public institutions were once the arbiter of what was considered true or false, and was thus deeply aware of the need to prevent censorship online while tackling these emerging threats.

Addressing the issue of foreign interference in the next European elections, Jourová argued that member states need to take greater actions in order to curb the risk of manipulation and disinformation campaigns. In the wake of the Cambridge Analytica scandal, the European Commission has developed guidance for enforcing data protection rules in the context of electoral processes and recommended greater transparency in online political advertisements and targeting.

Finally, Jourová referred to the possibility of taxing large IT companies in order to allow a fairer redistribution of the wealth generated by our data. New fiscal revenues should be allocated in particular to education and media literacy programmes for citizens.

The report, prepared by the Global Commission on Internet Governance, outlines a series of recommendations to policy makers, private industry, the technical community and other stakeholders on modalities for maintaining a ‘healthy Internet’. It tackles aspects such as: the promotion of a safe, open and secure Internet, human rights for digital citizens, the responsibilities of the private sector, safeguarding the stability and resiliency of the Internet’s core infrastructure, and improving multistakeholder Internet governance.

The report explores the connection between encryption and human rights, arguing that individuals should be able to encrypt their communications and personal data as an essential protection of their rights to privacy and freedom of expression.

Dedicated to developments in 2015, the report lists and describes over 350 high-level cases related to jurisdiction and the Internet. The report, published every year, facilitates an understanding of emerging trends amid a patchwork of rules across different jurisdictions.

This report measures Internet freedom in countries around the world, using the categories 'obstacles to access', 'limits on content', and 'violations of user rights'.

GIP event reports

The session was organised by BSR and The B Team, and co-moderated by Mr Rajiv Joshi (Managing Director, The B Team) and Ms Margaret Jungk (Managing Director, Human Rights, BSR). The event addressed the role of businesses in the framework of fostering and promoting the respect of human rights. More specifically, the event identified risks and opportunities for companies to improve the protection for human rights, and tried to explore holistic corporate advocacy approaches for the respect of human rights.

Jungk introduced the topic under discussion. For a long time, companies have lobbied governments for business-related interests. The question that the panel tried to address was how to create a business lobby for fostering and implementing the protection of human rights; in other words, a lobby for better human rights public policy. Moreover, she underlined the ethical issues and considerations that should be taken into account when addressing the topic.

Mr Steve Crown (Vice-President & Deputy General Counsel, Microsoft) talked about the role of companies in advancing human rights policy. In order to take on such a role, companies need to be trusted by citizens and customers. Microsoft has started to engage with the public sector to build public trust, with initiatives such as its engagement with the US government on privacy and security, its global initiative for the LGBT community, the Cybersecurity Tech Accord, and its initiative on artificial intelligence (AI) and human rights. Crown stressed that Microsoft’s policy for improving customers’ experience keeps in mind the principle of boosting human rights and advocating for the rule of law.

Ms Shelly Heald Han (Director of Civil Society Engagement, Fair Labor Association) stated that companies have a crucial role to play in improving labour conditions; however, there is still a long way to go. Companies are mistaken in thinking that they need to remain neutral with regard to government relations: they need to take a stand and push for better protection of human rights. Moreover, the role of civil society needs to be reinforced: the goal is to balance the power of civil society and make it comparable to that of businesses.

Ms Paloma Munoz Quick (Director, Investor Alliance for Human Rights, ICCR) argued that corporate respect for human rights creates long-term economic benefits. Investors need to identify human rights risks related to their business relationships. Companies need to do whatever it takes to protect human rights; moreover, they have a duty to leverage their power to push governments to create better human rights policy. Companies’ efforts on public policy need to be in line with their guiding principles. A concrete example of this can be seen in some airlines’ refusal to fly immigrant children separated from their parents.

Mr Bennet Freeman (Senior Advisor, BSR) argued that we live in a time of geopolitical disruption in which companies should advocate for fundamental freedoms and the respect of civil rights. Encouragingly, there are examples of companies undertaking important actions and pushing for better public policies, such as Microsoft and Siemens.

Mr Dunstan Allison-Hope (Managing Director, Business for Social Responsibility (BSR)) spoke about the importance of focusing on the users who are most exposed to risks in the information and communications technology (ICT) sector, such as human rights defenders. He noted that, when conducting human rights assessments, perceptions of risk and of due diligence obligations varied from one place to another.

In terms of risks, Allison-Hope spoke about the challenges of approaching people who might be under surveillance without exposing them. He further mentioned the risk of exposure when asking companies to be transparent, given that certain types of information might be accessed by parties with malicious intent. Another issue he mentioned was the perception of companies operating in certain countries and then withdrawing from them due to uncertain or worsening political conditions. While the public in the Global North might welcome such steps as a stand against the government, they might be a blow for the people within the country who rely on these networks and platforms. Allison-Hope also noted that 'we haven’t fully grappled how to do due diligence in an environment that is changing and uncertain.'

Mr Alex Warofka (Product Policy Manager, Human Rights and Freedom of Expression, Facebook) explained that Facebook tries to implement due diligence processes and monitors them through direct research on users’ experience of the platform, asking users about its products. The results are then interpreted not only through a human rights due diligence lens, but also through a product lens, to optimise privacy and human rights protection by design.

Regarding the risks, Warofka mentioned an example in which Facebook had taken down the accounts of senior military officials in Myanmar who had participated in human rights abuses offline. Facebook faced public backlash because it had not consulted with human rights defenders on the ground. However, the decision was a conscious one, so as not to endanger the defenders and risk exposing them to backlash from the military for denouncing the actions of the military officials.

According to Warofka, it is almost impossible to create new software and tools that fully comply with due diligence standards in all the markets they penetrate. For this reason, all stakeholders must work together to point out flaws and find sustainable solutions. Additionally, Warofka mentioned the difficulties of conducting human rights assessments on a global scale due to country-specific regulations, which is why Facebook is implementing due diligence in its processes by design.

Ms Nicole Karlbach (Global Head, Business & Human Rights, Oath) said that when Yahoo! acquired the social media platform Tumblr, the company recognised the importance of implementing human rights due diligence mechanisms.

The company’s business and human rights team, which is part of the legal team, has since been growing. The team works in close contact with the other departments and engages with different partners to monitor due diligence and their results are publicly available.

Karlbach noted that Yahoo!’s participation in the GNI also reinforced the company’s commitment to its principles.

Ms Meg Roggensack (Interim Executive Director, International Corporate Accountability Roundtable (ICAR)) pointed out that technical equipment also has to be considered alongside other ICT-related services. She also reiterated the important role of multistakeholder approaches, given that today's challenges are shared problems that require a range of combined solution strategies. Multistakeholderism can help to identify international frameworks and adapt them to different sectors.

The session was organised by the German Institute for Human Rights and moderated by Mr Christopher Schuller (German Institute for Human Rights). The forum was not a panel discussion, but a debating exercise in which the speakers represented neither their own ideas nor those of their organisations. It followed the British parliamentary debate format: four speakers took the floor, two in support of, and two in opposition to, the question 'Are tech companies a threat to human rights?'

Ms Isabel Ebert (University of St. Gallen) and Ms Coraline Ada Ehmke (Software engineer and creator, Contributor Covenant) both agreed that tech companies are a threat to human rights, while Mr Faris Natour (Co-Founder and Principal, Article One) and Mr Luis Neves (Managing Director and CEO, Global Enabling Sustainability Initiative (GeSI)) argued against this statement, saying that tech companies are not a threat to human rights.

The argument supporting the idea that tech companies are a threat to human rights portrayed tech companies as perpetuating capitalism at the expense of the protection of human rights. Through their actions or inaction, tech companies reinforce existing frameworks of power and oppression, and states are not capable of regulating them and their services effectively. One example of this is algorithms, which are not neutral but reproduce human biases in search engines. Moreover, it was argued that tech platforms are guilty of failing to prevent the abuse of technology that has influenced elections in recent years, and that the power which comes with so-called data-driven management is widely ignored. Human rights are thus defined and limited by tech companies, which undermine transparency and fairness and reinforce human biases through technology. Finally, it was argued that the surveillance innovation complex created by the activities of tech companies is the most disruptive and dangerous threat to human rights.

The argument opposing the idea that tech companies are a threat to human rights identified four levels of responsibility: political, corporate, external, and individual. Moreover, technology can help address several areas of concern. For instance, the fight against climate change can benefit from new technologies and their ability to create new means of collaboration and engagement without requiring people to travel from one place to another. The management of resources is another aspect that should be considered through the lens of technology's positive impact, as it enables engagement at little to no cost. With regard to rising inequalities, it was noted that responsibility does not lie with companies alone. Finally, it was noted that a major challenge at the intersection of tech companies and human rights relates to artificial intelligence (AI). Despite these concerns, technology has been a turning point for human rights: new digital tools enhance connectivity and engagement; AI has improved education and health; and crimes such as slavery and human trafficking can be better identified through the use of information and communications technology (ICT). It was therefore argued that it is crucial to understand the benefits of technology and to re-stress the importance of digital trust and responsibility. There is a need to make sure that principles are respected and that people trust the technology.

The session was organised by the Business and Human Rights Resource Centre and the International Corporate Accountability Roundtable (ICAR). The moderator, Mr Phil Bloomer (Executive Director, Business & Human Rights Resource Centre), noted that we live in a world of extreme inequality in which power is not only attributed to the wealthiest, but also translated into forms of ownership and knowledge of data. He said that in facing the challenge of overcoming this inequality, the question of which direction we will take with regard to technology and data is essential. He noted that there are two prominent views of the impact of technology on human rights: the dystopian vision of technology reinforcing the rift in society, and the utopian vision of technology empowering the most vulnerable populations.

Ms Meg Roggensack (Interim Executive Director, International Corporate Accountability Roundtable (ICAR)) explained that her work focuses mostly on the bottom of the supply chain, given that automation is accelerating so quickly that it increases social divides to the detriment of workers and weakens their bargaining positions.

Ms Abby Meaders-Henderson (Legal & Policy Fellow, ICAR) said that her research focuses on low-skilled labourers and pointed out that many discussions about the impact of automation focus on workers in the Global North. In her view, however, the impact of technology on low-skilled and manual labour is likely to be felt much earlier and to a greater degree in the Global South, which puts these workers at more immediate risk. Given that automation strongly affects repetitive, assembly-line tasks that companies usually outsource to countries with weaker labour protection frameworks, it is important to give workers the resources to strengthen their bargaining positions and to help them protect themselves from the impact of potential job losses and shifting resource allocations. She further noted that vulnerable populations such as women, young workers, and migrants are disproportionately affected by these developments.

Meaders-Henderson pointed out that each stakeholder has a role to play in making sure that the transition to increasing automation happens equitably.

Mr Rob Johnston (Assistant Secretary-General of the ITF, International Transport Workers' Federation) explained that workers in the transportation sector are not necessarily against automation, but rather that they need to be convinced of its benefits and reassured of their role regarding automated tasks.

Johnston also noted that automation is not inevitable but a policy choice, and that its consequences therefore need to be thought through. He recognised that technology has been of great value in increasing the safety of transportation workers and that the desire for automation is strong in certain sectors. However, he also said that the perceived threat of automation is very context-specific and dependent on the type of job that technology might replace.

Mr Philippe-André Rodriguez (Senior Advisor, Global Affairs Canada’s Center for International Digital Policy) noted that one of the main things that governments can do in the automation debate is to listen to, and engage with, the stakeholders. They need to reach out to vulnerable populations and find solutions to the impact that automation will have on their livelihoods.

Rodriguez introduced Canada’s Open Government Partnership commitments, which recognise the importance of setting the right culture in the discussions by making it clear that the debate on automation and human rights needs to be a race to the top in terms of rights protection. Governments can lead by example and adopt policies that recognise the importance of this premise. As a result of these open consultations, Canada has already drafted a Directive on Automated Decision-Making, which includes a framework providing for governmental oversight and audit procedures.

Ms Padmini Ranganathan (Global Vice President, Products & Innovation, SAP Ariba) mentioned the discussion paper Skill Shift: Automation and the Future of the Workforce, published by the McKinsey Global Institute, according to which companies will increase their gains through worker displacement, with manual jobs most at risk. The replacement of workers and the resulting wage savings will be turned into profits for the companies.

According to Ranganathan, brands and original equipment manufacturers (OEMs) must identify risk factors through available data and apply countermeasures throughout their supply chains. Companies must develop strategies for job shifts in collaboration with governments and civil society.

New ways must be found to redistribute the profits from wage savings in the form of community training and other measures. All stakeholders must do their part in reducing the potential conflicts arising from income polarisation and societal divides, as well as in finding mechanisms to reward businesses that are leading efforts for human rights due diligence in the context of automation. Suggested measures include:

Forecasting and announcing planned workforce changes early by making a company’s intention to automate processes clear and providing details to the workforce,

Committing to training and supporting educational programmes for the workers,

Providing for workers who are displaced, and

Encouraging public policies to modernise social safety nets.

However, it must be recognised that training programmes need to take labourers' skills into account and not simply focus on providing lessons.

Mr Yousuf Aftab (Principal, Enodo Rights) introduced the Just Transition framework. He noted that the UN Guiding Principles on Business and Human Rights are not clear on whether automation and the relocation of workers is a useful practice.

The Just Transition framework takes into account that the impacts of automation are systemic. While it was developed to tackle environmental challenges, its concepts can also be applied to automation and human rights due diligence. The framework contains provisions to assess impacts; engage with and empower workers; regulate; create new social safety nets; and foster community and encourage sound investment as deliberate policy choices.

Another panellist spoke about the lessons to be learned from the dissolved Multifibre Arrangement, which ended apparel quotas.

The panellist explained that there are two types of retrenchment which need to be distinguished despite having similar impacts on workers: factories that upgrade and opt for automation may shed jobs, but companies that go out of business for failing to keep up with technological trends also cause job losses. In the latter case, expectations must be set that companies still have to provide for their workers, but governments must also consider adapting their social protection systems. Most importantly, companies going out of business must give workers the kind of support outlined in the measures above.

The session was organised by Business for Social Responsibility (BSR) and Article One. It was co-moderated by Mr Dunstan Allison-Hope (Managing Director, Business for Social Responsibility (BSR)) and Mr Faris Natour (Co-Founder and Principal, Article One). In his opening remarks, Allison-Hope mentioned the risks related to artificial intelligence (AI) and the complexity of the deployed technology. He further noted the speed with which developments take place in the field, the uncertainty connected to AI developments, and the question of how human rights protection mechanisms can be implemented amid these evolutions. Natour highlighted that we are already in the middle of AI-driven transformations and that AI is already being used in many sectors and in tools such as translation apps, facial recognition, and search engine algorithms.

Ms Hibah Kamal-Grayson (Public Policy Manager, Human Rights and Internet Governance, Google) said that at its core, AI is an adaptive technology which is already used in various fields, such as spam detection, as well as for more complex issues such as wildfire predictions.

She highlighted that Google abides by existing laws, but also respects the principles set out by the Global Network Initiative and its internal AI principles published in June 2018. Kamal-Grayson recognised that while Google works according to its own principles, these can be expanded upon and are a good starting point. She further pointed out that core tensions arise from the difficulty of developing all-encompassing due diligence standards while designing them in a way that makes them enforceable. 'No one stakeholder, no one sector will be able to figure out this challenge by itself'.

Ms Eimear Farrell (Advocate and Advisor, Technology and Human Rights, Amnesty Tech, Amnesty International) spoke about the UN Global Compact's Project Breakthrough, which aims to analyse how AI can help achieve the sustainable development goals (SDGs).

Farrell identified the increasing use of human rights language in AI principles, rather than references to ethical standards, as a very positive development in the field. According to her, the UN Guiding Principles on Business and Human Rights need to adapt to, and incorporate, the technological developments of AI. Recalling the Universal Declaration of Human Rights, Farrell mentioned that it was very innovative when first adopted, and that new frameworks for AI also need to be bold. She also saw a role for civil society in engaging with companies rather than just calling them out on human rights abuses.

Mr Steve Crown (Deputy General Counsel, Human Rights, Microsoft) said that ethics and human rights language are both included in Microsoft’s guiding principles, but that which of the two is emphasised most depends on the audience.

Through human rights impact assessments, Microsoft has tried to figure out where its responsibility lies when thinking about its entire value chain. He mentioned the need to give AI users guidance, similar to the instructions for use of medicine, in order to inform them about the appropriate uses and limitations of the technology. Given the adaptive nature of AI, simply accessing the source code of certain algorithms is insufficient to understand certain processes. It is therefore important to inform the users about the potential uses of the technology they are using.

Ms Sabrina Rau (Senior Research Officer, Big Data and Technology Project, School of Law Human Rights Centre, University of Essex) explained that while AI learns and adapts through algorithms, data and big data are the elements fuelling the technology and should thus be given more attention. According to Rau, human rights due diligence must be respected at all stages of the value chain, given that wrongful or skewed data can lead to unwanted outcomes. Data must therefore also be monitored and managed along the value chain. Rau also mentioned the importance of due diligence in business relationships and of ensuring transparency throughout the processes.

Ms Kelli Schlegel (Manager, Human Rights, Intel) said that in order to implement privacy by design, it needs to be a core concern for companies. As technology becomes increasingly widespread, more issues are coming to light. Developers must therefore be trained in human rights protection and due diligence mechanisms in order to understand how their creations can impact human rights and how to develop technology that operates within the boundaries of due diligence. Schlegel also mentioned that implementing due diligence is often easier in existing processes than when designing new applications. However, once the due diligence phase is in place, a review board or a way for employees to raise human rights concerns needs to be implemented.

Mr Minwoo Kim (Research Professor, Korea University Human Rights Center) explained that AI carries many risks, as it amplifies privacy issues due to the data it collects and requires to function. The trend of decentralisation only increases these difficulties. Kim further noted that privacy by design only provides protection for one human right, but that due diligence should take the entirety of the human rights framework into account.

Ms Olga DiPretoro (Program Officer, Winrock International) spoke about developments in which the due diligence role for companies has been reinforced, and mentioned the example of the US Tariff Act which requires companies to prove continuous monitoring of their value chains.

According to DiPretoro, data checks need to be enhanced in order to reinforce due diligence mechanisms. Companies’ data has so far been assessed individually, and the data and the results of its analyses are not shared within the industry. She pointed to duplications of effort because audit results were not shared, thereby limiting companies’ abilities to make improvements. She urged improved collaboration between business and civil society on information sharing to allow consistent analysis.

A representative from SAP mentioned that AI is good at discovering patterns and making improvements to processes. According to him, human rights should be viewed as a business process that starts with a fundamental commitment to the UN Guiding Principles on Business and Human Rights. While the commitment to these principles does not involve AI directly, it is a crucial step for the respect of due diligence. He noted that AI can help in assessing the actual and potential human rights impacts of businesses by predicting risks or visualising relationships between suppliers and customers. AI can thus be used to monitor value chain processes and communities' ongoing interactions with AI technology, and to analyse contracts and other legal documents to identify weak human rights protection mechanisms. This information can then be shared in a collaborative framework and used to benchmark businesses’ performances.

The session was organised by the UN Working Group on Business and Human Rights and was divided in two main parts. Moderated by Mr Dante Pesce (Chairperson UN Working Group on Business and Human Rights), the first part of the session covered government perspectives on the implementation of the guiding principles of business and human rights. The experiences and challenges from France, Liberia, and Thailand were addressed in a panel discussion, while additional contributions from the floor were added by ambassadors and state representatives from Colombia, the Netherlands, the United Kingdom, Greece, the European Union, Brazil, Slovenia, Switzerland, Luxembourg, Indonesia, India, and Belgium.

The first speaker, Mr Somn Promaros (Director-General of the Rights and Liberties Protection Department, Ministry of Justice, Thailand), talked about the implementation of the UN Guiding Principles on Business and Human Rights in Thailand. He explained the steps undertaken by the Thai government for the national action plan, following a goal-oriented and substantive pathway covering areas such as, but not limited to, labour, natural resources, and community rights. Moreover, the implementation of the sustainable development goals (SDGs) is interlinked with, and reinforced by, the framework of human rights. Finally, he stressed the importance of youth and of the principle of leading by example in implementing the guidelines.

Ms Lorena Recabarren (Sub-Secretary of Human Rights, Government of Chile), speaking from a governmental position, explained the efforts taken by Chile so far. Fulfilling the guiding principles requires collaborative efforts; therefore, in the case of Chile, strengthening the department of human rights is a priority in public policy. Nonetheless, coordinating the human rights-based approach in the implementation of public policy still represents a challenge on topics such as, but not limited to, transitional justice, indigenous peoples, women, and children’s rights. Chile adopted its first national plan, entering into force in 2019, guided by the principle that businesses are responsible for the protection of human rights in their activities and practices.

Ms Maylis Souque (Secretary-General, French NCP for Responsible Business Conduct, Ministry of Economy, France) talked about France's leading experience in implementing mandatory due diligence through the complementary roles of soft and hard law instruments. In this regard, Souque highlighted the relevance of the French law on the duty of vigilance, which is characterised by a reporting requirement and a concrete impact on workers, and which covers all dimensions of due diligence, including the supply chain dimension, as well as other topics such as security and sustainability. This approach is meant to have a cascade effect on small and medium-sized enterprises (SMEs) at both the national and international levels, and implies concrete solutions and remedies when a violation of human rights occurs. Finally, she highlighted France's focus on the implementation of the circular economy as well as the social and solidarity economy.

Mr Meo Beyan (Assistant Minister for Economic Affairs, Ministry for Economic Affairs, Liberia) explained the efforts Liberia is making to ensure that corporate actors protect human rights in their activities and practices. The process is ongoing and should be supported by less bureaucratic mechanisms at the national level, as well as by the creation of unions mandated to preserve the protection of human rights.

The session then featured comments from the floor, adding additional perspectives from the experiences of Colombia, the Netherlands, the United Kingdom, Greece, the European Union, Brazil, Slovenia, Switzerland, Luxembourg, Indonesia, and Belgium.

Colombia stressed the fundamental role of involving different stakeholders, as well as of creating inter-agency groups, in addressing the implementation of the guidelines. The Netherlands highlighted the pioneering Dutch practice of concluding agreements between policy units, business units, and NGOs; to date, six agreements have been signed to ensure responsible business conduct. The EU highlighted the achievements made so far through the development and implementation of regulations on the disclosure of non-financial information, as well as the timber regulation. Moreover, the EU is implementing a soft law initiative to ensure sustainable economic growth. Switzerland stressed that more visibility is needed to make the protection of human rights more effective.

The second part of the session focused on the implementation of national actions plans (NAPs) on business and human rights. It was introduced by Mr Daniel Morris (Adviser, Human Rights and Business, The Danish Institute for Human Rights), who explored the findings of the current state of NAPs. As in the first part of the session, a panel discussion addressed the experiences and challenges from Sweden, Japan, Italy, Germany, and Kenya, and was followed by additional comments on national implementations.

Mr Fabrizio Petri (President of the Italian Inter-ministerial Committee for Human Rights, Government of Italy) talked about the pathway of the Italian national action plan. The plan represents a strong commitment to the protection of the rights of under-represented categories and, following the mid-term review, will address the more effective protection of the rights of media managers and journalists. The Italian NAP was developed in three phases. First, the steering committee started the work at an institutional level. Second, a multistakeholder dialogue was put in place, involving more than eighty entities. Third, an online consultation meant to reach a broader audience involved ten additional entities. Finally, reinforcing human rights and sustainable development is the key to putting human dignity back at the centre of attention.

Mr Jakob Kiefer (CSR Ambassador, Ministry of Foreign Affairs of Sweden) explained that Sweden's NAP was approved in 2016. The policies are based on the Global Compact principles, as well as on gender, sustainability, and taxation. One of the biggest challenges is assisting, and maintaining a dialogue with, firms in the field, given a potential lack of competence in the area of CSR. 'The state has to lead and show by example', and a human rights perspective has to come with a top-down approach in all companies. In terms of challenges, the issue of procurement should be taken into consideration, as well as the need to ensure a more effective allocation of funds. Moreover, while discussions on compulsory due diligence mechanisms might sound premature, there is a need first and foremost to establish the principle of 'do no harm' as a guiding one for companies.

Ms Irene Maria Plank (Head of Division 'Business and Human Rights', Federal Foreign Office, Germany) focused on multistakeholder involvement and on the achievements reached in the responsible and sustainable measurement of supply and value chains. Moreover, Germany has created a pilot project meant to help German firms operating in foreign countries follow the principle of due diligence. The mandate is to create a collection of information that German firms can refer to, to evaluate the takeaways, and eventually to expand the project worldwide.

Ms Stella Wangechi (Senior Human Rights Officer, Kenya National Commission on Human Rights) spoke about the NAP development process in Kenya. She said that no African country currently has a NAP; however, Kenya is finalising its own, addressing priority areas such as land, labour, environment, revenue management, and access to remedies. In countries like Kenya, where the business landscape is mainly made up of small enterprises, it is difficult to mainstream due diligence into the practice of such companies. This has been the main challenge in developing the NAP in Kenya.

The final speaker, Mr Ken Okaniwa (Ambassador, Permanent Mission of Japan in Geneva), shared Japan’s experience in developing its NAP. Japan incorporated the NAP into its expanded SDGs national plan and into the country's growth strategy. Nonetheless, access to remedy was one of the main issues discussed in the multistakeholder consultations and requires further attention and effort.

The side event was organised by Privacy International and the International Network of Civil Liberties Organizations, with the support of the governments of Austria, Brazil, Liechtenstein, Mexico, and Germany. The event focused on recent developments at the international and national levels with regard to the protection of the confidentiality of communications and of personal data.

The event started with opening remarks from the Permanent Mission of Brazil to the United Nations and the Permanent Mission of Germany to the United Nations, and was followed by a panel discussion, moderated by Ms Elizabeth Farries (International Network of Civil Liberties Organizations).

The first panellist, Ms Nathalie Prouvez (Chief, Rule of Law and Democracy Section, OHCHR), focused her speech on the Report of the UN High Commissioner for Human Rights on ‘The right to privacy in the digital age’, published in August 2018. Prouvez explained the topics covered in the report and discussed the responsibility of states in particular. She argued that significant obstacles continue to stand in the way of the right to privacy in the digital age, and that states should create more robust and comprehensive legislation in line with human rights law. All victims of violations must have access to remedy mechanisms, including cross-border ones. Moreover, mechanisms for independent oversight should be implemented to ensure the legality and proportionality of surveillance measures. Finally, she encouraged businesses to bring their activities into compliance with human rights law and to collaborate with the public sector.

The second panellist, Mr Martin Ray Taban Mavenjina (Kenya Human Rights Commission), discussed the current issues of intelligence sharing. He structured his speech around the findings of the report ‘Unanswered Questions: International Information Sharing’. He argued that there are: insufficient laws governing how intelligence sharing agreements are formed and operate; insufficient government oversight and review of agency agreements; and insufficient transparency and access to information about the agreements.

The third panellist, Mr Leandro Ucciferri (Asociación por los Derechos Civiles), talked about the implications of biometrics for human rights. He argued that biometric data plays a huge role in the way we represent our identity through technology. Furthermore, biometric data has two main characteristics: it is public and it cannot be changed. Biometric data, such as fingerprints, the pictures we upload online, and the way we talk or walk, is indeed public and accessible to all. Governments are creating massive databases containing the biometric data of their citizens, which represents a security concern because these databases are centralised and constitute a single point of entry that could be subject to attacks. The collection of large amounts of personal data also affects the presumption of innocence: when needed for analysis, all the available data is scanned for comparison. However, biometric data should also be analysed from another perspective: the way technologies are developed by the private sector. The collection and analysis of data is shaped by those who develop the systems we use in our daily lives, inevitably perpetuating biases.

The fourth panellist, Ms Elonnai Hickok (Centre for Internet and Society), talked about the intersection between artificial intelligence (AI) and privacy. AI has been defined in different ways according to the degree of narrowness with which experts look at it. It has challenged the perception of the online and offline world as two separate entities. Moreover, she underlined that AI is increasingly taking intelligent and autonomous decisions, raising safety concerns and considerations about the unexpected harm these decisions might create.

The final panellist, Ms Ailidh Callander (Privacy International), talked about data protection principles. Considering that more and more data is available in our digital world, the exploitation of such data could have an impact on the protection of human rights; indeed, it could threaten democracy. Data protection regulations (there are more than 100 worldwide) need to ensure transparency and include the perspective of civil society. Principles, rights, and obligations on data protection should be the floor for the use of personal data, not the ceiling.

The meeting started with a general debate on item 4 of the agenda of the Human Rights Council, 'Human rights situations that require the Council's attention'. Representatives from non-governmental organisations (NGOs) and advocacy groups raised concerns about human rights situations that require the Council’s attention, such as, but not limited to, refugee situations, the rights of children and indigenous peoples, the protection of children and refugees from torture and violence, violence and discrimination against women and girls, freedom of the press, humanitarian law violations, the separation of immigrant children from their parents, and human rights violations in Myanmar, Rwanda, Kashmir, Bahrain, Tibet, Egypt, Vietnam, South Sudan, the Sub-Saharan region, Venezuela, Sri Lanka, Pakistan, India, and Nicaragua. Underlining the violation of human rights in digital space, Article 19 expressed concerns about the new shape of online censorship and regressive legislation that allows for extra-judicial blocking, filtering, and 24-hour take-down notices, putting pressure on media outlets to self-censor.

The session then focused on item 5 of the agenda – Human rights bodies and mechanisms – and addressed the report of the Working Group on the issue of human rights and transnational corporations and other business enterprises, covering the sixth session of the Forum on Business and Human Rights (A/HRC/38/49). Ms Anita Ramasastry, Chair of the Working Group, presented the report.

Since 2012, the Forum on Business and Human Rights has become the world's biggest event on business and human rights. It was established by the Human Rights Council with resolution 17/4, in which the Council also endorsed the Guiding Principles on Business and Human Rights, implementing the United Nations 'Protect, Respect and Remedy' framework. The mandate of the forum is to implement the guiding principles, and to promote dialogue and co-operation on topics related to business and human rights. It must be noted that the forum attracts multiple stakeholder groups, from state delegations and the business sector to civil society, academia, national human rights institutions, and international organisations. The theme of the 2017 forum was 'Realizing access to effective remedy'; it aimed to tackle the gaps and shortcomings in current efforts, and to promote emerging good practices and innovations to ensure access to effective remedy for victims of adverse human rights impacts of business-related activities. The multistakeholder discussion underlined the third pillar of the Guiding Principles on Business and Human Rights: state-based judicial mechanisms, state-based non-judicial grievance mechanisms, and non-state-based grievance mechanisms. The Chair then navigated the discussion to issues related to human rights defenders and the role of business; to the connection between the 'Protect, Respect and Remedy' pillars and the sustainable development goals (SDGs); to the implementation of gender balance in both state and business practice; and to the interplay between new technology and business and human rights.

Effective remedies against business-related human rights abuses are often hampered by existing barriers. Thus, the Working Group stressed the importance of improving the third pillar through the following recommendations:

The state should put in place effective remedial mechanisms and tackle the barriers hindering access to those mechanisms;

The rights holders should be the focus and centre of the remedy process;

Freedom from fear of victimisation should be addressed;

Efforts should be effective in the process and in the outcomes; and

With regards to cross-border cases of human rights abuses, enforcement efforts such as anti-bribery and environmental protection should be implemented.

Discussions in the forum raised the need for greater effectiveness of domestic public law regimes. Moreover, crucial gaps, challenges and discrimination against rights holders and indigenous people across regions and industries still exist. There is a need to change the mindset of businesses and the state economic policy framework, from 'a race to the bottom' to 'a race to the top'. Furthermore, it must be noted that states should not look at human rights as 'speed breakers to economic development'. The forum has become a means for dialogue across all stakeholder groups and a platform to launch initiatives and engage partners. The 2018 forum will take place from 26 to 28 November and will focus on 'Business respect for human rights – building on what works'.

This session addressed the need for quality capacity development in digital policy and its resources. It pointed to the mismatch between the calls for capacity development in speeches and official documents, and the missing practical and concrete discussion on resources and responsibilities. It attempted to answer the question of who should be responsible for the funding of the capacity development programmes.

The session was moderated by Dr Tereza Horejsova (DiploFoundation and the Geneva Internet Platform), who introduced the main dimensions of the need for capacity development, such as the availability of financial, technical and human resources, and sustainability efforts. She invited the speakers and audience to engage in a discussion of practical solutions, and raised the question of who should be funding capacity development.

Ms Sarah Gaffney (Director of Partnerships for the Regulatory Capacity Building Programme at the GSMA) highlighted the problem of getting the right people in the room. She also remarked that it was difficult to find funding for their education, travel, and work. She mentioned the US Telecommunications Training Institute as one of the organisations which selects and funds training for individuals who will impact the work of their institutions. In particular, Gaffney addressed the need for funding localised content for the courses provided by the GSMA – such as translations and adjusting the content to a local setting. As an example of a practical solution, she cited training local resources to translate content. She further stressed that capacity development needs to make an actual impact, since that in turn would attract funders.

Ms Livia Walpen (Advisor International Relations, Swiss Federal Office of Communications (OFCOM)) presented the position of the Swiss government and the reasons why it funds capacity building in global digital policy. She stressed the need for capacity development as a precondition for successful policy, leading to a better understanding of global digital issues and, in turn, to better implemented policy. She then introduced the Geneva Internet Platform as a neutral platform run by DiploFoundation, which has provided capacity development training to about 6,000 alumni.

Mr Alberto Cerda (Global Program Officer for Internet Rights & Access, Internet Freedom Program at the Ford Foundation) presented the challenges faced by funders when it comes to funding capacity development. He introduced the Internet Freedom Program, the Ford Foundation's work with young organisations in 15 countries on digital issues. He stressed that all of the organisations they are working with are experts in their respective subjects, but that there is a need for capacity development in the policymaking sector to enhance their policy engagement capacity at an international level. Cerda further explained that there are vast regional differences in funding partners, with the funding of individual projects being especially hard in unstable and fast-paced environments. He stressed the importance of core funding, as well as the responsibility of donors for the capacity building of their partners. Cerda further echoed Gaffney's observations about using partners to expand the capacity of others.

Mr Dustin Phillips (Co-Executive Director at ICANNWiki) focused on the sustainability of capacity development funding. He touched upon the gap between capacity building and the availability of capacity, stating that some of the work in the sector is not competitive, but could serve to expand each other's potential. He emphasised the need for long-term funding, including in capacity development efforts, since the effects take years to show. He further touched upon the need for community-driven processes, which would include training people to train others.

Ms Evelyn Namara (Global Community Engagement Manager, Internet Society) stressed the importance of being aware of the opportunities to gain capacity, as well as governmental involvement in funding capacity building. She mentioned problems faced by persons who are both qualified and interested in making an impact, such as geographical distance and visas, a setback for people from developing countries. Solutions she mentioned included moving events from developed and often expensive countries to more financially and geographically accessible locations to ease participation, as well as raising the awareness of governments and private companies.

In the discussion which followed, the speakers addressed the pitfalls of two-year funding cycles, the need for trust between the funders and their partners, challenges in defining impact in diverse environments, the importance of local communities and policymakers, as well as the need for partnerships based not on finances, but on enhancing each other's work.

As a follow-up, the moderator promised to prepare an action plan with points for efficient capacity development, and suggested continuing this discussion at the next Internet Governance Forum.

This session took place on 3 May 2018 at the United Nations in Geneva, Switzerland, and was moderated by Ms Alessandra Vellucci (Director, UN Information Service, Geneva). The event commemorated the 25th anniversary of the United Nations’ World Press Freedom Day. The moderator expressed her regret about the journalists killed exercising their profession in war-torn countries, and used this as a reminder of the importance of protecting and promoting free press.

Mr António Guterres (UN Secretary-General) gave the welcoming remarks by noting the importance of uncensored free press for democratic participation. Mr Abdulaziz Almuzaini (Director of the United Nations Educational, Scientific and Cultural Organisation (UNESCO) Geneva Liaison Office) commented on the media's role in increasing transparency and restraining power in the public and private sectors.

Mr Noel Curran (Director-General of the European Broadcasting Union) said that this event serves as a reminder for governments to uphold the freedom of expression in the name of democracy, citing Article 19 of the UN’s Universal Declaration of Human Rights. He continued by criticising the fact that many modern politicians have begun to publicly discredit the modern media, labelling journalists as ‘enemies of the people.’ To Curran, resisting this type of censorship means educating the public about episodes of media abuse, avoiding knee-jerk reactions against ‘fake news,’ and calling on media companies to protect their reporters.

Ms Elizabeth Laurin (Permanent Representative of France to the UN in Geneva), spoke next. She connected the freedom of the press to the UN sustainable development goal (SDG) #16, which aims to promote peaceful and inclusive societies through sustainable development. She praised digitalisation for democratising access to information, but cautioned against the speed at which fake news can travel through digital outlets. She added that educating the youth on how to seek verifiable news sources is important to counteracting the power of fake news.

Ms Nina Larson (President of the Association of the Accredited Correspondents to the UN) opened by noting that more than 30 journalists have been killed since the beginning of 2018. Larson encouraged the private sector to take steps to minimise the time required to correct the spread of misinformation online. Furthermore, she detailed the dangers of advanced audiovisual editing software that can be used to distort past events to benefit a particular agenda.

Mr Walid Doudech (Permanent Representative of Tunisia to the UN Office in Geneva) acknowledged that Tunisia has made great leaps forward in terms of press freedom since the 2011 Arab Spring, but noted that his country still needs to establish institutions that coordinate and monitor freedom of expression. In particular, Doudech called for bodies tasked with regulating access to information and personal data, so as to avoid their economic and political misuse. He also posed the question of whether there should exist an international convention aimed at protecting journalists.

Ms Nathalie Prouvez (Chief of the Rule of Law and Democracy Section of the Office of the UN High Commissioner for Human Rights (OHCHR)) spoke specifically about measures states can undertake to guarantee freedom of the press. She noted that governments cannot wait for the development of new 'instruments' to protect press freedom, but rather, must act now with the present set of 'tools.' To that end, Prouvez noted that the OHCHR is developing a way of measuring violations committed against journalists, which will help to target SDG #16. She encouraged governments to use this tool and their judicial systems to ensure that all cases of media censorship are duly investigated and prosecuted.

The session was then opened to questions and comments. Among the points raised was a question about the particular challenges that female journalists face, and a comment that social media has played a particularly important role in disseminating fake news, to which Curran added by noting that the social media platforms contribute to an unhealthy singularity of perspective in news coverage.

Larson re-joined the conversation by encouraging the audience to voice opposition to media attacks and to embrace education as the key to deconstructing censorship. Moreover, she noted that new algorithms designed to flag and hide clearly inaccurate news are under development in the digital world.

The session concluded as Doudech stressed the importance of involving journalists’ perspectives in the efforts to protect the sanctity of accurate media worldwide.

The 37th Session of the Human Rights Council was opened by introductory statements recalling the 70th anniversary of the Universal Declaration of Human Rights, the international achievements made to protect human rights, and the need for implementation at the national level.

It was generally agreed that further steps and efforts must be taken to strengthen the role of the Human Rights Council in its normative and tangible functions, as well as to further enhance the co-operation among United Nations agencies regarding human rights. The UN Secretary-General, H.E. Mr António Guterres, made a statement in which he recalled the resolution approved by the Security Council to suspend hostilities in Syria – whose war zones he described as 'hell on earth' – and to allow for the provision of humanitarian aid. The focus on the violations of human rights in the armed conflict that affects Syria was further underlined by the UN High Commissioner for Human Rights, Mr Zeid Ra'ad Al Hussein. He stated that 'the responsibility for the continuation of so much pain lies with the five permanent members of the UN Security Council'. A clear statement emerged from the introductory panel: human rights protection is not in conflict with national sovereignty, and its implementation can help prevent conflict and advance development towards the achievement of the sustainable development goals (SDGs), as well as support specific initiatives on sensitive issues such as migration and the Global Compact for Migration.

High-level segment (HLS)

The segment was opened with a statement by the host country, delivered by the Federal Councillor and Head of the Federal Department of Foreign Affairs (FDFA) of Switzerland, H.E. Mr Ignazio Cassis, on the importance of political and economic stability to guarantee the promotion of human rights. The High-Level Statement Panel then brought to the floor the need to address human rights through both national implementation and international co-operation.

Statements by member states reiterated the rule of law at the national and international levels as a means to promote human rights and prevent humanitarian violations. The conflicts in Syria and Yemen were among the issues raised. Furthermore, the role of information and information technology was highlighted by representatives during their statements. Norway raised the importance of information for development: better access to information can drive further development, and more resources need to be deployed. The role of information for better policy development was further stressed by Angola, which added the role of civil society in providing and improving information. Following these statements, Uzbekistan recalled national measures on the use of the Internet for the economy, which give a voice to civil society as a new 'public movement' through these new means. Finally, Internet-related issues were raised, and can be summarised in the argument by Brazil that rights in the digital age will increasingly define our time.

Panel discussion: The promotion and protection of human rights in the light of the universal periodic review mechanism: challenges and opportunities

The annual panel discussion of the Human Rights Council was opened by the President of the General Assembly, Mr Miroslav Lajčák. This year’s main topic was the role and importance of the Universal Periodic Review (UPR). The discussion revolved around four main questions:

How can the UPR better support a stronger coordination of the implementation efforts at the national level?

How can the UPR mechanism contribute to international co-operation in the field of human rights and ensure the full enjoyment of all human rights by all persons without discrimination?

How can the Secretary-General and the High Commissioner for Human Rights provide better support to member states to strengthen the national human rights protection system?

How can the donor community better leverage the UPR in order to support the efforts of member states in the follow-up to UPR’s accepted recommendations?

The Minister of Foreign Affairs and Human Mobility of Ecuador, Ms María Fernanda Espinosa Garcés, introduced the national reforms in which particular attention was given to the digital development of governmental services. The minister presented a basic tool for inter-institutional co-operation for entering reports on human rights violations, as well as recommendations for other provisions (for more information see: http://www.justicia.gob.ec/ministerio-de-justicia-presento-la-plataforma-siderechos/).

Digital and technical support of the UPR was requested by numerous parties, especially from those in developing states. Questions were raised about the possibility of enhancing technology for the implementation of UPR recommendations. In line with this, civil society representatives showcased previous recommendations that had been made regarding the creation of an online tool for the monitoring of the UPR’s implementation.

This side event of the Human Rights Council’s 36th session, organised by the Permanent Observer of the Holy See Mission to the UN and other international organisations in Geneva and the Permanent Mission of the Principality of Liechtenstein in Geneva, discussed the potential impact of artificial intelligence (AI) on justice systems and human rights.

The panel was opened by Mr Eric Salobir, President of OPTIC, who emphasised that the link between justice and AI is not just found in science fiction, but has already been tested and employed in judicial systems.

In his opening remarks, H.E. Archbishop Ivan Jurkovic, Permanent Observer of the Holy See Mission, spoke about the importance of considering human dignity in discussions on AI, as well as the risk of machines substituting humans in certain key areas, such as education. H.E. Ambassador Peter Matt, Permanent Representative of Liechtenstein, explained that AI encompasses both opportunities and threats, especially related to the human rights to privacy and non-discrimination. He added that addressing these challenges effectively requires multistakeholder engagement.

Next, Prof. Pierre Vandergheynst, Professor at the École Polytechnique Fédérale de Lausanne, provided an introduction to AI and the way it could be applied to the judicial system. Although it is not a new concept, AI is mostly understood today as machine learning, powered by algorithms, which are based on data. Ultimately, 'whoever controls data, controls AI'. AI's predictive power comes from its ability to model the reasoning from the raw data to the final outcome.

There are several examples of AI being reasonably accurate in predicting verdicts and risk assessments. Yet, decisions based on AI cannot be easily disputed, as the patterns discovered by AI cannot be interpreted and clarified. If AI decisions are based on biased data, rooted in human judgement (such as previous verdicts), they risk disproportionally and negatively affecting certain population groups.

Prof. Louis Assier Andrieu, Professor at the Law School of Sciences Po in Paris, and Research Professor at the National Center of Scientific Research, provided a more in-depth analysis of the interplay between AI and legal traditions. According to him, both common and civil law are based on fictions that would be internalised by AI. Common law's fiction is its assumption that legal decisions can be based on previous cases; yet, 'one never enters the same river twice'. Civil law assumes that laws and codes encompass every imaginable case, and that abstract rules can be applied to a variety of cases. To address these fictions, it could be useful to look at more communal, non-Western forms of justice.

Assier Andrieu highlighted the fact that France is already experimenting with predictive justice using big data, to make institutions more rational and less dependent on human bias. However, judgement ultimately needs trust. With 93% of private practitioners in the USA fearing being replaced by robots, 'where is the trust in the making of algorithms and the predefinitions used?' Can we trust AI to decide something as important as legal judgement? Salobir added that we need to consider whether AI makes judgements based on consequence or correlation, and whether it judges the individual or the group to which the individual belongs.

Prof. Lorna McGregor, Professor and Director of the Human Rights Centre, University of Essex, concluded the panel discussion by relating AI to human rights. She explained that it is ‘crucial’ to understand our current and future environment, to make sense of their human rights implications. AI could provide opportunities in making progress towards the sustainable development goals by creating efficiency, cost-effectiveness, and improvements through disaggregated data. It can help allocate resources and predict crime.

AI can also generate risks for human rights, not only by creating privacy threats and facilitating surveillance, but also by creating inequalities and discrimination. While the big data on which AI is based is extensive, it is neither complete nor perfect. This imperfect data feeds algorithms and AI, and can ‘bake discrimination into algorithms’. As a result, human bias is ‘accentuated, rather than resolved’. Echoing Vandergheynst, she repeated that AI decisions cannot easily be challenged, and that judges and lawyers might not be sufficiently equipped to understand the accuracy of these decisions.

McGregor concluded that international human rights law could provide a framework to address the risks posed by AI. We also need to consider the responsibility of states and business actors, as well as identify red lines where the risks look too great to proceed.

This session presented the European Commission and the Internet Society’s respective perspectives on the Internet of tomorrow.

The first keynote speech was given by Mr Pearse O'Donohue (Acting Director for the Future Networks Directorate of DG CONNECT, European Commission) on the vision and projects of the European Commission with regard to the Next Generation Internet. O'Donohue started by acknowledging that the future of the Internet has become central to international discussions on Internet governance and requires the inclusion of all stakeholders.

He introduced the work of the European Commission on the future of the Internet. The European Commission recently conducted a public consultation on the Next Generation Internet to engage with all stakeholders on this issue. The results of this consultation highlight the widely shared concerns among stakeholders regarding the online world. Users increasingly limit their online activities due to challenges in terms of security, inclusiveness, and overall trust. O'Donohue argued that there needs to be an acknowledgment of these issues in order to prevent the digital divide further widening and becoming a permanent reality.

At the level of the European Union (EU), several initiatives and legislative proposals have been recently launched to strengthen the digital single market. In May, the European Commission conducted a mid-term review and its results show that more needs to be done. With regard to privacy for instance, many non-regulatory steps still need to be taken, so that the regulation of privacy becomes systematically embedded in the way businesses operate. More generally, the EU, when addressing the future of the Internet, needs to build more on its core values, such as inclusiveness and solidarity.

To do so, the European Commission recently launched its Next Generation Internet Initiative to rebuild trust in the Internet by identifying key areas of future development. This initiative adopts a human-centric approach to address the growing disconnect between individuals and technology. At the same time, the initiative acknowledges that the European Commission cannot act alone in the Internet ecosystem, and strongly relies on the multistakeholder model. This is why the new initiative also aims to engage with more stakeholders, including start-ups and groups that are usually less represented in Internet governance debates.

Ms Sally Shipman Wentworth (Vice President of Global Policy Development, Internet Society) then presented the recent work conducted by the Internet Society on the future of the Internet. While it is impossible to predict the future, she argued that studying patterns and trends at the community level could allow us to better imagine the Internet of tomorrow. The main goals of this research are therefore to understand the current forces of change in order to make recommendations on how to shape the future of the Internet.

Though this research project is still ongoing, Wentworth presented its preliminary findings. They build on 2,500 survey responses from over 160 countries, 130 expert interviews conducted across the world, as well as 15 roundtable discussions. The final outcome of this research will be presented as part of the 2017 Internet Society Global Internet Report in September 2017.

Overall, the Internet Society's community identified six areas as driving forces for the future of the Internet: the interaction of the Internet and the physical world; the evolution of the Internet economy; the role of governments in the online world; cyber-threats; artificial intelligence; and network standards and interoperability. The community also looked at the interactions between these drivers by focusing on three topics: personal rights and freedoms; the digital divide; and issues related to media, culture, and society. In terms of recommendations, interviews from the community indicated a strong need for amplifying civil society and end-user voices as part of the multistakeholder model.

Other resources

The Feminist Principles of the Internet are a set of statements that together provide a framework for women's movements to articulate and explore issues related to technology. They offer a gender and sexual rights lens on critical internet-related rights.

The principles were drafted at the first "Imagine a Feminist Internet" meeting that took place in Malaysia in April 2014. The meeting was organised by the Association for Progressive Communications and brought together 50 activists and advocates working in sexual rights, women's rights, violence against women (VAW), and internet rights.

PEN America's Online Harassment Field Manual was created to give people resources, tools, and tips to help them respond safely and effectively to incidents of online harassment and hateful speech, and to encourage them to stay online, to keep speaking out, and to keep writing. It features first-person interviews with writers and journalists who have been targeted online and who refuse to be silenced, along with comprehensive information about how to enhance cybersecurity, establish supportive online communities, confront online abuse head-on, practice self-care, and engage law enforcement during severe episodes of harassment. The Field Manual also offers best practices to allies of writers and journalists, as well as to the institutions that employ them.

WSIS Forum 2016 Report

Several sessions at the WSIS Forum discussed issues related to media freedom and the rights of children in the online space. Session 114 - Action Line C9 (Media): Promote Media Freedom and Internet Universality at the Heart of Achieving SDG Target 16.10 highlighted the fact that freedom of media and the safety of journalists are important foundations for achieving the goal to 'ensure public access to information and protect fundamental freedoms, in accordance with national legislation and international agreements'. Co-operation among different stakeholders was emphasised as crucial support for a strategy for the implementation of WSIS Action Line C9 (Media). Global Kids Online - Children's Rights in the Digital Age (session 145) showcased the perspectives of UNICEF's Global Kids Online (GKO) project, research in Latin America (Risks and Safety on the Internet: Comparing Brazilian and European Children (2013) and Net Children Go Mobile (2014)), ITU statistics on children's use of the Internet, and policy initiatives, to give practical examples of how research can contribute to global and national policies in support of children's rights.

Both workshops discussed the risk that the RTBF is affecting other human rights including the right to memory and the flow of ideas, the right to know the truth, and freedom of the press. These essential rights to democracy could be threatened by the RTBF. In fact, the representative from the United Nations Commission for Human Rights commented that the RTBF contrasts with the right to know the truth, which is a distinct right. The erasure of information could impact the right to truth, and thus create a need for due process.

Among the practical implications is the fact that different jurisdictions have ruled or legislated on the RTBF. These include a judgment by the Constitutional Court of Colombia; new legislation in Chile, Nicaragua, and Russia; and data authorities’ rulings on search engines. The CJEU ruling has therefore created a ripple effect, extending the European cyberlaw footprint to a global level.
