Sam Jungyun Choi – Inside Privacy
https://www.insideprivacy.com
Updates on developments in data privacy and cybersecurity

ICO publishes blog post on AI and trade-offs between data protection principles
https://www.insideprivacy.com/artificial-intelligence/ico-publishes-blog-post-on-ai-and-trade-offs-between-data-protection-principles/
August 6, 2019

On July 25, 2019, the UK’s Information Commissioner’s Office (“ICO”) published a blog post on the trade-offs between different data protection principles when using Artificial Intelligence (“AI”). The ICO recognizes that AI systems must comply with several data protection principles and requirements, which at times may pull organizations in different directions. The blog post identifies notable trade-offs that may arise, provides some practical tips for resolving them, and offers worked examples on visualizing and mathematically minimizing trade-offs.

The ICO invites organizations with experience of considering these complex issues to provide their views. This blog post on trade-offs forms part of the ICO’s ongoing Call for Input on developing a new framework for auditing AI. See also our earlier blog post on the ICO’s call for input on bias and discrimination in AI systems here.

The ICO identifies that the following trade-offs may arise in AI projects:

Accuracy vs. privacy. Large amounts of data are needed to improve the accuracy of AI systems, but this may impact the privacy rights of the individuals involved.

Fairness vs. accuracy. Certain factors need to be removed from AI algorithms to ensure that AI systems are fair and do not discriminate against individuals on the basis of any protected characteristics (or known proxies for them, such as postcode as a proxy for race). However, removing these factors may reduce the accuracy of the AI system.

Fairness vs. privacy. In order to test whether an AI system is discriminatory, it needs to be tested using data labelled by protected characteristics, but this may be restricted under privacy law (i.e., under the rules on processing special category personal data).

Explainability vs. accuracy. For complex AI systems, it may be difficult to explain the logic of the system in an easy-to-understand way that is also accurate. The ICO considers, however, that this trade-off between explainability and accuracy is often a false dichotomy. See our previous blog post on the ICO’s separate report on explaining AI for more on the topic.

Explainability vs. security. Providing detailed explanations about the logic of an AI system may inadvertently disclose information that can be used to infer private information about the individuals whose personal data was used to build the AI system. The ICO recognizes that this area is under active research, and that the full extent of the risks is not yet known.

The ICO recommends that organizations take the following steps to manage trade-offs that may arise:

Identify and assess existing or potential trade-offs;

Consider available technical means to minimize trade-offs;

Have clear criteria and lines of accountability for making trade-off decisions, including a “robust, risk-based and independent approval process”;

Document decisions to an “auditable standard”, including, where required, by performing a Data Protection Impact Assessment. Such documentation should: (i) consider the risks to individuals’ personal data; (ii) use a methodology to identify and assess trade-offs; (iii) provide a rationale for final decisions; and (iv) explain how the decision aligns with the organization’s risk appetite.

When outsourcing AI solutions, organizations should make the assessment of trade-offs part of their due diligence of third parties. Organizations should also ensure they can request that solutions be modified to strike the right balance between the trade-offs identified above.

In the final section of the blog post, the ICO offers some worked examples demonstrating mathematical approaches that can help organizations visualize and balance the trade-offs, as illustrated in the sketch below. Although elements of trade-offs can be precisely quantified in some cases, the ICO recognizes that not all aspects of privacy and fairness can be fully quantified. The ICO therefore recommends that such methods “always be supplemented with a more holistic approach”.
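
To make the accuracy vs. privacy trade-off concrete, the sketch below (our own illustration on synthetic data, not one of the ICO’s worked examples) measures how a simple classifier’s accuracy falls as progressively stronger Laplace noise is added to the statistics it releases, in the spirit of differential-privacy-style output perturbation. All data, names and parameters are hypothetical.

```python
# Hypothetical sketch of the accuracy-vs.-privacy trade-off: a nearest-centroid
# classifier whose released centroids are perturbed with Laplace noise. Larger
# noise scales give the individuals in the training data more protection, but
# cost accuracy. Synthetic data; not one of the ICO's worked examples.
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic classes of individuals with different feature means.
X = np.vstack([rng.normal(0.0, 1.0, (500, 2)), rng.normal(2.0, 1.0, (500, 2))])
y = np.array([0] * 500 + [1] * 500)

def accuracy_with_noise(noise_scale: float, trials: int = 200) -> float:
    """Average accuracy when the released class centroids are Laplace-noised."""
    accs = []
    for _ in range(trials):
        centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
        centroids += rng.laplace(0.0, noise_scale, centroids.shape)
        dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        accs.append((dists.argmin(axis=1) == y).mean())
    return float(np.mean(accs))

# Sweeping the noise scale traces one curve of the trade-off:
# more privacy protection, less accuracy.
for scale in (0.0, 0.25, 0.5, 1.0, 2.0):
    print(f"noise scale {scale:>4}: accuracy {accuracy_with_noise(scale):.3f}")
```

Each printed point pairs a level of protection with the accuracy that remains; plotting such points is one way to visualize a trade-off curve before choosing an operating point, which is the kind of exercise the ICO’s worked examples walk through.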

The ICO has published a separate blog post on the use of fully automated decision-making AI systems and the right to human intervention under the GDPR. The ICO provides practical advice for organizations on how to ensure compliance with the GDPR, such as: (i) considering the requirements necessary to support a meaningful human review; (ii) providing training for human reviewers; and (iii) supporting and incentivizing staff to escalate concerns raised by data subjects. For more information, read the ICO’s blog post here.

The ICO intends to publish a formal consultation paper on the framework for auditing AI in January 2020, followed by the final AI Auditing Framework in the spring. In the meantime, the ICO welcomes feedback on its current thinking, and has provided a dedicated email address to obtain views (available at the bottom of the blog). We will continue to monitor the ICO’s developments in this area and will keep you apprised on this blog.

ICO Launches Public Consultation on New Data Sharing Code of Practice
https://www.insideprivacy.com/uncategorized/ico-launches-public-consultation-on-new-data-sharing-code-of-practice/
August 1, 2019

On July 16, 2019, the UK’s Information Commissioner’s Office (“ICO”) released a new draft Data sharing code of practice (“draft Code”), which provides practical guidance for organizations on how to share personal data in a manner that complies with data protection laws. The draft Code focuses on the sharing of personal data between controllers, with a section referring to other ICO guidance on engaging processors. The draft Code reiterates a number of legal requirements from the GDPR and the UK Data Protection Act 2018 (“DPA”), while also including good practice recommendations to encourage compliance. The draft Code is open for public consultation until September 9, 2019; once finalized, it will replace the existing Data sharing code of practice (“existing Code”).

Key practical points from the draft Code are:

As a first step before embarking on data sharing, organizations should decide whether to carry out a Data Protection Impact Assessment (DPIA). Organizations should also take into account various factors (such as the purposes of the data sharing, whether anonymization is possible, and what risks may be posed to individuals) before deciding to share personal data. A list of suggested questions to consider is provided on pp. 22-23 of the draft Code.

It is good practice for organizations sharing personal data to put in place a data sharing agreement. Data sharing agreements should set out the purpose of the data sharing, cover what happens to the data at each stage, set standards, and clarify the roles of the parties involved. A list of suggested issues that should be addressed in a data sharing agreement is provided on pp. 26-29 of the draft Code. Organizations are also advised to keep data sharing agreements under review as a project progresses.

In order to ensure compliance with the accountability principle, organizations should maintain records as required by data protection law. These include records of processing activities, records of privacy notices provided, records of consent obtained (where applicable), records of lawful basis for processing, and records of personal data breaches.

When deciding to share personal data, organizations should also check to ensure they comply with any other applicable laws (e.g., human rights law, rules on public sector data sharing, and others) and consider whether it is ethical to share the data.

While the draft Code builds on the existing Code, it provides quite a bit of new information, including placeholders where additional content will be added before the document is finalized (e.g., a section on sharing data outside of the European Economic Area, as well as updated data sharing checklists and new templates for data sharing request and decision forms). The draft Code includes several new sections on specific topics of interest, such as data sharing and children, data sharing in the context of M&A deals, sharing of databases and lists, data ethics and data trusts, and law enforcement processing. While the checklists and other forms in Annexes A and B are still forthcoming, Annex D provides a number of useful case studies applying the content of the draft Code to real-life scenarios.

After the public consultation period ends on September 9, 2019, the draft Code must be laid before Parliament before it becomes a statutory code of practice. Although failure to comply with the Code will not itself give rise to a cause of action, processing personal data in breach of the Code will usually amount to a breach of the GDPR or the DPA. The Code can also be used as evidence in legal proceedings, and the ICO, courts and tribunals are required to take its provisions into account where relevant.

CJEU rules that Facebook and website operators are joint controllers if the website embeds Facebook’s “Like” button
https://www.insideprivacy.com/international/european-union/cjeu-rules-that-facebook-and-website-operators-are-joint-controllers-if-the-website-embeds-facebooks-like-button/
July 31, 2019

On July 29, 2019, the Court of Justice of the European Union (“CJEU”) handed down its judgment in the Fashion ID case (Case C-40/17). The CJEU found that when a website operator embeds Facebook’s “Like” button on its website, Facebook and the website operator become joint controllers. The case clarifies the relationship between website operators and social networking sites whose plug-ins are embedded into websites for user tracking and online marketing purposes. The ruling is expected to influence the contractual terms that companies will need to have in place when embedding such social plug-ins into their websites, and may also have ramifications for adtech practices more generally.

The Fashion ID case arose out of a 2015 complaint made by a German consumer protection association, Verbraucherzentrale NRW, against an online clothes retailer, Fashion ID, which embedded Facebook’s “Like” button on its website. Facebook’s “Like” button is a social plug-in that allows website users to click the “Like” button to show on their Facebook profile that they “like” a certain product or service. Websites use this plug-in to optimize their advertising on Facebook so that targeted ads can be shown to people who “like” their products.

Websites with the “Like” button collect information (e.g., IP addresses and browser string data) not only about the people who click the “Like” button, but also about other website users who do not click the button, including those who do not have a Facebook account. This data is then transferred to Facebook.

The complaint filed by Verbraucherzentrale NRW alleged that Fashion ID’s use of the Facebook “Like” button breached EU data protection law because Fashion ID failed to appropriately inform users and obtain their consent before transferring their personal data to Facebook. The complainant sought a court injunction ordering Fashion ID to stop using the functionality.

The Oberlandesgericht Düsseldorf (Higher Regional Court, Düsseldorf, Germany) referred the matter to the CJEU, asking a number of questions seeking clarification of several provisions of the Data Protection Directive 95/46/EC (which continue to have relevance under the EU’s General Data Protection Regulation), most notably:

Can Member State laws implementing the Data Protection Directive allow consumer protection organisations to lodge data protection claims on behalf of affected individuals?

The CJEU decided that the provisions of the Data Protection Directive on “judicial remedies, liability and sanctions” give Member States the freedom to determine the “appropriate means” to ensure their application, which could extend to allowing consumer protection organizations to act on behalf of individuals whose data privacy rights have been infringed. The CJEU also noted that this redress mechanism is now explicitly provided for under Art. 80 of the GDPR.

Is the website operator (i.e., Fashion ID) a “joint controller” in relation to the data that Facebook collects about the website’s users?

Significantly, the CJEU decided that Fashion ID and Facebook are “joint controllers” in relation to the collection of website users’ personal data and its transmission to Facebook. According to the CJEU, by embedding the plug-in on its website, Fashion ID is “influencing” the collection and sharing of data and is “at least tacitly” consenting to it. The CJEU decided that Fashion ID’s responsibility is most apparent in situations where users do not have an account with Facebook, but their data is nonetheless shared with Facebook as a result of accessing Fashion ID’s website. The CJEU also determined that Fashion ID’s lack of access to the data is irrelevant when assessing “joint controllership” (consistent with earlier CJEU cases C-210/16 and C-25/17).

However, the CJEU clarified that although the term “controller” should be given a broad interpretation, an organization cannot be held responsible for upstream or downstream processing operations in the processing chain for which it does not determine the purpose or the means of processing. In this regard, the CJEU held that Facebook (not Fashion ID) is the controller for the processing that takes place after the personal data related to the “Like” plug-in has been transferred to Facebook.

Can Fashion ID and Facebook rely on their legitimate interests to collect and share personal data?

The CJEU did not give a clear answer to this question, but merely stated that both Fashion ID and Facebook would need to establish a legitimate interest, if they were intending to rely on this legal basis.

Who has responsibility to (i) provide notice to users about how the data is collected and used and (ii) to collect consent from the users?

The CJEU decided that it is the website operator’s responsibility to provide notice to users and to obtain their consent. However, the website operator only needs to inform users and obtain their consent for processing operations for which it is a “joint controller”.

This ruling mirrors the court’s findings in the Wirtschaftsakademie case (Case C-210/16), where the CJEU found that Wirtschaftsakademie, which offers educational services through a fan page hosted on Facebook, was a joint controller with Facebook for the processing of user website usage data through the “Facebook Insights” tool. The CJEU’s reasoning in both cases provides useful guidance on how the court identifies “controllers” and “joint controllers” in data sharing relationships. The CJEU’s findings suggest that companies using third party tools (e.g., cookies, plug-ins and other website analytics tools) to increase their online visibility may need to ramp up their disclosures to website users and strengthen the contractual terms they have in place with their advertising partners.

Two new developments from the EU High-Level Working Group on AI: launch of pilot phase of Ethics Guidelines and publication of Policy and Investment Recommendations for Trustworthy AI
https://www.insideprivacy.com/artificial-intelligence/two-new-developments-from-the-eu-high-level-working-group-on-ai-launch-of-pilot-phase-of-ethics-guidelines-and-publication-of-policy-and-investment-recommendations-for-trustworthy-ai/
July 2, 2019

On June 26, 2019, the EU High-Level Expert Group on Artificial Intelligence (AI HLEG) announced two important developments: (1) the launch of the pilot phase of the assessment list in its Ethics Guidelines for Trustworthy AI (the “Ethics Guidelines”); and (2) the publication of its Policy and Investment Recommendations for Trustworthy AI (the “Recommendations”).

The AI HLEG is an independent expert group established by the European Commission in June 2018. The Recommendations are the second deliverable of the AI HLEG; the first was the Group’s Ethics Guidelines of April 2019, which defined the contours of “Trustworthy AI” (see our previous blog post here). The Recommendations are addressed to policymakers and call for 33 actions to ensure the EU, together with its Member States, enable, develop, and build “Trustworthy AI” – that is, AI systems and technologies that reflect the AI HLEG’s now-established ethics guidelines. Neither the Ethics Guidelines nor the Recommendations are binding, but together they provide significant insight into how the EU or Member States might regulate AI in the future.

Throughout the remainder of 2019, the AI HLEG will undertake a number of sectoral analyses of “enabling AI ecosystems” — i.e., networks of companies, research institutions and policymakers — to identify the concrete actions that will be most impactful in those sectors where AI can play a strategic role.

Pilot phase of Assessment List of Ethics Guidelines

The Ethics Guidelines of April 2019 included a checklist for stakeholders to use when assessing whether an AI system is “Trustworthy.” In the current pilot phase, stakeholders are invited to test this assessment list and provide feedback through the European AI Alliance via an online survey that will be available until December 1, 2019. The AI HLEG will use this feedback, along with information collected from interviews with selected representatives from the private and public sectors, to prepare a revised version of the assessment list that it will present to the Commission in early 2020.

Recommendations for Trustworthy AI

The Recommendations urge policy-makers at both the European and national levels to promote the development and use of “Trustworthy AI” in Europe through the adoption of 33 actions. The Recommendations are divided into two Chapters, which are summarized below:

Chapter I sets out recommendations for policy-makers to ensure AI has a positive impact in Europe. Each recommendation seeks to promote a human-centric approach to AI, in line with the Ethics Guidelines. These recommendations also seek to foster cooperation between stakeholders and cross-sector collaboration, and note the importance of stakeholder consultation in particular in the context of harmonizing and standardizing regulations.

Some of the key recommendations laid out in Chapter I are as follows:

Boost the uptake of AI technology and services across sectors in Europe. Policy-makers should enable and foster the digitization of companies by earmarking investments in AI, fostering AI skills development through education, training and financial support, and providing technical know-how and support for SMEs.

Set up public-private partnerships to foster sectoral AI ecosystems. Policymakers should conduct an analysis of several selected AI ecosystems in the short term and, in the medium term, set up Sectoral Multi-Stakeholder Alliances (SMUHAs) for strategic sectors in Europe.

Approach government as a platform, catalyzing AI development in Europe. For contracts between a public sector organization and a company, consider introducing a requirement that data which is not proprietary to the company, and which is of general public interest, should be handed back to the public sector, allowing its reuse for beneficial innovation.

Increase and streamline funding for fundamental and purpose-driven research. Create incentives for interdisciplinary and multi-stakeholder research, including through the funding of AI business incubators, research labs and hackathons.

Promote an approach to AI centered on humans, society and the protection of the environment. For instance, the Recommendations call on policymakers to refrain from using AI for disproportionate and mass surveillance of individuals (whether for commercial or government purposes), to require AI systems to disclose that they are non-human when interacting with individuals, to introduce a duty of care on suppliers of consumer-oriented AI systems to ensure the accessibility of services, to encourage the development of tools that protect vulnerable demographics, and to foster collaborative AI-human systems that promote safety and empower humans at work.

Chapter II puts forward recommendations to develop the skills, infrastructure, governance and investment necessary to deliver on the Trustworthy AI concept in the EU. Some of the key recommendations laid out in Chapter II are as follows:

Develop legally compliant and ethical data management and sharing initiatives in Europe. Policymakers should support research and development of industrial solutions for fast, secure and legally compliant data sharing (e.g., encryption) and common standards that promote the interoperability of datasets. A data donor scheme, allowing individuals to donate data for specific purposes, should also be considered.

Develop and support AI-specific cyber-security infrastructures. The EU should build upon the Cybersecurity Act adopted by the EU in spring 2019 to protect networks, data and users from risks.

Evaluate and potentially revise EU laws, starting with the most relevant legal domains. Policymakers should conduct systemic mapping and evaluation of all existing laws that are particularly relevant to AI systems. In particular, the AI HLEG recommends considering whether data protection rules are, on the one hand, overly rigid with respect to access to public data for research purposes, and, on the other hand, under-protective by excluding non-personal data from transparency and explainability requirements.

Consider the need for new regulation to ensure adequate protection from adverse impacts. For AI systems with the potential to have a significant impact on human lives, policymakers should consider introducing a mandatory obligation to conduct a Trustworthy AI assessment.

Establish governance mechanisms for a Single Market for Trustworthy AI in Europe. Policymakers should harmonize regulation, and establish a comprehensive strategy for Member State cooperation.

The Recommendations call for significant new investment in, and resources dedicated to, transforming the regulatory and investment environment for Trustworthy AI in Europe. Both private sector and public sector organizations developing, implementing or managing AI technologies in Europe should review these Recommendations and plan for the potential opportunities and challenges on the horizon.

ICO’s Call for Input on Bias and Discrimination in AI systems
https://www.insideprivacy.com/artificial-intelligence/icos-call-for-input-on-bias-and-discrimination-in-ai-systems/
June 27, 2019

On June 25, 2019, as part of its continuing work on the AI Auditing Framework, the UK Information Commissioner’s Office (ICO) published a blog post setting out its views on human bias and discrimination in AI systems. The ICO has also called for input on specific questions relating to human bias and discrimination, set out below.

The ICO explains in its blog post how flaws in training data can result in algorithms that perpetuate or magnify unfair biases. The ICO identifies three broad approaches to mitigating this risk in machine learning models (a short code sketch after the list illustrates how such checks can be computed):

Anti-classification: making sure that algorithms do not make judgments based on protected characteristics such as sex, race or age, or on proxies for protected characteristics (e.g., occupation or postcode);

Outcome and error parity: comparing how the model treats different groups. Outcome parity means all groups should have equal rates of positive and negative outcomes. Error parity means all groups should have equal error rates (such as false positive or false negative rates). A model is fair on this measure if it achieves outcome parity and error parity across members of different protected groups.

Equal calibration: comparing the model’s estimate of the likelihood of an event with the actual frequency of that event for different groups. A model is fair if it is equally calibrated between members of different protected groups.
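
As a minimal illustration of how the parity and calibration checks above can be computed in practice (our own sketch on synthetic data with hypothetical names, not code from the ICO), the per-group rates fall directly out of a model’s predictions:

```python
# Hypothetical fairness audit: per-group outcome rates, error rates and a coarse
# calibration check for two protected groups. Synthetic data for illustration.
import numpy as np

rng = np.random.default_rng(1)

n = 2000
group = rng.integers(0, 2, n)            # protected-group label (test data only)
y_true = rng.integers(0, 2, n)           # actual outcomes
scores = np.clip(0.5 * y_true + rng.normal(0.3, 0.25, n), 0, 1)  # model scores
y_pred = (scores >= 0.5).astype(int)     # hard decisions at a 0.5 threshold

def group_metrics(g: int) -> dict:
    m = group == g
    return {
        "positive_rate": y_pred[m].mean(),                        # outcome parity
        "false_positive_rate": y_pred[m][y_true[m] == 0].mean(),  # error parity
        "false_negative_rate": (1 - y_pred[m][y_true[m] == 1]).mean(),
        # Coarse calibration check: among members scored above the threshold,
        # how often did the event actually occur?
        "precision_at_threshold": y_true[m][y_pred[m] == 1].mean(),
    }

m0, m1 = group_metrics(0), group_metrics(1)
for k in m0:
    print(f"{k:>22}: group0={m0[k]:.3f} group1={m1[k]:.3f} gap={abs(m0[k] - m1[k]):.3f}")
```

A large gap in the positive rate signals an outcome-parity problem, and gaps in the error rates an error-parity problem; a fuller calibration check would compare predicted scores with observed frequencies across score bins rather than at a single threshold.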

The guidance stresses the importance of appropriate governance measures to manage the risks of discrimination in AI systems. Organizations may take different approaches depending on the purpose of the algorithm, but they should document the approach adopted from start to finish. The ICO also recommends that organizations adopt clear, effective policies and practices for collecting representative training data to reduce discrimination risk; that organizations’ governing bodies should be involved in approving anti-discrimination approaches; and that organizations continually monitor algorithms by testing them regularly to identify unfair biases. Organizations should also consider using a diverse team when implementing AI systems, which can provide additional perspectives that may help to spot areas of potential discrimination.

The ICO seeks input from industry stakeholders on two questions:

If your organisation is already applying measures to detect and prevent discrimination in AI, what measures are you using or have you considered using?

In some cases, if an organisation wishes to test the performance of their ML model on different protected groups, it may need access to test data containing labels for protected characteristics. In these cases, what are the best practices for balancing non-discrimination and privacy requirements?

UK Government’s Guide to Using AI in the Public Sector
https://www.insideprivacy.com/artificial-intelligence/uk-governments-guide-to-using-ai-in-the-public-sector/
June 27, 2019

On June 10, 2019, the UK Government Digital Service and the Office for Artificial Intelligence released guidance on using artificial intelligence in the public sector (the “Guidance”). The Guidance aims to provide practical advice for public sector organizations when they implement artificial intelligence (AI) solutions.

The Guidance will be of interest to companies that provide AI solutions to UK public sector organizations, as it will influence what kinds of AI projects public sector organizations will be interested in pursuing, and the processes that they will go through to implement AI systems. Because the UK’s National Health Service (NHS) is a public sector organization, this Guidance is also likely to be relevant to digital health service providers that are seeking to provide AI technologies to NHS organizations.

The Guidance consists of three sections: (1) understanding AI; (2) assessing, planning and managing AI; and (3) using AI ethically and safely, each summarized below. The Guidance also links to summaries of examples where AI systems have been used in the public sector and elsewhere.

Understanding AI

The introductory section of the Guidance on understanding AI defines AI as “the use of digital technology to create systems capable of performing tasks commonly thought to require intelligence.” The Guidance provides that AI systems must comply with applicable laws, calling out in particular the GDPR, and specifically the obligations on automated decision-making. (As discussed in our earlier blog post, the ICO has previously highlighted the relevance of Article 22 of the GDPR on automated decision-making in their Interim Report on Project ExplAIn.)

The Guidance also explains that the UK Government has created three new bodies and two new funds to help integrate AI into the private and public sectors. The three new bodies are the AI Council, the Office for AI, and the Centre for Data Ethics and Innovation; the two funds are the Gov-Tech Catalyst and the Regulators’ Pioneer Fund.

Assessing, Planning and Managing AI

When assessing AI systems, and in particular how to build or buy them, the Guidance recommends that public sector organizations should:

Assess which AI technology is suitable for the situation. The Guidance describes, at a high-level, several types of common machine learning techniques and applications of machine learning;

Obtain approval from the Government Digital Service by carrying out a discovery to show feasibility. Most AI solutions are categorized as ‘novel’ and therefore require further scrutiny;

Discovery. In this phase, organizations must assess whether AI is right for their needs. If it is, they will prepare their data and will build an AI implementation team (normally comprised of a data scientist, data engineer, data architect, and ethicist). Data should be made secure in accordance with guidance from the National Cyber Security Centre (“NCSC”) and by complying with applicable data protection law.

Alpha Phase. Data is divided into a training set, a validation set and a test set. A base model is used as a benchmark, and more complex models are created to suit the client’s problem. The best of these models is tested and evaluated economically, ethically and socially. (A minimal sketch of the split-and-benchmark step follows this list.)

Beta Phase. The chosen model is integrated and performance tested. The product is continually evaluated and improved versions are created and deployed – a specialist team is maintained to carry out these improvements.
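
As a minimal sketch of the Alpha Phase workflow described above (our own illustration with synthetic data and hypothetical model choices, not code from the Guidance), the split-and-benchmark step might look like this:

```python
# Hypothetical Alpha Phase sketch: split data into training/validation/test sets,
# benchmark a candidate model against a simple base model, and touch the test
# set only once, for the chosen model.
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

# 60% training / 20% validation / 20% test.
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

# Base model used as a benchmark.
baseline = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)

# A more complex candidate model, compared on the validation set.
candidate = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

print("baseline  (validation):", baseline.score(X_val, y_val))
print("candidate (validation):", candidate.score(X_val, y_val))
print("candidate (test):      ", candidate.score(X_test, y_test))
```

The economic, ethical and social evaluation the Guidance calls for sits on top of, not instead of, this kind of statistical benchmarking.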

The Guidance stresses the importance of having appropriate governance in place in order to manage the risks that arise from the implementation of AI systems. The section on managing AI projects outlines a number of factors that organizations should consider when running AI projects, and provides a table of common risks that arise in AI projects along with recommended mitigation measures.

Using AI Ethically and Safely

The section of the Guidance on using AI ethically and safely is addressed to all parties involved in the design, production, and deployment of AI projects, including data scientists, data engineers, domain experts, delivery managers and departmental leads. The Guidance summarizes the Alan Turing Institute’s detailed guidance, published as part of their public policy programme, and is designed to work within the UK Government’s August 2018 Data Ethics Framework.

The Guidance focuses heavily on the need for a human-centric approach to AI systems. This aligns with positions of other forums (such as the European Commission’s High Level Working Group’s Ethics Guidelines for Trustworthy AI – see our blog here). The Guidance stresses the importance of building a culture of responsible innovation, and recommends that the governance architecture of AI systems should consist of: (1) a framework of ethical values; (2) a set of actionable principles; and (3) a process-based governance framework.

Organizations should pursue these ethical values through four “FAST Track principles”, which are:

Fairness (being unbiased and using fair data);

Accountability (having a clear chain of accountability and system of review);

Sustainability (making sure the project is safe and has longevity); and

Transparency (decisions should be explained and justified).

Organizations should bring these values and principles together in an integrated process-based governance framework, which should encompass:

the relevant team members and roles involved in each governance action;

the relevant stages of the workflow in which intervention and targeted consideration are necessary to meet governance goals;

explicit timeframes for any evaluations, follow-up actions, re-assessments, and continuous monitoring; and

clear and well-defined protocols for logging activity and for implementing mechanisms to support end-to-end auditability.

Governance and ethics of AI systems is currently a hot topic, with a number of different guidelines and approaches emerging in the UK, the EU and other jurisdictions. Organizations developing AI technologies or adopting AI solutions should keep abreast of the evolving landscape in this field, and consider providing input to policymakers.

Privacy Shield Ombudsperson Confirmed by the Senate
https://www.insideprivacy.com/cross-border-transfers/privacy-shield-ombudsperson-confirmed-by-the-senate/
June 25, 2019

On June 20, 2019, Keith Krach was confirmed by the U.S. Senate to become the Trump administration’s first permanent Privacy Shield Ombudsperson at the State Department. The role of the Privacy Shield Ombudsperson is to act as an additional redress avenue for EU and Swiss data subjects whose data is transferred to the U.S. under the EU-U.S. and Swiss-U.S. Privacy Shield Frameworks, respectively.

As Ombudsperson, Krach will be responsible for dealing with complaints and requests from individuals in the EU and Switzerland, including in relation to U.S. national security access to data transmitted from the EU or Switzerland to the U.S. The Ombudsperson works with other Government officials and independent oversight bodies to review and respond to requests. Krach’s role as Ombudsperson forms part of his duties as the Under Secretary for Economic Growth, Energy and the Environment. The Under Secretary is independent from the intelligence services and reports directly to the Secretary of State.

The formal approval of a permanent Privacy Shield Ombudsperson will be welcomed at EU level. As we have previously reported, the European Data Protection Board praised the appointment of a permanent Ombudsperson in its January report regarding the second annual review of the Privacy Shield. In addition, the Commission has emphasized that the Ombudsperson is “an important mechanism that ensures complaints concerning access to personal data by U.S. authorities are addressed.” This appointment comes at a time when both the EU-U.S. Privacy Shield and the Standard Contractual Clauses are under scrutiny in the European courts.

ICO’s Interim Report on Explaining AI
https://www.insideprivacy.com/artificial-intelligence/icos-interim-report-on-explaining-ai/
June 7, 2019

On June 3, 2019, the UK Information Commissioner’s Office (“ICO”) released an Interim Report on a collaboration project with The Alan Turing Institute (“Institute”) called “Project ExplAIn.” The purpose of this project, according to the ICO, is to develop “practical guidance” for organizations on complying with UK data protection law when using artificial intelligence (“AI”) decision-making systems; in particular, to explain the impact AI decisions may have on individuals. This Interim Report may be of particular relevance to organizations considering how to meet transparency obligations when deploying AI systems that make automated decisions falling within the scope of Article 22 of the GDPR.

The Interim Report summarizes the results of recent engagements with public and industry stakeholders to obtain views on how best to explain AI decision-making, which in turn will inform the ICO’s development of guidance on this issue. The research was carried out using a “citizens’ jury” method to gauge public perceptions of the issues, and by holding roundtables with industry stakeholders including data scientists, researchers, Chief Data Officers, C-suite executives, Data Protection Officers, lawyers and consultants.

Following the results of the research, the Interim Report provides three key findings:

the importance of context in providing the right type of explanations for AI;

the need for greater education and awareness of AI systems; and

the challenges of providing explanations (such as cost, commercial sensitivities, and lack of internal accountability within organizations).

In relation to context, the Institute’s engagement with members of the public found that the type and usefulness of AI explanations is highly context-dependent. For instance, most jurors felt it was less important to receive an explanation of the AI system in the healthcare sector, but that such explanations were more important when AI is used to make decisions about recruitment and criminal justice. Participants also felt that the importance of an explanation of an AI decision is likely to vary depending on the person it is given to; in a healthcare setting, for instance, it may be more important for a healthcare professional to receive an explanation of a decision than the patient. Some participants also expressed the view that in some situations (such as the healthcare or criminal justice scenarios), explanations of AI decisions may be too complex, or delivered at a time when individuals would not be able to take in the rationale.

Industry stakeholders presented similar but nuanced views, highlighting that using explanations to identify and address underlying system bias was a key consideration. While some industry stakeholders agreed with the jurors that explanations of AI decisions should be context-specific and reflect the way in which human decision-makers provide explanations, others argued that AI decisions should be held to higher standards. Besides the risk that such explanations of AI may be too complex, industry stakeholders also identified several additional risks with AI explanations that are too detailed, such as the risks of potential disclosure of commercially sensitive material or allowing the system to be gamed. The Interim Report provides a list of contextual factors that the research found may be relevant when considering the importance, purpose and explanations of AI decision-making (see p. 23).

In terms of next steps, the ICO plans to publish a first draft of its guidance over the summer, which will be subject to public consultation. Following the consultation, the ICO plans to publish the final guidance later in the autumn. The Interim Report identifies three possible implications for the development of the guidance:

there is no one-size-fits-all approach for explaining AI decisions;

the need for board-level buy-in on explaining AI decisions; and

the value in a standardized approach to internal accountability to help assign responsibility for explainable AI decision-systems.

The Interim Report offers a taster of what’s to come by setting out the currently planned format and content for the guidance, which focuses on three key principles: (i) transparency; (ii) context; and (iii) accountability. It will also provide guidance on organizational controls (such as roles, policies, procedures, and documentation), technical controls (such as on data collection, model selection and explanation extraction), and on the delivery of explanations. The ICO will finalize its AI Auditing Framework in 2020, which will further address the data protection risks arising from AI systems.

ICO issues draft code of practice on designing online services for children
https://www.insideprivacy.com/international/united-kingdom/ico-issues-draft-code-of-practice-on-designing-online-services-for-children/
April 29, 2019

Earlier this month, the UK’s Information Commissioner’s Office published a draft code of practice (“Code”) on designing online services for children. The Code is now open for public consultation until May 31, 2019. The Code sets out 16 standards of “age appropriate design” with which online service providers should comply when designing online services (such as apps, connected toys, social media platforms, online games, educational websites and streaming services) that children under the age of 18 are likely to access. The standards are based on data protection law principles, and are legally enforceable under the GDPR and UK Data Protection Act 2018. The Code also provides further guidance on collecting consent from children and the legal basis for processing children’s personal data (see Annex A and B of the Code). The Code should be read in conjunction with the ICO’s current guidance on children and the GDPR.

The 16 standards set out in the Code are as follows:

Best interests of the child. The best interests of the child should be the primary consideration when developing and designing online services that children are likely to access. This includes consideration for children’s online safety, physical and mental well-being, as well as development.

Age-appropriate application. Online service providers should consider the age-range of users of the online service, including the needs and capabilities of children of different ages. Annex A of the Code provides some helpful guidance on key considerations at different ages, including the types of online services that children may encounter at different ages, their capacity to understand privacy information and ability to make meaningful decisions about their personal data.

Transparency. Privacy information, policies and community standards provided to children must be concise, prominent and use clear language in an age-appropriate manner. ‘Bite-sized’ explanations of how personal data is used should also be provided at the point that the child starts to use the service, with further age-appropriate prompts encouraging the child to speak with an adult before providing their data, or not to proceed if uncertain.

Detrimental use of data. Online service providers should refrain from using children’s personal data in ways that have been shown to be detrimental to their well-being, or that go against industry codes of practice, other regulatory provisions or Government advice. Relevant examples include the Committee of Advertising Practice (CAP) guidance on online behavioural advertising, which covers children.

Policies and community standards. Online service providers should uphold their published terms, policies and community standards (including, but not limited to, privacy policies, age restriction, behaviour rules and content policies).

Default settings. ‘High privacy’ settings should be provided by default (unless the online service provider can demonstrate a compelling reason for a different default setting, taking account of the best interests of the child), thereby limiting the visibility and accessibility of children’s personal data. (A configuration sketch after this list illustrates the default-off idea.)

Data minimisation. Online service providers should collect and retain only the minimum amount of personal data necessary to provide the elements of the service in which a child is actively and knowingly engaged. Children should be provided with as much choice as possible over which elements of the service they wish to use and how much data they provide. This choice includes whether they wish their personal data to be used for (each) additional purpose or service enhancement.

Data sharing. Children’s personal data should not be shared or disclosed with third parties unless there is a compelling reason to do so, taking account of the best interests of the child. Due diligence checks should be conducted on any third party recipients of children’s data, and assurances should be obtained to ensure that sharing will not be detrimental to the well-being of the child.

Geolocation. Geolocation options should be turned off by default unless there is a compelling reason otherwise, again taking account of the best interests of the child. Online service providers should ensure that the service clearly indicates to child users when location tracking is active. Options which make a child’s location visible to others must default back to “off” at the end of each session.

Parental controls. Age-appropriate information should be provided to the child about parental controls, where provided. If the service allows a parent or caregiver to monitor their child’s online activity or track their location, such monitoring should be made clear to the child through the use of obvious signs. Audio or video materials should also be provided to children and parents about children’s rights to privacy.

Profiling. Profiling options must be turned off by default, unless there is a compelling reason for profiling, taking account of the best interests of the child. Profiling is only allowed if there are appropriate measures in place to protect the child from any harmful effects (in particular, being shown content that is detrimental to their health or well-being).

Nudge techniques. Online services should not use design features that nudge children to provide unnecessary personal data, weaken or turn off their privacy protections, or extend their use. By contrast, pro-privacy nudges are permitted, where appropriate.

Connected toys and devices. The Code applies to connected toys and devices, such as talking teddy bears, fitness bands or ‘home hub’ interactive speakers. Providers should provide clear, transparent information about who is processing the personal data and what their responsibilities are at the point of purchase and set up. Connected toys and devices should avoid passive collection of personal data (e.g., when in an inactive “listening mode” listening for key words that could wake the device).

Online tools. Online service providers should provide prominent, age-appropriate and accessible tools to help children exercise their data protection rights and report concerns. The tools should also include methods for tracking the progress of complaints or requests, with clear information provided on response timescales.

Data protection impact assessments (DPIAs). Online service providers that provide services that children may access should undertake a DPIA specifically to assess and mitigate risks to children. Annex C of the Code provides a template DPIA that modifies the ICO’s standard template DPIA to include a section for online service providers to consider each of the 16 standards in the Code.

Governance and accountability. Online service providers should ensure that they have policies and procedures in place that demonstrate how providers comply with data protection obligations and the Code, including data protection training for all staff involved in the design and development of online services likely to be accessed by children.
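
Purely as an illustration of how a few of these standards (default settings, geolocation, profiling, and data minimisation) could be encoded in a service’s configuration, here is a hypothetical sketch; the Code prescribes outcomes, not any particular implementation:

```python
# Hypothetical privacy-by-default configuration for a child-facing service.
# Field names are invented for illustration; the Code does not prescribe them.
from dataclasses import dataclass, field

@dataclass
class ChildPrivacySettings:
    # "Default settings": highest privacy unless a compelling reason exists.
    profile_visibility: str = "private"
    # "Geolocation": off by default.
    geolocation_enabled: bool = False
    location_visible_to_others: bool = False
    # "Profiling": off by default.
    profiling_enabled: bool = False
    # "Data minimisation": separate opt-ins per purpose, all off by default.
    optional_purposes: dict = field(default_factory=lambda: {
        "personalisation": False,
        "service_improvement": False,
    })

    def end_session(self) -> None:
        # "Geolocation": options making the child's location visible to others
        # must default back to "off" at the end of each session.
        self.location_visible_to_others = False

settings = ChildPrivacySettings()
settings.location_visible_to_others = True  # child switches it on mid-session
settings.end_session()
assert settings.location_visible_to_others is False
```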

EU High-Level Working Group Publishes Ethics Guidelines for Trustworthy AI
https://www.insideprivacy.com/artificial-intelligence/eu-high-level-working-group-publishes-ethics-guidelines-for-trustworthy-ai/
April 9, 2019

On April 8, 2019, the EU High-Level Expert Group on Artificial Intelligence (the “AI HLEG”) published its “Ethics Guidelines for Trustworthy AI” (the “guidance”). This follows a stakeholder consultation on its draft guidelines published in December 2018 (the “draft guidance”) (see our previous blog post for more information on the draft guidance). The guidance retains many of the same core elements of the draft guidance, but provides a more streamlined conceptual framework and elaborates further on some of the more nuanced aspects, such as on interaction with existing legislation and reconciling the tension between competing ethical requirements.

According to the European Commission’s Communication accompanying the guidance, the Commission will launch a piloting phase starting in June 2019 to collect more detailed feedback from stakeholders on how the guidance can be implemented, with a focus in particular on the assessment list set out in Chapter III. The Commission plans to evaluate the workability and feasibility of the guidance by the end of 2019, and the AI HLEG will review and update the guidance in early 2020 based on the evaluation of feedback received during the piloting phase.

The guidance is not binding, but stakeholders can voluntarily use the guidance as a way to operationalise their commitment to achieving “Trustworthy AI,” which is the AI HLEG’s term for the gold standard of an ethical approach to AI. According to the AI HLEG, Trustworthy AI consists of the following three components:

Lawful. It should comply with all applicable laws and regulations;

Ethical. It should comply with ethical principles and values; and

Robust. It should be robust from both a technical and social perspective.

Each component is considered “necessary but not sufficient for the achievement of Trustworthy AI,” and as such all three should “work in harmony and overlap.” The introduction of “lawfulness” as a component of Trustworthy AI is one of the key changes in the final version of the guidance as compared to the draft. The guidance recognizes that AI systems do not operate in a legal vacuum, and that AI systems are subject to a number of existing laws, including (but not limited to) the General Data Protection Regulation (GDPR), the Product Liability Directive, the Regulation on the Free Flow of Non-Personal Data, anti-discrimination legislation, consumer law, and sector-specific laws (such as Medical Devices Regulation in the healthcare sector). The guidance confirms that organizations developing, deploying and using AI systems should comply with such existing laws, to the extent that they apply. The guidance does not discuss the legal obligations that apply to AI systems in further detail, but focuses on the latter two components – that AI systems should be “ethical” and “robust”.

Chapter I of the guidance outlines the four ethical principles that should apply to AI systems, which are: (1) respect for human autonomy; (2) prevention of harm; (3) fairness; and (4) explicability. The guidance frames these as “ethical imperatives” that AI practitioners should always try to adhere to. Yet, the guidance recognizes that tensions may arise between these principles, for which there is no fixed solution. For instance, there may be a situation where prevention of harm (such as terrorism) may conflict with respect for human autonomy (such as privacy). As such, the guidance notes that while the four ethical principles offer some guidance towards solutions, they remain abstract prescriptions, and AI practitioners should approach ethical dilemmas “via reasoned, evidence-based reflection rather than intuition or random discretion.”

Chapter II of the guidance sets out the following seven key requirements to achieve Trustworthy AI that apply in the life-cycle of the development, deployment and use of AI systems:

Technical robustness and safety. Including resilience to attack and security, fallback plan and general safety, accuracy, reliability and reproducibility.

Privacy and data governance. Including respect for privacy, quality and integrity of data, and access to data.

Transparency. Including traceability, explainability and communication.

Diversity, non-discrimination and fairness. Including the avoidance of unfair bias, accessibility and universal design, and stakeholder participation.

Societal and environmental wellbeing. Including sustainability and environmental friendliness, social impact, society and democracy.

Accountability. Including auditability, minimization and reporting of negative impact, trade-offs and redress.

Chapter II also recommends both technical and non-technical measures to achieve Trustworthy AI. Technical measures include architectures for Trustworthy AI, ethics and rule of law by design, explanation methods, testing and validating, and quality of service indicators. Non-technical measures include regulation, codes of conduct, standardization, certification, accountability via governance frameworks, education and awareness to foster an ethical mindset, stakeholder participation and social dialogue, and diverse and inclusive design teams.

On regulation as a non-technical measure to achieve Trustworthy AI, the guidance again confirms that existing legislation already supports the trustworthiness of AI systems. On the face of this guidance, it is not apparent that the AI HLEG supports specific further regulation of AI at this stage, but the guidance notes that the AI HLEG will soon issue “AI Policy and Investment Recommendations,” which will address whether existing regulation may need to be revised, adapted or introduced in this space.

Chapter III of the guidance provides a Trustworthy AI assessment list (the “assessment list”), which acts as a checklist for stakeholders to ensure that AI systems and applications meet the ethical principles and Trustworthy AI requirements set out above. A notable addition to this section includes guidance on the roles of individuals within an organisation for implementing the assessment list (including the Management and Board, Compliance/Legal/Corporate responsibility departments, Product and Service development teams, Quality Assurance, HR, Procurement, and developers and project managers in their day-to-day roles). The guidance recommends engaging individuals at all levels of the organization, including those from the operational level all the way up to management.

The guidance includes additional instructions for using the assessment list, which recommends taking a proportionate approach and paying close attention to both areas of concern and questions that cannot be (easily) answered. It gives an example of an organization that is unable to ensure diversity when developing and testing the AI system, due to the lack of diversity in the development team. In this situation, the guidance recommends involving other stakeholders either inside or outside the organization to satisfy this requirement.

The guidance stresses that the assessment list will need to be adapted to the particular application of an AI system at issue. It notes that “different situations raise different challenges,” for example, an AI system involving music recommendations will raise different ethical considerations to an AI system that proposes critical medical treatments. Greater importance is given to AI systems that directly or indirectly affect individuals. Further to this, the guidance suggests that additional sectoral guidance may be necessary to deal with the different ethical challenges raised in different sectors.

The final section of Chapter III gives examples of opportunities and critical concerns raised by AI, as follows:

Examples of opportunities: using AI to support climate action and sustainable infrastructure, improve health and well-being, improve the quality of education, and achieve digital transformation.

In the areas of “critical concern”, the guidance calls for a proportionate approach that takes into account the fundamental human rights of the individuals concerned. When organizations use AI systems that involve these critical concerns, they will need to undergo a careful ethical (as well as legal) assessment.

Next Steps

As noted above, the guidance will now enter a “piloting phase” where interested stakeholders can provide feedback on implementing the guidance and the assessment list in real projects. Based on this feedback, the AI HLEG will update the guidance in early 2020.

In the meantime, according to the Communication the Commission will work towards a set of international AI ethics guidelines that brings the European approach to the global stage. The Commission intends to cooperate with “like-minded partners” by finding convergence with other countries’ AI ethics guidelines and building an international group for broader discussion. It will also continue to “play an active role in international discussions and initiatives,” such as contributing to the G7 and G20 summits on this issue.

Finally, the Commission announced in its Communication the following plans, to be implemented by the third quarter of 2019:

To launch networks of AI research excellence centers;

To launch networks of digital innovation hubs (focusing on AI in manufacturing and big data);

To start discussions with Member States and stakeholders to “develop and implement a model for data sharing and making best use of common data spaces”;

To continue work on its draft report identifying the challenges with the use of AI in the product liability space; and

For the European High-Performance Computing Joint Undertaking to develop next generation supercomputers which the Commission considers “essential for processing data and training AI.”

These plans further build on the Commission’s broader European AI Strategy, aimed at boosting Europe’s competitiveness in the field of AI.