Tag Archives: European Data Protection Supervisor

The nature of the digital economy is such that it will force the creation of multi-competent supervisory authorities sooner rather than later. What if the European Data Protection Board were to become, in the next 10 to 15 years, an EU Digital Regulator, looking at matters concerning data protection, consumer protection and competition law, with “personal data” as the common thread? This is the vision Giovanni Buttarelli, the European Data Protection Supervisor, laid out last week in a conversation we had at the IAPP Data Protection Congress in Brussels.

The conversation was a one-hour session in front of an overcrowded room in The Arc, a cozy amphitheater-like venue that invites bold ideas and stimulating exchange.

To begin with, I reminded the Supervisor that at the very beginning of his mandate, in early 2015, he published the EDPS’s 5-year strategy. At that time the GDPR had not yet been adopted and the Internet of Things was taking off. Big Data had been a big thing for a while, and questions were popping up about the feasibility and effectiveness of a legal regime centered on each data item that can be traced back to an individual. The Supervisor wrote in his Strategy that the benefits brought by new technologies should not come at the expense of the fundamental rights of individuals and their dignity in the digital society.

“Big data will need equally big data protection”, he wrote then, thus suggesting that the answer to Big Data is not less data protection, but enhanced data protection.

I asked the Supervisor if he thinks that the GDPR is the “big data protection” he was expecting or whether we need something more than what the GDPR provides for. And the answer was that “the GDPR is only one piece of the puzzle”. Another piece of the puzzle will be the ePrivacy reform, and another one will be the reform of the regulation that provides data protection rules for the EU institutions and that creates the legal basis for the functioning of the EDPS. I also understood from our exchange that a big part of the puzzle will be effective enforcement of these rules.

The curious fate of the European Data Protection Board

One centerpiece of enforcement is the future European Data Protection Board, which is currently being set up in Brussels so as to be functional on 25 May 2018, when the GDPR becomes applicable. The European Data Protection Board will be a unique EU body: it will have a European nature, being funded by the EU budget, but it will be composed of commissioners from national data protection authorities who will adopt its decisions, while relying for day-to-day activity on a European Secretariat. The Secretariat of the Board will be ensured by dedicated staff of the European Data Protection Supervisor.

The Supervisor told the audience that he either already hired or plans to hire a total of “17 geeks” adding to his staff, most of whom will be part of the European Data Protection Board Secretariat. The EDPB will be functional from Day 1 and, apparently, there are plans for some sort of inauguration of the EDPB celebrated at midnight on the 24th to the 25th of May next year.

These are my thoughts here: the nature of the EDPB is as unique as the nature of the EU (those of you who studied EU law certainly remember from law school how we were told that the EU is a sui generis type of economic and political organisation). In fact, the EDPB may very well serve as a test model for ensuring supervision and enforcement in other EU policy areas. The European Commission could test the waters to see whether such a mixed national/European enforcement mechanism is feasible.

There is a lot of pressure on effective enforcement when it comes to the GDPR. We dwelled on enforcement, and one question that inevitably appeared was about the trend taking shape in Europe of competition authorities and consumer protection authorities engaging in investigations together with, or in parallel with, data protection authorities (see here, here and here).

“It’s time for a big change, and time for the EU to have a global approach“, the Supervisor said. And it is a change that will require some legislative action. “I’m not saying we will need a European FTC (the US Federal Trade Commission – n.), but we will need a Digital EU Regulator“, he added. This Digital Regulator would have the power to also look into competition and consumer protection issues raised by the processing of personal data (in addition, therefore, to data protection issues). Acknowledging that these days there is legislative fatigue in Brussels surrounding privacy and data protection, the Supervisor said he will not bring this idea to the attention of the EU legislator right now. But he certainly plans to do so, maybe even as soon as next year. The Supervisor thinks that the EDPB could morph into this kind of Digital Regulator sometime in the future.

Enhanced global enforcement initiatives

Another question that had to be asked on enforcement was whether we should expect more concentrated and coordinated action of privacy commissioners on a global scale, in GPEN-like structures. The Supervisor revealed that the privacy commissioners that meet for the annual International Conference are “trying to complete an exercise about our future”. They are currently analyzing the idea of creating an entity with legal personality that will look into global enforcement cases.

Ethics comes on top of legal compliance

Another topic the conversation turned to was “ethics”. The EDPS has been at the forefront of bringing the ethics approach into privacy and data protection law debates, by creating the Ethics Advisory Group at the beginning of 2016. I asked the Supervisor whether there is a danger that, by bringing such a volatile concept into the realm of data protection, companies would see it as an opportunity to circumvent strict compliance, relying instead on self-assessments that their uses of data are ethical.

“Ethics comes on top of data protection law implementation”, the Supervisor explained. As I understand it, ethics enters the data protection realm only after a controller or processor is already compliant with the law; when they face equally lawful options, they should rely on ethics to make the right decision.

We did discuss other things during this session, including the 2018 International Conference of Privacy Commissioners that will take place in Brussels, and the Supervisor received some interesting questions from the audience at the end, including about the Privacy Shield. But a blog post can only be so long.

Note: The Supervisor’s quotes in this post are so short because, as the moderator, I did my best to follow and steer the discussion rather than take notes. So the quotes come from the brief notes I managed to take during this conversation.

While the guidelines are addressed to the EU bodies that provide mobile apps to interact with citizens (considering the mandate of the EDPS is to supervise how EU bodies process data), the guidance is just as valuable to all controllers processing data via mobile apps.

The Guidelines acknowledge that “mobile applications use the specific functions of smart mobile devices like portability, variety of sensors (camera, microphone, location detector…) and increase their functionality to provide great value to their users. However, their use entails specific data protection risks due to the easiness of collecting great quantities of personal data and a potential lack of data protection safeguards.”

Managing consent

One of the most difficult data protection issues faced by controllers processing data through mobile apps is complying with the consent requirements. The Guidelines provide valuable guidance on how to obtain valid consent (see paragraphs 25 to 29).

Adequately inform users and obtain their consent before installing any application on the user’s smart mobile device.

Users have to be given the option to change their wishes and revoke their decision at any time.

Consent needs to be collected before any reading or storing of information from/onto the smart mobile device is done.

An essential element of consent is the information provided to the user. The type and accuracy of the information provided needs to be such as to put users in control of the data on their smart mobile device to protect their own privacy.

The consent should be specific (highlighting the type of data collected), expressed through active choice, freely given (users should be given the opportunity to make a real choice).

The apps must provide users with real choices on personal data processing: the mobile application must ask for granular consent for every category of personal data it processes and every relevant use. If the OS does not allow a granular choice, the mobile application itself must implement this.

The mobile application must feature functionalities to revoke users’ consent for each category of personal data processed and each relevant use. The mobile application must also provide functionalities to delete users’ personal data where appropriate.
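The granular-consent and revocation requirements above can be sketched as a minimal data structure. This is an illustrative Python sketch, not something from the Guidelines; the class, its methods and the category names are all hypothetical.

```python
from datetime import datetime, timezone

class ConsentRegistry:
    """Illustrative per-category consent store for a mobile app.

    Records an active, timestamped choice for each (user, data category,
    purpose) triple, and supports revocation at any time, mirroring the
    granularity the Guidelines call for.
    """

    def __init__(self):
        self._records = {}  # (user_id, category, purpose) -> record

    def grant(self, user_id, category, purpose):
        # Consent must be an active choice, given before any data is
        # read from or stored on the device for this category/purpose.
        self._records[(user_id, category, purpose)] = {
            "granted": True,
            "timestamp": datetime.now(timezone.utc),
        }

    def revoke(self, user_id, category, purpose):
        # Users must be able to change their mind at any time.
        self._records[(user_id, category, purpose)] = {
            "granted": False,
            "timestamp": datetime.now(timezone.utc),
        }

    def is_granted(self, user_id, category, purpose):
        # No record means no consent: silence or inactivity never counts.
        rec = self._records.get((user_id, category, purpose))
        return bool(rec and rec["granted"])


registry = ConsentRegistry()
registry.grant("user-1", "location", "route_suggestions")
assert registry.is_granted("user-1", "location", "route_suggestions")
registry.revoke("user-1", "location", "route_suggestions")
assert not registry.is_granted("user-1", "location", "route_suggestions")
```

Note that consent for one category and purpose says nothing about any other pair, which is exactly the point of granularity: the app must ask again for each new category or use.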

The Guidelines invite controllers to “analyse the compliance of its intended processing before implementing the mobile application during the feasibility check, business case design or an equivalent early definition stage of the project”. The controller “should take decisions on the design and operation of the planned mobile application based on an information security risk assessment”.

Other recommendations concern:

data minimisation – “the mobile application must collect only those data that are strictly necessary to perform the lawful functionalities as identified and planned”.

third party components or services – “Assess the data processing features of a third party component or of a third party service before integrating it into a mobile application”.

secure development, operation and testing – “The EU institution should have documented secure development policies and processes for mobile applications, including operation and security testing procedures following best practices”.

vulnerability management – “Adopt and implement a vulnerability management process appropriate to the development and distribution of mobile applications” (paragraphs 47 to 51).

protection of personal data in transit and at rest – “Personal data needs to be protected when stored in the smart mobile device, e.g. through effective encryption of the personal data”.
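The data minimisation recommendation in the list above can be illustrated with a small sketch: the app declares which fields are strictly necessary for each functionality and drops everything else before the data leaves the device. All names and fields here are hypothetical, chosen only for illustration.

```python
# Hypothetical allow-list: for each app functionality, the fields that
# are strictly necessary for it (everything else must not be collected).
NECESSARY_FIELDS = {
    "appointment_booking": {"user_id", "preferred_date", "department"},
    "push_notifications": {"device_token"},
}

def minimise(payload: dict, functionality: str) -> dict:
    """Return only the fields strictly necessary for the functionality."""
    allowed = NECESSARY_FIELDS.get(functionality, set())
    return {k: v for k, v in payload.items() if k in allowed}

raw = {
    "user_id": "u42",
    "preferred_date": "2018-06-01",
    "department": "cardiology",
    "contacts": ["..."],        # not necessary: dropped before sending
    "location": (50.85, 4.35),  # not necessary: dropped before sending
}
print(minimise(raw, "appointment_booking"))
# keeps only user_id, preferred_date and department
```

Defaulting to an empty set for unknown functionalities means an unlisted use collects nothing, which is the safer failure mode.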

In summary, it seems to me that the AG’s message is: “if you do it unambiguously and transparently, under independent supervision, and without sensitive data, you can process PNR data of all travellers, creating profiles and targeting persons matching patterns of suspicious behaviour”.

This is problematic for the effectiveness of the right to the protection of personal data and the right to respect for private life. Even though the AG agrees that the scrutiny of an international agreement such as the EU-Canada PNR Agreement should not be looser than that of an ordinary adequacy decision or of an EU Directive, and considers that both Schrems and Digital Rights Ireland should apply in this case, he does not apply in all instances the rigorous scrutiny the Court used in those two landmark judgments. One significant way in which he departs from it is by enriching the ‘strict necessity test’ so that it comprises a “fair balance” criterion and an “equivalent effectiveness” threshold (see Section 5).

On the other hand, AG Mengozzi is quite strict about the safeguards he sees as essential to make PNR agreements such as the one in this case compatible with fundamental rights in the EU.

Data protection authorities have warned time and again that PNR schemes are not strictly necessary to fight terrorism and serious transnational crime: they are too invasive and their effectiveness has not yet been proven. The European Data Protection Supervisor, the independent advisor of the EU institutions on all legislation concerning the processing of personal data, has issued a long series of Opinions on PNR schemes (be it in the form of international agreements on data transfers, adequacy decisions or EU legislation), always questioning their necessity and proportionality[3]. In the latest Opinion in this series, on the EU PNR Directive, the EDPS clearly states that “the non-targeted and bulk collection and processing of data of the PNR scheme amount to a measure of general surveillance” (§63) and that, in the absence of appropriate and unambiguous evidence that such a scheme is necessary, the PNR scheme is not compliant with Articles 7, 8 and 52 of the Charter, Article 16 TFEU and Article 8 ECHR (§64).

The Article 29 Working Party also has a long tradition of questioning the very idea of a PNR system. A good reflection of this is Opinion 7/2010, in which the WP states that “the usefulness of large-scale profiling on the basis of passenger data must be questioned thoroughly, based on both scientific elements and recent studies” (p. 4) and declares itself unsatisfied with the evidence for the necessity of such systems.

The European Parliament suspended the procedure to conclude the Agreement and, using one of the new powers granted to it by the Treaty of Lisbon, asked the CJEU to issue an Opinion on the compliance of the Agreement with EU primary law (the TFEU and the Charter).

Having the CJEU finally look at PNR schemes is a matter of great interest for all EU travellers, and not only for them, especially at a time like this, when surveillance is served to the people by states all over the world, from liberal democracies to authoritarian regimes, as an acceptable social norm.

General remarks: first-timers and wide implications

The AG acknowledges in the introductory part of the Opinion that the questions this case brought before the Court are “unprecedented and delicate” (§5). In fact, the AG observes later on in the Opinion that the “methods” applied to PNR data, once transferred, in order to identify individuals on the basis of patterns of behavior of concern are not at all provided for in the agreement and “seem to be entirely at the discretion of the Canadian authorities” (§164). This is why the AG states that one of the greatest difficulties of this case is that it “entails ascertaining … not merely what the agreement envisaged makes provision for, but also, and above all, what it has failed to make provision for” (§164).

The AG also makes it clear at the beginning of the Opinion that the outcome of this case has implications for the other “PNR” international agreements the EU concluded with Australia and the US, and for the EU PNR Directive (§4). A straightforward example of a possible impact on these other international agreements, beyond the analysis of their content, is the finding that the legal basis on which they were adopted is both incomplete (they must also be based on Article 16 TFEU) and wrong (Article 82(1)(d) TFEU, on judicial cooperation, is incompatible as a legal basis for PNR agreements).

The implications are even wider than the AG acknowledged. For instance, a legal instrument that could be impacted is the EU-US Umbrella Agreement, another international agreement on transfers of personal data from the EU to the US in the law enforcement area, which has both similarities to and differences from the PNR agreements. In addition, an immediately affected legal process will be the negotiations that the European Commission is currently undertaking with Mexico for a PNR Agreement.

Even if it is not an international agreement, the adequacy decision based on the EU-US Privacy Shield deal could be impacted as well, especially with regard to the findings on the independence of the supervisory authority in the third country where data are transferred (See Section 6 for more on this topic).

Finally, the AG also mentions that this case allows the Court to “break the ice” in two matters:

it will examine for the first time the scope of Article 16(2) TFEU (§6), and

it will rule for the first time on the compatibility of a draft international agreement with the fundamental rights enshrined in the Charter, more particularly those in Articles 7 and 8 (§7).

Therefore, the complexity and novelty of this case are considerable. And they are also a good opportunity for the CJEU to create solid precedents in such delicate matters.

I structured this post around the main ideas I found worth highlighting and summarizing after reading the 328-paragraph Opinion. To make it easier to read, I have split it into 6 sections, which you can find by following the links below.

Last May I had the chance to meet Prof. Tim Berners-Lee and one of the lead researchers in his team at MIT, Andrei Sambra, when I accompanied Giovanni Buttarelli, the European Data Protection Supervisor, in his visit at MIT.

Andrei then presented the SOLID project, and we had the opportunity to discuss it with Prof. Berners-Lee, who leads the work on SOLID. The project “aims to radically change the way Web applications work today, resulting in true data ownership as well as improved privacy.” In other words, the researchers want to de-centralise the Internet.

“Solid (derived from “social linked data”) is a proposed set of conventions and tools for building decentralized social applications based on Linked Data principles. Solid is modular and extensible and it relies as much as possible on existing W3C standards and protocols”, as explained on the project’s website.

Andrei explains in a blog post that, in a first step, the project finds solutions “to decouple the applications from the data they produce, and then to decouple the data from the actual storage server.”

“This means that applications and servers are interchangeable, and they can be swapped without impacting the most important part – your data. It’s all about freedom of choice.” (Read the entire explanation in this blog post)
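The decoupling idea can be illustrated with a toy sketch. This is not SOLID’s actual protocol, which builds on Linked Data and W3C standards; it is just a generic illustration, with hypothetical class names, of why an application written against an abstract storage interface can have its server swapped without touching the application or the data.

```python
from abc import ABC, abstractmethod

class DataStore(ABC):
    """Abstract storage: the application never knows which server is behind it."""

    @abstractmethod
    def put(self, key: str, value: str) -> None: ...

    @abstractmethod
    def get(self, key: str) -> str: ...

class InMemoryStore(DataStore):
    """One interchangeable backend; a remote 'pod' would implement the
    same two methods and could be swapped in without changing the app."""

    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data[key]

class NotesApp:
    """The application is written only against the DataStore interface."""

    def __init__(self, store: DataStore):
        self.store = store

    def save_note(self, note_id, text):
        self.store.put(note_id, text)

    def read_note(self, note_id):
        return self.store.get(note_id)

# The user chooses the storage; replacing InMemoryStore with any other
# DataStore implementation leaves NotesApp untouched.
app = NotesApp(InMemoryStore())
app.save_note("n1", "hello, decentralised web")
print(app.read_note("n1"))  # hello, decentralised web
```

The freedom of choice Andrei describes lives in that constructor argument: the data outlives any particular application or server.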

I was so excited to find out about the efforts conducted by Prof. Berners-Lee and his team. At the end of the presentation and the discussion, I asked, just to make sure I understood correctly: “Are you trying to reinvent the Internet?”. And Prof. Berners-Lee replied, simply: “Yes”. A couple of weeks later I saw this article in the New York Times: “The Web’s creator looks to reinvent it”. So I did understand correctly 🙂

But why was I so excited? Because I saw first hand that some of the greatest minds in the world are working to bring back control to the individual on the Internet. Some of the greatest minds in the world are not giving up on privacy, irrespective of how many “Privacy is dead” books and articles are published, irrespective of how public and private policymakers, lobbyists and Courts understand at this moment in history the value of privacy and of what Andrei called “freedom of choice” in the digital world.

I was excited because I found out about a goal that we, the legal privacy bookworms and occasional policymakers, share with the IT masterminds: empower the ‘data subject’, the ‘user’, well, the human being, in the new Digital Age, put them back in control and curtail unnecessary invasions of privacy for all kinds of purposes (from profit-making to security).

In fact, my entire PhD thesis was built on the assumption that the rights of the data subject, as they are provided in EU law (rights to access, to erase, to object, to be informed, to oppose automated decision making) are all prerogatives of the individual that aim to give control to the individual over his or her data. So if technical solutions are developed for this kind of control to be practical and effective, I am indeed excited about it!

I also realised that some of the provisions that survived incredible, multifaceted opposition to make it into the new General Data Protection Regulation are in fact tenable, like the right to data portability (check out Article 20 of the GDPR, here).

This is why, when I saw that today the world celebrates 25 years since the Internet went public, I remembered this moment in May and I wanted to share it with you. Here’s to a decentralised Internet!

Later Edit: The man himself says August 23 is not exactly accurate. Nor the 25 years! In any case, it was still a good day for me to think about all of the above and share it with you 🙂

“A man from Italy enters a pharmacy in Athens, Greece, to get some medication. Only, he has no prescription. Oh no!

Fortunately, he has an e-prescription. A what? An e-prescription, an online prescription saved under his name on a server in Italy somewhere.

The pharmacist, with the consent of his client, retrieves the prescription over the Internet via so-called national contact points that convert the Italian drug to its Greek equivalent, and all ends well.

The scene is from a promotional video from epSOS.”

The benefits are obvious. Today, there are big differences across Europe in the kind of patient data collected and the way in which it is stored. In reality, the man from Italy would have had to visit a doctor in Athens first to get a Greek prescription.

But what about the drawbacks? Does an EU-wide system not pose an increased risk to privacy? “It depends,” says Giovanni Buttarelli, assistant European Data Protection Supervisor. It would if it meant the creation of one big central database of personal health records that reveal people’s entire medical history. That would be “simply a monster,” he says. It would be prone to security breaches. “Security is something you can look for, but not ensure.”

It is better to grant access to data on a need-to-know basis (a general practitioner would not have the same access as, say, a neurosurgeon) and to spread the data over a network of local repositories.

“The portability of health data is a necessity for the current world,” says Buttarelli. “You cannot simply say: Okay, let’s go back to paper.”
