Tag Archives: Gabriela Zanfir

First, I would like to thank you for making this the most successful year in the five-year life of pdpEcho (I would especially like to thank those who supported the blog and thus helped me cover the cost of renting the blog’s .com name). I started this blog in my first year as a PhD student to gather all the information I found interesting related to privacy and data protection. At that time I was trying to convince my classic “civilist” supervisor that data protection is also a matter of civil law, and that I could write a civil law thesis on this subject in Romanian, even though the Romanian literature on it counted only one book title, from 2004. In the five years that followed, another book title was added to it, and the blog and I grew together (be it at different paces).

In recent months it offered me a way to stay connected to the field while transitioning from Brussels to the US. But most importantly, it reminded me constantly that privacy is really not dead, as has been claimed numerous times. I cared about it, the people who found this blog every day cared about it, and as long as we care about privacy, it will never die.

I am writing this end-of-the-year post with some very good news from Europe: you and I are not the only ones who care about privacy. A vast majority of Europeans do too. The European Commission published a Eurobarometer on ePrivacy a few days ago, as a step towards the launch of the ePrivacy Directive reform later in January.

The results could not have been clearer:

“More than nine in ten respondents said it is important that personal information (such as their pictures, contact lists, etc.) on their computer, smartphone or tablet can only be accessed with their permission, and that it is important that the confidentiality of their e-mails and online instant messaging is guaranteed (both 92%)” (source, p. 2).

“More than seven in ten think both of these aspects are very important. More than eight in ten (82%) also say it is important that tools for monitoring their activities online (such as cookies) can only be used with their permission (82%), with 56% of the opinion this is very important” (source, p. 2).

Overwhelming support for encryption

Remarkably, 90% of those asked agreed “they should be able to encrypt their messages and calls, so they can only be read by the recipient”. Almost as many (89%) agree the default settings of their browser should stop their information from being shared (source, p. 3).

Respondents thought it unacceptable to have their online activities monitored in exchange for unrestricted access to a certain website (64%), or to pay in order not to be monitored when using a website (74%). Almost as many (71%) say it is unacceptable for companies to share information about them without their permission, even if it helps companies provide new services they may like (source, p. 4).

Therefore, there is serious cause to believe that our work and energy are well spent in this field.

The new year brings me several publishing projects that I am very much looking forward to, as well as two work projects on this side of the Atlantic. Nevertheless, I hope I will be able to keep up the work on pdpEcho, for which I hope to receive more feedback and even input from you.

On this note, I wish you all a Happy New Year, in which all our fundamental rights will be valued and protected!

It is apparent from the Report that the most important issue to be tackled by the industry is data security – it also represents the most important risk to consumers.

While data security enjoys the most attention in the Report and the bigger part of the recommendations for best practices, data minimisation and notice and choice are considered to remain relevant and important in the IoT environment. The FTC even provides a list of practical options for the industry to provide notice and choice, admitting that there is no one-size-fits-all solution.

The most welcome recommendation in the report (at least, for this particular reader) was the one referring to the need for general data security and data privacy legislation – rather than legislation specifically tailored to the IoT. The FTC called on Congress to act on these two topics.

Here is a brief summary of the Report:

The IoT definition from FTC’s point of view

Everyone in the field knows there is no generally accepted definition of what the IoT is. It is therefore helpful to know what the FTC considers the IoT to be for the purposes of its own activity:

“things” such as devices or sensors – other than computers, smartphones, or tablets – that connect, communicate or transmit information with or between each other through the Internet.

In addition, the FTC clarified that, consistent with its mission to protect consumers in the commercial sphere, its discussion of the IoT is limited to devices that are sold to or used by consumers.

Stunning facts and numbers

as of this year, there will be 25 billion connected devices worldwide;

fewer than 10,000 households using one company’s IoT home automation product can “generate 150 million discrete data points a day” or approximately one data point every six seconds for each household.
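The per-household figure is easy to sanity-check. A quick back-of-the-envelope calculation (mine, not the FTC’s) confirms that 150 million data points a day across 10,000 households does indeed work out to roughly one data point every six seconds per household:

```python
# Sanity check of the Report's figure: 150 million discrete data
# points per day generated by (fewer than) 10,000 households.
data_points_per_day = 150_000_000
households = 10_000
seconds_per_day = 24 * 60 * 60  # 86,400

points_per_household_per_day = data_points_per_day / households
seconds_between_points = seconds_per_day / points_per_household_per_day

print(points_per_household_per_day)          # 15000.0 data points per household
print(round(seconds_between_points, 2))      # 5.76 – roughly one every six seconds
```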

Data security, the elephant in the house

Most of the recommendations for best practices that the FTC made are about ensuring data security. According to the Report, companies:

should implement “security by design” by building security into their devices at the outset, rather than as an afterthought;

must ensure that their personnel practices promote good security; as part of this, companies should ensure that product security is addressed at the appropriate level of responsibility within the organization;

must work to ensure that they retain service providers that are capable of maintaining reasonable security, and provide reasonable oversight to ensure that those service providers do so;

should implement a defense-in-depth approach, where security measures are considered at several levels; (…) FTC staff encourages companies to take additional steps to secure information passed over consumers’ home networks;

should consider implementing reasonable access control measures to limit the ability of an unauthorized person to access a consumer’s device, data, or even the consumer’s network;

should continue to monitor products throughout the life cycle and, to the extent feasible, patch known vulnerabilities.

Attention to de-identification!

In the IoT ecosystem, data minimization is challenging, but it remains important.

Companies should examine their data practices and business needs and develop policies and practices that impose reasonable limits on the collection and retention of consumer data.

To the extent that companies decide they need to collect and maintain data to satisfy a business purpose, they should also consider whether they can do so while maintaining the data in de-identified form.

For companies that claim to maintain de-identified or anonymous data, the Commission has stated that they should:

take reasonable steps to de-identify the data, including by keeping up with technological developments;

publicly commit not to re-identify the data; and

have enforceable contracts in place with any third parties with whom they share the data, requiring the third parties to commit not to re-identify the data.
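To make the de-identification discussion slightly more concrete, here is a purely illustrative sketch (my own, not from the Report) of one common technical building block: pseudonymization by replacing a direct identifier with a keyed hash. The key and e-mail addresses below are hypothetical; the point is that a third party without the secret key cannot simply hash guessed identifiers and compare results to re-identify the records.

```python
import hashlib
import hmac

def pseudonymize(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (a pseudonym).

    Using a keyed hash (HMAC-SHA256) rather than a plain hash means
    that a party who does not hold the key cannot re-identify records
    by hashing candidate identifiers and comparing the outputs.
    """
    return hmac.new(secret_key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

key = b"keep-this-secret"  # hypothetical key, kept by the controller only
# The same identifier always maps to the same pseudonym (data stays linkable)...
print(pseudonymize("alice@example.com", key) == pseudonymize("alice@example.com", key))  # True
# ...while different identifiers map to different pseudonyms.
print(pseudonymize("alice@example.com", key) == pseudonymize("bob@example.com", key))    # False
```

Of course, a keyed hash is only one ingredient: the Commission’s “reasonable steps” also cover quasi-identifiers, contractual commitments and keeping up with re-identification research.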

Notice and choice – difficult in practice, but still relevant

While the traditional methods of providing consumers with disclosures and choices may need to be modified as new business models continue to emerge, (FTC) staff believes that providing notice and choice remains important, as potential privacy and security risks may be heightened due to the pervasiveness of data collection inherent in the IoT. Notice and choice is particularly important when sensitive data is collected.

Staff believes that providing consumers with the ability to make informed choices remains practicable in the IoT;

Staff acknowledges the practical difficulty of providing choice when there is no consumer interface, and recognizes that there is no one-size-fits-all approach. Some options are enumerated in the report – several of which were discussed by workshop participants: choices at point of sale, tutorials, codes on the device, choices during set-up.

No need for IoT specific legislation, but general data security and data privacy legislation much needed

Staff does not believe that the privacy and security risks, though real, need to be addressed through IoT-specific legislation at this time;

However, while IoT specific-legislation is not needed, the workshop provided further evidence that Congress should enact general data security legislation;

General technology-neutral data security legislation should protect against unauthorized access to both personal information and device functionality itself;

General privacy legislation that provides for greater transparency and choices could help both consumers and businesses by promoting trust in the burgeoning IoT marketplace; In addition, as demonstrated at the workshop, general privacy legislation could ensure that consumers’ data is protected, regardless of who is asking for it.

The organizers of CPDP 2015 have made some of the panels from this year’s conference, which took place last week in Brussels, available on their YouTube channel. This is a wonderful gift for people who weren’t able to attend CPDP this year (like myself). So a big thank you for that!

While all of them seem interesting, I especially recommend the “EU-US interface: Is it possible?” panel. My bet is that the dichotomy between the EU and US privacy legal regimes, and the debates surrounding it, will set the framework of tomorrow’s global protection of private life.

Exactly one year ago I wrote a four-page research proposal for a post-doc position with the title “Finding Neverland: The common ground of the legal systems of privacy protection in the European Union and the United States”. A very brave idea, to say the least, in a scholarly environment which still widely accepts Whitman’s liberty-versus-dignity account as a fundamental “rift” between the American and European privacy cultures.

The idea I wanted to develop was to stop looking at what seem to be fundamental differences and to start searching for a common ground from which to build new understandings of protecting private life, accepted by both systems.

While it is true that, for instance, a socket in Europe is not the same as a socket in the US (as a traveller between the two continents, I am well aware of that), fundamental human values do not change while crossing the ocean. Ultimately, I can turn the socket into a metaphor and say that even if the continents use two very different sockets, the function of those sockets is the same – they are a means to provide energy so that one’s electronic equipment works. So what is this “energy” of the legal regimes that protect private life in Europe and in the US?

My hunch is that this common ground is “free will”, and I have a bit of Hegel’s philosophy to back this idea. My research proposal was rejected (in fact, by the institute which, one year later, organized this panel at CPDP 2015 on the EU-US interface in privacy law). But, who knows? One day I may be able to pursue this idea and make it useful somehow for regulators that will have to find this common ground in the end.

You will discover in this panel some interesting ideas. Margot Kaminski (The Ohio State University Moritz College of Law) brings up the fact that free speech is not absolute in the US constitutional system – “copyright protection can win over the first amendment” she says. This argument is important in the free speech vs privacy debate in the US, because it shows that free speech is not “unbeatable”. It could be a starting point, among others, in finding some common ground.

Pierluigi Perri (University of Milan) and David Thaw (University of Pittsburgh) seem to be the ones who focus most on the common ground of the two legal regimes. They say that, even if it seems that one system is more preoccupied with state intrusions into private life and the other with corporate intrusions, both systems share a “feared outcome – the chilling effect on action and speech” of these intrusions. They propose a “supervised market based regulation” model.

Dennis Hirsch (Capital University Law School) speaks about the need for global privacy rules, or something approximating them, “because data moves so dynamically in so many different ways today and it does not respect borders”. (I happen to agree with this statement – more details, here). Dennis argues in favour of sectoral co-regulation, that is, regulation by government and industry together, to be applied in each sector.

Other contributions are made by Joris van Hoboken, University of Amsterdam/New York University (NL/US) and Eduardo Ustaran, Hogan Lovells International (UK).

The panel is chaired by Frederik Zuiderveen Borgesius, University of Amsterdam and organised by Information Society Project at Yale Law School.

This is the paper I presented at the Harvard Institute for Global Law and Policy 5th Conference, on June 3-4, 2013. I decided to make it available open access on SSRN. I hope you will enjoy it, and I will be very pleased if any readers provide comments and ideas. The main argument of the paper is that we need global solutions for regulating cloud computing. It begins with a theoretical overview of global governance, internet governance and the territorial scope of laws, and it ends with three probable solutions for global rules envisaging the cloud. Among them, I propose the creation of a “Lex Nubia” (those of you who know Latin will know why 😉 ). My main concern, of course, relates to privacy and data protection in the cloud, but that is not the only concern I deal with in the paper.

Abstract:

The adjective most commonly used for cloud computing is “ubiquitous”. This characteristic poses great challenges for law, which might find itself needing to revise its fundamentals. Regulating a “model” of “ubiquitous network access” which relates to “a shared pool of computing resources” (the NIST definition of cloud computing) is perhaps the most challenging task for regulators worldwide since the appearance of the computer, both procedurally and substantially. Procedurally, because it significantly challenges concepts such as the “territorial scope of the law” – what need is there for a territorial scope of a law when regulating a structure which is designed to be “abstracted”, in the sense that nobody knows “where things physically reside”? Substantially, because the legal implications of cloud computing services are complex and cannot be encompassed by one single branch of law, such as data protection law or competition law. This paper contextualizes the idea of a global legal regime for providing cloud computing services, on the one hand by referring to the wider context of global governance and, on the other hand, by pointing out several solutions for such a regime to emerge.

The paper received the “Junior Scholar Award 2014”. “The junior scholar award is a new award at CPDP which is generously supported by Google. The winning paper is selected from the papers written by junior scholars who have already been selected from the general CPDP call for papers. The jury consists of: Ronald Leenes, University of Tilburg (NL), Bert-Jaap Koops, University of Tilburg (NL), Jess Hemerly, Google (US), Mariachiara Tallachini, EC-JRC (IT) and Chris Jay Hoofnagle, UC Berkeley (US). The award recognises outstanding work in the field of privacy and data protection”.

This is an incredible honor! Thank you, CPDP!

***

I will present the paper “Tracing the right to be forgotten in the short history of data protection law: The ‘new clothes’ of an old right” at the Computers, Privacy and Data Protection conference, next week in Brussels. I am scheduled on Wednesday, 22 January, from 15.30, at La Maison des Arts, within the “Academic/PhD session. The right to be forgotten”.

The session will be chaired by Bert-Jaap Koops, from Tilburg University (TILT).

The other papers from the session are:

Ten Reasons Why the ‘Right to be Forgotten’ should be Forgotten by Christiana Markou.

Information Privacy and the “Right to be Forgotten”: An Exploratory Survey of Public Opinion and Attitudes by Clare Doherty and Michael Lang.

Purpose Limitation and Fair Re-use by Merel Koning.

As for my paper, here you have its abstract:

When the European Commission (EC) published its draft Data Protection Regulation (DPR) in early 2012, a swirl of concern hit data controllers regarding the introduction of a sophisticated “right to be forgotten” in the proposal, which was considered to have an unprecedented impact on the internet and its economics. Critics and advocates of the right to be forgotten engaged in sustained theoretical debates, accompanied by a technical discourse about its (un)feasibility. This paper “decomposes” the right to be forgotten into the tangible prerogatives which are in fact granted to individuals. It shows that those prerogatives already exist to a large degree in EU law, and have existed since the first data protection laws enforced in Europe. In addition, the controversial obligation to inform third parties about an erasure request is a “duty of best efforts” which pertains to controllers and which is significantly different from a duty to achieve a result. Recourse will be made to private law theory to underline this difference.

Keywords: the right to be forgotten, data protection, privacy, duty of best efforts.

For further information on CPDP 2014, check out the conference web page. It looks like it will be a tremendous get-together of privacy people.

“The data subject” is the “titulaire” (fr.) of the subjective right to the protection of personal data, being identified as such by the law transposing the Data Protection Directive (Directive 95/46) into the Romanian legal system. This study aims to analyze the conditions under which a person can enjoy the system of protection of her personal data. Hence, it will tackle the problem of the “quality” of the data subject – can the data subject ever be a legal person, or must it always be a natural person? It will also analyze the concepts of anonymization and pseudonymization, having regard to both the national and European legal provisions, as well as to the EU data protection reform package. The conclusions will show, on the one hand, that a legal person can have its private data protected only in exceptional situations and only in certain fields, and on the other hand that pseudonymization has the potential to meet both the need for protection of the individual in the digital era and the interests of data controllers. For this to happen, pseudonymization must be rationally regulated in the future European data protection law, which is currently under debate.

Data portability — the ability to move your information between clouds (or in and out of clouds) with relative ease — is a key concern of companies considering a cloud move.

It’s become a truism to say that data is the new gold – but that doesn’t mean there are easy answers about where to store this gold. For now, many corporate customers will hold back on full cloud computing adoption until they’re convinced that they can move their data off a given cloud as easily as they put it there in the first place. Face it: fear of vendor lock-in is not limited to the on-premises IT world, and it’s time enlightened vendors got this problem in hand.

The advent of cloud computing should make it easy to mix and match services from multiple vendors within a cloud and to let data flow in and out of parts of the clouds as needed. But that’s not necessarily the reality now.

“When you move to cloud, you should be increasing your choices, not decreasing them. You don’t buy three on-premises apps but you can use three services from three vendors in the cloud,” said Robert Jenkins, co-founder and CTO of Cloud Sigma, the Zurich-based cloud provider.

Bill Gerhardt, director of Cisco Systems’ internet solutions group’s service provider practice, agreed. “We need to sort out data portability. Customers ask: ‘If I give you all this data, how do I retrieve that data if I want to go somewhere else?’ Many cloud companies don’t have a clear exit route.”

For the opinion that the right to data portability, in reality, hampers competition, see Peter P. Swire and Yanni Lagos, Why the Right to Data Portability Likely Reduces Consumer Welfare: Antitrust and Privacy Critique, available HERE.

For the opinion that the right to data portability adds value both to privacy and competition, see G. Zanfir, The right to data portability in the context of the EU data protection reform, abstract available HERE, full text upon access, HERE.

Big Data promises a great deal: it is inscribed with the potential to transform science, governance, business and society as a whole. But the quest for predictability at the core of Big Data also raises many questions related to determinism, discrimination, manipulation and conformism, to cite a few.

This panel will address the following issues:

What are the main benefits and risks associated with Big Data and the integration of large, diverse datasets?

What critical social, moral and legal problems are raised by Big Data and what could be the way forward to minimize the risks while not compromising the benefits?

Is the current philosophy of data protection in Europe compatible with Big Data or is it deeply called into question?

Under the proposed European Data Protection Regulation, the data protection authorities of Member States are expected to co-operate with each other and with the Commission, and to achieve consistency in their activities. What are the prospects for this, and what has been their previous experience with joint activities and mutual assistance? The speakers in this panel are well-qualified to consider these and related questions in this subject, which is of great importance to the future of data protection.

Earlier studies, including special Eurobarometer surveys and the report of the European Union Agency for Fundamental Rights (FRA) on data protection authorities, highlighted that redress mechanisms in the area of data protection are available, but little used. FRA will undertake legal and social fieldwork research on Member States’ redress mechanisms in this area, to offer insights into the reasons why the available mechanisms are so little used.

The panel will address the following issues:

usage of redress mechanisms in the area of data protection in the EU Member States;

barriers and incentives for using and applying particular redress mechanisms;

observations on the need to improve the accessibility and effectiveness of redress mechanisms.

The panel aims to present an overview of the main issues on the horizon for ICT firms, given global developments – legislative and regulatory ones, such as the review of the EU Data Protection Directive, as well as market innovations and growing complexities.

16.45 Data Protection Legislation and Start-Up Companies

hosted by Erik Valgaeren (Stibbe)

panel Yves Baudechon, Social Lab Group (BE), Harri Koponen, Rovio (FI), speakers from start-ups, a Member of the European Parliament [tbc]

The European internet start-up economy is growing fast. Businesses are starting and growing across the European Union, led by creative, driven people. More often than not, however, innovation relies on the processing of personal data. While there is no doubt that it is inspiring to see this level of impact in Europe, innovation needs the right regulatory environment in order to thrive. This panel seeks to dissect the apparent contradiction between the perception of bureaucratic burden, an increasingly citizen-focused data protection framework, and the need for innovation-friendly regulation.

How can one offer legally admissible evidence for taking down botnets while remaining in compliance with the EU data protection regulatory framework, which imposes stringent safeguards on the confidentiality of personal communications and their related traffic data? Through an interactive discussion between the speakers and the participants, innovative solutions for detecting, measuring, analysing, mitigating and eliminating botnets, taking into account the principle of privacy by design, will be presented and debated.

13.00 Lunch

14.00 Onlife Manifesto – Being Human and Making Society in the Digital Age: Privacy in Light of Hannah Arendt

panel Charles Ess, University of Oslo (NO), Luciano Floridi, University of Hertfordshire & University of Oxford (UK), Claire Lobet-Maris, University of Namur – CRIDS (BE)

For Hannah Arendt, politics emerge from the plurality and the public space is the space lying between us, where each of us can experience freedom. “While all aspects of the human condition are somehow related to politics, this plurality is specifically the condition – not only the conditio sine qua non, but the conditio per quam – of all political life” (The Human Condition).

This panel will focus on what matters for the public space, and in particular:

the questions raised by the computing era and the current regulation of privacy;

the means needed to reinvigorate the sense of plurality;

the responses of the “Onlife Manifesto” produced by an interdisciplinary group of experts.

The speakers are members of a scientific group leading a conceptual work called the “Onlife Initiative”. This initiative is part of the Digital Futures project, initiated by the DG Connect: Nicole Dewandre – DG Connect – European Commission (EU).

15.00 Coffee break

15.30 Surveillance and Criminal Law

co-organised by EU PF7 project IRISS and CPDP

hosted by Antonella Galetta (VUB) & Gary T. Marx (MIT)

panel Eric Metcalfe, Monkton Chambers, Representative from the NGO Liberty [tbc] (UK), Representative from the company Omniperception [tbc] (UK), John Vervaele, University of Utrecht (NL).

This panel will look at how surveillance systems are operated in our everyday life and for law enforcement purposes. In particular, it will focus on the presumption of innocence and the impact of surveillance on fundamental rights. It will deal with these issues broadly, as well as looking at the more specific contexts of the use and deployment of surveillance measures in pre-trial and post-trial settings. Specific surveillance technologies and practices will be examined, such as CCTV and electronic monitoring systems. These issues will be dealt with from a comparative perspective, considering the European and US experiences.

The main topics of discussion will be:

What are the effects of surveillance on the presumption of innocence?

How are the impacts of surveillance on the presumption of innocence countered by legislation and case law?

How are surveillance systems deployed in prisons and within the criminal justice system?

Surveillance and democracy are intimately intertwined. Every modern polity has developed an elaborate system for identifying constituencies and monitoring and controlling the population. Modern surveillance practices as a means of governance are introduced with a double justification. On the one hand a surveillance regime is required for the distribution of entitlements such as social welfare payments, providing the data for social planning and the allocation of resources. On the other hand, large-scale surveillance is supposed to help identify predators, criminals and terrorists. As well as this, the private sector plays an increasing role, both as complicit in state surveillance, and by forming its own nucleus of surveillance.

The panel will address the following issues in particular:

The non-reciprocal nature of visibility in contemporary “democratic” surveillance societies. Should the watchers be as transparent as the citizens they surveil?

Is the democratization of surveillance technologies possible? If yes, to what extent?

Why is the legitimacy and social cost of surveillance technologies so often overlooked?

Social data are at the heart of the idea of a knowledge society, where decisions can be taken on the basis of the knowledge in these data. Mining technologies enable the extraction of profiles useful for screening people when searching for those with a certain behavior. Profiles are useful in many contexts, from criminal investigation to marketing, from genetic screening to website personalization. Profiles can help categorize people on the basis of their personal and intimate information. Unfortunately, this categorization may lead to unfair discrimination against protected groups. It is obvious that discrimination jeopardizes trust; therefore, inscribing non-discrimination into knowledge discovery technology by design is becoming indispensable.

This panel aims to examine the concept of privacy by design from several perspectives. Indeed, most current engineering approaches consider security only at the technological level, failing to capture the high-level requirements of trust or privacy. We discuss privacy-enhancing mechanisms for future internet services, in particular for mobile devices.

One of the most fascinating challenges of our time is understanding the complexity of the global interconnected society. Big data, originating from the digital breadcrumbs of human activities, promise to let us scrutinize the ground truth of individual and collective behavior. However, the big data revolution is in its infancy, and there are many barriers to setting the power of big data free for social mining, so that scientists, and in prospect everybody, can access the knowledge opportunities. One of the most important barriers is the right of each individual to privacy, i.e., the right to protect one’s own personal sphere against uncontrolled intrusions. The key question is: how can the right to access collective knowledge and the right to individual privacy co-exist?

Social media seem to challenge users’ privacy, as the platforms value openness, connecting and sharing with others. They have become the focal point for privacy discussions, as EU regulation and consumer organisations advocate privacy.

In a world of ever-increasing connectivity, online advertising has become a key source of income for a wide range of online services, and a crucial factor in the growth and expansion of the internet economy. For digital advertising to continue to grow, it needs the right set of rules. To create growth, trust in emerging technologies also needs to be encouraged, so that consumers feel comfortable using them. How much privacy do users expect on platforms designed to share information?

In this debate we wish to dissect the privacy definitions proposed by social media platforms, advertisers and consumer representatives. We especially wish to focus on the current trade-off made on all social media; users are offered free access to social media, but in the end the advertisers pay through advertising for their free lunch.

In this context, solutions that offer users online anonymity, taking them around the current web 2.0 business models, are also explained. How can the ecosystem work and be sustainable? We want to discuss whether a perfect fit exists for the three stakeholders: users, social media platforms and advertisers.
