Retaining privacy: the EU commission and the right to be forgotten

Do we have a right to be forgotten? That was the question posed to me by BBC Newsnight in the light of the EU Commission's latest draft framework for data protection policies. EU Commissioner for Justice Viviane Reding stated that "The protection of personal data is a fundamental right", and set out to fix current privacy protection measures in the light of changing technology and globalization. Among other things, users should be able to give informed consent to the use of their personal data, and should have a "right to be forgotten" when their data is no longer needed or when they want it deleted.

The EU Commission may be too fond of inventing new fundamental rights, but the protection of personal data does matter: in a knowledge economy personal reputations are important, and personal data has considerable monetary value.

There is a grand irony here: the Commission wants to give us a fundamental right to control our personal information, yet it has also instituted the Data Retention Directive, which mandates storing information about us.

However, we do not have an unassailable right to have our personal information forgotten. It would obviously be wrong to prevent others from recalling or storing our past misdeeds or mistakes just because it would suit us if they were forgotten. Sometimes it is good to know that your potential business partner is a con man. Morally, we should forgive and forget where appropriate – but only where appropriate.

Data retention can be justified on the grounds that it serves to protect other fundamental rights: my right to privacy is limited by other people's right not to suffer crimes at my hands, just as my freedom of movement is limited by your right not to be hit by me. In practice there are of course serious questions about whether data retention is proportional, effective, necessary, used only for justifiable purposes, and does not itself infringe important rights and social functions.

The fundamental problem with data retention is that it creates imbalances of information power: certain groups hold much information about you, while you have no power over them. To make matters worse, many of these groups and their actions are unknown and invisible to you, and hence hard to challenge. The data protection framework tries to ameliorate this by giving you power over your data and making information about its processing transparent. However, like most EU policies, it rests on the assumption that the relevant actors are formal structures such as governments, corporations and associations. These are certainly the obvious concentrations of power, and they can be regulated by formal rules, but they are not the only ones, or even the most problematic ones. Thanks to search engines and scripting, anybody can amass potentially problematic data, be it people's sexual or political preferences, medical data or surveillance camera footage. As technology advances, the ability to concentrate dispersed information into useful forms spreads into the hands of users everywhere.

The transparency issue is fundamentally about ensuring that the ability to exert data power runs two ways. The proposed framework may certainly make the right noises, but the real issues are of course 1) to what extent the formal rules can actually be implemented, 2) in what domains they will be sidestepped, and 3) how they mesh with real privacy norms, which are social conventions rather than formal rules.

As for actual implementation, there are many reasons to suspect it will be problematic. Who is the data protection officer of an informal community? Who is the data controller of distributed data? Can there be true informed consent when full disclosure of what Facebook does with your information would be a 100-page form ending with a simple "I agree/I disagree" choice – a form updated every time a single advertiser figured out a new marketing trick?

The most interesting aspect is the difference between formal privacy rights and the privacy norms people actually use. There is no rule against watching or reacting to people picking their nose on the subway, yet most societies and people adopt a policy of "not seeing" it. We have complex and fluid forms of privacy in different social settings: in a pub you can strike up conversations at the bar, but not necessarily at a remote table; we are expected to forgive certain youthful mistakes but not others; the data mining done by OkCupid on people's romantic lives is charming and interesting, while other forms of data mining are seen as problematic; in some parts of Europe drawn curtains invite suspicion and interest, while in others they are a request for privacy; in some countries asking about your salary or political opinions is a conversation starter, in others a very personal question. Different groups and generations construct very different forms of identity and, with them, different concepts of privacy. These are linked to shared norms, but in a global information environment we encounter people with fundamentally different norms. Over time privacy norms also change – from the visibility of the small rural community, to the anonymity of the big city, to the mix of the Big Brother era. They change through constant negotiation and learning (http://pleaserobme.com/ is an example of both informal data mining producing a potentially sensitive result, and a social lesson people ought to learn from).

EU privacy regulations will not be able to track the substantive privacy norms people actually construct, since these are too diverse and too changeable. At best they will protect a few key areas and suggest shared aims. At worst they will attempt to formalize many currently fluid and adaptive social processes, destroying their benefits while giving no privacy in return. The best way to avoid that is of course to increase the accountability and transparency of the political process itself – just as we should demand accountability and transparency from anyone else holding temporary or permanent power over our data.