Saturday, 28 January 2012

Another chapter of the long-running Phorm saga seems to have come to a close, with the announcement by the European Commission that it has closed the infringement case against the UK over its implementation of rules on privacy in electronic communications. In order to get this closure, the UK had, in the words of the Commission press release:

'amended its national legislation so as not to allow interception of users' electronic communications without their explicit consent, and established an additional sanction and supervisory mechanism to deal with breaches of confidentiality in electronic communications.'

This case came about as a result of the big mess that the UK government got into over Phorm - something which I've written about both academically and in blogs on more than one occasion. In essence, the government decided to back Phorm, a business which privacy advocates and others had been telling them from the very beginning was deeply problematic, and that decision backfired pretty spectacularly, leaving a remarkable amount of egg on government faces. The action of the Commission was a direct result of the admirable work of campaigners like Alexander Hanff at Privacy International, drawing on the excellent investigatory analysis by the University of Cambridge Computer Lab's Richard Clayton and the legal work of Nicholas Bohm for the Foundation for Information Policy Research - work that was effectively in direct opposition to the government. This work led to questions to the Commission, upon which the Commission acted, as well as, more directly, to the collapse of the Phorm business model as its business allies deserted it and opposition from the public became clearer and clearer.

Phorm's business model was particularly pernicious from a privacy perspective. They took behavioural advertising (which is problematic in most of its forms) to an extreme, monitoring people's entire browsing behaviour by intercepting each and every click made as they browsed, in order to build up a profile which was then used to target advertising. All this without real consent from the users, or at least so it appeared, and indeed without the consent of the owners of the websites for which those intercepted requests were intended. As a model it appeared to break not only laws but people's deeply held expectations about surveillance - Orwellian in the extreme. It failed here - thanks to the resistance noted above - and has since failed again in South Korea, and appears to be failing in Romania (about which I've blogged before) and Brazil, the three places that Phorm's backers have tried it since. In each case, it looks as though people's resistance has been a key factor.

There are lessons to learn for all concerned:

1) Those of us advocating and campaigning for privacy can take a good deal of heart from the whole affair - essentially, we won, stopping the pernicious Phorm business model and forcing the UK government not just to back down but to change the law in ways that, ultimately, are more 'privacy-friendly'. 'People power' proved too strong for both business and government forces in this case - and it may be possible again. We certainly shouldn't give up!

2) Businesses need to take note: privacy-invasive business models will face opposition, and that opposition is more powerful than you might imagine. From the perspective of the symbiotic web (my underlying theory, more about which can be found here), if a privacy-invasive model is to succeed, it must give something back to those whose privacy is invaded, something of sufficient value to compensate for the privacy that is either lost or compromised. In Phorm's case, there was very little benefit to the people being monitored - the benefit was all for Phorm or Phorm's advertising partners. That sort of model isn't going to succeed nearly as easily as businesses might think - people will fight, and fight well! Businesses would do better to build more privacy-friendly models from the outset...

3) Governments need to understand the needs and abilities of the people - as well as the needs of businesses and business lobby groups. People are getting more and more aware and more and more able to articulate their needs and make their views known - and to wield powers beyond the understanding of most governments. The recent resistance to SOPA and PIPA in the US is perhaps another example - though the fact that people's interests coincided with those of internet powerhouses like Wikipedia and Google may have been even more important.

This last point is perhaps the most important. Governments all over the world seem to be massively underestimating the influence and power of people, particularly people on the internet. People will fight for what they want - and, more often than governments realise, they will find ways to win those fights. There needs to be a significant shift in the attitude of those governments if we are not to have more conflicts of the sort that caused such a mess over Phorm. There are more conflicts already on the horizon - from the judicial review of the Digital Economy Act to the shady agreement that is ACTA. There will be a lot of mess, I suspect, much of which could be avoided if 'authorities' understood what we wanted a bit more. The people of the net are starting to get mad, and they're not going to take it anymore.

Thursday, 26 January 2012

As anyone who pays attention to the world of data - and data privacy in particular - cannot help but be aware, those crazy Europeans are pushing some more of their mad data protection laws (a good summary of which can be found here) including the clearly completely insane 'right to be forgotten'. Reactions have been pretty varied in Europe, but in the US they seem to have been pretty consistent, and can largely be boiled down to two points:

1) These Europeans are crazy!
2) This will all be a huge imposition on business - No fair!!!

There have been a fair few similar reactions in the UK too, and there will probably be more once the more rabidly anti-European parts of the popular press actually notice what's going on. As I've blogged before, the likes of Ken Clarke have already spoken up against this kind of thing.

So I think we need to ask ourselves one question: why ARE these crazy Europeans doing all this mad stuff?

Well, to be frank, the Internet 'industry' has only got itself to blame. This is an industry that has developed the surreptitious gathering of people's personal data into an art form, yet an industry that can't keep its data safe from hackers and won't keep it safe from government agencies. This is an industry that tracks our every move on the web and gets stroppy if we want to know when it's happening and why. This is an industry that makes privacy policies ridiculously hard to read whilst at the same time working brilliantly on making other aspects of their services more and more user-friendly. Why not do the same for the privacy settings? This is an industry that makes account deletion close to impossible (yes, I'm talking to you, Facebook) and pulls out all the stops to keep us 'logged in' at all times. This is an industry that tells us that WE should be completely transparent while remaining as obscure and opaque as possible themselves. This is an industry that often seems to regard privacy as just a little problem that needs to be sidestepped - or something that is 'no longer a social norm' (and yes, I'm talking to you, Facebook again).....

So.... If the internet 'industry', particularly in the US, doesn't want this kind of regulation, this kind of 'interference' with its business models, the answer's actually really simple: build better business models, models that respect people's privacy! Stop riding rough-shod over what we, particularly in Europe, but certainly in the US too, care deeply about. Use your brilliance in both business and technology to find a better way, rather than just moaning that we're interfering with what you want to do. When fighting against SOPA and PIPA (and I hope ACTA too in the near future), most of the industry championed the people admirably - perhaps because the people's interests coincided with their own. In privacy, the same is actually true, however much it may seem the other way around. In the end, the internet industry will be better off if it takes privacy seriously.

Regulation doesn't happen just because a bunch of faceless Belgian bureaucrats have too much power and too little to do - it happens when there's a real problem to solve. Oh, they may well go over the top, they may well use crude regulatory sledgehammers where delicate rapiers would do the job better, but they do at least try, which seems more than much of the industry does...

So don't blame the crazy Europeans. Take a closer look in the mirror...

Wednesday, 25 January 2012

Privacy is pretty constantly in the news at the moment. People like me can hardly take their eye off the news for a moment. This morning I was trying to do three things at once: follow David Allen Green's evidence at the Leveson inquiry (where amongst other things he was talking about the NightJack story, which has significant privacy implications), listen to Viviane Reding talking about the new reforms to the data protection regime in Europe, and discover what was going on in the emerging story of O2's apparent sending of people's mobile numbers to websites visited via their mobile phones....

Big issues... and lots of media coverage... and lots of opportunities for academics, advocates of one position or other, technical experts and so forth to write/talk/tweet/blog etc on the subject. And many of us are taking the opportunity to say our bit, as we like to do. A good thing? Yes, in general - because perhaps the biggest change I've seen over the years I've been researching into the field is that the debate is wider, bringing in more people and more subjects, and getting more public attention - which must, overall, be a good thing. The more the issues are debated and thought about, the more chance there is that we can get better understanding, some sort of consensus, and find better solutions. And yet there are dangers attached to the process - because as well as the people who have valuable things to say and good, strong ethical positions to support their case, there are others with much more questionable agendas, often hidden, who would like to use others for their own purposes. Advocates, academics and experts need to guard against being used by others with very different motives.

There are particular examples happening right now. One subject that particularly interests me, about which I've blogged and written many times before, is the right to be forgotten. Viviane Reding has talked about it in the last few days - and there have been reactions in both directions. Both sides, it seems to me, need to be wary of being used in ways that they don't intend:

i) Those who oppose a ‘right to be forgotten’/’right to delete’ need to be careful that they’re not being used as ‘cover’ for those whose business models depend on the holding and using of personal data. The right to delete is a threat to their business models, and they can (and probably will) use all the tools at their disposal to oppose it, including using 'experts' and academics. The valid concerns about censorship/free expression aren't what those people care about - they want to be able to continue to use people's personal data to make money. Advocates for free expression etc need to be careful that they're not being used in that kind of way.

ii) Conversely, those who (like me) advocate for a ‘right to be forgotten’/’right to delete’ need to be careful that they’re not being used by those who wish to censor and control - because there IS a danger that a poorly written and executed right to be forgotten could be set up in that kind of way. I don't believe that's what's intended by the current version, nor do I believe that this is how it would or could be used, but it's certainly possible, and people on 'my' side of the argument need to be vigilant that it doesn't go that way.

Similar arguments can be used in other fields - for example about the question of the right to anonymity. Those who (like me) espouse a right to anonymity need to be careful about not providing unfettered opportunities for those who wish to bully, to defame etc., while those who support the reverse – an internet with real name/identification systems throughout, to control access to age-sensitive sites, to deal with copyright infringement etc – need to be very careful not to be used as an excuse for setting up systems which allow control and ultimately oppression.

So what does this all mean? Should academics and other 'experts' simply keep out of the blogosphere and the media, and leave their musings for academic journals and unreadable books? Certainly not - but we do need to be a little more thoughtful about the agendas of those who might use us, who might misquote us, who might take us out of context and so forth. I suspect that this might have been what happened to Vint Cerf when he wrote a short while ago suggesting that internet access was not a human right. Others might well have been trying to use him... as they might well try to use any of those who write in this kind of a field. However clever we might think we are, we're very often pawns in the game, not players.

Thursday, 19 January 2012

Earlier today, Eastman Kodak filed for Chapter 11 Bankruptcy protection. It might well signal the end for a company which was perhaps the single most important player in an industry that revolutionised the world in many ways: the photographic industry. Kodak has been in existence for 131 years, and in that time the world has changed dramatically in many ways - but perhaps not in as many ways as we might think. Kodak was crucial in the history of photography - but it was also crucial in the history of privacy.

Back in the late 19th century, when Kodak introduced the first hand-held camera, that new technology scared a lot of people - and inspired a whole new phase in the legal understanding of privacy. Amongst those alarmed by it were young lawyers Samuel Warren and Louis Brandeis - who went on to write a seminal piece for the Harvard Law Review: "The Right to Privacy". It was a remarkable piece of work and set into motion a train of legal thought that is still chuffing away to this very day. I remember when I first read it I assumed the date was a misprint: 1890. Surely that must mean 1980? Here's an extract:

“The intensity and complexity of life, attendant upon advancing civilization, have rendered necessary some retreat from the world, and man, under the refining influence of culture, has become more sensitive to publicity so that solitude and privacy have become more essential to the individual; but modern enterprise and invention have, through invasion upon his privacy, subjected him to mental pain and distress, far greater than could be inflicted by mere bodily injury.”

The same debate rages now - and the 'enterprise and invention' that was 'modern' in 1890 is every bit as prevalent now. Have things really changed? Are the attacks on privacy a 'modern' crisis in the 21st century - or are things just the same as they ever were? Here's some more of Warren and Brandeis:

"Gossip is no longer the resource of the idle and the vicious, but has become a trade, which is pursued with industry as well as effrontery. To satisfy a prurient taste the details of sexual relations are spread broadcast in the columns of the daily papers. To occupy the indolent, column upon column is filled with idle gossip, which can only be procured by intrusion upon the domestic circle."

Lord Justice Leveson might well say something very similar when his inquiry into the culture, ethics and practice of the press comes to its conclusion. Phone hacking may be the latest form of 'intrusion upon the domestic circle' but in many ways it's not that different from the tactics that have been used by the press (and others) for well over a century, as Warren and Brandeis made very clear.

So has much changed? Or is this all just human nature, something we simply need to 'grin and bear'? Has the technological development of the last 120+ years had a significant effect? Here's a little more of Warren and Brandeis:

"Even gossip apparently harmless, when widely and persistently circulated, is potent for evil."

The internet, by its very nature, gives a far greater opportunity for wide and persistent circulation of gossip - but once again, it's not qualitatively different from what Warren and Brandeis were concerned about. The tools are more efficient, the mechanisms more generally available, and the scale larger, but isn't it the same problem, just writ a bit larger? The other side of the coin, however, is also, in my opinion, true. Privacy isn't a problem that's going away - and it's not, despite the suggestions of the likes of Mark Zuckerberg, something that's no longer a social norm. The way in which Warren and Brandeis's piece, written more than 120 years ago, seems to fit so well with current practices and current concerns suggests precisely the opposite. Privacy is still an issue - and it will in all likelihood remain an issue forever. They were right to be concerned about it - and right, in my opinion, that we have a right to privacy. We had it then, and we have it now - not an absolute right, not a right that overrides other competing rights such as freedom of expression, but a right that needs to be considered, and needs to be fought for. That fight will go on... as it always has.

Thursday, 12 January 2012

With apologies to William Shakespeare, Elizabeth Barrett Browning, Heath Ledger, Julia Stiles and many more…

10 things I hate about the ICO

I hate the way you ask for teeth but seem afraid to bite
I hate the way you think the press are far too big to fight
I hate the way you always think that business matters most
Leaving all our online rights, our privacy, as toast

I hate the way you keep your fines for councils and their kind
While leaving business all alone, in case the poor dears mind
I hate the way you take the rules that Europe writes quite well
And turn them into nothing much, as far as we can tell

I hate the way that your advice on cookies was so vague
Could it possibly have been, you were a touch afraid?
I hate the way you talked so tough to old ACS Law
But when it came to action, it didn’t hurt for sure

I hate the way it always seems that others take the fore
While you sit back and wait until the interest is no more
I hate that your investigations all stop far too soon
As PlusNet, Google and BT have all found to their boon

I hate the way you tried your best to hide your own report
‘Bury it on a busy day’; a desperate resort!
You should be open, clear and fair, not secretive and poor
We’ll hold you up for all to see – we expect so much more!

I hated how when Google’s cars were taking all our stuff
You hardly seemed to care at all – that wasn’t near’ enough
Even when you knew the truth, you knew not what to do
It took the likes of good PI to show you where to go…

I hated how my bugbears Phorm, didn’t get condemned
Even when their every deed could not help but offend
You let them off with gentle words, ‘must try harder’ you just said
Some of us, who cared a lot, almost wished you dead

You tease us, tempt us, give us hope – then let us down so flat
We think you’re on our side – you’re not – and maybe that is that!
Will all these bad things ever change? We can but hope and dream
That matters at the ICO aren’t quite as they might seem.

We need you, dearest ICO, far more than we should
We’d love you if you only tried to do the job you could
We’d love you if you stood up tall, and faced our common foes
Until you do, sad though it is, then hatred’s how it goes.

P.S. I don’t really hate the ICO at all really.... this is 'poetic' licence!

Wednesday, 11 January 2012

It isn’t often that I find myself disagreeing with something that Vint Cerf, one of the ‘fathers of the internet’, has said, but when I read his much publicised Op Ed piece in the New York Times, I did.

First of all, and perhaps most importantly, I didn’t like the headline, which stated baldly and boldly that ‘Internet Access is not a Human Right’. Regardless of whether you agree or disagree with that statement, the piece said a great deal more than that – indeed, the main thrust of the argument was about the importance of the internet, and of internet access, to human rights. Many people will have just read the headline – or even read the many tweets which stated just that headline and a link – and drawn conclusions very different to those which Cerf might like. The headline, of course, may well have been the choice of the editorial team at the New York Times, rather than Cerf himself, but either he was OK with it or he allowed himself to be led in a particular direction.

Secondly, I think the point that he makes leading to this headline, and to his conclusions, reflects a particularly US perspective on 'human rights' - a minimalist approach which emphasises civil and political rights and downplays (or even denies) economic and social rights amongst others. Most of the rest of the world takes a broader view of human rights: the International Covenant on Economic, Social and Cultural Rights was introduced in 1966, and has been ratified by the vast majority of the members of the UN – but not by the US. The covenant includes such rights as the right to work, the right to social security, rights to family life, right to health, to education and so forth - and it isn't too much of a stretch to see that right to internet access might fit within this spectrum.

That Cerf doesn't see it this way is not surprising given that he is American - but I think his argument is weaker than that. In the piece, Cerf gives the example of a man not having a right to a horse. He talks about how a horse was at one time crucial to ‘make a living’, which means that the ‘human right’ isn’t a right to have a horse, but a right to ‘make a living’. However, even that’s based on assumptions to do with our time and system. Do you ‘need’ to ‘make a living’ if your society isn’t based on capitalism? Non-capitalist societies have existed in the past - and indeed exist on small scales in various places around the world today. Can we really assume that they will never exist in the future? It is a bold assumption to make - but not, I think, one that needs to be made.

We need to be very careful about the assumptions we make about any human right – and recognise that, in practice, many of the things we consider to be human rights are instrumental, qualified, or contextual rather than absolute, pure and simple. Another example from the legal field: do we have a ‘right to a fair trial’ – or a right to justice? Trial by jury may be the best way we know now of assuring justice, but might there not be other ways?

What does this mean? Well, primarily, to me, it means we need to be less 'purist' about the terms we use, and more pragmatic - and to understand that we live in a particular time, where particular things matter. Moreover, the language currently used in most parts of the world is one in which the term 'human right' has power - and we should not be afraid to use that power. Right now, to flourish in a 'free', developed society, internet access is crucial. Perhaps even more to the point, internet access has shown itself to have a potential for liberation even in places less 'free' and less 'developed'. I'm not a cyber-utopian - and I fully acknowledge the strengths of Morozov's arguments about the potential of the internet for control as much as for liberation - but for me that actually makes it even more important that we look at the internet from a rights perspective: if we have a right to internet access then it's much easier to argue that we have rights (such as privacy rights) while we use the internet, and those rights are critical for supporting the more liberating aspects of the internet.

That's another thing that disappoints me about Cerf's Op Ed piece. He doesn’t mention privacy, he doesn’t mention freedom from censorship, he doesn’t mention freedom from surveillance – I wish he would, because next after access these are the crucial enablers to human rights, to use his terms. I’d put it in stronger terms myself. I’d say we have rights to privacy online, rights to freedom from censorship, and rights to freedom from surveillance. If you don’t want to call them human rights, that’s fine by me – but right now, right here, in the world that we live in, we need these rights. The fact that we need them means that we should claim them, and that governments, businesses and yes, engineers, should be doing what they can to ensure that we get them.

Finally, going back to the headline itself, I think Cerf and other seminal figures in the history and development of the internet have got to be careful not to let themselves be used by those who'd like to restrict internet access and freedom: there are others with very dubious agendas who would like to push the 'internet access is not a human right' point. When one of the fathers of the internet writes that internet access is not a human right, regardless of the details below, there is a significant chance that it will be latched onto by those who would like to restrict our freedoms, whether to enforce copyright, to 'fight' terrorism or online crime, or for other purposes. That is something that we should be careful to avoid.

ADDENDUM (15/1/2012)

There have been a number of other interesting blogs/responses on the subject. Here are links to a few of them:

Thursday, 5 January 2012

I have to admit to following the Republican party's presidential candidate race with some fascination. It's a slightly ghoulish fascination - there's often a touch of fear when I listen to some of the candidates, and there's always the underlying question of 'how low can they go'. There's comedy, tragedy, a bit of historical eccentricity, and often a good deal of farce. It's also, however, revealing of some of the issues that we should take seriously in terms of how our politics, our democratic politics, functions - and in particular, how it might function in the future.

One particular aspect came to the fore for me in the recent Iowa Caucus: the role of advertising in politics. We haven't developed it to nearly the same degree in the UK as in the US, though every successful politician this side of the pond has tried to follow Thatcher's hugely effective use of Saatchi & Saatchi. In the US, though, it's a highly developed art form - and is only likely to become more so. In Iowa, an orchestrated advertising campaign against the surging Newt Gingrich sent him down from first to fourth place (and nearly out of the race) in a matter of days. Advertising works, or at least appears to - and politicians know it, and know it well.

What might this mean for the future? I've written about advertising many times before, both in academic papers and in blogs. The internet is changing advertising - and we need to be aware of how that change might have an impact not only on our commercial behaviour but on our political behaviour: on politics itself. There are two trends in internet advertising that are particularly relevant and worth thinking about here: behavioural profiling and personalisation. People browsing the internet can be (and are) profiled according to their online behaviour, from the search terms they use and the links they follow to the friends they have on social media sites, the music they listen to, movies they watch and so forth. That profiling is generally used to target advertising - advertising more suited to their personal needs and desires. My last blog, Privacy and the Phantom Tollbooth, talked about some of the risks of this kind of thing - but when looked at from a political perspective the risks are even more sinister.

Through profiling, it is possible to make good guesses - sometimes very good guesses - as to which political issues matter to someone and which ones don't. With just a little bit of work, the vast majority of which could be entirely automatic, it could become possible to create tailored political advertisements designed to highlight the policies or features of a particular candidate or party that are of specific interest to an individual - and to omit anything that might detract from their attraction. And, given the US experience in particular, to do the reverse for any opponents - automatically pick out the things that will make a particular voter see them in the most negative light possible.

Taking this a few steps further, these ads could include background music that the advertiser knows that you particularly like, and even voice-overs by an actor that they know you admire - they could even choose the colours, styles and typefaces to suit your 'known' preferences. Of course they wouldn't do this for everyone, at least not at first, but it wouldn't take that much effort to produce a range of options (a handful of different actors, soundtracks etc would do the job) that would cover most of the key, swing voters. Political advertising in its current form is already persuasive - how much more persuasive could it be in this kind of form? And remember that with behavioural targeting in the hands of relatively few advertising organisations, these advertisements can be sent to a vast number of different websites that you visit. They can be sent to you in emails. They can be inserted at the beginnings of videos that you watch online.... the possibilities are endless.
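The kind of profile-driven ad assembly described above can be sketched in a few lines of code. To be clear, this is a purely hypothetical illustration: the profile fields, interest scores, threshold and creative options are all invented for the example, and real behavioural-targeting systems are far more sophisticated - but the underlying logic of 'highlight what this voter likes, omit what they don't' really is this simple.

```python
# Toy sketch of profile-driven political ad assembly.
# All profile fields, scores and creative options are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Profile:
    issues: dict = field(default_factory=dict)       # issue -> inferred interest score (0..1)
    liked_music: list = field(default_factory=list)  # genres inferred from listening history
    admired_voices: list = field(default_factory=list)

def build_ad(profile, candidate_positions):
    """Pick the candidate positions this voter cares most about, and
    creative elements matched to their known tastes; omit the rest."""
    # Highlight only issues that score well for this voter, best first.
    chosen = sorted(
        (i for i in candidate_positions if profile.issues.get(i, 0) > 0.5),
        key=lambda i: profile.issues[i],
        reverse=True,
    )[:3]
    # Fall back to generic creative choices if nothing is known.
    soundtrack = profile.liked_music[0] if profile.liked_music else "generic"
    voiceover = profile.admired_voices[0] if profile.admired_voices else "default"
    return {"issues": chosen, "soundtrack": soundtrack, "voiceover": voiceover}

voter = Profile(
    issues={"economy": 0.9, "privacy": 0.7, "immigration": 0.2},
    liked_music=["folk"],
    admired_voices=["Actor A"],
)
ad = build_ad(voter, ["economy", "privacy", "immigration", "defence"])
print(ad)
```

Note what this voter never sees: the candidate's positions on immigration and defence simply vanish from their version of the ad, while another voter's profile would produce a quite different one - which is exactly what makes the technique so quietly persuasive.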

Is this far fetched? A nightmare scenario beyond the realms of possibility? Spend a little time watching US elections and I don't think you'll feel that way. It's just the logical extension of existing advertising and political trends. It is important to remember, too, that this kind of thing requires money - and money already talks enormously in politics. The power of personalised advertising can very easily become just one more tool in the hands of those who already wield excessive power over the political domain.

What can be done? Well, the first thing is a matter of awareness. The impact of behavioural advertising goes beyond the commercial sphere, and we need to understand this. It's not just a matter of deciding which deodorant or drink we choose - potentially it's about our whole lives. We ignore its importance at our peril - so things like 'do not track' really matter, and the European 'Cookie Directive' should not be dismissed as a legalistic impediment to good business. They may not be perfect tools - indeed, it seems clear that they aren't - but they're being pushed for very good reasons. Tracking on the internet should not be the default, accepted without a thought. The risks are far greater than most people realise.

About Me

Lecturer in Information Technology, Intellectual Property and Media Law at the University of East Anglia - and PhD candidate at the London School of Economics, in the Law Department - with a particular interest in human rights and privacy on the internet.
You can follow me on Twitter: @paulbernalUK