Freedom of expression

The unedifying ‘scuffle’ at Jacob Rees-Mogg’s appearance at the University of the West of England has provoked a great deal of reaction – some of it distinctly over-the-top. Precisely what happened, who started the fight and why, remains a little unclear – and is not the topic of this post. It is Theresa May’s reaction, to suggest a new law to protect MPs against intimidation, that is more interesting for those of us who are interested in freedom of speech – not only in its practice but its purpose.

The need for a new law is at best contentious – there is already plenty of law to deal with threats and intimidation, public order law, law to protect against harassment and much more – and it is entirely possible that nothing will materialise from Theresa May’s pronouncement other than a few headlines in the Daily Mail. The reasons behind the desire for the law, however, reveal a lot about Theresa May and those who share her views. Effectively, though she and they would be very unlikely to use the words, they’re looking for a ‘safe space’ for MPs. This, coming from the same people who have been actively campaigning against ‘safe spaces’ in universities for others, has more than a whiff of hypocrisy about it. It is, however, remarkably familiar. Many – perhaps most – of those who claim to be great champions of free speech are often very keen on protecting the free speech of people like them, or of people who share their views, but far less keen on providing the same protection for those they disagree with.

Safe Spaces can be a good thing

What the supporters of a law to protect MPs from intimidation might understand, if they thought a little further, is that safe spaces can be a good thing. If we want a civilised debate, if we want people not to be intimidated into silence, if we want to encourage those whose voices are rarely heard, then a supportive – or at the very least not threatening – environment really helps. Theresa May understands that for MPs – because she understands MPs, and supports them in that role. That much is easy – making the leap to understand that others need that protection and that safety too seems to be much harder.

Safe Spaces can be a bad thing

On the other hand, if the creation of a ‘safe space’ is to stop particular voices being challenged, it is not so clearly a good thing – and that may well be what happens at times. For debate to function, challenging needs to be possible – banning hecklers and protestors is not always a good thing. Drawing a line is not always easy – as the UWE fracas showed. The initial protest, and indeed Jacob Rees-Mogg’s first response to it, seemed relatively civilised and harmless. Protest is a critical part of freedom of speech – the vehemence with which authoritarian regimes deal with it should at least give pause for thought. The idea that Donald Trump might only visit the UK if Theresa May stops protests is not something we should accept, for example.

Safe Spaces for whom?

What should give us even more pause for thought is who we need to provide safe spaces for, and why – and this is where the idea that we should legislate for safe spaces for MPs whilst actively working against safe spaces for others feels particularly wrong. MPs already have plenty of ‘safe spaces’ to air their views. Parliament itself, for one. The studios of all the TV and radio broadcasters. Columns in major newspapers and magazines. Others – particularly vulnerable or marginalised people and groups – have almost no access to these. They have neither freedom of speech in practice nor safe spaces in which to hear others. They don’t have powerful friends and allies to open doors, provide platforms – or bring in legislation.

That is the thing about rights – and human rights in particular. The main need for those rights is for the relatively weak, to protect them from the relatively strong. People with strength and power already have many means to protect themselves – in free speech terms, they have many ways to express themselves and a ready audience to listen. For others none of that is true – and that is what we need to remember.

Free speech is not simple – it is messy and complicated, nuanced and difficult to find our way through. That complication needs to be taken on board – because free speech is also really important. We should be particularly wary of those proclaiming themselves champions of free speech – what they are championing is often at best an oversimplification, and often a complete distortion. In Theresa May’s case, it may be even worse. The kind of law envisaged would not support free speech – it would support the powerful against the weak. It should be thoroughly resisted.

As is sadly all too common after an act of terrorism, freedom on the internet is also under attack – and almost entirely for spurious reasons. This is not, of course, anything new. As the late and much-lamented Douglas Adams, who died back in 2001, put it:

“I don’t think anybody would argue now that the Internet isn’t becoming a major factor in our lives. However, it’s very new to us. Newsreaders still feel it is worth a special and rather worrying mention if, for instance, a crime was planned by people ‘over the Internet’.”

The headlines in the aftermath of the Westminster attack were therefore far from unpredictable – though a little more extreme than most. The Daily Mail had:

“Google, the terrorists’ friend”

…and the Times noted that:

“Police search secret texts of terrorist”

…while the Telegraph suggested that:

“Google threatened with web terror law”

The implications are direct: the net is a tool for terrorists, and we need to bring in tough laws to get it under control.

And yet this all misses the key point – the implication of Douglas Adams’ quote. Terrorists use the internet to communicate and to plan because we all use the internet to communicate and plan. Terrorists use the internet to access information because we all use the internet to access information. The internet is a communicative tool, so of course they’ll use it – and as it develops and becomes better at all these things, we’ll all be able to use it in this way. And this applies to all the tools on the net. Yes, terrorists will use Google. Yes, they’ll use Facebook too. And Twitter. And WhatsApp. Why? Because they’re useful tools, systems, platforms, whatever you want to call them – and because they’re what we all use. Just as we use hire cars and kitchen knives.

Useful tools…

That’s the real point. The internet is something we all use – and it’s immensely useful. Yes, Google is a really good way to find information – that’s why we all use it. The Mail seems shocked by this – yet it is no more difficult to work out how a car might be driven somewhere and crashed into people. Google is not specifically the ‘terrorists’ friend’ but a useful tool for all of us.

The same is true about WhatsApp – and indeed other forms of communication. Yes, they can be used by ‘bad guys’, and in ways that are bad – but they are also excellent tools for the rest of us. If you do something to ban ‘secret texts’ (effectively by undermining encryption), then actually you’re banning private and confidential communications – both of which are crucial for pretty much all of us.

The same is true of privacy itself. We all need it. Undermining it – for example by building in backdoors to services like WhatsApp – undermines us all. Further, calls for mass surveillance damage us all – and attacks like that at Westminster absolutely do not help build the case for more of it. Precisely the opposite. To the surprise of no-one who works in privacy, it turns out that the attacker was already known to the authorities – so did not need to be found by mass surveillance. The same has been true of the perpetrators of all the major terrorist attacks in the West in recent years. The murderers of Lee Rigby. The Boston Bombers. The Charlie Hebdo shooters. The Sydney siege perpetrators. The Bataclan killers. None of these attacks needed identifying through mass surveillance. At a time when resources are short, to spend time, money, effort and expertise on mass surveillance rather than improving targeted intelligence, putting more human intelligence into place – more police, more investigators rather than more millions into the hands of IT contractors – is hard to defend.

More responsible journalism…

What is also hard to defend is the kind of journalism that produces headlines like those in the Mail, or indeed in the Times. Journalists should know better. They should know all too well the importance of privacy and confidentiality – they know when they need to protect their own sources, and get rightfully up in arms when the police monitor their communications and endanger their sources. They should know that ‘blocking terror websites’ is a short step away from political censorship, and potentially highly damaging to freedom of expression – and freedom of the press in particular.

They should know that they’re scaremongering or distracting with their stories, their headlines and their ‘angles’. At a time when good, responsible journalism is needed more than ever – to counter the ‘fake news’ phenomenon amongst other things, and to keep people informed at a time of political turmoil all over the world – this kind of an approach is deeply disappointing.

Back in 2015, Andrew Parker, the head of MI5, called for a ‘mature debate’ on surveillance – in advance of the Investigatory Powers Bill, the surveillance law which has now almost finished making its way through parliament, and will almost certainly become law in a few months’ time. Though there has been, at least in some ways, a better debate over this bill than over previous attempts to update the UK’s surveillance law, it still seems as though the debate in both politics and the media remains distinctly superficial and indeed often deeply misleading.

It is in this context that I have a new academic paper out: “Data gathering, surveillance and human rights: recasting the debate”, in a new journal, the Journal of Cyber Policy. It is an academic piece, and access, sadly, is relatively restricted, so I wanted to say a little about the piece here, in a blog which is freely accessible to all – at least in places where censorship of the internet has not yet taken full hold.

The essence of the argument in the paper is relatively straightforward. The debate over surveillance is simplified and miscast in a number of ways, and those ways in general tend to make surveillance seem more positive and effective than it is, and its impact on ordinary people less broad and significant than it might be. The rights that it impinges upon are underplayed, and the side-effects of the surveillance are barely mentioned, making surveillance seem much more attractive than it should be – and hence decisions are made that might not have been made if the debate had been better informed. If the debate is improved, then the decisions will be improved – and we might have both better law and better surveillance practices.

Perhaps the most important way in which the debate needs to be improved is to understand that surveillance does not just impact upon what is portrayed as a kind of selfish, individual privacy – privacy that it is implied does not matter for those who ‘have nothing to hide’ – but upon a wide range of what are generally described as ‘civil liberties’. It has a big impact on freedom of speech – an impact that has been empirically evidenced in the last year – and upon freedom of association and assembly, both online and in the ‘real’ world. One of the main reasons for this – a reason largely missed by those who advocate for more surveillance – is that we use the internet for so many more things than we ever used telephones and letters, or even email. We work, play, romance and research our health. We organise our social lives, find entertainment, shop, discuss politics, do our finances and much, much more. There is pretty much no element of our lives that does not have a very significant online element – and that means that surveillance touches all aspects of our lives, and any chilling effect doesn’t just chill speech or invade selfish privacy, but almost everything.

This, and much more, is discussed in my paper – which I hope will contribute to the debate, and indeed stimulate debate. Some of it is contentious – the role of commercial surveillance, and the interaction between it and state surveillance – but that too is intentional. Contentious issues need to be discussed.

There is one particular point that often gets missed – the question of when surveillance occurs. Is it when data is gathered, when it is algorithmically analysed, or when human eyes finally look at it? In the end, this may be a semantic point – what technically counts as ‘surveillance’ is less important than what actually has an impact on people, which begins at the data gathering stage. In my conclusion, I bring out that point by quoting our new Prime Minister, from her time as Home Secretary and chief instigator of our current manifestation of surveillance law. This is how I put it in the paper:

“Statements such as Theresa May’s that ‘the UK does not engage in mass surveillance’ though semantically arguable, are in effect deeply unhelpful. A more accurate statement would be that:

‘the UK engages in bulk data gathering that interferes not only with privacy but with freedom of expression, association and assembly, the right to a free trial and the prohibition of discrimination, and which puts people at a wide variety of unacknowledged and unquantified risks.’”

It is only when we can have clearer debate, acknowledging the real risks, that we can come to appropriate conclusions. We are probably too late for that to happen in relation to the Investigatory Powers Bill, but given that the bill includes measures such as the contentious Internet Connection Records that seem likely to fail, in expensive and probably farcical ways, the debate will be returned to again and again. Next time, perhaps it might be a better debate.

I am one of the signatories on an open letter to the governments of the world that has been released today. The letter has been organised by Access Now and there are 195 signatories – companies, organisations and individuals from around the world.

The letter itself can be found here. The key demands are the following:

It’s an important letter, and one that should be shared as widely as possible. Encryption matters, and not just for technical reasons and not just for ‘technical’ people. Even more than that, the arguments over encryption are a manifestation of a bigger argument – and, I would argue, a massive misunderstanding that needs to be addressed: the idea that privacy and security are somehow ‘alternatives’ or at the very least that privacy is something that needs to be ‘sacrificed’ for security. The opposite is the case: privacy and security are not alternatives, they’re critical partners. Privacy needs security and security needs privacy.

The famous (and much misused) saying often attributed (probably erroneously) to Benjamin Franklin, “Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety” is not, in this context at least, strong enough. In relation to the internet, those who would give up essential privacy to purchase a little temporary security will get neither. It isn’t a question of what they ‘deserve’ – we all deserve both security and privacy – but that by weakening privacy on the internet we weaken security.

The conflict over encryption exemplifies this. Build in backdoors, weaken encryption, prevent or limit the ways in which people can use it, and you both reduce their privacy and their security. The backdoors, the weaknesses, the vulnerabilities that are provided for the ‘good guys’ can and will be used by the ‘bad guys’. Ordinary people will be more vulnerable to criminals and scammers, oppressive regimes will be able to use them against dissidents, overreaching authorities against whistleblowers, abusive spouses against their targets and so forth. People may think they have ‘nothing to hide’ from the police and intelligence agencies – but that is to fundamentally miss the point. Apart from everything else, it is never just the police and the intelligence agencies that our information needs protection from.
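The structural weakness can be sketched in a few lines of Python. This is a toy model – one-time-pad XOR standing in for real encryption, and a hypothetical ‘escrow’ key standing in for whatever form a mandated backdoor might take – but the point it illustrates carries over to any real scheme: an extra key that decrypts everything is an extra thing to steal.

```python
import secrets

# Toy 'encryption' (one-time-pad XOR), purely to show the shape of key
# escrow. The escrow_key and send() function are illustrative inventions,
# not any real protocol.

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, key))

escrow_key = secrets.token_bytes(32)   # the mandated 'backdoor' key

def send(message: bytes, recipient_key: bytes) -> dict:
    # Every message carries a second copy, readable with the escrow key.
    return {
        "for_recipient": xor(message, recipient_key),
        "for_escrow": xor(message, escrow_key),
    }

alice_key = secrets.token_bytes(32)
packet = send(b"meet at noon", alice_key)

# The intended recipient reads the message...
assert xor(packet["for_recipient"], alice_key) == b"meet at noon"

# ...and so does anyone who obtains the escrow key - investigator,
# insider or thief. One leaked key opens every message in the system.
stolen_key = escrow_key
assert xor(packet["for_escrow"], stolen_key) == b"meet at noon"
```

The design flaw is not in any particular line of code: it is that the escrow copy exists at all, so the system’s security now depends on a single key that its users neither hold nor control.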

What is just as important is that there is no reason (nor evidence) to suggest that building backdoors or undermining encryption helps even in the terms suggested by those advocating it. No examples have been provided – and whenever they are suggested (as in the aftermath of the Paris terrorist attacks) they quickly dissolve when examined. From a practical perspective this makes sense. ‘Tech-savvy’ terrorists will find their own way around these approaches – DIY encryption, at their own ends, for example – while non-tech-savvy terrorists (the Paris attackers seem to have used unencrypted SMSs) can be caught in different ways, if we use a more intelligent approach. Undermining or ‘back-dooring’ encryption puts us all at risk without even helping. The superficial attractiveness of the idea is just that: superficial.
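The ‘DIY encryption’ point is worth illustrating: workable encryption needs nothing more than a standard-library hash function, so back-dooring mainstream platforms cannot take it out of the hands of determined adversaries. A rough sketch – a keystream cipher built from SHA-256 and a pre-shared secret, offered purely to show how little is required, not as a recommended design:

```python
import hashlib

# Derive a keystream from a shared secret by hashing it with a counter,
# then XOR it with the message. Anyone with the secret can decrypt;
# no platform, app store or service provider is involved at any point.

def keystream(secret: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(secret + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(secret: bytes, plaintext: bytes) -> bytes:
    ks = keystream(secret, len(plaintext))
    return bytes(a ^ b for a, b in zip(plaintext, ks))

decrypt = encrypt  # XORing with the same keystream reverses it

shared = b"pre-agreed passphrase"  # exchanged offline, never transmitted
ciphertext = encrypt(shared, b"no platform involved")
assert decrypt(shared, ciphertext) == b"no platform involved"
```

Far stronger open-source tools (PGP, libsodium and the like) have been freely available for decades; a law aimed at WhatsApp or Google touches none of them.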

The best protection for us all is a strong, secure, robust and ‘privacy-friendly’ infrastructure, and those who see the bigger picture understand this. This is why companies such as Apple, Google, Microsoft, Yahoo, Facebook and Twitter have all submitted evidence to the UK Parliament’s Committee investigating the draft Investigatory Powers Bill – which includes provisions concerning encryption that are ambiguous at best. It is not because they’re allies of terrorists or because they make money from paedophiles, nor because they’re putty in the hands of the ‘privacy lobby’. Very much the opposite. It is because they know how critical encryption is to the way that the internet works.

That matters to all of us. The internet is fundamental to the way that we live our lives these days. Almost every element of our lives has an online aspect. We need the internet for our work, for our finances, for our personal and social lives, for our dealings with governments, corporations and more. It isn’t a luxury any more – and neither is our privacy. Privacy isn’t an indulgence – and neither is security. Encryption supports both. We should support it, and tell our governments so.

As well as providing oral evidence to the Draft Investigatory Powers Bill Joint Committee (which I have written about here, can be watched here, and a transcript can be found here) I submitted written evidence on the 15th December 2015.

The contents of the written submission are set out below. It is a lot more detailed than the oral evidence, and a long read (around 7,000 words) but even so, given the timescale involved, it is not as comprehensive as I would have liked – and I didn’t have as much time to proofread it as I would have liked. There are a number of areas I would have liked to have covered that I did not, but I hope it helps.

As it is published, the written evidence is becoming available on the IP Bill Committee website here – my own evidence is part of what has been published so far.

Submission to the Joint Committee on the draft Investigatory Powers Bill by Dr Paul Bernal

I am making this submission in my capacity as Lecturer in Information Technology, Intellectual Property and Media Law at the UEA Law School. I research in internet law and specialise in internet privacy from both a theoretical and a practical perspective. My PhD thesis, completed at the LSE, looked into the impact that deficiencies in data privacy can have on our individual autonomy, and set out a possible rights-based approach to internet privacy. My book, Internet Privacy Rights – Rights to Protect Autonomy, was published by Cambridge University Press in 2014. I am a member of the National Police Chiefs’ Council’s Independent Digital Ethics Panel. The draft Investigatory Powers Bill therefore lies precisely within my academic field.

I gave oral evidence to the Committee on 7th December 2015: this written evidence is intended to expand on and explain some of the evidence that I gave on that date. If any further explanation is required, I would be happy to provide it.

One page summary of the submission

The submission looks specifically at the nature of internet surveillance, as set out in the Bill, at its impact on broad areas of our lives – not just what is conventionally called ‘communications’ – and on a broad range of human rights – not just privacy but freedom of expression, of association and assembly, and of protection from discrimination. It looks very specifically at the idea of ‘Internet Connection Records’, briefly at data definitions and at encryption, as well as looking at how the Bill might be ‘future proofed’ more effectively.

The submission will suggest that in its current form, in terms of the overarching/thematic questions set out in the Committee’s Call for Written Evidence, it is hard to conclude that all of the powers sought are necessary, uncertain that they are legal, likely that many of them are neither workable nor carefully defined, and unclear whether they are sufficiently supervised. In some particular areas – Internet Connection Records is the example that I focus on in this submission – the supervision envisaged does not seem sufficient or appropriate. Moreover, there are critical issues – for example the vulnerability of gathered data – that are not addressed at all. These problems potentially leave the Bill open to successful legal challenge and rather than ‘future-proofing’ the Bill, they provide what might be described as hostages to fortune.

Many of the problems, in my opinion, could be avoided by taking a number of key steps. Firstly, rethinking (and possibly abandoning) the Internet Connection Records plans. Secondly, being more precise and open about the Bulk Powers, including a proper setting out of examples so that the Committee can make an appropriate judgment as to their proportionality and to reduce the likelihood of their being subject to legal challenge. Thirdly, taking a new look at encryption and being clear about the approach to end-to-end encryption. Fourthly, strengthening and broadening the scope of oversight. Fifthly, through the use of some form of renewal or sunset clauses to ensure that the powers are subject to full review and reflection on a regular basis.

1 Introductory remarks

1.1 Before dealing with the substance of the Bill, there is an overriding question that needs to be answered: why is the Committee being asked to follow such a tight timetable? This is a critically important piece of legislation – laws concerning surveillance and interception are not put forward often, particularly as they are long and complex and deal with highly technical issues. That makes detailed and careful scrutiny absolutely crucial. Andrew Parker of MI5 called for ‘mature debate’ on surveillance immediately prior to the introduction of the Bill: the timescale set out for the scrutiny of the Bill does not appear to give an adequate opportunity for that mature debate.

1.2 Moreover, it is equally important that the debate be an accurate one, and engaged upon with understanding and clarity. In the few weeks since the Bill was introduced the public debate has been far from this. As shall be discussed below, for example, the analogies chosen for some of the powers envisaged in the Bill have been very misleading. In particular, to suggest that the proposed ‘Internet Connection Records’ (‘ICRs’) are like an ‘itemised phone bill’, as the Home Secretary described it, is wholly inappropriate. As I set out below (in section 5) the reality is very different. There are two possible interpretations for the use of such inappropriate analogies: either the people using them don’t understand the implications of the powers, which means more discussion is needed to disabuse them of their illusions, or they are intentionally oversimplifying and misleading, which raises even more concerns.

1.3 For this reason, the first and most important point that I believe the Committee should be making in relation to the scrutiny of the Bill is that more time is needed. As I set out in 8.4 below, the case for the urgency of the Bill, particularly in the light of the recent attacks in Paris, has not been made: in many ways the attacks in Paris should make Parliament pause and reflect more carefully about the best approach to investigatory powers in relation to terrorism.

1.4 In its current form, in terms of the overarching/thematic questions set out in the Committee’s Call for Written Evidence, it is hard to conclude that all of the powers sought are necessary, uncertain that they are legal, likely that many of them are neither workable nor carefully defined, and unclear whether they are sufficiently supervised. In some particular areas – Internet Connection Records is the example that I focus on in this submission – the supervision envisaged does not seem sufficient or appropriate. Moreover, there are critical issues – for example the vulnerability of gathered data – that are not addressed at all. These problems potentially leave the Bill open to successful legal challenge and rather than ‘future-proofing’ the Bill, they provide what might be described as hostages to fortune.

1.5 Many of the problems, in my opinion, could be avoided by taking a number of key steps. Firstly, rethinking (and possibly abandoning) the Internet Connection Records plans. Secondly, being more precise and open about the Bulk Powers, including a proper setting out of examples so that the Committee can make an appropriate judgment as to their proportionality and to reduce the likelihood of their being subject to legal challenge. Thirdly, taking a new look at encryption and being clear about the approach to end-to-end encryption. Fourthly, strengthening and broadening the scope of oversight. Fifthly, through the use of some form of renewal or sunset clauses to ensure that the powers are subject to full review and reflection on a regular basis.

2 The scope and nature of this submission

2.1 This submission deals specifically with the gathering, use and retention of communications data, and of Internet Connection Records in particular. It deals more closely with the internet rather than other forms of communication – this is my particular area of expertise, and it is becoming more and more important as a form of communications. The submission does not address areas such as Equipment Interference, and deals only briefly with other issues such as interception and oversight. Many of the issues identified with the gathering, use and retention of communications data, however, have a broader application to the approach adopted by the Bill.

2.2 It should be noted, in particular, that this submission does not suggest that it is unnecessary for either the security and intelligence services or law enforcement to have investigatory powers such as those contained in the draft Bill. Many of the powers in the draft Bill are clearly critical for both security and intelligence services and law enforcement to do their jobs. Rather, this submission suggests that as it is currently drafted the Bill includes some powers that are poorly defined, poorly suited to the stated function, have more serious repercussions than seem to have been understood, and could represent a distraction, a waste of resources and add an unnecessary set of additional risks to an already risky environment for the very people that the security and intelligence services and law enforcement are charged with protecting.

3 The Internet, Internet Surveillance and Communications Data

3.1 The internet has changed the way that people communicate in many radical ways. More than that, however, it has changed the way people live their lives. This is perhaps the single most important thing to understand about the internet: we do not just use it for what we have traditionally thought of as ‘communications’, but in almost every aspect of our lives. We don’t just talk to our friends online, or just do our professional work online, we do almost everything online. We bank online. We shop online. We research online. We find relationships online. We listen to music and watch TV and movies online. We plan our holidays online. We try to find out about our health problems online. We look at our finances online. For most people in our modern society, it is hard to find a single aspect of our lives that does not have a significant online element.

3.2 This means that internet interception and surveillance has a far bigger potential impact than traditional communications interception and surveillance might have had. Intercepting internet communications is not the equivalent of tapping a telephone line or examining the outside of letters sent and received, primarily because we use the internet for far more than we ever used telephones or letters. This point cannot be overemphasised: the uses of the internet are growing all the time and show no signs of slowing down. Indeed, more dimensions of internet use are emerging all the time: the so-called ‘internet of things’ which integrates ‘real world’ items (from cars and fridges to Barbie dolls[1]) into the internet is just one example.

3.3 This is also one of the reasons that likening Internet Connection Records to an itemised phone bill is particularly misleading. Another equally important reason to challenge that metaphor is the nature and potential uses of the data itself. What is labelled Communications Data (and in particular ‘relevant communications data’, as set out in clause 71(9) of the draft Bill) is by nature of its digital form ideal for analysis and profiling. Indeed, using this kind of data for profiling is the heart of the business models of Google, Facebook and the entire internet advertising industry.

3.4 The inferences that can be – and are – drawn from this kind of data, through automated, algorithmic analysis rather than through informed, human scrutiny, are enormous and are central to the kind of ‘behavioural targeting’ that is the current mode of choice for internet advertisers. Academic studies have shown that very detailed inferences can be drawn: analysis of Facebook ‘Likes’, for example, has been used to indicate the most personal of data including sexuality, intelligence and so forth. A recent study at Cambridge University concluded that ‘by mining Facebook Likes, the computer model was able to predict a person’s personality more accurately than most of their friends and family.’[2]

3.5 This means that the kind of ‘communications’ data discussed in the Bill is vastly more significant than what is traditionally considered to be communications. It also means that from a human rights perspective more rights are engaged by its gathering, holding and use. Internet ‘communications’ data does not just engage Article 8 in its ‘correspondence’ aspect, but in its ‘private and family life’ aspect. It engages Article 10 – the impact of internet surveillance on freedom of speech has become a bigger and bigger issue in recent years, as noted in depth by the UN Special Rapporteur on Freedom of Expression, most recently in his report on encryption and anonymity.[3]

3.6 Article 11, which governs Freedom of Association and Assembly, is also critically engaged: not only do people now associate and assemble online, but they use online tools to organise and coordinate ‘real world’ association and assembly. Indeed, using surveillance to chill association and assembly has become one of the key tools of more authoritarian governments in stifling dissent. Monitoring and even shutting off access to social media systems, for example, was used by many of the repressive regimes in the Arab Spring. Even in the UK, the government communications plan for 2013/14 included the monitoring of social media in order to ‘head off badger cull protests’, as the BBC reported.[4] This kind of monitoring does not necessarily engage Article 8, as Tweets (the most obvious example to monitor) are public, but it would engage both aspects of Article 11, and indeed of Article 10.

3.7 Article 14, the prohibition of discrimination, is also engaged: the kind of profiling discussed in paragraph 3.4 above can be used to attempt to determine a person’s race, gender, possible disability, religion, political views, and even direct information such as membership of a trade union. It should be noted, as is the case for all these profiling systems, that accuracy is far from guaranteed, giving rise to a wider range of risks. Where derived or profiling data is accurate, it can involve invasions of privacy, chilling of speech and discrimination; where it is inaccurate, it can generate injustice, inappropriate decisions and further chills and discrimination.

3.8 This broad range of human rights engaged means that the ‘proportionality bar’ for any gathering of this data, interception and so forth is higher than it would be if only the correspondence aspect of Article 8 were engaged. It is important to understand that the underlying reason for this is that privacy is not an individual, ‘selfish’, right, but one that underpins the way that our communities function. We need privacy to communicate, to express ourselves, to associate with those we choose, to assemble when and where we wish – indeed to do all those things that humans, as social creatures, need to do. Privacy is a collective right that needs to be considered in those terms.

3.9 It is also critical to note that communications data is not ‘less’ intrusive than content: it is ‘differently’ intrusive. In some ways it is less intrusive – which is why historically it has been granted lower levels of protection – but increasingly the intrusion possible through the gathering of communications data is in other ways greater than that possible through examination of content. There are a number of connected reasons for this. Firstly, communications data is more suitable for aggregation and analysis – it is in a structured form, and the volumes gathered make it possible to use ‘big data’ analysis, as noted above. Secondly, content can be disguised more easily – either by technical encryption or by using ‘coded’ language. Thirdly, there are many kinds of subjects that are often avoided deliberately when writing content – things like sexuality, health and religion – that can nonetheless be determined by analysis of communications data. That means that the intrusive nature of communications data can often be greater than that of content. Moreover, as the level and nature of the data gathered grows, the possible intrusions grow with it. This means that the idea that communications data needs a lower level of control, and less scrutiny, than content is not really appropriate – and in the future it will become even less appropriate.

4 When rights are engaged

4.1 A key issue in relation to the gathering and retention of communications data is when the relevant rights are engaged: whether it is when data is gathered and retained, when it is subject to algorithmic analysis or automated filtering, or when it is subject to human examination. Looked at from what might be viewed as an ‘old fashioned’ communications perspective, it is only when humans examine the data that ‘surveillance’ occurs and privacy is engaged. In relation to internet communications data, this is to fundamentally miss the nature of the data and the nature of the risks. In practice, many of the most important risks occur at the gathering stage, and more at what might loosely be described as the ‘automated analysis’ stage.

4.2 It is fundamental to the nature of data that when it is gathered it becomes vulnerable. This vulnerability has a number of angles. There is vulnerability to loss – from human error to human malice, from insiders and whistle-blowers to hackers of various forms. The recent hacks of TalkTalk and Ashley Madison in particular should have focussed the minds of anyone envisaging asking communications providers to hold more and more sensitive data. There is vulnerability to what is variously called ‘function creep’ or ‘mission creep’: data gathered for one reason may end up being used for another. Indeed, where the business models of companies such as Facebook and Google are concerned, this is one of their key features: they gather data in the knowledge that the data is useful and that its uses will develop and grow over time.

4.3 It is also at the gathering stage that the chilling effects come in. The Panopticon, devised by Bentham and further theorised about by Foucault, was intended to work by encouraging ‘good’ behaviour in prisoners through the possibility of their being observed, not by the actual observation. Similarly it is the knowledge that data is being gathered that chills freedom of expression, freedom of association and assembly and so forth, not the specific human examination of that data. This is not only a theoretical analysis but one borne out in practice, which is one of the reasons that the UN Special Rapporteur on Freedom of Expression and many others have made the link between privacy and freedom of expression.[5]

4.4 Further vulnerabilities arise at the automated analysis stage: decisions are made by the algorithms themselves, particularly in regard to filtering based on automated profiling. In the business context, services are tailored to individuals automatically on the basis of this kind of filtering – Google, for example, has been providing automatically and personally tailored search results to all individuals since 2009, without the involvement of humans at any stage. Whether the security and intelligence services or law enforcement use this kind of method is not clear, but it would be rational for them to do so: this does mean, however, that more risks are involved and that more controls and oversight are needed at this level, as well as at the point at which human examination takes place.

4.5 Different kinds of risks arise at each stage. It is not necessarily true that the risks are greater at the final, human examination stage. They are qualitatively different, and engage different rights and involve different issues. If anything, however, it is likely that as technology advances the risks at the earlier stages – the gathering and then the automated analysis stages – will become more important than the human examination stage. It is critical, therefore, that the Bill ensures that appropriate oversight and controls are put in place at these earlier stages. At present, this does not appear to be the case. Indeed, the essence of the data retention provisions appears to be that no real risk is considered by the ‘mere’ retention of data. That is to fundamentally misunderstand the impact of the gathering of internet communications data.

5 Internet Connection Records

5.1 Internet Connection Records (‘ICRs’) have been described as the only really new power in the Bill, and yet they are deeply problematic in a number of ways. The first is the question of definition. The ‘Context’ section of the Guide to Powers and Safeguards (the Guide) in the introduction to the Bill says that:

“The draft Bill will make provision for the retention of internet connection records (ICRs) in order for law enforcement to identify the communications service to which a device has connected. This will restore capabilities that have been lost as a result of changes in the way people communicate.” (paragraph 3)

This is further explained in paragraphs 44 and 45 of the Guide as follows:

“44. A kind of communications data, an ICR is a record of the internet services a specific device has connected to, such as a website or instant messaging application. It is captured by the company providing access to the internet. Where available, this data may be acquired from CSPs by law enforcement and the security and intelligence agencies.

45. An ICR is not a person’s full internet browsing history. It is a record of the services that they have connected to, which can provide vital investigative leads. It would not reveal every web page that they visit or anything that they do on that web page.”

Various briefings to the press have suggested that in the context of web browsing this would mean that the URL up to the first slash would be gathered (e.g. www.bbc.co.uk and not any further e.g. http://www.bbc.co.uk/sport/live/football/34706510 ). On this basis it seems reasonable to assume that in relation to app-based access to the internet via smartphones or tablets the ICR would include the activation of the app, but nothing further.
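On that reading, the truncation is simple to sketch. The snippet below is purely illustrative, and assumes the press-briefing description (host only, nothing after the first slash) is accurate – the Bill's own definition in 47(6) is far less specific:

```python
from urllib.parse import urlparse

def icr_level(url: str) -> str:
    """Reduce a full URL to the host alone - roughly the level of detail
    the press briefings suggest an ICR would capture. This is an
    assumption for illustration, not a definition taken from the Bill."""
    return urlparse(url).netloc

full = "http://www.bbc.co.uk/sport/live/football/34706510"
print(icr_level(full))  # www.bbc.co.uk - the page actually read is not recorded
```

Even at this level of generality, as discussed below, the record reveals which services a person uses and when, while omitting the detail that might make it investigatively useful.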

5.2 The ‘definition’ of ICRs in the bill is set out in 47(6) as follows:

“In this section “internet connection record” means data which—

(a) may be used to identify a telecommunications service to which a communication is transmitted through a telecommunication system for

the purpose of obtaining access to, or running, a computer file or computer program, and

(b) is generated or processed by a telecommunications operator in the process of supplying the telecommunications service to the sender of the communication (whether or not a person).”

This definition is vague, and press briefings have suggested that the details would be in some ways negotiated directly with the communications services. This does not seem satisfactory at all, particularly for something considered to be such a major part of the Bill: indeed, the only really new power according to the Guide. More precision should be provided within the Bill itself – and specific examples spelled out in Codes of Practice that accompany the Bill, covering the major categories of communications envisaged. Initial versions of these Codes of Practice should be available to Parliament at the same time as the Bill makes its passage through the Houses.

5.3 The Bill describes the functions to which ICRs may be put. In 47(4) it is set out that ICRs (and data obtained through the processing of ICRs) can only be used to identify:

“(a) which person or apparatus is using an internet service where—

(i) the service and time of use are already known, but

(ii) the identity of the person or apparatus using the service is not known,

(b) which internet communications service is being used, and when and how it is being used, by a person or apparatus whose identity is already known, or

(c) where or when a person or apparatus whose identity is already known is obtaining access to, or running, a computer file or computer program which wholly or mainly involves making available, or acquiring, material whose possession is a crime.”

The problem is that in all three cases ICRs insofar as they are currently defined are very poorly suited to performing any of these three functions – and better methods either already exist for them or could be devised to do so. ICRs provide at the same time much more information (and more intrusion) than is necessary and less information than is adequate to perform the function. In part this is because of the way that the internet is used and in part because of the way that ICRs are set out. Examples in the following paragraphs can illustrate some (but not all) of the problems.

5.4 The intrusion issue arises from the nature of internet use, as described in Section 3 of this submission. ICRs cannot be accurately likened to ‘itemised telephone bills’. They do not record the details of who a person is communicating with (as an itemised telephone bill would) but they do include vastly more information, and more sensitive and personal information, than an itemised telephone bill could possibly contain. A record of websites visited, even at the basic level, can reveal some of the most intimate information about an individual – and not in terms of what might traditionally be called ‘communications’. This intrusion could be direct – such as accessing a website such as www.samaritans.org at 3am or accessing information services about HIV – or could come from profiling possibilities. The commercial profilers, using what is often described as ‘big data’ analysis (and has been explained briefly in section 3 above) are able to draw inferences from very few pieces of information. Tastes, politics, sexuality, and so forth can be inferred from this data, with a relatively good chance of success.
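The ease of such profiling follows from the structured nature of the records themselves. The sketch below uses entirely invented records and an invented pattern of visits, purely to illustrate how trivially structured connection data can be aggregated without any examination of content; none of the hosts, timings or results come from any real dataset:

```python
from collections import Counter

# Hypothetical connection records as (hour_of_day, host) pairs -
# invented for illustration, not drawn from any real source.
records = [
    (2, "www.samaritans.org"), (3, "www.samaritans.org"),
    (14, "www.bbc.co.uk"), (21, "hiv-info.example.org"),
    (2, "www.samaritans.org"), (23, "hiv-info.example.org"),
]

# Because each record is structured, aggregation requires no
# interpretation of content at all - this is what makes metadata so
# amenable to large-scale 'big data' analysis.
overall = Counter(host for _, host in records)
small_hours = Counter(host for hour, host in records if hour < 5)

print(overall.most_common(1))  # repeated visits to a helpline site stand out
print(small_hours)             # and they cluster in the small hours
```

A few lines of grouping and counting are enough to surface a sensitive pattern; commercial profilers apply far more sophisticated versions of the same principle at scale.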

5.5 This makes ICRs ideal for profiling and potentially subject to function creep or mission creep. It also makes them ideally suited for crimes such as identity theft and personalised scamming, and makes the databases of ICRs created by communications service providers a perfect target for hackers and malicious insiders. By gathering ICRs, a new range of vulnerabilities is created. Data, however it is held and whoever it is held by, is vulnerable in a wide range of ways.[6] Recent events have highlighted this very directly: the hacking of TalkTalk, precisely the sort of provider who would be expected to gather and store ICRs, should be taken very seriously. Currently it appears as though this hack was carried out not by the kind of ‘cyber-terrorists’ that were originally suggested, but by disparate teenagers around the UK. Databases of ICRs would seem highly likely to attract the interest of hackers of many different kinds. In practice, too, precisely those organisations who should have the greatest expertise and the greatest motivation to keep data secure – from the MOD, HMRC and the US DoD to Swiss banks and technology companies including Sony and Apple – have proved vulnerable to hacking or other forms of data loss in recent years. Hacking is the most dramatic, but human error, human malice, collusion and corruption, and commercial pressures (both to reduce costs and to ‘monetise’ data) may be more significant – and the ways in which all these vulnerabilities can combine make the risk even greater.

5.6 ICRs are also unlikely to provide the information that law enforcement and the intelligence and security services need in order to perform the three functions noted above. The first example of this is Facebook. Facebook messages and more open communications would seem on the surface to be exactly the kind of information that law enforcement might need to locate missing children – the kind of example referred to in the introduction and guide to the bill. ICRs, however, would give almost no relevant information in respect of Facebook. In practice, Facebook is used in many different ways by many different people – but the general approach is to remain connected to Facebook all the time. Often this will literally be 24 hours a day, as devices are rarely turned off at night – the ‘connection’ event has little relationship to the use of the service. If Facebook is accessed by smartphone or tablet, it will generally be via an app that runs in the background at all times – this is crucial for the user to be able to receive notifications of events, of messages, of all kinds of things. If Facebook is accessed by PC, it may be by an app (with the same issues) or through the web – but if via the web this will often be using ‘tabbed browsing’ with one tab on the browser keeping the connection to Facebook available without the need to reconnect.

5.7 Facebook and others encourage and support this kind of long-term and even permanent connection to their services – it supports their business model and in a legal sense gives them some kind of consent to the kind of tracking and information gathering about their users that is the key to their success. ICRs would not help in relation to Facebook except in very, very rare circumstances. Further, most information remains available on Facebook in other ways. Much of it is public and searchable anyway. Facebook does not delete information except in extraordinary circumstances – the requirement for communications providers to maintain ICRs would add nothing to what Facebook retains.

5.8 The story is similar in relation to Twitter and similar services. A 24/7 connection is possible and indeed encouraged. Tweets are ‘public’ and available at all times, as well as being searchable and subject to possible data mining. Again, ICRs would add nothing to the ways that law enforcement and the intelligence and security services could use Twitter data. Almost all the current and developing communications services – from WhatsApp and SnapChat to Pinterest and more – have similar approaches and ICRs would be similarly unhelpful.

5.9 Further, the information gathered through ICRs would fail to capture a significant amount of the ‘communications’ that can and do happen on the internet – because the interactive nature of the internet now means that almost any form of website can be used for communication, without that communication being the primary purpose of the website. Detailed conversations, for example, can and do happen in the comments sections of newspaper websites: if an analysis of ICRs showed access to www.telegraph.co.uk, would the immediate thought be that communications are going on? Similarly, coded (rather than encrypted) messages can be placed in product reviews on www.amazon.co.uk. I have had detailed political conversations on the message boards of the Internet Movie Database (www.imdb.com), but an ICR would neither reveal nor suggest the possibility of this.

5.10 This means that ICRs can neither find the innocent missing child via Facebook or its equivalents, nor locate or track the even slightly careful criminal or terrorist. Not enough information is revealed to find either – whilst extra information is gathered that adds to intrusion and vulnerability. The third function stated for ICRs refers to people whose identity is already known: for these people, ICRs provide insufficient information to help. This is one of the areas where more targeted powers would help – and such powers are already envisaged elsewhere in the Bill.

5.11 The conclusion from all of this is that ICRs are unlikely to be a useful tool in terms of the functions presented. The closest equivalent form of surveillance used anywhere in the world has been in Denmark, with very poor results. In their evaluation of five years’ experience, the Danish Justice Ministry concluded that ‘session logging’, their equivalent of Internet Connection Records, had been of almost no use to the police.[7] It should be noted that when the Danish ‘session logging’ suggestion was first made, the Danish ISPs repeatedly warned that the system would not work and that the data would be of little use. Their warnings were not heeded. Similar warnings from ISPs in the UK have already begun to emerge. The argument has been made that the Danish failure was a result of the specific technical implementation – I would urge the Committee to examine it in depth before coming to a conclusion. The fundamental issues noted above, however, are only likely to grow as the technology becomes more complex, the data more dense and interlinked, and the use of it more nuanced. All these trends are likely only to accelerate.

5.12 The gathering and holding of ICRs are also likely to add vulnerabilities to all those about whom they are collected, as well as requiring massive amounts of data storage at a considerable cost. At a time when resources are naturally very tight, for the money, expertise and focus to be on something like this appears inappropriate.

6 Other brief observations about communications data, definitions and encryption

6.1 There is still confusion between ‘content’ and ‘communications’ data. The references to ‘meaning’ in 82(4), 82(8), 106(8) and 136(4), emphasised in 193(6), seem to add to rather than reduce that confusion – particularly when considered in relation to the kinds of profiling possible from the analysis of basic communications data. It is possible to derive ‘meaning’ from almost any data – this is one of the fundamental problems with the idea that content and communications can be simply and meaningfully separated. In practice, this is far from the case.[8] Further, Internet Connection Records are just one of many examples of ‘communications’ data that can be used to derive deeply personal information – and sometimes more directly (through analysis) than often confusing and coded (rather than encrypted) content.

6.2 There are other issues with the definitions of data – experts have been attempting to analyse them in detail in the short time since the Bill was published, and the fact that these experts have been unable to agree or at times even ascertain the meaning of some of the definitions is something that should be taken seriously. Again it emphasises the importance of having sufficient time to scrutinise the Bill. Graham Smith of Bird & Bird, in his submission to the Commons Science and Technology Committee,[9] notes that the terms ‘internet service’ and ‘internet communications service’ used in 47(4) are neither defined nor differentiated, as well as a number of other areas in which there appears to be significant doubt as to what does and does not count as ‘relevant communications data’ for retention purposes. One definition in the Bill particularly stands out: in 195(1) it is stated that ‘”data” includes any information which is not data’. Quite what is intended by this definition remains unclear.

6.3 In his report, ‘A question of trust’, David Anderson QC called for a law that would be ‘comprehensive and comprehensible’: the problems surrounding definitions and the lack of clarity about the separation of content and communications data mean that the Bill, as drafted, does not meet either of these targets yet. There are other issues that make this failure even more apparent. The lack of clarity over encryption – effectively leaving the coverage of encryption to RIPA rather than drafting new terms – has already caused a significant reaction in the internet industry. Whether or not the law would allow end-to-end encryption services such as Apple’s iMessage to continue in their current form, where Apple would not be able to decrypt messages themselves, needs to be spelled out clearly, directly and comprehensibly. In the current draft of the Bill it does not.

6.4 This could be solved relatively simply by the modification of 189 ‘Maintenance of technical capability’, and in particular 189(4)(c) to make it clear that the Secretary of State cannot impose an obligation to remove electronic protection that is a basic part of the service operated, and that the Bill does not require telecommunications services to be designed in such a way as to allow for the removal of electronic protection.

7 Future Proofing the Bill

7.1 One of the most important things for the Committee to consider is how well shaped the Bill is for future developments, and how the Bill might be protected from potential legal challenges. At present, there are a number of barriers to this, but there are ways forward that could provide this kind of protection.

7.2 The first of these relates to ICRs, as noted in section 5 above. The idea behind the gathering of ICRs appears on the face of it to be based upon an already out-dated understanding both of the technology of the internet and of the way that people use it. In its current form, the idea of requiring communications providers to retain ICRs is also a hostage to fortune. The kind of data required is likely to become more complex, of vastly greater volume and increasingly difficult to use. What is already an unconvincing case will become even less convincing as time passes. The best approach would seem to be to abandon the idea of requiring the collection of ICRs entirely, and to look for a different way forward.

7.3 Further, ICRs represent one of the two main ways in which the Bill appears to be vulnerable to legal challenge. It is important to understand that in recent cases at both the CJEU (in particular the Digital Rights Ireland case[10] and the Schrems case[11]) and the European Court of Human Rights (in particular the Zakharov case[12]) it is not just the examination of data that has been considered to bring Article 8 privacy rights into play, but the gathering and holding of data. This is not a perverse trend, but rather a demonstration that the European courts are recognising some of the issues discussed above about the potential intrusion involved in gathering and holding data. It is a trend that is likely to continue. Holding the data of innocent people on an indiscriminate basis is likely to be considered disproportionate. That means that the idea of ICRs – where this kind of data would be required to be held – is very likely to be challenged in either of these courts, and indeed is likely to be overturned at some point.

7.4 The same is likely to be true of the ‘Bulk’ powers, unless those bulk powers are more tightly and clearly defined, including the giving of examples. At the moment quite what these bulk powers consist of – and how ‘bulky’ they are – is largely a matter of speculation, and while that speculation continues, so does legal uncertainty. If the powers involve the gathering and holding of the data of innocent people on a significant scale, a legal challenge either now or in the future seems to be highly likely.

7.5 It is hard to predict future developments either in communications technology or in the way that people use it. This, too, is something that seems certain to continue – and it means that being prepared for those changes needs to be built into the Bill. At present, this is done at least in part by having relatively broad definitions in a number of places, to try to ensure that future technological changes can be ‘covered’ by the law. This approach has a number of weaknesses – most notably that it gives less certainty than is helpful, and that it makes ‘function creep’ or ‘mission creep’ more of a possibility. Nonetheless, it is probably inevitable to a degree. It can, however, be ameliorated in a number of ways.

7.6 The first of these ways is to have a regular review process built in. This could take the form of a ‘sunset clause’, or perhaps a ‘renewal clause’ that requires a new, full, debate by Parliament on a regular basis. The precise form of this could be determined by the drafters of the Bill, but the intention should be clear: to avoid the situation that we find ourselves in today with the complex and almost incomprehensible regime so actively criticised by David Anderson QC, RUSI and to an extent the ISC in their reviews.

7.7 Accompanying this, it is important to consider not only the changes in technology, but the changes in people’s behaviour. One way to do this would be to charge those responsible for the oversight of communications with a specific remit to review how the powers are being used in relation to the current and developing uses of the internet. They should report on this aspect specifically.

8 Overall conclusions

8.1 I have outlined above a number of ways in which the Bill, in its current form, does not seem to be workable, proportionate, future-proofed or protected from potential legal challenges. I have made five specific recommendations:

8.1.1 I do not believe the case has been made for retaining ICRs. They appear unlikely to be of any real use to law enforcement in performing the functions that are set out, they add a significant range of risks and vulnerabilities, and they are likely to end up being extremely expensive. This expense is likely to fall upon either the government – in which case it would be a waste of resources that could be put to more productive use in achieving the aims of the Bill – or ordinary internet users, through increased connection costs.

8.1.2 The Bill needs to be more precise and open about the Bulk Powers, including a proper setting out of examples so that the Committee can make an appropriate judgment as to their proportionality and to reduce the likelihood of their being subject to legal challenge.

8.1.3 The Bill needs to be more precise about encryption and to be clear about the approach to end-to-end encryption. This is critical to building trust in the industry, and in particular with overseas companies such as those in Silicon Valley. It is also a way to future-proof the Bill: though some within the security and intelligence services may not like it, strong encryption is fundamental to the internet now and will become even more significant in the future. This should be embraced rather than fought against.

8.1.4 Oversight needs strengthening and broadening – including oversight of how the powers have been used in relation to changes in behaviour as well as changes in technology.

8.1.5 The use of some form of renewal or sunset clause should be considered, to ensure that the powers are subject to full review and reflection by Parliament on a regular basis.

8.2 The question of resource allocation is a critical one. For example, have alternatives to the idea of retaining ICRs been properly considered, for both effectiveness and cost? The level of intrusion of internet surveillance (as discussed in section 3 above) adds to the imperative to consider other options. Where a practice is so intrusive, and impacts upon such a wide range of human rights (Articles 8, 10, 11 and 14 of the ECHR – and possibly Article 6), a very high bar has to be set to make it acceptable. It is not at all clear either that the height of that bar has been appropriately set or that the benefits of the Bill mean that it has been met. In particular, the likely ineffectiveness of ICRs means that it is very hard to argue that this part of the Bill would meet even a far lower requirement. The risks and vulnerabilities that the retention of ICRs adds will in all probability exceed the possible benefits, even without considering the intrusiveness of their collection, retention and use.

8.3 The most important overall conclusion at this stage, however, is that more debate and analysis is needed. The time made available for analysis is too short for any kind of certainty, and that means that the debate is being held without sufficient information or understanding. Time is also needed to enable MPs and Lords to gain a better understanding of how the internet works, how people use it in practice, and how this law and the surveillance envisaged under its auspices could impact upon that use. This is not a criticism of MPs or Lords so much as a recognition that people in general do not have that much understanding of how the internet works – one of the best things about the internet is that we can use it quickly and easily without having to understand much of what is actually happening ‘underneath the bonnet’ as it were. In passing laws with significant effects – and the Investigatory Powers Bill is a very significant Bill – much more understanding is needed.

8.4 It is important for the Committee not to be persuaded that an event like the recent one in Paris should be considered a reason to ‘fast-track’ the Bill, or to extend the powers provided by the Bill. In Paris, as in all the notable terrorism cases in recent years, from the murder of Lee Rigby and the Boston Bombings to the Sydney Café Siege and the Charlie Hebdo shootings, the perpetrators (or at the very least a significant number of the perpetrators) were already known to the authorities. The problem was not a lack of data or a lack of intelligence, but the use of that data and that intelligence. The issue of resources noted above applies very directly here: if more resources had been applied to ‘conventional’ intelligence it seems, on the surface at least, as though there would have been more chance of the events being avoided. Indeed, examples like Paris, if anything, argue against extending large-scale surveillance powers. If the data being gathered is already too great for it to be properly followed up, why would gathering more data help?

8.5 As a consequence of this, in my opinion the Committee should look not just at the detailed powers outlined in the Bill and their justification, but also more directly at the alternatives to the overall approach of the Bill. There are significant costs and consequences, and the benefits of the approach as opposed to a different, more human-led approach, have not, at least in public, been proven. The question should be asked – and sufficient evidence provided to convince not just the Committee but the public and the critics in academia and elsewhere. David Anderson QC made ‘A Question of Trust’ the title of his review for a reason: gaining the trust of the public is a critical element here.

Dr Paul Bernal

Lecturer in Information Technology, Intellectual Property and Media Law

[8] This has been a major discussion point amongst legal academics for a long time. See for example the work of Daniel Solove, e.g. ‘Reconstructing Electronic Surveillance Law’, George Washington Law Review, vol. 72, 2003–2004

Last week was a momentous one for information law, with two dramatic and potentially very significant rulings. The first was the ‘Black Spider memos’ Freedom of Information case, through which it now appears certain that 27 ‘private’ letters from Prince Charles to government ministers will be published. The second was the decision in the Vidal-Hall vs Google case, which may have opened the doors for people whose privacy was effectively invaded by Google to take action through the UK courts, despite their being unable to demonstrate economic damage from that invasion. I won’t go into the legal details of either: far better legal minds than mine have already done so – the two pieces on the 11KBW blog, about the Black Spider letters and Vidal-Hall vs Google respectively, explain them really well. Instead, I want to look at one particular issue: the relationship between privacy and power, which plays out in different but related ways in both cases.

Princes, information and power

It is often said that ‘information is power’ – and in the case of Prince Charles’ Black Spider letters that does seem to be the case. Without knowing the contents of the letters – something that may shortly change – it can be assumed that power and information are central to them. The letters concerned – 27 of them – were written by Prince Charles to government ministers. The very fact that he wrote them, and could expect answers to them, shows that he had power and was using it (at the very least) to get information. He may also have been attempting to use that information to influence policy – we may be able to determine that as and when we see the content – but information is central to it. What is more, he knows that if we get hold of the information, he may lose some of his power, and we may gain some power over him – which is, presumably, why he is so keen for the letters to remain out of the public eye. Information really is power here.

Privacy, in this context, can be seen as control over information – and it is hardly surprising that Prince Charles invoked privacy in his response:

“Clarence House is disappointed the principle of privacy has not been upheld”

Privacy has value – it is a human right – and as an argument against disclosure, it feels better than saying (for example) that Clarence House is disappointed that it wasn’t able to exert its power as effectively as it wished, or that it is disappointed to be about to lose some of its power. And yet that’s what’s really happening here. People with power have often used privacy as a way to maintain that power, to maintain their control over the situation. Indeed, in the courts, privacy has often been invoked by powerful people – from philandering footballers to secretive celebrities – to keep their lives and loves under wraps. Sometimes that’s entirely right – privacy really is a human right, and we all have that right. It is, however, a right that is held in balance, not an absolute right. It is held in balance with freedom of expression and freedom of information – and, where surveillance and the like are concerned, with interests and needs like security. It is also a right that relates primarily to our private lives – not our public lives, or our professional lives. If we’re talking about professional lives, ideas such as confidentiality are more relevant – not quite the same as privacy, and subject to different checks and balances. Here, this really wasn’t about Prince Charles’ private life: writing to government ministers when you’re the heir to the throne is not a private life issue. I would defend Prince Charles’ right to privacy over letters to his children, his wife, his mother, his friends and so on just as much as I would defend anyone’s right to privacy over their correspondence – but that’s not what this is about.

Ultimately, that’s why the Black Spider Letters are becoming public – because there’s a public interest in our knowing the contents, which is what Freedom of Information is supposed to be about. It’s a redressing of a power imbalance.

The New Princes of the Internet

…which brings us on to Google, one of the new Princes of the Internet, in the Machiavellian sense, and the Vidal-Hall case. Ultimately, this is also about power. The essence of the story is that Google tracked people’s activities on the internet without their consent – indeed, when they had directly said that they didn’t want Google to track them. Why does Google do this? To get information, and ultimately to get power. They use this information to get power over people – not just over the people they’re tracking, but people generally, as they gather more and more data about people’s behaviour and learn how people use the internet, what they’re interested in and so forth. That information, those invasions of privacy (for that is what they are), are used for Google’s own purposes – and despite how they often like to appear, Google are not neutral indexers of the net, helping develop systems and services for the betterment of humankind, champions of freedom of speech and so forth. They do do a lot of that – but they do it because by doing so they can make money.

Google are a business, and what they do they do for business reasons – and there’s nothing wrong with that at all. We do, however, need to be a bit more aware of how that works and what the implications of that are. Amongst other things, it means that they will use the information they gather to get power over us – ultimately power to make more money from us, or by using us as tools to make money from others and so on. Again, power is the key, and again, that’s where privacy is involved. They invade our privacy in order to gain power over us, and if we’re able to assert our privacy, to protect our privacy, they lose power.

Privacy for ordinary people

It’s a subtle thing – none of the individual invasions of privacy is particularly significant – but that’s one of the reasons this ruling really is significant. By allowing people to take action even without proving economic loss, it could provide people who usually don’t have power the chance to protect their privacy. As noted above, privacy actions in the past have generally only been a tool for the powerful, not something for the rest of us – this might change that, and that is something that really matters.

Indeed, it could be the most important thing of all. Privacy, like all human rights, is most important as a way to protect those who don’t have power from those who do have power. It shouldn’t be a tool just for the rich and powerful – they already have a vast arsenal of tools at their disposal – it should be something that we can all use. We need privacy from all kinds of powerful entities, from businesses like Google and Facebook to a wide variety of governmental agencies and others.

What’s more, all those powerful entities invoke privacy for themselves to protect their own power. The Snowden revelations have shown how carefully governments have hidden their own actions from our scrutiny – indeed, how they continue to disclose as little of what they do as possible, and continue to ‘neither confirm nor deny’ the existence of many of their actions. Google, Facebook and others expect others to abandon their own privacy – indeed, as shown in the Vidal-Hall case, sometimes they just ride roughshod over people’s privacy – whilst keeping their own actions as well hidden as possible. Google’s algorithms remain almost entirely opaque – trade secrets – no matter how often they talk about transparency. At a conference on Friday discussing the ‘Right to be Forgotten’, I asked the Google representative why they hadn’t updated their examples of right to be forgotten cases for almost a year, and the response I got was terse to say the least. They don’t want us to know what they do – while they want to know everything about what we do.

Redressing the privacy imbalance

For me, one of the key roles of the law is to redress this imbalance – to find ways to protect the privacy of ordinary people, and prevent princes – old princes like Charles and new princes like Google – both from invading our privacy and from invoking their own privacy to hold onto their power. In both the Black Spider Letters case and Vidal-Hall vs Google the law seems to have done exactly that, and the courts in both cases should be applauded. Of course there’s a long way to go, and those with power can and do use every means they can to hold onto that power. I fully expect the Black Spider letters to be heavily redacted as and when we finally see them, and Google is apparently seeking permission to appeal the Vidal-Hall case to the Supreme Court.

They may well succeed. Even if they do, the two cases this last week should be seen as victories, and both Prince Charles and Google should be more than a little afraid. Holding onto their power may be a little harder than they thought. I hope so.

Today is #DigitalRightsMatter day – and yes, I know there are days for many things (including, despite the complaints from some, an International Men’s Day (November 19th)). I’m usually fairly cynical about these days – but they do serve a purpose – to focus minds on significant issues, and hopefully to find ways to actually do something about them. In this case, the issue is digital rights – one close to my heart – and the thing to do is to support the Open Rights Group (ORG).

I should say, right from the start, that I’m on the Advisory Council of ORG so I have something of a vested interest – but I’m only on the Advisory Council because I think what ORG does is of critical importance, particularly right now. Never has there been a time when digital rights have been more important, and never has there been a time when they have been more under threat. We use the internet for more and more things – from our work to our personal life, from our political activism to our entertainment, from finding jobs to finding romance. Indeed, there are pretty much no parts of our lives that are untouched by the internet – so what happens online, what happens to our digital freedoms and rights, is of ever increasing importance.

Now is when we need them

The threats that we face to our freedoms are growing at a seemingly exponential rate. Surveillance is almost everywhere, and the political pressure to increase it is frightening. Censorship, the other side of that authoritarian coin, is growing almost as fast – from more and more uses for ‘web-blocking’ to ‘porn’ filters that hide vastly more than porn, everything from critically important sex education websites to sites that discuss alcohol, anorexia and hate speech. David Cameron talks about banning encryption seemingly without any idea of what he’s talking about – or of the implications of his suggestions.

This last point highlights one of the reasons ORG is critically important right now. Politicians from all the mainstream parties seem to have very little grasp of how the internet works – and they reach for ‘easy’ solutions which win the right headlines in the tabloid press but are almost always counterproductive and authoritarian, and which perpetuate damaging myths that make things worse. The media, left to their own devices, also have a tendency to go for the easy headlines – and worse.

That’s one of the places that ORG comes in. It campaigns on these issues – current campaigns include ‘Don’t Spy On Us’ dealing with surveillance, Blocked! which looks at filtering, and 451 Unavailable which tries to bring transparency to the blocking of websites by court orders. It produces information that cuts through the confusion and makes sense of these issues – and tries to help politicians and the media to understand them more. And it works – ORG representatives are now quoted regularly in the media and when they make submissions to government inquiries they’re the ones who are given hearings and referred to in reports.

They do much more than this. They help with court cases, working with other excellent advocacy groups like Privacy International – the current challenge to the Data Retention and Investigatory Powers Act (DRIPA) is just one of many they’ve been involved in, and these cases really matter. They don’t always win – indeed, sadly they don’t win often – but they often force the disclosure of critical information, they sometimes bring about changes in the law, and they raise the profile of critical issues. ORG are also part of the important pan-European organisation EDRi, which brings together digital rights groups from all over Europe to even greater effect.

Now is when they need us

ORG, like other advocacy groups, regularly punches above its weight. It doesn’t have the massive resources of the government agencies and international corporations whose activities it often has to campaign against. There are no deep pockets at ORG, and no massive numbers of staff – they rely on donations, and on volunteers. That’s where #DigitalRightsMatter day comes in – ORG is trying to find new members, get more donations and gain access to more expertise. Can you help?