Tag Archives: ICO

Recent ICO decision notices show the Home Office and MoJ repeatedly simply failing to respond to FOI requests. Surely the time has come for ICO action?

The Information Commissioner’s Office (ICO) recently stated to me that they were not monitoring the Home Office’s and Ministry of Justice’s (MoJ) compliance with the statutory timescales required by section 10 of the Freedom of Information Act 2000 (FOIA).

This was despite the fact that they’d published decision notices about delays by those two government bodies which reported that “The delay in responding to this request will be logged as part of ongoing monitoring of the MoJ’s compliance with the FOIA”. This was not formal monitoring, I was told; rather, it was informal monitoring. Ah. Gotcha.

So what does trigger formal monitoring? Interestingly, the ICO’s own position on this has recently changed, and got a bit stricter. It’s generally meant to be initiated in the following circumstances:

our analysis of complaints received by the ICO suggests that we have received in the region of 4 to 8 or more complaints citing delays within a specific authority within a six month period

(for those authorities which publish data on timeliness) – it appears that less than 90% of requests are receiving a response within the appropriate timescales. [this used to be 85%]

Evidence of a possible problem in the media, other external sources or internal business intelligence.

Despite the apparent increase in robustness of approach, the ICO do not appear to be monitoring any public authorities at the moment. The last monitoring took place between May and July 2016 when Trafford Council were in their sights. Although they are not mentioned in the relevant report, an ICO news item from July last year says that the Metropolitan Police, who have been monitored off and on for a period of years without any real outward signs of improvement, were also still being monitored.

If they aren’t monitoring the compliance of any authorities at the moment, least of all the Home Office and the MoJ, one is led to wonder why, given the pattern in recent ICO decision notices involving those two authorities. Because, in 16 of the last 25 decision notices involving the Home Office, and 6 of the last 25 involving the MoJ, the ICO has formally found that the authority had still failed to respond to the FOI request in question by the time the decision notice was issued.

At this point, it might be helpful to explain the kind of chronology and process that would lead up to the issuing of such decision notices. First, a request must be made, and there will have been a failure by the authority to reply within twenty working days. Then, the requester will normally (before the ICO will consider the case) have had to ask for an internal review by the authority of its handling of the request. Then, the requester will have complained to the ICO. Then, the ICO will have normally made informal enquiries of the authority, effectively “geeing” them up to provide a response. Then, as still no response will have been sent, the ICO will have moved to issuing a formal decision notice. At any point in this process the authority could (and should) still respond to the original request, but no – in all of these cases (again – 16 of the last 25 Home Office decisions, 6 of the last 25 MoJ ones) the authorities have still not responded many months after the original request. Not only does this show apparent contempt for the law, but also for the regulator.

So why does the ICO not do more? I know many FOI officers (and their public authority employers) who work their socks off to make sure they respond to requests in a timely manner. In the absence of formal monitoring of (let alone enforcement action against) those authorities who seem to ignore their legal duties much of the time, those FOI officers would be forgiven for asking why they bother: it is to their credit that bother they still do.

Elizabeth Denham became Information Commissioner in July last year, bringing with her an impressive track record and making strong statements about enforcing better FOI compliance. Her first few months, with GDPR and Brexit to deal with, will not have been easy, and she could be forgiven for not having had the time to focus on FOI, but the pressing question now surely is “if not now, when?”

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.

Pre-ticked consent boxes and unsolicited emails from the Consumers’ Association

Which?, the brand name of the Consumers’ Association, publishes a monthly magazine. In an era of social media and online reviews, its mix of consumer news and product ratings might seem rather old-fashioned, but it is still (according to its own figures [1]) Britain’s best-selling monthly magazine. Its rigidly paywalled website means that one must generally subscribe to get at the magazine’s contents. That’s fair enough (although after my grandmother died several years ago, we found piles of unread, unopened even, copies of Which? She had apparently signed up to a regular Direct Debit payment, probably to receive a “free gift”, and had never cancelled it: so one might draw one’s own conclusion about how many of Which?’s readers are regular subscribers for similar reasons).

In line with its general “locked-down” approach, Which?’s recent report into the sale of personal data was, except for snippets, not easy to access, but it got a fair bit of media coverage. Intrigued, I bit: I subscribed to the magazine. This post is not about the report, however, although the contents of the report drive the irony of what happened next.

As I went through the online sign-up process, I arrived at that familiar type of page where the subject of future marketing is broached. Which? had headlined their report “How your data could end up in the hands of scammers” so it struck me as amusing, but also irritating, that the marketing options section of the sign-up process came with a pre-ticked box:

As guidance from the Information Commissioner’s Office makes clear, pre-ticked boxes are not a good way to get consent from someone to future marketing:

Some organisations provide pre-ticked opt-in boxes, and rely on the user to untick it if they don’t want to consent. In effect, this is more like an opt-out box, as it assumes consent unless the user clicks the box. A pre-ticked box will not automatically be enough to demonstrate consent, as it will be harder to show that the presence of the tick represents a positive, informed choice by the user.

The Article 29 Working Party goes further, saying in its opinion on unsolicited communications for marketing purposes that inferring consent to marketing from the use of pre-ticked boxes is not compatible with the data protection directive. By extension, therefore, any marketing subsequently sent on the basis of a pre-ticked box will be a contravention of the data protection directive (and, in the UK, the Data Protection Act 1998) and the ePrivacy directive (in the UK, the Privacy and Electronic Communications (EC Directive) Regulations 2003 (PECR)).

Notwithstanding this, I certainly did not want to consent to receive subsequent marketing, so, as well as making a smart-arse tweet, I unticked the box. However, to my consternation, if not my huge surprise, I have subsequently received several marketing emails from Which? They do not have my consent to send these, so they are manifestly in contravention of regulation 22 of PECR.

It’s not clear how this has happened. Could it be a deliberate tactic by Which? to ignore subscribers’ wishes? One presumes not: Which? says it “exists to make individuals as powerful as the organisations they deal with in their daily live [sic]” – deliberately ignoring clear expressions regarding consent would hardly sit well with that mission statement. So is it a general website glitch – which means that those expressions are lost in the sign-up process? If so, how many individuals are affected? Or is it just a one-off glitch, affecting only me?

Let’s hope it’s the last. Because the ignoring or overriding of expressions of consent, and the use of pre-ticked boxes for gathering consent, are some of the key things which fuel the trade in, and disrespect for, personal data. The fact that I’ve experienced this issue with a charity which exists to represent consumers, as a result of my wish to read their report into misuse of personal data, is shoddy, to say the least.

I approached Which? for a comment, and a spokesman said:

We have noted all of your comments relating to new Which? members signing up, including correspondence received after sign-up, and we are considering these in relation to our process.

I appreciate the response, although I’m not sure it really addresses my concerns.

[1] Which? Annual Report 2015/2016


A recent case in the Scottish Sheriff Court shows that CCTV and data protection can also have relevance in private law civil proceedings. In Woolley against Akbar [2017] ScotsSC 7 the husband and wife pursuers (equivalent to claimants in England and Wales) successfully brought a claim for compensation for distress caused by the defender’s (defendant in England and Wales) use of CCTV cameras which were continuously recording video and audio, and which were deliberately set to cover the pursuers’ private property (their garden area and the front of their home). Compensation was assessed at £8,634 for each of the pursuers (so £17,268 in total), with costs to be assessed at a later date.

Two things are of particular interest to data protection fans: firstly, the willingness of the court to rule unequivocally that CCTV operated in non-compliance with the DPA Schedule One principles was unlawful; and secondly, the award of compensation despite the absence of physical damage.

The facts were that Mr and Mrs Woolley own and occupy the upper storey of a dwelling place, while Mrs Akbar owns and operates the lower storey as a guest house, managed by her husband Mr Akram. In 2013 the relationship between the parties broke down. Both parties have installed CCTV systems, but while the pursuers’ system only monitors their own property, this was not the case with the defender’s:

[The defender did not take] any precautions to ensure that coverage of the pursuers’ property was minimised or avoided. The cameras to the front of the house record every person approaching the pursuers’ home. The cameras to the rear were set deliberately to record footage of the pursuers’ private garden area. There was no legitimate reason for the nature and extent of such video coverage. The nature and extent of the camera coverage were obvious to the pursuers, as they could see where the cameras were pointed. The coverage was highly intrusive…the defender also made audio recordings of the area around the pursuers’ property…they demonstrated an ability to pick up conversations well beyond the pursuers’ premises. There are four audio boxes. The rear audio boxes are capable of picking up private conversations in the pursuers’ rear garden. Mr Akram, on one occasion, taunted the pursuers about his ability to listen to them as the pursuers conversed in their garden. The defender and Mr Akram were aware of this at all times, and made no effort to minimise or avoid the said audio recording. The nature of the coverage was obvious to the pursuers. Two audio boxes were installed immediately below front bedroom windows. The pursuers feared that conversations inside their home could also be monitored. The said coverage was highly intrusive.

Although, after the intervention of the ICO, the defender realigned the camera at the rear of the property, Sheriff Ross held that the coverage “remains intrusive”. Fundamentally, the sheriff held that the CCTV use was: unfair (in breach of the first data protection principle); excessive in terms of the amount of data captured (in breach of the third data protection principle); and retained for too long (in breach of the fifth data protection principle).

The sheriff noted that, by section 13(2) of the DPA, compensation for distress can only be awarded if the pursuer has suffered “damage”, which was not the case here. However, the sheriff further correctly noted, and was no doubt taken to, the decision of the Court of Appeal in Vidal-Hall & Ors v Google [2015] EWCA Civ 311 in which the court struck down section 13(2) as being incompatible with the UK’s obligations under the European data protection directive and the Charter of Fundamental Rights (my take on Vidal Hall is here). Accordingly, “pure” distress compensation was available.

Although the facts here show a pretty egregious breach of the DPA, it is good to see a court understanding and assessing the issues so well, no doubt assisted in doing so by Paul Motion, of BTO Solicitors, who appeared for the pursuers.

One niggle I do have is about the role of the ICO in all this: they were clearly apprised of the situation, and could surely have taken enforcement action to require the stopping of the CCTV (although admittedly ICO cannot make an award of compensation). It’s not clear to me why they didn’t.


Anyone used to reading Freedom of Information Act 2000 (FOIA) decision notices from the Information Commissioner’s Office (ICO) will be familiar with this sort of wording:

The Commissioner has concluded that the public interest favours maintaining the exemption contained at section x(y) of FOIA. In light of this decision, the Commissioner has not gone on to consider the public authority’s reliance on section z(a) of FOIA.

In fact, a search on the ICO website for the words “has not gone on” throws up countless examples.

What lies behind this approach is this: a public authority, in refusing to disclose recorded information, is entitled to rely on more than one of the FOIA exemptions, because information might be exempt under more than one. An obvious example would be where information exempted from disclosure for the purposes of safeguarding national security (section 24 FOIA) would also be likely to be exempt under section 31 (law enforcement).

One assumes that the ICO does this for pragmatic reasons – if information is exempt it’s exempt, and application of a further exemption in some ways adds nothing. Indeed, the ICO guidance for public authorities advises:

you [do not] have to identify all the exemptions that may apply to the same information, if you are content that one applies

Now, this is correct as a matter of law (section 78 of FOIA makes clear that, as a general principle, reliance by public authorities upon the Act’s exemptions is discretionary), and the ICO’s approach when making decisions is understandable, but it is also problematic, and a recent case heard by the Information Tribunal illustrates why.

In Morland v IC & Cabinet Office (EA/2016/0078) the Tribunal was asked to determine an appeal from Morland, after the Cabinet Office had refused to disclose to him minutes of the Honours and Decorations Committee, and after the ICO had upheld the refusal. As the Tribunal noted

The Cabinet Office refused the Appellant’s information request in reliance upon s. 37 (1) (b) and s. 35 (1) (a) of the Freedom of Information Act 2000 (“FOIA”) [and the ICO] Decision Notice found (at paragraph 13) that the exemption under s. 37 (1) (b) was engaged by the request and (at paragraph 25) that the public interest favoured maintaining the exemption “by a narrow margin”. The Decision Notice expressly did not consider the Cabinet Office’s reliance on s. 35 (1) (b). [emphasis added]

The problem arose because the Tribunal found that, pace the ICO’s decision, the exemption at section 37(1)(b) was not engaged (because that section creates an exemption to disclosure if the information relates to the conferring by the crown of an honour or dignity, and the information request related to whether an entirely new honour should be created). But what of the exemption at s35(1)(b)? Well, although it would not always be the case in similar circumstances, here the Tribunal and the parties were in a bind, because, as the Tribunal said

We are left with a situation where, as the Decision Notice did not reach a conclusion on that issue, none of the parties appear to have regarded s. 35 (1) (a) as being seriously in play in this appeal, with the effect that we have received limited argument on that issue

There is no power to remit a decision to the ICO (see IC v Bell [2014] UKUT 0106 (AAC), considered in a Panopticon blog post here), so the Tribunal had to make findings in relation to s35, despite a “concern whether it is right to do so”. On the expressly limited evidence before it, it found that the exemption was not engaged at the time of the request and, accordingly, upheld Morland’s appeal, saying that it

[regarded] the failure of the Decision Notice to determine a key issue between the parties as rather unsatisfactory

Whether this will lead the ICO to revisit its apparent policy of, at least at times, focusing on only one of multiple claimed exemptions remains to be seen. It’s not often that I have sympathy with the Cabinet Office when it comes to matters of FOIA, but there is a modicum here.

Nonetheless, I think what this case does suggest is that a public authority should, when faced with an appeal of an ICO Decision Notice upholding a FOIA refusal, give strong consideration to whether it needs to be joined to the appeal (as, admittedly, the Cabinet Office was here) and to make sure that its response to the appeal (under part 27 of the Tribunal Rules) fully deals with all applicable exemptions, notwithstanding the contents of the Decision Notice. In this way, the Tribunal can, where necessary, take as fully-apprised a decision as possible on all of those exemptions.


[Edited to add: it is well worth reading the comments to this piece, especially the ones from Chris Pounder and Reuben Binns]

I needed a way to break a blogging drought, and something that was flagged up to me by a data protection colleague (thanks Simon!) provides a good opportunity to do so. It suggests that the drafting of the GDPR could lead to an enormous workload for the ICO.

The General Data Protection Regulation (GDPR) which entered into force on 24 May this year, and which will apply across the European Union from 25 May 2018, mandates the completion of Data Protection Impact Assessments (DPIAs) where indicated. Article 35 of the GDPR explains that

Where a type of processing in particular using new technologies, and taking into account the nature, scope, context and purposes of the processing, is likely to result in a high risk to the rights and freedoms of natural persons, the controller shall, prior to the processing, carry out an assessment of the impact of the envisaged processing operations on the protection of personal data

In the UK (and indeed elsewhere) we already have the concept of “Privacy Impact Assessments”, and in many ways all that the GDPR does is embed this area of good practice as a legal obligation. However, it also contains some ancillary obligations, one of which is to consult the supervisory authority, in certain circumstances, prior to processing. And here is where I get a bit confused.

Article 36 provides that

The controller shall consult the supervisory authority prior to processing where a data protection impact assessment under Article 35 indicates that the processing would result in a high risk in the absence of measures taken by the controller to mitigate the risk [emphasis added]

A close reading of Article 36 results in this: if the data controller conducts a DPIA, and is of the view that if mitigating factors were not in place the processing would be high risk, it will have to consult the supervisory authority (in the UK, the Information Commissioner’s Office (ICO)). This is odd: it effectively renders any mitigating measures irrelevant. And it appears directly to contradict what recital 84 says:

Where a data-protection impact assessment indicates that processing operations involve a high risk which the controller cannot mitigate by appropriate measures in terms of available technology and costs of implementation, a consultation of the supervisory authority should take place prior to the processing [emphasis added]

So, the recital says the obligation to consult will arise where high risk is involved which the controller can’t mitigate, while the Article says the obligation will arise where high risk is involved notwithstanding any mitigation in place.

Clearly, the Article contains the specific legal obligation (the recital purports to set out the reason for the contents of the enacting terms), so the law will require data controllers in the UK to consult the ICO every time a DPIA identifies an inherently high risk processing activity, even if the data controller has measures in place fully to mitigate and contain the risk.

For example, let us imagine the following processing activity – collection of and storage of customer financial data for the purposes of fulfilling a web transaction. The controller might have robust data security measures in place, but Article 36 requires it to consider “what if those robust measures were not in place? would the processing be high risk?” To which the answer would have to be “yes” – because the customer data would be completely unprotected.

In fact, I would submit, if Article 36 is given its plain meaning, virtually any processing activity involving personal data, where there is an absence of mitigating measures, would be high risk, and create a duty to consult the ICO.

What this will mean in practice remains to be seen, but unless I am missing something (and I’d be delighted to be corrected if so), the GDPR is setting the ICO and other supervisory authorities up for a massive influx of work. With questions already raised about the ICO’s funding going forward, that is the last thing they are likely to need.


“Don’t panic” says David Smith to those data controllers who are currently relying on Safe Harbor as a means of ensuring that personal data transferred by them to the United States has adequate protection (in line with the requirements of Article 25 of the European Data Protection Directive, and the eighth principle of schedule one of the UK’s Data Protection Act 1998 (DPA)). He is referring, of course, to the recent decision of the Court of Justice of the European Union in Schrems. Data controllers should, he says, “take stock” and “make their own minds up”:

businesses in the UK don’t have to rely on Commission decisions on adequacy. Although you won’t get the same degree of legal certainty, UK law allows you to rely on your own adequacy assessment. Our guidance tells you how to go about doing this. Much depend [sic] here on the nature of the data that you are transferring and who you are transferring it to but the big question is can you reduce the risks to the personal data, or rather the individuals whose personal data it is, to a level where the data are adequately protected after transfer? The Safe Harbor can still play a role here.

Smith also refers to a recent statement by the Article 29 Working Party – the grouping of representatives of the various European data protection authorities, of which he is a member – and refers to “the substance of the statement being measured, albeit expressed strongly”. What he doesn’t say is how unequivocal it is in saying that

transfers that are still taking place under the Safe Harbour decision after the CJEU judgment are unlawful

And this is particularly interesting because, as I discovered today, the ICO itself appears (still) to be making transfers under Safe Harbor. I reported a nuisance call using its online tool (in doing so I included some sensitive personal data about a family member) and noticed that the tool was operated by SnapSurveys. The ICO’s own website privacy notice says

We collect information volunteered by members of the public about nuisance calls and texts using an online reporting tool hosted by Snap Surveys. This company is a data processor for the ICO and only processes personal information in line with our instructions.

This does not unambiguously say that SnapSurveys are transferring the personal data of those submitting reports to the ICO to the US under Safe Harbor – it is possible that the ICO has set up some bespoke arrangement with its processor, under which they process that specific ICO data within the European Economic Area – but it strongly suggests it.

It is understandable that a certain amount of regulatory leeway and leniency be offered to data controllers who have relied on Safe Harbor until now – to that extent I agree with the light-touch approach of the ICO. But if it is really the case that people’s personal data are actually being transferred by the regulator to the US, three weeks after the CJEU struck down the European Commission’s 2000 decision that Safe Harbor provided adequate protection, serious issues arise. I will be asking the ICO for confirmation about this, and whether, if it is indeed making these transfers, it has undertaken its own adequacy assessment.


Over a year ago I blogged about a tweet by a member of the Oyston family connected with Blackpool FC:

a fan replies to a news item about the club’s manager, and calls the Oyston family “wankers”. Sam Oyston responds by identifying the seat the fan – presumably a season-ticket holder – occupies, and implies that if he continues to be rude the ticket will be withdrawn

For the reasons in that post I thought this raised interesting, and potentially concerning, data protection issues, and I mentioned that the Information Commissioner’s Office (ICO) had powers to take action. It was one of (perhaps the) most read posts (showing, weirdly, that football is possibly more of interest to most people than data protection itself) and it seemed that some people did intend complaining to the ICO. So, recently, I made an FOI request to the ICO for any information held by them concerning Blackpool FC’s data protection compliance. This was the reply

We have carried out thorough searches of the information we hold and have identified one instance where a member of the public raised concerns with the ICO in September 2014, about the alleged processing of personal data by Blackpool FC.

We concluded that there was insufficient evidence to consider the possibility of a s55 offence under the Data Protection Act 1998 (the DPA), and were unable to make an assessment as the individual had not yet raised their concerns with Blackpool FC direct. We therefore advised the individual to contact the Club and to come back to us if they were still concerned, however we did not hear from them again. As such, no investigation took place, nor was any assessment made of the issues raised.

This suggests the ICO appears wrongly to consider itself unable to undertake section 42 assessments under the Data Protection Act 1998 unless the data subject has complained to the data controller – a stance strongly criticised by Dr David Erdos on this blog, and one which has the potential to put the data subject further in dispute with the data controller (as I can imagine could have happened here, with a family some of whose members are ready to sue to protect their reputation). It also suggests though that maybe people weren’t quite as interested as the page views suggested. Nonetheless, I am posting this brief update, because a few people asked about it.


Some time ago I complained to the Information Commissioner’s Office (ICO) about the innuendo carried in the message that Google serves with search results on most personal names: “Some results may have been removed under data protection law in Europe”. I had already complained to Google UK, and wrote about it here. Google UK denied any responsibility or liability, and referred me to their enormous, distant parent at 1600 Amphitheatre Parkway. I think they were wrong to do so, in light of the judgment of the Court of Justice of the European Union in the Google Spain case (C‑131/12), but I will probably pursue that separately.

However, section 42 of the Data Protection Act 1998 (DPA) allows me to ask the ICO to assess whether a data controller has likely or not complied with its obligations under the DPA. So that’s what I did (pointing out that a search on “Jon Baines” or “Jonathan Baines” threw up the offending message).

In her response the ICO case officer did not address the jurisdiction point which Google had produced, and nor did she actually make a section 42 assessment (in fairness, I had not specifically cited section 42). What she did say was this

As you know, the Court of Justice of the European Union judgement in May 2014 established that Google was a data controller in respect of the processing of personal data to produce search results. It is not in dispute that some of the search results do relate to you. However, it is also clear that some of them will relate to other individuals with the same name. For example, the first result returned on a search on ‘Jonathan Baines’ is ‘LinkedIn’, which says in the snippet that there are 25 professionals named Jonathan Baines, who use LinkedIn.

It is not beyond the realms of possibility that one or more of the other individuals who share your name have had results about them removed. We cannot comment on this. However, we understand that this message appears in an overwhelming majority of cases when searching on any person’s name. This is likely to be regardless of whether any links have actually been removed.

True, I guess. Which is why I’ve reverted with this clarification of my complaint:

If it assists, and to extend my argument and counter your implied question “which Jon Baines are we talking about?”, if you search < “Jon Baines” Information Rights and Wrongs > (where the search term is actually what lies between the < >) you will get a series of results which undoubtedly relate to me, and from which I can be identified. Google is processing my personal data here (that is an unavoidable conclusion, given the ruling by the Court of Justice of the European Union in “Google Spain” (Case C‑131/12)). The message “Some results may have been removed under data protection law in Europe” appears as a result of the processing of my personal data, because it does not appear on every search (for instance < prime minister porcine rumours > or < “has the ICO issued the cabinet office an enforcement notice yet” >). As a product of the processing of my personal data, I argue that the message relates to me, and constitutes my personal data. As it carries an unfair innuendo (unfair because it implies I might have asked for removal of search results) I would ask that you assess whether Google have or have not likely complied with their obligation under section 4(4) to comply with the first and fourth data protection principles. (Should you doubt the innuendo point, please look at the list of results on a Twitter search for “Some results may have been removed”).

Let’s hope this allows the ICO to make the assessment, without my having to consider whether I need to litigate against one of the biggest companies in world history.

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.

One of the options open to the Information Commissioner’s Office (ICO), when considering whether to take enforcement action under the Data Protection Act 1998 (DPA) is – as an alternative to such action – to invite an offending data controller to sign an “undertaking”, which will in effect informally commit it to taking, or desisting from, specified actions. An undertaking is a relatively common event (there have been fifty over the last year) – so much so that the ICO has largely stopped publicising them (other than uploading them to its website) – very rarely is there a press release or even a tweet.

There is a separate story to be explored about both ICO’s approach to enforcement in general, and to its approach to publicity, but I thought it was worth highlighting a rather remarkable undertaking uploaded to the ICO’s site yesterday. It appears that the airline Flybe reported itself to the ICO last November, after a temporary employee managed to scan another individual’s passport, and email it to his (the employee’s) personal email account. The employee in question was in possession of an “air side pass”. Such a pass allows an individual to work unescorted in restricted areas of airports and clearly implies a level of security clearance. The ICO noted, however, that

Flybe did not provide data protection training for all staff members who process personal data. This included the temporary member of staff involved in this particular incident…

This is standard stuff for DPA enforcement: a lack of training for staff handling personal data will almost always land the data controller in hot water if something goes wrong. But it’s what follows that strikes me as remarkable:

the employee accessed various forms of personal data as part of the process to issue air side passes to Flybe’s permanent staff. This data included copies of passports, banking details and some information needed for criminal record background checks. The Commissioner was concerned that such access had been granted without due consideration to carrying out similar background checks to those afforded to permanent employees. Given the nature of the data to which the temporary employee had access, the Commissioner would have expected the data controller to have had some basic checking controls in place.

Surely this raises concerns beyond the data protection arena? Data protection does not exist in isolation from a broader security context. If basic checking controls really were not in place for Flybe’s temporary employees who handled such data, might that not also have implications for national security?


The politics.co.uk site reports that an anti-EU umbrella campaign called Leave.EU (or is it theknow.eu?) has been written to by the Information Commissioner’s Office (ICO) after the campaign allegedly sent unsolicited emails to people who appear to have been “signed up” by friends or family. The campaign’s bank-roller, UKIP donor Aaron Banks, reportedly said

We have 70,000 people registered and people have been asked to supply 10 emails of friends or family to build out (sic) database

Emails sent to those signed up in this way are highly likely to have been sent in breach of the campaign’s obligations under the Privacy and Electronic Communications (EC Directive) Regulations 2003 (PECR), and the ICO is reported to have written to the campaign to

inform them of their obligations under the PECR and to ask them to suppress [the recipient’s] email address from their databases

But is this really the main concern here? Or, rather, should we (and the ICO) be asking what on earth is a political campaign doing building a huge database of people, and identifying them as (potential) supporters without their knowledge? Such concerns go to the very heart of modern privacy and data protection law.

Data protection law’s genesis lies, in part, in the post-war desire of European nations to ensure “a foundation of justice and peace in the world”, as the preamble to the European Convention on Human Rights puts it. The first recital to the European Community Data Protection Directive of 1995 makes clear the importance of those fundamental rights to data protection law.

The Directive is, of course, given domestic effect by the Data Protection Act 1998 (DPA). Section 2 of the DPA provides that information as to someone’s political opinions is her sensitive personal data: I would submit that presence on a database purporting to show that someone supports the UK’s withdrawal from the European Union is also her personal data. Placing someone on that database, without her knowledge or ability to object, will be manifestly “unfair” when it comes to compliance with the first data protection principle. It may also be inaccurate, when it comes to compliance with the fourth principle.

I would urge the ICO to look much more closely at this – the compiling of (query inaccurate) secret databases of people’s political opinions has very scary antecedents.
