from the yeah,-sure dept

For years, Techdirt has been following the ridiculous proposal to extend EU copyright even further to cover tiny snippets from articles. The idea has already been tried twice in the European Union, and it failed dismally on both occasions. In Spain, a study showed the move there caused serious economic damage, especially to smaller companies; German publishers tacitly admitted the law was pointless when they granted Google a free license to use snippets from their titles. More recently, the European Commission's own research confirmed that, far from harming publishers, news aggregators have a positive impact on the industry's advertising revenue. Despite the clear indications that a snippet tax is a terrible idea, some want to go even further and make it apply to hyperlinks too. Writing in the French newspaper Le Monde back in December, large news agencies including Germany's DPA and France's AFP complained that sites:

offer internet users the work done by others, the news media, by freely publishing hypertext links to their stories. […] Solutions must be found. […] We strongly urge our governments, the European parliament and the commission to proceed with this directive.

Now EU publishers have weighed in on the snippet tax, formally known as Article 11 of the proposed Copyright Directive. Their latest position paper, embedded below, makes a confession:

We acknowledge that concerns have been raised that Article 11 as proposed by the Commission may have a negative effect on the legitimate personal non-commercial use of excerpts from press publications by a natural person by way of hyperlinking or sharing.

But there's no reason to worry, they say, for the following reason:

However, we would like to emphasize that it is in publishers' interest to make their products available as widely as possible, on as many platforms as possible and this is why publishers themselves encourage their readers to share articles and news on social media for free.

In other words: trust us, we won't misuse a new right to forbid anyone from sharing even tiny snippets. Except, of course, copyright holders have repeatedly abused their intellectual monopoly to censor material, in precisely this way. EU publishers want this new right to block snippets to apply even to single words:

We therefore question the necessity of introducing in the new [EU] Presidency's compromise text, a reduction of the scope of protection granted to press publishers to acts of reproduction and making available to the public performed by "service providers" and excluding "individual words or very short extracts of text".

They also want to extend the scope of the snippet ban:

In our view it is essential that any commercial entity or organisation, regardless of their business model, including those currently licensed by press publishers, exclusively or collectively, continues to be within scope of protection. Typically these organisations can be aggregators, media monitoring and press clipping agencies, individual companies, or public institutions.

This isn't just about making search engines pay for the privilege of using snippets of text: it would include every company, of whatever size, and every public body, however meritorious or altruistic its activities, that uses them. The new position paper is important because it makes clearer than ever before that the snippet tax is not about stopping a few big players like Google from indexing stories from publications. After all, that could easily be achieved by blocking their crawlers with a robots.txt file. Article 11 is about something much bigger. It is the latest expression of the publishing industry's apparently infinite sense of entitlement -- that it has a right to control even "individual words or very short extracts of text" used by "any commercial entity or organisation, regardless of their business model", as the document puts it. The egotism of publishers is so monstrous that they don't even care if achieving this insane level of control over the Internet goes against their own economic interests, as the evidence shows it will. Power, it seems, is more important than profits.
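
To underline how simple that opt-out is: a publisher that genuinely wanted to keep Google away could do it with a couple of lines in its robots.txt file. A minimal sketch (Googlebot-News and Googlebot are Google's documented crawler names; a real site's file would be tailored to its own layout):

    # Keep stories out of Google News
    User-agent: Googlebot-News
    Disallow: /

    # Or shut out Google's main web crawler entirely
    User-agent: Googlebot
    Disallow: /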

Should the EU introduce an extra copyright for news sites, restricting how we can share news online? The controversy around this plan continues to brew – this time in the Council, where the member state governments are trying to find a consensus.

[...]

The Bulgarian Council Presidency is pushing what it calls a new compromise, instead of the choice of two options that their Estonian predecessors offered.

But upon closer investigation, the “compromise” looks mighty familiar: With exceptions for very short snippets and non-commercial use by individuals, as well as a shorter protection term than the Commission wanted, it looks much like the current German “ancillary copyright”, which almost all experts agree has been an abject failure.

The failure of snippet taxes/Google taxes is well documented, but never seems to deter further legislative efforts in the same direction. Google reacted to the initiative by dropping snippets from German news agencies, a move that produced a noticeable drop in traffic. German publishers called it "blackmail," but the simplest way to comply with bad laws is to opt out. Similar things happened in Spain with its snippet tax. Google nuked its local Google News service, resulting in affected publishers demanding the government force Google to re-open the service and start sending them traffic/money.

This push in the EU Commission for a snippet tax deliberately ignores research showing link taxes don't work, harm publishers, and are opposed by many of the journalists who would supposedly benefit from them. This is more than cherry-picking facts to support a Google tax. Pirate Party EU Parliament member Julia Reda (who wrote the post quoted above) previously uncovered reports the Commission tried to bury, including one that showed news aggregation services like Google News were a net benefit for listed publications.

At this point, it looks as though some form of snippet tax will eventually become EU law. Only half of the member countries oppose snippet taxes, and only a few of those are actively fighting the proposal. If it does become law, it won't work out the way publishers believe it will. Instead, it will harm smaller publishers and smaller aggregators, resulting in a consolidation of power for the largest publishers and platforms. The EU has no leverage in this battle. Google won't hang around for long if the situation is unprofitable, and publishers will have to settle for taxing Yahoo, Bing, etc. for whatever traffic those search engines manage to send their way.

from the new-EU-law-will-soon-make-that-possible-anyway dept

Last November we reported on the legal opinion of one of the Advocates General who advise the EU's top court, the Court of Justice of the European Union (CJEU). It concerned yet another case brought by the data protection activist and lawyer Max Schrems against Facebook, which he claims does not follow EU privacy laws properly. There were two issues: whether Schrems could litigate against Facebook in his home country, Austria, and whether he could join with 25,000 people to bring a class action against the company. The Advocate General said "yes" to the first, and "no" to the second, and in its definitive ruling, the CJEU has agreed with both of those views (pdf). Here's what Schrems has to say on the judgment (pdf):

The Court of Justice of the European Union (CJEU) confirms that Max Schrems can litigate in Vienna against Facebook for violation of EU privacy rules. Facebook's attempt to block the privacy lawsuit was not successful.

However, today the CJEU gave a very narrow definition of the notion of a "consumer", which deprives many consumers of consumer protection and also makes an Austrian-style "class action" impossible.

The rest of his press release gives background details of the case. Schrems explains why being able to bring a class action in Austria is important:

If a multinational knows that they cannot win a case, they try to find reasons so that a case is not admissible, or they try to squeeze a plaintiff out of the case by inflating the costs. Facebook wanted to ensure that the case can only be heard in Dublin [where its EU headquarters are located], as Ireland does not have any class action and litigating even one model claim would cost millions of Euros in legal fees. In this case we'd have a valid claim, but it would be basically unenforceable.

The ECJ's finding has caused outrage among consumer groups that have campaigned for years for the [European] Commission to propose legislation allowing for EU-level class action lawsuits involving complainants from different member states. They argue that collective redress will make it easier and cheaper for consumers to sue.

The same article notes that Schrems and the consumer groups may soon get their wish: the European Commission is expected to unveil proposals for a new law that will allow collective redress to be sought across the EU. Even if that does happen, it's likely to take years to implement. Before then, Facebook has many other problems it needs to confront. First, there is Schrems' personal suit against the company, which can now proceed in the Austrian courts. As he points out:

Facing a lawsuit, which questions Facebook's business model, is a huge risk for the company. Any judgement in Austria is directly enforceable at Facebook's Irish headquarters and throughout Europe.

That is, if Schrems wins his case, other EU citizens will be able to use the judgment to sue Facebook more easily. And Facebook may have headed off the threat of a class action under existing law, but the EU's new General Data Protection Regulation (GDPR), which will be enforced from May of this year, explicitly allows non-profit organizations to sue on behalf of individuals. Article 80 of the GDPR says:

The data subject shall have the right to mandate a not-for-profit body, organisation or association which has been properly constituted in accordance with the law of a Member State, has statutory objectives which are in the public interest, and is active in the field of the protection of data subjects' rights and freedoms with regard to the protection of their personal data to lodge the complaint on his or her behalf

Germany is threatening curbs on how Facebook amasses data from millions of users in what would be an unprecedented intervention in the social network's business model.

Andreas Mundt, head of Germany's main antitrust agency, the Federal Cartel Office, said Facebook could be banned from collecting and processing third-party user data as one possible outcome of an investigation that in December concluded the US technology group was abusing its dominant market position.

If Germany goes ahead with these plans, it will drastically reduce the scope for Facebook to make money by using consolidated data about its users to sell advertising space, and may well encourage other EU nations to follow suit.

from the beware-the-innovations-you-kill dept

We've written a few times about the GDPR -- the EU's General Data Protection Regulation -- which was approved two years ago and is set to go into force on May 25th of this year. There are many things in there that are good to see -- in large part improving transparency around what some companies do with all your data, and giving end users some more control over that data. Indeed, we're curious to see how the inevitable lawsuits play out, and whether they will lead companies to be more considerate in how they handle data.

However, we've also noted, repeatedly, our concerns about the wider impact of the GDPR, which appears to go way too far in some areas, where decisions that may have made sense in a vacuum could have massive unintended consequences. We've already discussed how the GDPR's codification of the "Right to be Forgotten" is likely to lead to mass censorship in the EU (and possibly around the globe). That fear remains.

But it's also becoming clear that some potentially useful innovation may not be able to work under the GDPR. A recent NY Times article detailing how various big tech companies are preparing for the GDPR has a throwaway paragraph in the middle that highlights an example of this potential overreach. Specifically, Facebook is using AI to try to detect whether someone is planning to harm themselves... but it won't launch that feature in the EU out of a fear that it would breach the GDPR as it pertains to "medical" information. Really.

Last November, for instance, the company unveiled a program that uses artificial intelligence to monitor Facebook users for signs of self-harm. But it did not open the program to users in Europe, where the company would have had to ask people for permission to access sensitive health data, including about their mental state.

Now... you can argue that this is actually a good thing. Maybe we don't want a company like Facebook delving into our mental states. You can probably make a strong case for that. But... there's also something to the idea of preventing someone who may harm or kill themselves from doing so. And that's something that feels like it was not considered much by the drafters of the GDPR. How do you balance these kinds of questions, where there are certain innovations that most people probably want, and which could be incredibly helpful (indeed, potentially saving lives), but which don't fit with how the GDPR is designed to "protect" data privacy? Is data protection in this context more important than the life of someone who is suicidal? These are not easy calls, but it's not clear at all that the drafters of the GDPR even took these tradeoff questions into consideration -- and that should worry those of us who are excited about potential innovations to improve our lives, and who worry about what may never see the light of day because of these rules.

That's not to say that companies should be free to do whatever they want. There are, obviously, LOTS of reasons to be concerned and worried about just how much data some large companies are collecting on everyone. But it frequently feels like people are acting as if any data collection is bad, and thus needs to be blocked or stopped, without taking the time to recognize just what kind of innovations we may lose.

from the this-is-not-helping dept

David Kaye, a law professor who has also been the UN's Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression (quite the title!), has penned a very interesting article for Foreign Affairs (possibly behind a paywall or registration wall) about how Europe's recent attempts to regulate the internet are now a major threat to free speech. It talks about many issues we've written about, from the awful Right to be Forgotten cases to efforts to fine internet platforms if they don't magically disappear hate speech. While telling internet platforms to "fix it" may feel good, the reality is that it doesn't work, creates more problems, and gives those platforms even more power as the de facto speech police (something we should all be worried about). As Kaye writes:

In September of this year, the commission doubled down on these principles, adopting a formal communication that urges “online platforms to step up the fight against illegal content.” As with the right to be forgotten, the communication puts the companies themselves in the position of identifying, especially through the use of algorithmic automation, illegal content posted to their platforms. But, as Daphne Keller of Stanford’s Center for Internet and Society has argued, the idea that automation can solve illegal content problems without sweeping in vast amounts of legal content is fantasy. Machines typically fail to account for satire, critique, and other kinds of context that turn superficial claims of illegality into fully legitimate content. Automation thus involves a disproportionate takedown of legal content all to target a smaller amount of illegal material online. As a matter of law, as attorney and legal analyst Graham Smith noted, the commission process reverses the normal presumptions of legality in favor of illegality, with safeguards so weak that companies will likely err on the side of taking down content.

The communication expressly avoids the problem of disinformation and propaganda. But regulation of such content may also be on the horizon, as the commission has announced creation of a High-Level Group to address it. Even the staunchest promoters of freedom of expression in European politics recognize that disinformation is a major problem. Marietje Schaake, a Dutch member of the European Parliament and a leading proponent of respect for human rights in Europe, captured a widespread view on the continent when she said in parliamentary debate that she is “not reassured when Silicon Valley or Mark Zuckerberg are the de facto designers of our realities or of our truths.”

He goes on to point out a number of other problematic attacks on free speech in the name of "regulating the internet" -- including the EU's attempt at copyright reform. He concludes by noting we should be very, very concerned about how this will play out for free speech:

These rules should concern anyone who cares about freedom of expression, as they involve limitations on European uses of online platforms. European policymakers have good faith reasons to advocate them, such as countering rampant abuse at a time of human dislocation, political instability, and rise of far-right parties. Yet the tools used often risk overregulation, incentivizing private censorship that could undermine public debate and creative pursuits. Companies may be forced into the position of facilitating practices that undermine their customers’ access to information. Europeans should be concerned, as many are.

Why should anyone else care? In the analog era, after all, a fair response in the United States to speech regulation across the pond (or anywhere else) might have been: that’s the way they do it in Europe. They have different experiences, giving some support (if very limited) to rules that U.S. courts would never permit—such as those against Holocaust denial or the glorification of terrorism.

But online space is different. All of the major companies operate at scale, and there is significant risk that troubling content regulations in Europe will seep into global corporate practices with an impact on the uses of social media and search worldwide. The possibility of global delinking of search results may be the most obvious form of content threat, but all of the rules and proposals noted above may slowly move to undermine freedom of expression. For instance, once a company invests the considerable funding required to develop sophisticated content filters for European markets, the barriers to applying them in American contexts are likely to come down.

There's much more in Kaye's piece and I highly recommend reading the whole thing.

One important point in all of this, which is noted in the quoted section above, but is worth repeating: most of the people pushing for these laws are not doing so with the intention of suppressing speech. Some may be doing so, but most of them really do have good intentions. The problem is that if you don't live in the world of free speech (or spend time watching how speech is stifled in various ways) it's often hard to realize how your "perfectly reasonable" attempt to stop some form of "bad" speech can be turned around and twisted, stretched and abused to silence all sorts of important speech. Even worse, it's very difficult for many people proposing these rules to comprehend that their attempts to, say, silence Nazis online, will almost certainly be used to silence the most vulnerable. Tools to suppress speech are frequently used against the powerless, especially when they try to speak out against the powerful.

The fact that so many new regulations in Europe (not to mention elsewhere) are being pushed with little concern for the wider impact on free expression should concern us all.

from the if-it-looks-like-a-duck,-swims-like-a-duck,-and-quacks-like-a-duck,-then-it-prob dept

Uber is a company that provokes strong emotions, as numerous stories on Techdirt indicate. Uber has been involved in some pretty bad situations, including inappropriate behavior, special apps to hide from regulators, and massive leaks of customer information. Despite this, it is undeniable that millions of people around the world love the convenience and competitive pricing of its service.

Equally, traditional taxi services dislike it for the way Uber flouts transport regulations that they obey, which is fair enough, and hate it for the way Uber challenges their often lazy monopolies, which is not. This has led to some appalling violence in some countries, as well as numerous legal actions. One of those, instituted by a professional taxi drivers' association in Spain, has resulted in a case before the EU's highest court (pdf), the Court of Justice of the European Union (CJEU), which has just ruled as follows:

the Court declares that an intermediation service such as that at issue in the main proceedings, the purpose of which is to connect, by means of a smartphone application and for remuneration, non-professional drivers using their own vehicle with persons who wish to make urban journeys, must be regarded as being inherently linked to a transport service and, accordingly, must be classified as 'a service in the field of transport' within the meaning of EU law.

The CJEU's reasoning was that Uber is more than a simple intermediation service. Its smartphone app is "indispensable" for the process of agreeing to deals between the driver and the customer, and Uber exercises "decisive influence over the conditions under which the drivers provide their service." As a result, the CJEU ruled that Uber is not "an information society service", but a "service in the field of transport", and may therefore be regulated just like traditional taxi services.

In practice, this means that Uber will be able to operate in the EU, but will be unable to continue with its swashbuckling approach that has seen it ignore many traditional requirements for taxi services. That result will be important for its knock-on effect on other services offered as part of the so-called "sharing economy". In fact, these are better described as new kinds of rental services, and like Uber they have often skirted around existing laws that cover their field of operation. The CJEU ruling, which can't be appealed, is likely to mean that other companies using online technology to provide such services will also need to obey relevant EU laws.

The protection offered by the E-Commerce Directive is a hot topic right now, one which necessarily involves the UK. However, with the UK due to leave the EU at 11pm local time on Friday, 29 March 2019, it will then be free to make its own laws. It’s now being suggested that as soon as Brexit happens, the UK should introduce new laws that hold tech companies liable for “illegal content” that appears on their platforms.

The advice can be found in a new report published by the Committee on Standards in Public Life. Titled “Intimidation in Public Life”, the report focuses on the online threats and intimidation experienced by Parliamentary candidates and others.

The report summary, thankfully, puts all the bad news up front. Following a list of (terrible) recommendations, the report quotes government officials wringing their hands about the fact that terrible people exist.

Lord Bew, Chair of the Committee, said:

This level of vile and threatening behaviour, albeit by a minority of people, against those standing for public office is unacceptable in a healthy democracy. We cannot get to a point where people are put off standing, retreat from debate, and even fear for their lives as a result of their engagement in politics. This is not about protecting elites or stifling debate, it is about ensuring we have a vigorous democracy in which participants engage in a responsible way which recognises others’ rights to participate and to hold different points of view.

Lord Bew's beef should be with the vile and threatening minority of people, rather than with platforms. But of course it isn't. Like many government officials, Bew believes the liable party should be the host of the offending content, rather than the actual offenders. The Committee wants to handle questionable content in the most ineffectual and dangerous way possible:

Government should bring forward legislation to shift the liability of illegal content online towards social media companies.

And if that's not dumb enough, there's this recommendation, which is sure to have a damaging effect on political speech -- normally the sort of thing governments should strive to protect.

Government should consult on the introduction of a new offence in electoral law of intimidating Parliamentary candidates and party campaigners.

Cool. A brand new #PoliticalLivesMatter law that won't be abused by every thin-skinned politician who finds criticism intimidating.

And that's not all. As TorrentFreak notes, the report also suggests adding new intermediary liability for things having nothing to do with "intimidation in public life."

“Currently, social media companies do not have liability for the content on their sites, even where that content is illegal. This is largely due to the EU E-Commerce Directive (2000), which treats the social media companies as ‘hosts’ of online content. It is clear, however, that this legislation is out of date,” the report reads.

If this goes through, platforms could be held liable for IP infringement and defamation, even though those acts were committed by platform users.

Fortunately, these are still just recommendations. There's no telling how this all will work out when the UK's divorce is official. But past actions by the UK government hardly raise hope it won't decide to go after the largest targets, rather than the proper targets, when finally free of EU regulations.

from the definition-of-insanity dept

Nine European press agencies, including AFP, called Wednesday for internet giants to be forced to pay copyright fees for using news content on which they make vast profits.

The call comes as the EU is debating a directive to make Facebook, Google, Twitter and other major players pay for the millions of news articles they use or link to.

"Facebook has become the biggest media in the world," the agencies said in a plea published in the French daily Le Monde.

"Yet neither Facebook nor Google have a newsroom... They do not have journalists in Syria risking their lives, nor a bureau in Zimbabwe investigating Mugabe's departure, nor editors to check and verify information sent in by reporters on the ground."

"Access to free information is supposedly one of the great victories of the internet. But it is a myth," the agencies argued.

"At the end of the chain, informing the public costs a lot of money."

This is a doomed idea. First off, if the demands are a pain to implement, news agencies can expect to start seeing referral traffic drop as other news sources not tied to payment demands see their search engine stock rise. If they continue to press for a cut of these companies' "billions," they can expect to be cut off completely. This isn't hypothetical.

Second, any agency that wants to cut off the search engines supposedly bleeding them dry can always block the engines' crawlers. But this obviously isn't about killing off search engine hits and Facebook sharing -- it's about dipping a hand into the pockets of service providers for having the audacity to expand the reach of European news agencies.

Finally, there's nothing in it for news agencies even if they succeed in getting a snippet tax implemented. They see companies worth billions and think skimming a little off the top will put them back in the black permanently. But anyone who knows anything about ad payouts knows CPM "taxes" aren't the road to riches. In reality, any implemented scheme would involve hundreds of news sites divvying up fractions of cents between themselves for search result impressions. Payouts might be slightly higher for more direct clicks from referrers like Facebook, but at best, news agencies should expect a few bucks a month from a link tax, rather than the thousands (or millions) they envision.
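
Back-of-the-envelope math makes the point concrete. Here is a minimal sketch in Python, where every input (impression volume, CPM rate, publisher count) is a made-up assumption for illustration, not a reported figure:

    # Hypothetical snippet-tax payout math. All numbers are illustrative
    # assumptions; real rates and volumes would be negotiated and vary widely.

    monthly_impressions = 10_000_000  # assumed snippet views across search results
    cpm_rate_eur = 0.10               # assumed fee per 1,000 snippet impressions
    num_publishers = 500              # assumed number of participating news sites

    total_pool = monthly_impressions / 1000 * cpm_rate_eur
    per_publisher = total_pool / num_publishers

    print(f"Total monthly pool: EUR {total_pool:,.2f}")    # EUR 1,000.00
    print(f"Per publisher:      EUR {per_publisher:,.2f}") # EUR 2.00

Even under those fairly generous assumptions, each site's share works out to a couple of euros a month; the envisioned windfall simply isn't there.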

The news agencies supporting this move are complaining about declining ad revenue and think charging platforms for sending them traffic is the solution. This has been tried and it hasn't worked, but hope springs eternal when you're all out of innovative ideas.

from the please-don't-make-us-do-this dept

The Privacy Shield framework is key to allowing personal data to flow legally across the Atlantic from the EU to the US. As we've noted several times this year, there are a number of reasons to think that the EU's highest court, the Court of Justice of the European Union (CJEU), could reject Privacy Shield just as it threw out its predecessor, the Safe Harbor agreement. An obscure but influential advisory group of EU data protection officials has just issued its first annual review of Privacy Shield (pdf). Despite its polite, bureaucratic language, it's clear that the privacy experts are not happy with the lack of progress in dealing with problems they have pointed out previously. As the "Article 29 Data Protection Working Party" -- the WP29 for short -- explains:

Based on the concerns elaborated in its previous opinions ... the WP29 focused on the assessment of both the commercial aspects of the Privacy Shield and on the government access to personal data transferred from the EU for the purposes of Law Enforcement and National Security, including the legal remedies available to EU citizens. The WP29 assessed whether these concerns have been solved and also whether the safeguards provided under the EU-U.S. Privacy Shield are workable and effective.

As far as the commercial aspects of Privacy Shield are concerned, the WP29 is unhappy about a number of important "unresolved" issues such as "the lack of guidance and clear information on, for example, the principles of the Privacy Shield, on onward transfers [of personal data] and on the rights and available recourse and remedies for data subjects."

The issue of US government access to the personal data of EU citizens is even thornier. Although the WP29 welcomed efforts by the US government to become more "transparent on their use of their surveillance powers", the collection of and access to personal data for national security purposes under both section 702 of FISA and Executive Order 12333 were still a problem. On the former, WP29 suggests:

Instead of authorizing surveillance programs, section 702 should provide for precise targeting, along with the use of the criteria such as that of "reasonable suspicion", to determine whether an individual or a group should be a target of surveillance, subject to stricter scrutiny of individual targets by an independent authority ex-ante.

As regards Executive Order 12333, WP29 wants the Privacy and Civil Liberties Oversight Board (PCLOB) "to finish and issue its awaited report on EO 12333 to provide information on the concrete operation of this Executive Order and on its necessity and proportionality with regard to interferences brought to data protection in this context." That's likely to be a bit tricky, because the PCLOB is understaffed due to unfilled vacancies, and possibly moribund. In conclusion, the WP29 "acknowledges the progress of the Privacy Shield in comparison with the invalidated Safe Harbor Decision", but underlines that the EU group has "identified a number of significant concerns that need to be addressed by both the [European] Commission and the U.S. authorities." It spells out what will happen if they aren't sorted out:

In case no remedy is brought to the concerns of the WP29 in the given time frames, the members of WP29 will take appropriate action, including bringing the Privacy Shield Adequacy decision to national courts for them to make a reference to the CJEU for a preliminary ruling.

That is, it will ask the EU's highest court to rule on the so-called "adequacy decision" of the European Commission, where it decided that Privacy Shield offered enough protection for EU personal data moving to the US. There's a clear implication that WP29 doubts the CJEU's ruling will be favorable unless all the changes it has requested are made soon. And without the Privacy Shield framework, it will be much harder to transfer personal data legally across the Atlantic. Moreover, the EU's data protection laws are about to become even more stringent next year, when the new General Data Protection Regulation (GDPR) is enforced. Organizations in breach of the GDPR can be fined up to 4% of annual global turnover (for a company with, say, €50 billion in annual turnover, that is a ceiling of €2 billion), which means even the biggest Internet companies will have a strong incentive to comply.

from the who-said-life-is-fair? dept

It's well known that the EU has laws offering relatively strong protection for personal data -- some companies say too strong. Possible support for that viewpoint comes from a new data protection case in the UK, which follows EU law, where the judge has come to a rather surprising conclusion. Details of the case can be found in a short post on the Panopticon blog, or in the court's 59-page judgment (pdf), but the basic facts are as follows.

In 2014, a file containing personal details of 99,998 employees of the UK supermarket chain Morrisons was posted on a file-sharing website. The file included names, addresses, gender, dates of birth, phone numbers (home or mobile), bank account numbers and salary information. Public links to the file were placed elsewhere, and copies of the data were sent on a CD to three local newspapers, supposedly by someone who had found it on the Internet. In fact, all the copies originated from Andrew Skelton, a senior IT auditor at Morrisons, as later investigations discovered. According to the court, Skelton had a grudge against the company because of a disciplinary process that took place in 2013. As a result of the massive data breach in 2014, Skelton was sentenced to eight years in prison.

The current case was brought by some 5,500 employees named in the leaks, who sought compensation from Morrisons. There were two parts to the claim. One was that Morrisons was directly to blame, and the other that it had "vicarious liability" -- that is, liability for the actions or omissions of others. The UK judge found that Morrisons was not directly liable, since it had done everything it could to avoid personal data being leaked. However, as the Panopticon blog explains:

having concluded that Morrisons was entirely legally innocent in respect of Skelton's misuse of the data, the Judge held that it was nonetheless vicariously liable for Skelton's misdeeds

That is a legal bombshell as far as UK privacy law is concerned, since it means that a company that does everything it reasonably can to prevent personal data being revealed can nonetheless be held vicariously liable for the actions of an employee, even a malicious one. That clearly offers an extremely easy -- if potentially self-damaging -- route for disgruntled employees who want to harm their employers. All they need to do is intentionally leak personal data, and the company they work for will have vicarious responsibility for the privacy breach. In fact, even the judge was worried by the implications of his own decision:

The point which most troubled me in reaching these conclusions was the submission that the wrongful acts of Skelton were deliberately aimed at the party whom the claimants seek to hold responsible, such that to reach the conclusion I have may seem to render the court an accessory in furthering his criminal aims.

As a result, the judge granted leave for Morrisons to appeal against his judgment that it was vicariously liable. Hundreds of thousands of companies around the UK will now be hoping that a higher court, either nationally or even at the EU level, overturns the ruling, and sets a limit on those super-strong data protection laws.