from the of-course-it-did dept

Whatever the actual numbers, it seems like some hefty percentage of technology news revolves around leaks of one kind or another. Whether it concerns government, corporate, or legal proceedings information leaking to the public, it happens enough that at this point the operating posture of any organization should probably be to expect leaks, rather than flailing at modernity and trying to stop them. Hell, if the White House can't keep what seems like literally anything under wraps, what hope does the average business have?

Apple, of course, is not an average company. And, yet, when the company put out an internal memo warning its employees not to do the leaking, that memo almost immediately leaked to the press.

On Friday, Bloomberg News published what it described as an "internal blog" post in full. The memo warned that Apple "employees, contractors, or suppliers—do get caught, and they’re getting caught faster than ever."

The post also reportedly noted that, "in some cases," leakers "face jail time and massive fines for network intrusion and theft of trade secrets both classified as federal crimes," adding that, in 2017, "Apple caught 29 leakers, and of those, 12 were arrested."

Memos like this set off a delightfully oppressive mood within the organizations that send them. Part of the reason is that the practice of leaking is so widespread as to make the selective persecution of any leaker seem callous and unfair. Add to that the simple fact that well-timed strategic leaks are practically marketing SOP in many larger organizations, and this seems doubly so. And, finally, I cannot be the only one struck by how low Apple's catch rate feels within the memo itself. 29 leakers caught in a year? That has to be some unimpressive fraction of the actual leakers out there.

Anyone who might want to argue the points above needs to make that argument in the context of a reality in which this scare-memo itself leaked to the press. That this occurred only buttresses the argument that battling all leaks all the time is a losing battle. And if that's the case, then the selective enforcement of anti-leaking policies will only come off as both confusing and capricious.

Not to mention a giant waste of time and money, compared with incentivizing employees to leak only when it's beneficial to the company.

from the creepers-gonna-creep dept

With site-blocking now fully en vogue in much of the world as the preferred draconian solution to copyright infringement, one point we've made over and over again is that even this extreme measure has no hope of fully satisfying the entertainment industries. Once thought something of a nuclear option, the full censorship of websites will now serve as a mere stepping stone to the censorship of all kinds of other platforms that might sometimes be used for piracy. It was always going to be this way, from the very moment that world governments creaked open this door.

And it appears it isn't taking long for the entertainment industries to want to take that next step, either. As the debate about Kodi addons rages, and as governments begin to clamp down on the platform at the request of the entertainment industry, several industry players at an IP forum event in Russia have started announcing plans to push for app-blocking as the next step.

Over in Russia, a country that will happily block millions of IP addresses if it suits them, the topic of infringing apps was raised this week. It happened during the International Strategic Forum on Intellectual Property, a gathering of 500 experts from more than 30 countries. There were strong calls for yet more tools and measures to deal with films and music being made available via ‘pirate’ apps.

The forum heard that in response to widespread website blocking, people behind pirate sites have begun creating applications for mobile devices to achieve the same ends – the provision of illegal content. This, key players in the music industry say, means that the law needs to be further tightened to tackle the rising threat.

“Consumption of content is now going into the mobile sector and due to this we plan to prevent mass migration of ‘pirates’ to the mobile sector,” said Leonid Agronov, general director of the National Federation of the Music Industry.

Look, all of that is true. Innovation happens often at the margins when it comes to technology, after all, and the technology that powers piracy is no exception to this rule. At the same time, neither the entertainment industry nor the governments of the world have ever, even once, shown themselves to be good or fair arbiters of what tools are "pirate tools" and which are legitimate tools that sometimes are used for piracy. If given the power, both will overshoot the mark, with entertainment groups carpet-bombing their way to collateral damage just to be sure that pirates are obliterated, and governments all too often using this copyright censorship as cover to enact oppressive censorship on matters of pure politics.

In other words, it's not that the entertainment industry is wrong that there is some measure of a problem to be dealt with, it's just that their censorious solution creates way more problems than it solves.

Despite that, the music industry, in particular, is banging its war drum.

The same concerns were echoed by Alexander Blinov, CEO of Warner Music Russia. According to TASS, the powerful industry player said that while recent revenues had been positively affected by site-blocking, it’s now time to start taking more action against apps.

“I agree with all speakers that we can not stop at what has been achieved so far. The music industry has a fight against illegal content in mobile applications on the agenda,” Blinov said.

This is not an arms race that the content industry has shown it is capable of winning. But while they beat these war drums for evermore censorship, the unintended consequences are strewn like bodies all around them. In Blinov's home country of Russia, the government has been laughably inept at separating pirate sites from non-pirate sites, to the tune of a ten-fold blocking of collateral damage sites, all while the government also uses those same copyright laws to shut down political speech and reporters it doesn't like.

And it is in this climate that content companies want to hand even more blocking powers to the authorities? First they came for the websites, then they came for the mobile applications? Whatever comes after that is not something to look forward to.

from the this-is-not-good dept

For many years now, various internet companies have released Transparency Reports. The practice was started by Google years back (oddly, Google itself fails me in finding its original transparency report). Soon many other internet companies followed suit, and, while it took them a while, the telcos eventually joined in as well. Google's own Transparency Report site lists a bunch of other companies that now issue such reports.

We've celebrated many of these transparency reports over the years, as they often demonstrate the excesses of attempts to stifle and censor speech or violate users' privacy, and as they often create incentives for these organizations to push back against those demands. Yet, in an interesting article over at Politico, a former Google policy manager warns that the purpose of these reports is being flipped on its head, and that they're now being used to show how much these platforms are willing to censor:

Fast forward a decade and democracies are now agonizing over fake news and terrorist propaganda. Earlier this month, the European Commission published a new recommendation demanding that internet companies remove extremist and other objectionable content flagged to them in less than an hour — or face legislation forcing them to do so. The Commission also endorsed transparency reports as a way to demonstrate how they are complying with the law.

Indeed, Google and other big tech companies still publish transparency reports, but they now seem to serve a different purpose: to convince authorities in Europe and elsewhere that the internet giant is serious about cracking down on illegal content. The more takedowns it can show, the better.

If true, this is a pretty horrific result of something that should be a good thing: more transparency, more information sharing and more incentives to make sure that bogus attempts to stifle speech and invade people's privacy are not enabled.

Part of the issue, of course, is the fact that governments have been increasingly putting pressure on internet platforms to take down speech, and blaming internet platforms for election results or policies they dislike. And the companies then feel the need to show the governments that they do take these "issues" seriously, by pointing to the content they do take down. So, rather than alerting the public to all the stuff they don't take down, the platforms are signalling to governments (and some in the public too, frankly) that they frequently take down content. And, unfortunately, that's backfiring, as it's making politicians (and some individuals) claim that this just proves the platforms aren't censoring enough.

The pace of private sector censorship is astounding — and it’s growing exponentially.

The article talks about how this is leading to censorship of important and useful content, such as the case where an exploration of the dangers of Holocaust revisionism got taken down because YouTube feared that a look into it might actually violate European laws against Holocaust revisionism. And, of course, such censorship machines are regularly abused by authoritarian governments:

Turkey demands that internet companies hire locals whose main task is to take calls from the government and then take down content. Russia reportedly is threatening to ban YouTube unless it takes down opposition videos. China’s Great Firewall already blocks almost all Western sites, and much domestic content.

Rohingya activists—in Burma and in Western countries—tell The Daily Beast that Facebook has been removing their posts documenting the ethnic cleansing of Rohingya people in Burma (also known as Myanmar). They said their accounts are frequently suspended or taken down.

That article has many examples of the kind of content that Facebook is pulling down and notes that in Burma, people rely on Facebook much more than in some other countries:

Facebook is an essential platform in Burma; since the country’s infrastructure is underdeveloped, people rely on it the way Westerners rely on email. Experts often say that in Burma, Facebook is the internet—so having your account disabled can be devastating.

You can argue that there should be other systems for them to use, but the reality of the situation right now is they use Facebook, and Facebook is deleting reports of ethnic cleansing.

Having democratic governments turn around and enable more and more of this in the name of stopping "bad" speech is acting to support these kinds of crackdowns.

Indeed, as Europe is pushing for more and more use of platforms to censor, it's important that someone gets them to understand how these plans almost inevitably backfire. Daphne Keller at Stanford recently submitted a comment to the EU about its plan, noting just how badly demands for censorship of "illegal content" can turn around and do serious harm.

Errors in platforms’ CVE content removal and police reporting will foreseeably, systematically, and unfairly burden a particular group of Internet users: those speaking Arabic, discussing Middle Eastern politics, or talking about Islam. State-mandated monitoring will, in this way, exacerbate existing inequities in notice and takedown operations. Stories of discriminatory removal impact are already all too common. In 2017, over 70 social justice organizations wrote to Facebook identifying a pattern of disparate enforcement, saying that the platform applies its rules unfairly to remove more posts from minority speakers. This pattern will likely grow worse in the face of pressures such as those proposed in the Recommendation.

There are longer term implications of all of this, and plenty of reasons why we should be thinking about structuring the internet in better ways to protect against this form of censorship. But the short term reality remains, and people should be wary of calling for more platform-based censorship over "bad" content without recognizing the inevitable ways in which such policies are abused or misused to target the most vulnerable.

from the Privilege-for-me-not-for-thee dept

Attorney Client privilege is now a thing of the past. I have many (too many!) lawyers and they are probably wondering when their offices, and even homes, are going to be raided with everything, including their phones and computers, taken. All lawyers are deflated and concerned!

Attorney-client privilege is indeed a serious thing. It is inherently woven into the Sixth Amendment's right to counsel. That right to counsel is a right to effective counsel. Effective counsel depends on candor by the client. That candor in turn depends on clients being confident that their communications seeking counsel will be confidential. If, however, a client has to fear the government obtaining those communications then their ability to speak openly with their lawyer will be chilled. But without that openness, their lawyers will not be able to effectively advocate for them. Thus the Sixth Amendment requires that attorney-client communications – those communications made in the furtherance of seeking legal counsel – be privileged from government (or other third party) view.

So Trump is right: attorney-client privilege in America is under attack, and ever since we started learning about these programs lawyers have definitely been worried about how they impose an intolerable burden on the Sixth Amendment right to counsel. But Trump's situation is different. There is serious reason to doubt whether there's any privilege to be maintained at all (after all, privilege only applies to communications made in the course of seeking legal counsel, not communications made for other purposes, including the furtherance of crime or fraud), and care is being taken to preserve what privilege there may be. Bulk surveillance, by contrast, sweeps up all communications, including all those for which there is no doubt as to their privileged status, without any sort of care taken to protect these sensitive communications from the prying eyes of the state. Indeed, the whole point of bulk surveillance is to let the prying eyes of the state see who was saying what to whom without any prior reason to target any of these communications in particular, because with bulk surveillance there is no targeting: it swoops up everything, privileged or not.

If Trump truly finds it troubling for the government to be able to obtain privileged communications, he could put an end to these programs. It would certainly help make any argument he raises about how his own privilege claims should be sacrosanct ring less hollow if his administration weren't currently being so destructive to everyone else's.

from the good-deals-on-cool-stuff dept

Keep your skills sharp and stay up to date on new developments with the $89 Virtual Training Company Unlimited Single User Subscription. With courses covering everything from MCSE certification training to animation, graphic design and page layout, you'll have unlimited access to the entire catalog. They have over 1,000 courses, add more each week, and each course comes with a certificate of completion.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

from the making-citizens-pay-for-the-government's-sins dept

A 19-year-old Canadian is being criminally charged for accessing a website. The Nova Scotia government's Freedom of Information portal (FOIPOP) served up documents it shouldn't have, and now prosecutors are thinking about adding charges on top of the ten-year sentence the teen could already be facing. (via Databreaches.net)

Even once the government learned of the breach, it waited until Wednesday to begin notifying affected people. Arab said they held off notifying people because police suggested it would help them in their investigation.

Seems logical, except…

But [Halifax Police Superintendent Jim] Perrin told reporters police did not make that request. He could not say if advising people would have compromised the investigation. The province's protocols for a privacy breach state it is supposed to inform people as soon as possible, unless otherwise instructed by law enforcement.

The suspect obtained 7,000 documents from the Freedom of Information portal. Apparently around 250 of those contained unredacted personal information. Here's how the government portrayed the supposed hacking:

Government officials said someone got in by "exploiting a vulnerability in the system." The person wrote a script allowing them to alter the website's URL, which then granted access to the personal information.

Internal Services found more than 7,000 PDF documents had been downloaded by a "non-authorized user" in early March. They filed a complaint with police on Saturday.

Document number 1235 is stored at https://foipop.novascotia.ca/foia/views/_AttachmentDownload.jsp?attachmentRSN=1235.

Guess where document 1236 is stored? This is not a new problem. In fact, it was recognized over a decade ago as one of the top ten issues affecting web application security. All the "hacker" had to do was add 1 to the number at the end of the URL to pull up the next document.
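The teen's actual script hasn't been published, but given the numbering scheme above, the enumeration amounts to little more than incrementing a query parameter. A minimal sketch in Python of what that looks like (the URL pattern comes from the article; the starting number, count, and function name are illustrative assumptions, not the real script):

```python
# Sequential-ID enumeration sketch. The attachmentRSN pattern is taken
# from the article; everything else here is illustrative.
BASE = ("https://foipop.novascotia.ca/foia/views/"
        "_AttachmentDownload.jsp?attachmentRSN={}")

def candidate_urls(start, count):
    """Yield sequential document URLs -- note there is no auth token,
    session, or password anywhere in the request."""
    for rsn in range(start, start + count):
        yield BASE.format(rsn)

urls = list(candidate_urls(1235, 3))
# urls[0] is the document 1235 URL quoted above; the rest just count up.
```

Every URL is generated purely from public information, which is the crux of the "was this even hacking?" question. This is the classic insecure direct object reference problem: the identifier in the URL is the only access control.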

All this "hacker" did was automate the retrieval of published documents from the government's FOI portal. That's it. This wasn't an attempt to access personal info. That problem lies with the government, which did not properly secure documents it hadn't redacted yet. As D'Etremont points out, plenty of other government websites use the same software for document access. (Searching "inurl:attachmentRSN" will bring up a handful of government websites, including Nova Scotia's temporarily disabled FOI portal.)

But other sites have taken care to wall off publicly-available documents from others they're not prepared to make public by using a PublicPortal subfolder. Nova Scotia's site apparently did not, hence the teen's ability to access unredacted documents. This isn't evidence of fraudulent access or malicious hacking. This is evidence of government carelessness.

The question remains, was the access fraudulent?

Remember what I said about the other installations being called “PublicPortal”? And how 6750 of the 7000 records were public anyways, and how this system is literally designed for facilitating “access to information?” Looking at it further, there are no authentication mechanisms, no password protection, no access restrictions. It’s very clear that the software is intended to serve as a public repository of documents.

It’s also very clear that there are at least 250 documents improperly stored there by the province. Documents that the province had a responsibility to protect, and failed.

This wasn't a criminal act. This was simply efficient harvesting of publicly-available documents. If some documents weren't supposed to be publicly available, the blame lies with the government for failing to secure them. The fact that the government decided to get police involved gives this the ugly appearance of scapegoating. This is an embarrassed government body trying to turn its mistake into the malicious work of a teen hacker.

It would be very surprising to see these charges stick. The URLs -- and the documents they held -- were publicly-accessible. But if they do stick -- and the Halifax PD has stated it may add more charges -- it will be due to the Nova Scotia government's unwillingness to take responsibility for its own carelessness.

from the good-luck-with-that dept

As we've noted previously, Comcast has enjoyed a little more resilience to the cord cutting threat than satellite TV and telco TV providers--thanks to its growing monopoly over broadband. As DSL users frustrated by lagging telco upgrades switch to cable to get faster speeds, they're often forced to sign up for cable and TV bundles they may not want (since standalone broadband is often priced prohibitively by intent). Of course that doesn't mean these users stick around (or that they even actively use the cable subscription they pay for), but it has helped Comcast all the same.

There are some indications that advantage isn't helping as much now that we're seeing so many streaming services come to market. At least one Wall Street research firm predicts that Comcast's cord cutting defections will double this year, though those totals still remain modest (400,000) compared to the company's total number of pay TV (22.4 million) and broadband (25.5 million) subscribers. Comcast's latest response is to cozy up to the competition: the company has announced it will begin selling Netflix subscriptions as part of its Xfinity bundles.

"Netflix offers one of the most popular on demand services and is an important supplement to the content offering and value proposition of the X1 platform,” said Sam Schwartz, Chief Business Development Officer, Comcast Cable. “Netflix is a great partner, and we are excited to offer its services to our customers in new ways that provide them with more choice, value and flexibility. The seamless integration of Netflix with the vast Xfinity entertainment library on X1 present a unique and comprehensive experience for customers."

There's no indication yet whether Comcast will sell Netflix at any kind of discount. Still, the move isn't likely to help Comcast stop what's become an obvious example of market evolution. Customers looking for actual "choice, value and flexibility" pretty consistently find that's not something they get from traditional cable, thanks in part to Comcast's relentless rate hikes and hidden fees. Since most of these customers are ditching cable due to having to pay $130 or more per month, even a discounted subscription to Netflix isn't likely to help.

Of course Comcast still has an ace up its sleeve: usage caps and overage fees. The company's slow and steady deployment of these arbitrary, unnecessary and punitive limits will allow Comcast to (ab)use a lack of broadband competition to not only counter reduced TV revenues by jacking up the price of broadband, but to punish customers who choose to wander outside of Comcast's walled gardens.

After all, Comcast's own streaming services don't count against the company's caps, while Netflix's service does. And should Comcast and the FCC survive legal challenges to the net neutrality repeal, there's not much to stop Comcast from using a lack of adult oversight on this front to brutal, anti-competitive advantage.

from the on-the-other-hand,-claimant-2-[hereinafter-JERKFACE]-gets-nothing dept

The UK High Court has handed down a win (and a loss) in the Right to be Forgotten column. Two plaintiffs seeking delisting of information about their past criminal exploits had their cases considered by the court. Only one of them is walking away with a court order for delisting. The other one will apparently have to live with his past.

The claimant who lost, referred to only as NT1 for legal reasons, was convicted of conspiracy to account falsely in the late 1990s; the claimant who won, known as NT2, was convicted more than 10 years ago of conspiracy to intercept communications. NT1 was jailed for four years, while NT2 was jailed for six months.

Granting an appeal in the case of NT1, the judge added: "It is quite likely that there will be more claims of this kind, and the fact that NT2 has succeeded is likely to reinforce that."

Google disputed both of these claims when they were filed, prompting the legal challenges. While the court admits there's a public interest in both cases, only one of the two claimants apparently deserves to have his history wiped clean. NT2 was more of a model citizen and convicted on lesser charges, so that's where the line is being (vaguely) drawn in enforcing the European Union's Right To Be Forgotten. The summary [PDF] of the decision quickly details the merits of NT2's case.

The crime and punishment information has become out of date, irrelevant and of no sufficient legitimate interest to users of Google Search to justify its continued availability, so that an appropriate delisting order should be made. The conviction was always going to become spent, and it did so in March 2014, though it would have done so in July of that year anyway. NT2 has frankly acknowledged his guilt, and expressed genuine remorse. There is no evidence of any risk of repetition. His current business activities are in a field quite different from that in which he was operating at the time. His past offending is of little if any relevance to anybody’s assessment of his suitability to engage in relevant business activity now, or in the future. There is no real need for anybody to be warned about that activity.

In comparison, NT1 has apparently learned nothing from his brush with the justice system, and headed right back into the professional field where he committed his original crimes.

NT1 did not enjoy any reasonable expectation of privacy in respect of the information at the time of his prosecution, conviction and sentence. My conclusion is that he is not entitled to have it delisted now. It has not been shown to be inaccurate in any material way. It relates to his business life, not his personal life. It is sensitive information, and he has identified some legitimate grounds for delisting it. But he has failed to produce any compelling evidence in support of those grounds. Much of the harm complained of is business-related, and some of it pre-dates the time when he can legitimately complain of Google’s processing of the information. His Article 8 private life rights are now engaged, but do not attract any great weight. The information originally appeared in the context of crime and court reporting in the national media, which was a natural and foreseeable result of the claimant’s own criminal behaviour.

NT1's sentence has also been served, but the court -- while nodding its head toward fresh starts after repaying debts to society -- determines NT1 only paid his debt begrudgingly and benefited from an interim law change that saw him released ahead of schedule.

The information is historic, and the domestic law of rehabilitation is engaged. But that is only so at the margins. The sentence on this claimant was of such a length that at the time he had no reasonable expectation that his conviction would ever be spent. The law has changed, but if the sentence had been any longer, the conviction would still not be spent. It would have been longer but for personal mitigation that has no bearing on culpability. His business career since leaving prison made the information relevant in the past to the assessment of his honesty by members of the public. The information retains sufficient relevance today. He has not accepted his guilt, has misled the public and this Court, and shows no remorse over any of these matters. He remains in business, and the information serves the purpose of minimising the risk that he will continue to mislead, as he has in the past.

It's a bit of an inconsistent decision, but probably about as much as can be expected from a European ruling that says certain people can erase their pasts while others are doomed to repeatedly be disappointed with their vanity search results. At least this ruling shows challenged requests are being examined on a case-by-case basis weighing as much relevant information as possible. This is what Google is attempting to do as well, even though it has less outside info to work with and more than a half-million requests per year to work through. That Google appears to be operating in good faith despite its obvious opposition to the new "right" likely explains the court's refusal to award damages to the prevailing party.

The recently-established right is still problematic and prone to abuse. But this decision shows the courts aren't viewing search engines as towering, villainous money machines hellbent on ruining lives through algorithmic indexing. Instead, this court appears to be willing to engage all sides of the issue when addressing claimants' complaints about troublesome search results.