from the ill-communication dept

While Facebook tends to get the lion's share of (deserved) criticism, the telecom sector continues to make its case for being the absolute worst when it comes to protecting your private data. Scandal after scandal has highlighted how wireless carriers routinely collect and store your daily location data, then sell that data to a universe of shady middlemen with little to no oversight of how the data is used. Users sign one overlong privacy policy with their wireless carrier, and that policy is read to mean consumers have signed off on the practice, which they certainly haven't.

This week journalist Joseph Cox again highlighted the problems on the location data front, reporting that stalkers and debt collectors are able to get access to this data without paying for it. How? By pretending to be law enforcement officers:

"...bounty hunters and people with histories of domestic violence have managed to trick telecommunications companies into providing real-time location data by simply impersonating US officials over the phone and email, according to court records and multiple sources familiar with the technique. In some cases, these people abuse telecom company policies created to give law enforcement real-time location data without a court order in “exigent circumstances,” such as when there is the imminent threat of physical harm to a victim."

In addition to cellular tower location data, carriers were also recently busted selling A-GPS data, which is supposed to be protected by FCC data rules. Despite significant reporting on this subject and carrier promises to stop collecting and selling this data, this practice is still ongoing. Like Facebook, these are companies that are staring down the barrel of looming regulation -- and still somehow can't seem to find the motivation to behave. Regulators at the Ajit Pai FCC have also sat on their hands and have yet to issue so much as a warning to cellular carriers.

At least one skiptracer told Motherboard that wireless carriers remain several steps behind in trying to crack down on the practice:

"So many people are doing that and the telcos have been very stupid about it. They have not done due diligence and called the police [departments] directly to verify the case or vet the identity of the person calling,” Valerie McGilvrey, a skiptracer who said she has bought phone location data from those who obtained access to it, told Motherboard. A skiptracer is someone tasked with finding out where people, typically fugitives on the run or those who owe a debt, are located."

In many instances the third parties are exploiting telecom company procedures for "exigent circumstances," allowing them to request and receive real-time location data by fabricating law enforcement data request documents that telecom operators aren't properly verifying. Of course, as the New York Times noted more than a year ago, law enforcement officers have also been busted abusing this system to spy on judges and other law enforcement officers.

Like so many sectors, wireless carriers were so excited by the billions to be made selling your daily habits, they forgot to actually protect that data. As reporters like Cox continue to dig deeper, you have to think that many cellular carriers are scrambling hard to clean up their mess as inevitable class action lawsuits and regulatory investigations wait in the wings. This scandal is getting so ugly, even the carrier-cozy Trump FCC may, at some point, be forced to actually do something about it.

European Union officials have begun talks with counterparts in several Middle Eastern countries, including Egypt and Turkey, about proposed data-sharing deals that would allow Europol to exchange personal information about suspects with local law enforcement authorities.

In some circumstances, the deals could allow the transfer of data concerning a person’s race and ethnic origin, their political opinions and religious beliefs, trade-union memberships, genetic data and data concerning their health and sex life.

The deals are being sought by the EU as part of efforts to bolster counter-terrorism policing across the continent despite concerns being raised about the human rights records of the countries by the bloc’s own data protection watchdog.

When someone starts talking about terrorism and national security, all rational thought goes out the window. The EU will share data with Egypt, which recently made the news for executing nine people who claimed their "confessions" were tortured out of them.

Turkey isn't much of an improvement, seeing how its government also likes to jail critics -- going so far as to use other countries' laws against foreigners to punish non-Turkish citizens for insulting the president.

It's hard to see how all of the data being shared is relevant to multi-national terrorism investigations. In fact, much of what would be shared seems more like blackmail material than evidence tying people to terrorist groups or acts. Why else would the EU include data about targets' sex lives?

In normal countries under normal circumstances, data about political and religious affiliations would be off limits, as would medical information and trade union memberships. This isn't a case of creeping totalitarianism. This is full-blown enabling of existing totalitarian states, weaponizing the massive amount of data European law enforcement agencies collect on investigation targets.

The EU Commission claims this set of very personal data will only be disclosed if Europol believes it should be. Not very reassuring.

“The transfer of personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, trade-union membership, genetic data and data concerning a person's health and sex life by Europol shall be prohibited, unless it is strictly necessary and proportionate in individual cases for preventing or combating criminal offences as referred to in the Agreement and subject to appropriate safeguards,” the directives say.

The same EU government that condemned Egypt's ongoing human rights abuses has no problem giving it data ammo to use against critics, dissidents, and activists. It seems likely the claims about "appropriate safeguards" will be ignored if Europol feels the data it could obtain from other countries necessitates increased quid pro quo. Whatever oversight Europol has is probably no better than that of any other massive law enforcement/counter-terrorism agency, which usually ranges from slim to none.

Human rights abuses aren't going to stop as long as major nation-states continue to treat abusive governments as equals on the national security playing field. Just as certainly as Turkey has weaponized US-based social media moderation tools to silence critics, other governments seeking to permanently silence critics will weaponize this proposed data sharing to achieve the same ends. The world won't be any safer, but it might be just a bit more silent.

from the get-your-act-together dept

There's another Facebook scandal story brewing today and, once again, it appears that Facebook's biggest enemy is the company itself, blundering into messes that were totally unnecessary. When the last story broke, we pointed out that much of the reporting was exaggerated, and people seemed to be jumping to conclusions that weren't actually warranted by some internal discussions about Facebook's business modeling. The latest big scandal, courtesy of a big New York Times story, reveals that Facebook agreed to share a lot more information with a bunch of large companies than previously known or reported (though, hilariously, one of those companies is... The NY Times, a fact The NY Times plays down quite a bit).

The social network permitted Amazon to obtain users’ names and contact information through their friends, and it let Yahoo view streams of friends’ posts as recently as this summer, despite public statements that it had stopped that type of sharing years earlier.

As Kash Hill notes in a separate story at Gizmodo, this suddenly explains a story she had explored years ago, in which Amazon rejected a review of a book, claiming the reviewer "knew the author" (which was not true). However, the reviewer had followed the author on Facebook, and Amazon magically appeared to know about that connection even though the reviewer never directly shared her Facebook data with Amazon.

The NY Times report further explains another bit of confusion that Hill has spent years trying to track down: why Facebook's People You May Know feature is so freaking creepy. Apparently, Facebook had data sharing agreements that let it peek through other companies' data as well:

Among the revelations was that Facebook obtained data from multiple partners for a controversial friend-suggestion tool called “People You May Know.”

The feature, introduced in 2008, continues even though some Facebook users have objected to it, unsettled by its knowledge of their real-world relationships. Gizmodo and other news outlets have reported cases of the tool’s recommending friend connections between patients of the same psychiatrist, estranged family members, and a harasser and his victim.

Facebook, in turn, used contact lists from the partners, including Amazon, Yahoo and the Chinese company Huawei — which has been flagged as a security threat by American intelligence officials — to gain deeper insight into people’s relationships and suggest more connections, the records show.

As Hill noted on Twitter, when she asked Facebook last year if it uses data from "third parties such as data brokers" to figure out PYMK, Facebook's answer was technically correct, but totally misleading:

In the summer of 2017, I asked Facebook if it used signals from "third parties such as data brokers" for friend recommendations. Kicking myself for not recognizing the evasion in their answer.

Specifically, Facebook responded: "Facebook does not use information from data brokers for People You May Know." Note that the question was if Facebook used information from "third parties" and the "data brokers" were just an example. Facebook responded that it didn't use data brokers, which appears to be correct, but left out the other third parties from which it did use data.

And this is why Facebook is, once again, its own worst enemy. It answers these kinds of questions in the same way that the US Intelligence Community answers questions about its surveillance practices: technically correct, but highly misleading. And, as such, when it comes out what the company is actually doing, the company has completely burned whatever goodwill it might have had. If the company had just been upfront, honest and transparent about what it was doing, none of this would be an issue. The fact that it chose to be sneaky and misleading about it shows that it knew its actions would upset users. And if you know what you're doing will upset users, and you're unwilling to be frank and upfront about it, that's a recipe for disaster.

And it's a recipe that Facebook keeps making again and again and again.

And that's an issue that goes right to the top. Mark Zuckerberg has done too much apologizing without actually fixing any of this.

One bit in the NY Times piece deserves a particular discussion:

Facebook also allowed Spotify, Netflix and the Royal Bank of Canada to read, write and delete users’ private messages, and to see all participants on a thread — privileges that appeared to go beyond what the companies needed to integrate Facebook into their systems, the records show. Facebook acknowledged that it did not consider any of those three companies to be service providers. Spokespeople for Spotify and Netflix said those companies were unaware of the broad powers Facebook had granted them. A Royal Bank of Canada spokesman disputed that the bank had any such access.

Spotify, which could view messages of more than 70 million users a month, still offers the option to share music through Facebook Messenger. But Netflix and the Canadian bank no longer needed access to messages because they had deactivated features that incorporated it.

This particular issue has raised a lot of alarm bells. As Alvaro Bedoya points out, disclosing the content of private communications is very much illegal under the Stored Communications Act. But the NY Times reporting is not entirely clear here either. Facebook did work hard for a while to try to turn Messenger into more of a "platform" that would let you do more than just chat -- so I could see where it might "integrate" with third-party services to enable their features within Messenger. But the specifics of how that works are (1) really, really important, and (2) should be 100% transparent to users -- such that if they're agreeing to, say, share Spotify songs via Messenger, they are absolutely told that this means Spotify has access to those messages. A failure to do that -- as appears to be the case here -- is yet another braindead move by Facebook.

Over and over and over again we see this same pattern with Facebook. Even when there are totally reasonable and logical business and product decisions being made, the company's blatant unwillingness to be transparent about what it is doing, and who has access to what data, is what is so damning for the company. It is a total failure of the management team and until Facebook recognizes that fact, nothing will change.

And, of course, the most annoying part in all of this is that it will come back to bite the entire internet ecosystem. Facebook's continued inability to be open and transparent about its actions -- and give users a real choice -- is certainly going to lead to the kinds of hamfisted regulations from Congress that will block useful innovations from other companies that aren't so anti-user, but which will be swept up in whatever punishment Facebook is bringing to the entire internet.

from the evidence-optional dept

Facebook is under fire yet again for potentially being far too casual in its treatment of private consumer data.

Earlier this week, the New York Times issued a report noting that Facebook had struck deals with more than 60 different hardware vendors since at least 2010, providing them with "vast amounts" of private user data. According to the report, these partnerships allowed some devices to retrieve personal information even from users’ friends who believed they had barred any sharing with third party vendors, potentially violating a 2011 FTC consent decree that banned such sharing without obtaining express customer permission.

To be clear, the partnerships are notably different from the deals struck with companies like Cambridge Analytica, which we now know routinely let app makers hoover up private data under false pretenses, then use that data for other purposes (like oh, riling up partisans ahead of an election). And Facebook was quick to issue a blog post trying to downplay the scope of the revelations:

"This is very different from the public APIs used by third-party developers, like Aleksandr Kogan. These third-party developers were not allowed to offer versions of Facebook to people and, instead, used the Facebook information people shared with them to build completely new experiences."

And while that's all well and good, the problem for Facebook is that nobody trusts that it routinely policed whether this data was being abused. And while the data was all stored locally on user devices, privacy experts were quick to point out that this could still wind up being a problem:

"You might think that Facebook or the device manufacturer is trustworthy,” said Serge Egelman, a privacy researcher at the University of California, Berkeley, who studies the security of mobile apps. “But the problem is that as more and more data is collected on the device — and if it can be accessed by apps on the device — it creates serious privacy and security risks."

These are all legitimate questions that Facebook will need to answer in the wake of the Cambridge scandal.

That said, this story was initially reported on Sunday without too much attention. But things took a turn with additional reports by both the Washington Post and New York Times indicating that some of these partner companies included Chinese gear makers like Huawei.

"The agreements, which date to at least 2010, gave private access to some user data to Huawei, a telecommunications equipment company that has been flagged by American intelligence officials as a national security threat, as well as to Lenovo, Oppo and TCL.
The four partnerships remain in effect, but Facebook officials said in an interview that the company would wind down the Huawei deal by the end of the week."

Given that the Trump administration is currently trying to blacklist companies like Huawei amidst allegations of being proxies for the Chinese government, the story's overall tone quickly shifted to one of mass hyperventilation:

This could be a very big problem. If @Facebook granted Huawei special access to social data of Americans this might as well have given it directly to the government of #China https://t.co/5K86CDpjVE

The problem: as we've noted a few times now, the allegation that employee-owned Huawei routinely spies on American consumers for the Chinese government isn't backed up by any publicly-available evidence, something both the Post and Times oddly don't mention.

An 18-month investigation by the White House found no evidence of such spying, and companies like Cisco have been caught routinely fanning such fears among gullible lawmakers in the hopes of thwarting overseas competitors. That hysteria has escalated notably in recent years as U.S. networking vendors, afraid to compete with cheaper Chinese gear, jockey for 5G deployment contracts with wireless carriers worldwide.

While it's certainly possible Huawei spies on the U.S., there's just not much evidence for it. And you'd also have to ignore the U.S.' epic hypocrisy on that particular subject. You know, like the time Snowden docs revealed that the NSA was caught hacking into Huawei, stealing the company's source code, and attempting to install backdoors in Huawei gear so they could spy on countries that were avoiding the use of U.S. networking gear. You know, the exact thing we're accusing Huawei of. Except with supporting evidence.

"“What happens is you get competitors who are able to gin up lawmakers who are already wound up about China,” said one Hill staffer who was not authorized to speak publicly about the matter. “What they do is pull the string and see where the top spins.”

But some experts say these concerns are exaggerated. These experts note that much of Cisco’s own technology is manufactured in China."

That's not to say Facebook doesn't still need to answer some questions about whether all of these partnerships have been unwound, and how it ensured that the data stored on these vendors' devices wasn't abused in any fashion. That said, the focus should remain on the 60 companies in total that Facebook struck these deals with, rather than on the CHINA CHINA CHINA aspect of the story. Lax treatment of private data is the norm, not the exception (especially in the telecom sector), and getting too hung up on Huawei alone misses the forest for the trees.

from the standard-operating-procedure dept

Whatever you think about the Facebook Cambridge Analytica kerfuffle, it's pretty obvious that the scandal is causing a long overdue reassessment of our traditionally lax national privacy standards. While most companies talk a good game about their breathless dedication to consumer privacy, that rhetoric is usually pretty hollow and oversight borders on nonexistent. The broadband industry is a giant poster child for that apathy, as is the internet of very broken things sector. For a very long time we've made it abundantly clear that making money was more important than protecting user data, and the check is finally coming due.

While it may only be a temporary phenomenon, the Cambridge Analytica scandal is finally causing some much-needed soul searching on this front. And given how deep our collective privacy apathy rabbit hole goes, companies that are sloppy with consumer data may actually face something vaguely resembling accountability for a little while. Case in point is gay dating site Grindr, which this week was hammered in the media after it was revealed that the company was sharing an ocean of data with app optimization partner companies, including location data and even HIV status.

Norwegian nonprofit SINTEF was commissioned to dig into the problem on behalf of Swedish public broadcaster SVT, which first broke the story. According to SINTEF, Grindr was also sharing its users’ precise GPS position, "tribe" (their preferred gay subculture), sexuality, relationship status, ethnicity, and phone ID with third-party advertising companies. And, because even "anonymized" data can never be truly considered anonymous, it concluded that it isn't hard to identify these users based on this data.
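To see why "anonymized" data so rarely stays anonymous, here's a minimal, hypothetical sketch (all records are invented, not real Grindr data): even with names stripped, a handful of quasi-identifiers like coarse location, ethnicity, and relationship status can single out individual users.

```python
# Illustrative sketch with invented data: counting how many people share
# each combination of quasi-identifiers. A combination seen exactly once
# pins a record to a single person.
from collections import Counter

# "Anonymized" records an ad partner might receive (all values hypothetical)
records = [
    {"gps": (40.73, -73.99),  "ethnicity": "white", "relationship": "single"},
    {"gps": (40.73, -73.99),  "ethnicity": "asian", "relationship": "married"},
    {"gps": (34.05, -118.24), "ethnicity": "white", "relationship": "single"},
    {"gps": (34.05, -118.24), "ethnicity": "black", "relationship": "single"},
]

# Count occurrences of each quasi-identifier combination
combos = Counter(
    (r["gps"], r["ethnicity"], r["relationship"]) for r in records
)

# Records whose combination is unique are re-identifiable on their own
unique = sum(1 for count in combos.values() if count == 1)
print(f"{unique} of {len(records)} records are uniquely identifiable")
```

In this toy set, every record's combination is unique, so all four are re-identifiable; real-world studies have repeatedly shown that just a few attributes (like location traces) uniquely identify most people in much larger datasets.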

Many were surprised that such a popular company would have such a casual disregard for consumer privacy:

"Grindr is a relatively unique place for openness about HIV status,” James Krellenstein, a member of AIDS advocacy group ACT UP New York, told BuzzFeed News.

“To then have that data shared with third parties that you weren’t explicitly notified about, and having that possibly threaten your health or safety — that is an extremely, extremely egregious breach of basic standards that we wouldn’t expect from a company that likes to brand itself as a supporter of the queer community."

But again, this casual treatment of data isn't errant behavior on Grindr's part -- it's the norm. And in this case, many are correct to point out that in addition to it being problematic that users didn't know this data was being shared outside of the Grindr community, the exposure of the HIV data (which again was only with two app optimization companies) could potentially have placed people living in homophobic areas at risk of violence:

Privacy isn’t just about credit card numbers and passwords. Sharing sensitive information like this can put LGBT Americans at risk. https://t.co/Guay2RBuk8

To its credit, Grindr wound up announcing that it would stop sharing HIV data with third parties, but not before the company issued a statement tinged with the usual lamentations about "misinformation." Several statements of the "everybody does it" flavor didn't help the company's case either. Grindr security chief Bryce Case also got defensive in comments to Axios, arguing the company was being "unfairly" singled out due to the Cambridge Analytica scandal:

"I understand the news cycle right now is very focused on these issues," Case said, but added, "I think what’s happened to Grindr is, unfairly, we’ve been singled out... It’s conflating an issue and trying to put us in the same camp where we really don’t belong."

But nobody accused Grindr of doing what Cambridge Analytica did. They did, however, accuse the company of what's now fairly standard privacy apathy across countless industries, including overlong terms of service that don't make it clear what data is being shared with whom, the sharing of some private consumer data in unencrypted plain text (you know, like your television probably does), and sharing extremely sensitive HIV status data that pretty clearly wasn't necessary for "app optimization":

"But some security experts say that this argument about whether the data was being sold to a third party for nefarious purposes or not misses the point: that HIV data is highly sensitive, and that sharing it with any outside companies is a move away from the security of its users.

"There was no reason for them to be storing that data with these analytics companies in the first place," Cooper Quintin, senior staff technologist and security researcher at the Electronic Frontier Foundation, told BuzzFeed News. "Grindr should be taking extra steps to secure this sort of very personal data."

It's understandable that Grindr doesn't want to be lumped in with Cambridge Analytica, and it's obvious that there's a vast chasm between sharing some data with ad optimization partners and using unauthorized data to disrupt elections. Still, companies like Grindr are lucky that this come-to-Jesus moment in consumer privacy didn't arrive years ago.

Assuming this concern for privacy isn't just a temporary fashion trend, Grindr's certainly not going to be the last company caught in the crossfire of what should be seen as a cultural learning process. And hopefully, some of the truly terrible players on this front (like the telecom sector) will ultimately witness their time in the barrel as well. Especially since what many wireless carriers have routinely been up to makes Grindr's privacy missteps look like child's play, and the government's response so far has been to make it easier than ever to violate consumer privacy.

from the this-is-bad dept

I'm going to assume that you weren't living in an internet-proof cave this weekend, and caught at least some of the stories about Cambridge Analytica and Facebook. The news first kicked off with the announcement of a data protection lawsuit filed against Cambridge Analytica in the UK on Friday evening (we'll likely have more on that lawsuit soon), followed quickly by an attempt by Facebook to get out ahead of the coming tidal wave by announcing that it was suspending Cambridge Analytica and some associated parties from its platforms, claiming terms of service violations. This was quickly followed on Saturday by two explosive stories. The first, from Carole Cadwalladr at The Guardian, revealed a "whistleblower" from the very early days of Cambridge Analytica (who more or less set up how it works with data profiles) named Christopher Wylie. This was quickly followed by another story at the NY Times, which was a bit more newsy, providing more details on how Cambridge Analytica got data on about 50 million people out of Facebook.

Admittedly -- much of this isn't actually new. The Intercept had reported something similar a year ago, though it only said it was 30 million Facebook users, rather than 50 million. And that story built on the work of a 2015 (yes, 2015) story in the Guardian discussing how Cambridge Analytica was using data from "tens of millions" of Facebook users "harvested without permission" in support of Ted Cruz's presidential campaign.

There's a lot of heat on this story right now, and a lot of accusations being thrown around, and I'll admit that I'm not entirely sure where I come down on the details yet. I assume people on basically both sides of this issue will scream at me and call me names over this, but there's too much going on to fully understand what happened here. I will note that, in that Guardian story in 2015, Cruz told the publication that this data collecting and targeting effort was "very much the Obama model." And political consultant Patrick Ruffini has a Twitter thread, well worth reading, arguing that people are overreacting to much of this, and that the 2012 Obama campaign did the exact same thing, and was celebrated for its creative use of data and targeting on the internet. Ad tech guy Jay Pinho makes the same point as well. Here's a Time article from 2012 excitedly talking up how the Obama campaign used Facebook in the same way:

That’s because the more than 1 million Obama backers who signed up for the app gave the campaign permission to look at their Facebook friend lists. In an instant, the campaign had a way to see the hidden young voters. Roughly 85% of those without a listed phone number could be found in the uploaded friend lists.

Of course, there is one major difference between the Obama effort and the Cambridge Analytica one, and it involves the level of transparency. With the Obama campaign, people knew they were giving their data (and their friends' data) to the cause of re-electing Obama. Cambridge Analytica got its data by having a Cambridge academic (who the new Guardian story revealed for the first time is also appointed to a position at St. Petersburg University) set up an app that was used to collect much of this data, and misled Facebook by telling the company it was purely for academic purposes, when the reality is that it was set up and directly paid for by Cambridge Analytica with the intent of sucking up that data for Cambridge Analytica's database. Is that enough to damn the whole thing? Perhaps.

As for the claims that this is just the same old Facebook model of selling everyone's data... that was not true and still is not accurate. Facebook doesn't sell your data. It sells access to its users via the data it has on you. That may not seem different, but it is different. But the lines do seem to get a bit blurry, as it appears that Cambridge Analytica, via its partnership with Dr. Aleksandr Kogan (who apparently briefly changed his name to -- I kid you not -- Dr. Spectre) and his "Global Science Research," basically paid people via Amazon's Mechanical Turk to take a "personality assessment" on Facebook that, as part of the process, exposed information about their entire social graph, which GSR apparently hoovered up and passed along to Cambridge Analytica.

At the very least, it can be said that Facebook should have recognized much earlier that this could and would be done, and to understand the potential privacy problems related to it. Facebook has a fairly long and painful history of not quite realizing how what it does impacts people's privacy, and this is one more example.

But it's raising a bigger question as well, and it's one that caused Facebook to do something that I'll definitively call "incredibly stupid": it threatened to sue the Guardian over its story, mainly because the Guardian story refers to this whole mess as a "data breach" of Facebook's data.

Facebook instructed external lawyers and warned us we were making 'false and defamatory' allegations. Today they said it was not correct to call this a data breach. We are calling it a data breach. https://t.co/Q8wrw0FDyr

And, of course, Facebook wasn't the only one who threatened to sue. Cambridge Analytica did too:

The Observer also received the first of three letters from Cambridge Analytica threatening to sue Guardian News and Media for defamation.

There are issues of terminology here. Facebook, in its post, is adamant that what happened is not a "breach":

The claim that this is a data breach is completely false. Aleksandr Kogan requested and gained access to information from users who chose to sign up to his app, and everyone involved gave their consent. People knowingly provided their information, no systems were infiltrated, and no passwords or sensitive pieces of information were stolen or hacked.

There are legal reasons why Facebook is so concerned about whether or not this is a "breach" and, let's face it, the company is about to face a million and a half lawsuits over this, not to mention government investigations (already Senator Amy Klobuchar has demanded Mark Zuckerberg's head on a platter -- er, his testimony before the Senate -- and Massachusetts Attorney General Maura Healey has announced the opening of an investigation, and there have also been rumblings out of the UK and the EU, as well as the FTC). But there are also some fairly important legal obligations if this was a "breach" in the traditional sense, such as disclosing that to those impacted by the breach.

I'm not entirely sure where I come down on the breach question. It doesn't feel like a traditional breach. It wasn't that Facebook coughed up this info; it was its users who coughed up the info... and Facebook just made it easy for this outside "academic" to hoover it all up by paying a bunch of people to take dopey personality quizzes. However, as the Guardian's Alex Hern points out, how do you distinguish what Kogan/GSR/Cambridge Analytica did from social engineering to get information?

If you're having trouble thinking of today's story as a "breach", try rephrasing it in your head as "Facebook fell prey to a social engineering attack in which it was convinced to hand over user data by an attacker who told it what it wanted to hear".

Of course, there is something of a difference: it still wasn't Facebook per se coughing up the info. It was Facebook's own users. And, you might even argue that if Facebook doesn't "own" all this data in the first place, it was actually those Facebook users coughing up a bunch of their own data -- including lots of data about their friends. Needless to say, this is a mess where a lot more transparency might help, and that transparency is going to be forced upon Facebook with a sledgehammer in the near future.

But, regardless of where you come down on all of this, Facebook threatening defamation against the Guardian for calling this a data breach is ludicrous and Facebook should be ashamed and apologize. Even as it clearly disagrees with how the Guardian characterized much of the story, that's no excuse to whip out defamation threats. Not only is it incredibly stupid from a Facebook PR perspective (and makes the company look like a giant bully), it suggests that the company still has absolutely no fucking clue how to communicate with the press and the public about how its own platform works.

It's actually quite incredible to recognize just how big Facebook has gotten in the face of how little it seems to understand about what its own platform does.

from the natsec-promiscuity dept

As a parting gift to the incoming president, Barack Obama approved information-sharing rules which gave sixteen federal agencies access to unminimized NSA collections. The whole list of agencies involved in the information sharing can be found at the ODNI's (Office of the Director of National Intelligence) website:

Two independent agencies—the Office of the Director of National Intelligence (ODNI) and the Central Intelligence Agency (CIA);

Eight Department of Defense elements—the Defense Intelligence Agency (DIA), the National Security Agency (NSA), the National Geospatial-Intelligence Agency (NGA), the National Reconnaissance Office (NRO), and intelligence elements of the four DoD services: the Army, Navy, Marine Corps, and Air Force.

Seven elements of other departments and agencies—the Department of Energy’s Office of Intelligence and Counter-Intelligence; the Department of Homeland Security’s Office of Intelligence and Analysis and U.S. Coast Guard Intelligence; the Department of Justice’s Federal Bureau of Investigation and the Drug Enforcement Administration’s Office of National Security Intelligence; the Department of State’s Bureau of Intelligence and Research; and the Department of the Treasury’s Office of Intelligence and Analysis.

Yes, the collected communications can be masked to protect the identities of US persons, but that call is made on a case-by-case basis by the NSA and there are several government officials with the power to demand unminimized access.

Introduced on April 26 by Rep. John Katko (R-NY), the “Improving Fusion Centers’ Access to Information Act” (HR 2169) is designed to plug any “information gaps” in state “fusion centers” by modifying the Homeland Security Act of 2002 to require DHS to

identify Federal databases and datasets, including databases and datasets used, operated, or managed by Department components, the Federal Bureau of Investigation, and the Department of the Treasury, that are appropriate, in accordance with Federal laws and policies, to address any gaps identified pursuant to paragraph (2), for inclusion in the information sharing environment and coordinate with the appropriate Federal agency to deploy or access such databases and datasets;

The DHS is already on the list of agencies with access to NSA collections. This bill would allow it to give underling agencies access to the same info. Some notable three-letter agencies on that list include CBP, ICE, and TSA. While the NSA's collections are supposed to serve a national security purpose, the FBI uses its access for standard criminal investigations. There's no reason to believe these agencies won't do the same.

But the bill has friends everywhere in the House, which passed it after just 40 minutes of debate thanks to a suspension of normal voting rules. The usual concerns for national security were voiced, but nothing was said of the NSA collection's routine use in domestic criminal investigations. That Congress considers expanded information sharing with domestic security agencies "non-controversial" (hence the sped-up voting process) is an indication of the majority's view of the privacy/security balancing act.

Worse, if the bill becomes law, the worst, most ineffective parts of the DHS will be given access to data and communications gathered by the NSA. Fusion centers -- which are already known for being mostly useless, when not actively doing damage to Constitutional rights -- will have even more information to misuse. The bill would give bicycles to fish in all 50 states. The only thing guaranteed is the new powers will be used badly. Eddington quotes from a 2012 report from the Senate Homeland Security Committee, which found DHS Fusion Centers to be expensive, useless, and a harm to the public.

The Department of Homeland Security estimated that it had spent somewhere between $289 million and $1.4 billion in public funds to support state and local fusion centers since 2003, broad estimates that differ by over $1 billion.

The investigation found that DHS intelligence officers assigned to state and local fusion centers produced intelligence of “uneven quality – oftentimes shoddy, rarely timely, sometimes endangering citizens’ civil liberties and Privacy Act protections, occasionally taken from already-published public sources, and more often than not unrelated to terrorism.”

This is where the NSA's collections will ultimately end up: in the hands of DHS branch offices that do little more than repeatedly screw up. Only now, they'll be able to do significantly more harm to Americans' civil liberties. Add to that the routine clusterfuck that is the CBP, ICE, and TSA, and you have a recipe for massive Fourth Amendment violations under the pretense of national security.

from the statement-of-objections dept

When it comes to online privacy, the European data protection authorities tend to be quite interventionist as they try to police the movement of personal data within and out of the EU. The concerns over the Safe Harbor and Privacy Shield frameworks are one manifestation of this. Another is the increasing EU scrutiny of Facebook's purchase of WhatsApp.

Facebook has appealed to the administrative court against the order in the preliminary proceedings, seeking to overturn the immediate enforcement. The court rejected this request today and made clear that it sees no legal basis for the planned data exchange. Facebook cannot invoke its own business interests, because the complete data exchange is necessary neither for the purposes of network security or business analysis nor for advertising optimization. Furthermore, the court clarified that there is no effective consent from WhatsApp users for a data exchange with Facebook. As a result, the administrative court has made a clear determination in the preliminary proceedings: the interests of the approximately 35 million German WhatsApp users outweigh Facebook's economic interest in a suspension of immediate enforceability.

That's not the only problem Facebook faces in Europe. A little while after WhatsApp announced that it would be consolidating its user data with Facebook, the European Commission sent what is called a "Statement of Objections" to Facebook, alleging that:

the company provided incorrect or misleading information during the Commission's 2014 investigation under the EU Merger Regulation of Facebook's planned acquisition of WhatsApp.

The problem is that:

When reviewing Facebook's planned acquisition of WhatsApp, the Commission looked, among other elements, at the possibility of Facebook matching its users' accounts with WhatsApp users' accounts. In its notification of the transaction in August 2014 and in a reply to a request of information, Facebook indicated to the Commission that it would be unable to establish reliable automated matching between the two companies' user accounts.

Once WhatsApp and Facebook started carrying out precisely that kind of automated data matching last year, the Commission naturally wondered whether Facebook had been totally frank in its answers. The company had until January 31 to explain itself, and the Commission is now deciding whether it feels it was given misleading information. If it does, the consequences may be quite costly. Under EU law, Facebook could be fined 1% of its global turnover -- which would amount to around $179 million based on 2015 revenues. On its own, that probably wouldn't be too much of a problem for the deep-pocketed company. But combined with the ruling in Germany, and the possibility that data protection authorities in other countries will follow suit -- the law is the same throughout the EU, after all -- these European concerns about privacy are turning into a major headache for Facebook.

from the won't-somebody-think-of-the-wheat? dept

Wheat blast may not be uppermost in the minds of many Techdirt readers, but as the following explains, it's a serious plant disease that is spreading around the world:

Wheat blast is a fearsome fungal disease of wheat. It was first discovered in Paraná State of Brazil in 1985. It spread rapidly to other South American countries such as Colombia, Bolivia, Paraguay, and Argentina, where it infects up to 3 million hectares and causes serious crop losses. Wheat blast was also detected in Kentucky, USA, in 2011.

Wheat blast is caused by a fungus known as Magnaporthe oryzae although scientists are still debating its exact identity. There is a risk that wheat blast could expand beyond South America and threaten food security in wheat growing areas in Asia and Africa.

That comes from an interesting site called Open Wheat Blast. It's been set up by a group of scientists who want to help combat the threat of wheat blast. And as their name suggests, they hope to do that by sharing data as widely as possible:

To rapidly respond to this emergency, our team is making genetic data for the wheat blast pathogen available via this website and we are inviting others to do the same. Our goal is that the OpenWheatBlast website will provide a hub for information, collaboration and comment. Collectively, we can better exploit the genetic sequences and answer important questions about the nature of the pathogen and disease.

That's such a self-evidently sensible thing to do, the obvious question to ask is: why isn't this done routinely -- and for human diseases too? In fact, a couple of months ago, 33 global health bodies signed a "Statement on data sharing in public health emergencies," with particular emphasis on sharing data about the Zika virus:

The arguments for sharing data, and the consequences of not doing so, have been thrown into stark relief by the Ebola and Zika outbreaks.

In the context of a public health emergency of international concern, there is an imperative on all parties to make any information available that might have value in combatting the crisis.

We are committed to working in partnership to ensure that the global response to public health emergencies is informed by the best available research evidence and data

That declaration built on a "consensus statement" that came out of a World Health Organization consultation on "Developing global norms for sharing data and results during public health emergencies" in September 2015. One of the summary points spells out the key issue holding back open sharing of key information:

WHO seeks a paradigm shift in the approach to information sharing in emergencies, from one limited by embargoes set for publication timelines, to open sharing using modern fit-for-purpose pre-publication platforms. Researchers, journals and funders will need to engage fully for this paradigm shift to occur.

As that makes clear, a big problem is the way that results are published, with researchers and publishers more interested in keeping their results under wraps for a while than spreading them widely and quickly. And there's another issue too:

Patents on natural genome sequences could be inhibitory for further research and product development. Research entities should exercise discretion in patenting and licensing genome-related inventions so as not to inhibit product development and to ensure appropriate benefit sharing.

It's a rather sad state of affairs when publishing concerns and patents are getting in the way of producing treatments and cures for serious human diseases that could improve the lives of millions of people. Protecting crops from wheat blast is, of course, welcome, but is it really the best we can do?

from the a-fine-guest-post-full-of-classic-debunkables dept

Just when we thought some surveillance reforms might stick, the administration announced it was expanding law enforcement access to NSA data hauls. This prompted expressions of disbelief and dismay, along with a letter from Congressional representatives demanding the NSA cease this expanded information sharing immediately.

This backlash prompted Office of the Director of National Intelligence General Counsel Robert Litt to make an unscheduled appearance at Just Security to explain how this was all a matter of everyone else getting everything wrong, rather than simply taking the administration at its word.

There has been a lot of speculation about the content of proposed procedures that are being drafted to authorize the sharing of unevaluated signals intelligence. While the procedures are not yet in final form, it would be helpful to clarify what they are and are not. In particular, these procedures are not about law enforcement, but about improving our intelligence capabilities.

As Litt explains it, everything about this is lawful and subject to a variety of policies and procedures.

These procedures will thus not authorize any additional collection of anyone’s communications, but will only provide a framework for the sharing of lawfully collected signals intelligence information between elements of the Intelligence Community. Critically, they will authorize sharing only with elements of the Intelligence Community, and only for authorized foreign intelligence and counterintelligence purposes; they will not authorize sharing for law enforcement purposes. They will require individual elements of the Intelligence Community to establish a justification for access to signals intelligence consistent with the foreign intelligence or counterintelligence mission of the element. And finally, they will require Intelligence Community elements, as a condition of receiving signals intelligence, to apply to signals intelligence information the kind of strong protections for privacy and civil liberties, and the kind of oversight, that the National Security Agency currently has.

So, this all sounds like it has nothing to do with law enforcement. Just intelligence "elements" from the community. Except that law enforcement and intelligence agencies are hardly separate entities. We already know the NSA is allowed to "tip" data to the FBI if it might be relevant to criminal investigations. There's no clear dividing line between intelligence and law enforcement -- not with law enforcement's steady encroachment into national security territory. When Litt says "only intelligence agencies," he's actually referring to several law enforcement agencies, as Marcy Wheeler points out.

As a threshold matter, both FBI and DEA are elements of the intelligence community. Counterterrorism is considered part of FBI’s foreign intelligence function, and cyber investigations can be considered counterintelligence and foreign intelligence (the latter if done by a foreigner). International narcotics investigations have been considered a foreign intelligence purpose since EO 12333 was written.

In other words, this sharing would fall squarely in the area where eliminating the wall between intelligence and law enforcement in 2001-2002 also happened to erode fourth amendment protections for alleged Muslim (but not white supremacist) terrorists, drug dealers, and hackers.

So make no mistake, this will degrade the constitutional protections of a lot of people, who happen to be disproportionately communities of color.

And, to go back to Litt's statement, the whole thing starts with a dodge:

These procedures will thus not authorize any additional collection of anyone’s communications…

This is something no one has actually claimed. What people are concerned about is the NSA using its massive collection abilities to become an extension of domestic law enforcement, rather than the foreign-focused entity it's supposed to be.

And, as for Litt's claims that everything is subject to clearly-defined rules on minimization, those are also false. First off, the expanded permissions originate under Executive Order 12333, which has been revised in secret on more than one occasion -- all without the full participation of Congressional oversight. Not only that, but agencies that are recipients of unminimized data from the NSA are supposed to apply their own minimization procedures to better ensure "strong protections for privacy and civil liberties." Wheeler notes that two recipients have yet to put any minimization procedures in place, despite having had years to do so.

I also suspect that Treasury will be a likely recipient of this data; as of February 10, Treasury still did not have written EO 12333 protections that were mandated 35 years ago (and DEA’s were still pending at that point).

The backdoor search loophole has yet to be closed (which gives the FBI access to unminimized data and communications obtained via Section 702) and these agencies -- along with two consecutive, very compliant administrations -- have been tearing down any walls between the NSA and law enforcement for several years now.

Litt's reassurances are worthless. They namecheck all the stuff we know is mostly worthless -- oversight, minimization procedures, the frankly laughable idea that the FBI cares more about privacy and civil liberties than making busts -- and ask us to believe that a tangled thicket of secretive agencies and even-more-secretive laws is all designed to protect us from government overreach.