from the pay-attention----this-matters-a-lot dept

If you pay attention to Github (and you should), you know that late last week the site started experiencing some problems staying online, thanks to a massive and frequently changing DDoS attack. Over the past few days a lot more details have come out, making it pretty clear that the attack is coming via China with what is likely direct support from the Chinese government. While it's messing with all of Github, it's sending traffic to two specific Github pages: https://github.com/greatfire and https://github.com/cn-nytimes. Those both provide tools to help people in China access Greatfire and the NY Times. Notably, Greatfire itself notes that prior to the DDoS on Github, its own site was hit with a very similar DDoS attack.

If you want the technical details, Netresec explains how the DDoS works, noting that it's a "man-on-the-side" attack that injects malicious packets alongside code loaded from Chinese search engine Baidu (including both its ad platform and analytics platform), but that it is unlikely to be coming directly from Baidu itself.

But the much more interesting part is why China is using a DDoS attack, rather than its standard approach of simply blocking access within China, as it has historically done. The key is that, two years ago, China tried to block Github entirely... and Chinese programmers flipped out, pointing out that they couldn't do their jobs without Github. The Chinese censors were forced to back down, leaving a sort of loophole in the Great Firewall. That leads to the next question: why doesn't China just block access to the URLs of the two repositories it doesn't like? The answer: HTTPS. Because all Github traffic is encrypted via HTTPS, China can't block those specific URLs, since it can't see which pages are actually being requested.
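To see why per-page blocking fails under HTTPS, here's a rough sketch (in Python, with one hypothetical repository URL added for contrast) of what a passive observer on the wire can and can't see. This is an illustration of the general principle, not a model of the Great Firewall's actual classifiers:

```python
from urllib.parse import urlparse

def observer_view(url: str) -> dict:
    """Split a URL into what a passive network observer can and
    cannot see when the request is made to that URL."""
    parts = urlparse(url)
    if parts.scheme == "https":
        # The hostname still leaks (via DNS and the TLS SNI field),
        # but the path and query travel inside the encrypted tunnel.
        return {"visible": parts.hostname, "hidden": parts.path}
    # Plain HTTP exposes the full URL, so per-page blocking works.
    return {"visible": parts.hostname + parts.path, "hidden": ""}

censored = observer_view("https://github.com/greatfire")
innocuous = observer_view("https://github.com/some-random-project")

# Both requests look identical to a censor on the wire: "github.com".
print(censored["visible"], innocuous["visible"])
```

Since blocking "github.com" wholesale is off the table, the censor's choice collapses to letting everything through or attacking the site itself.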

And thus, we get the decision to turn its firewall around, launching a rather obvious DDoS attack on the two sites it doesn't like, with the rather clear message being sent to Github: if you stop hosting these projects, the DDoS will stop. Of course, so far Github is taking a stand and refusing to take down those projects (which is great and exactly what it should be doing).

However, this does suggest an interesting escalation in the increasing attempts to fragment the internet. You see various countries demanding (or forcing) that certain websites be blocked. But those solutions are only ever temporary. Because the overall internet is too important to block, and because some sites (like Github) are necessary, there are always holes in the system. Add in a useful dose of encryption (yay!) and the ability to control everything that's read in one particular country becomes increasingly difficult. You might hope the response would be to give up attempts to censor, but China isn't likely to give up just like that. So, instead, it's basically trying to censor the global internet, by launching a high-powered attack on the site that is the problem, while basically saying "get rid of these projects and we'll stop the attack."

It seems likely that this sort of escalation is only going to continue -- but in some ways it's actually a good sign. It shows that there are real cracks in China's attempts to censor the internet. We're basically realizing the limits of the Great Firewall of China, and useful services like Github have allowed a way to tunnel through. China is responding by trying to make life difficult for Github, but as long as Github and others can figure out ways to resist, censorship attempts like the Great Firewall will increasingly be useless.

In the early days of the internet, people talked about how it was resistant to censorship. Over the past decade or so, China has challenged that idea, showing that it could basically wall off large parts of the internet, and actually keep things semi-functional. Yes, there were always cracks in the wall, but for the most part, China showed that you could censor large parts of the internet. This latest move suggests that we may be moving back towards a world where the internet really is resistant to censorship -- and China is freaking out about it and responding by trying to increase the censorship globally. It's a battle that is going to be important to follow if you believe in supporting free expression online.

from the i-don't-know-anything-about-this-stuff dept

Here's a suggestion: if you're a Congressional Representative whose job it is to regulate all sorts of important things, and you state in a hearing "I don't know anything about this stuff" before spouting off your opinions about how something must be done... maybe, just maybe, educate yourself before confirming to the world that you're ignorant of the very thing you're regulating. We famously saw this during the SOPA debate, where Representatives seemed proud of their own ignorance. As we noted at the time, it's simply not okay for members of Congress to be proud of their ignorance of technology, especially when they're in charge of regulating it. But, apparently, things have not changed all that much.

We already wrote about FBI Director James Comey's bizarre Congressional hearing earlier this week, in which he warned those in attendance about the horrible world that faced us when the FBI couldn't spy on absolutely everything. But the folks holding the hearing were suckers for this, and none more so than Rep. John Carter. The ACLU's Chris Soghoian alerts us to the following clip of Carter at that hearing, which he says "is going to be the new 'The Internet is a Series of Tubes'" video. I would embed the video, but for reasons that are beyond me, C-SPAN doesn't use HTTPS, so an embed wouldn't work here (randomly: Soghoian should offer C-SPAN a bottle of whiskey to fix that...).

Here's the basic transcript though:

Rep. John Carter: I'm chairman of Homeland Security Appropriations. I serve on Defense and Defense subcommittees. We have all the national defense issues with cyber. And now, sir, on this wonderful committee. So cyber is just pounding me from every direction. And every time I hear something, or something just pops in my head -- because I don't know anything about this stuff. If they can do that to a cell phone why can't they do that to every computer in the country, and nobody can get into it? If that's the case, then that's the solution to the invaders from around the world who are trying to get in here. [Smug grin]

FBI Director Comey: [Chuckle and gives smug, knowing grin]

Carter: Then if that gets to be the wall, the stone wall, and even the law can't penetrate it, then aren't we creating an instrument [that] is the perfect tool for lawlessness. This is a very interesting conundrum that's developing in the law. If they, at their own will at Microsoft can put something in a computer -- or at Apple -- can put something in that computer [points on a smartphone], which it is, to where nobody but that owner can open it, then why can't they put it in the big giant super computers, that nobody but that owner can open it. And everything gets locked away secretly. And that sounds like a solution to this great cyber attack problem, but in turn it allows those who would do us harm [chuckles] to have a tool to do a great deal of harm where law enforcement can't reach them. This is a problem that's gotta be solved.

Holy crap! Rep. John Carter just learned about encryption! And he thinks it's only on mobile phones but (ooooh, scary) might one day be used on "big super computers" to keep stuff safe. But he doesn't realize that it's been widely used for many, many, many years to keep his very own data safe -- and much of ours as well.

The conversation continues with Carter again demonstrating confusion over some rather basic concepts:

Carter: If you're following the Bill of Rights, you have every right to be able to go before a judge, present your probable cause, and if he sees it, that's a right, get a warrant and get into that machine. And I don't think there's a right of privacy issue in the world that prevents you following the law.

Uh, right. There isn't a right of privacy issue that prevents the FBI from going and getting a warrant, but the larger argument is whether or not individuals can protect other things privately -- and they've always been able to do so. If you and I have a conversation just between the two of us, there is no way for the government to then find out what that conversation was about. Because there's no way to "decrypt" a verbal conversation that is now stored entirely in our minds. That's been true forever. Yet we don't see Rep. Carter or Director Comey demanding recording devices to record every conversation. But, to Carter, the fact that you might be able to do the same thing with your email is a "monster."
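That intuition maps directly onto encryption itself. As a toy illustration -- a one-time pad, which is a classroom cipher and nothing like what any phone actually ships -- the ciphertext reveals nothing to anyone who doesn't hold the key, just as an unrecorded conversation reveals nothing to anyone who wasn't in the room:

```python
import secrets

def otp_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    """XOR the message with a fresh random key of the same length.
    Without the key, every same-length plaintext is equally likely."""
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return ciphertext, key

def otp_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    # XOR is its own inverse: applying the key again recovers the text.
    return bytes(c ^ k for c, k in zip(ciphertext, key))

message = b"just between the two of us"
ciphertext, key = otp_encrypt(message)

# With the key, the message comes back exactly; without it, there is
# nothing to "compel" out of the ciphertext.
assert otp_decrypt(ciphertext, key) == message
```

A warrant can compel you to hand over the ciphertext, but the ciphertext alone is no more useful than a memory of a conversation you weren't party to.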

Carter: So if that's what they've created, they've created a monster, that will harm law enforcement, national security and everything else in this country. And this really needs to be addressed. And I wasn't even going to talk about that, but that upsets the heck out of me. 'Cause, you know, I don't think that's right.

Yeah, Rep. Carter, you're kind of decades too late. And you're totally wrong, too. It didn't create a monster. It didn't harm "everything else in this country." It protected millions of law-abiding people -- including Carter -- by keeping their data safe. That's the whole point of encryption. Saying that "it needs to be addressed" is ridiculous. However, it does make it clear that Rep. Carter was being honest at the beginning when he admitted that he doesn't "know anything about this stuff." Perhaps he should have stopped there.

At the end there's this bizarre dialogue about how law enforcement and judges handle information in a locked safe, but it seems like Carter still doesn't understand the question, finally saying that it's "bad policy" to have a safe that can't be opened by the manufacturer and "a crisis." So is Rep. Carter arguing that all safes need to have backdoors that the manufacturers can open?

Doesn't Rep. Carter have staffers who can point out to him that computer encryption has been around for decades, and it's what keeps all sorts of stuff safe, including his banking details, his credit card purchases, the confidential memos he receives in Congress and much, much more? And yet, he's suddenly discovered encryption and he's decided it's bad because it might, someday, end up on computers?

from the about-time... dept

About a year ago, when we switched to default HTTPS, we pointed out that one of the major reasons why other news sites refused to do the same was that most ad networks would not support HTTPS. We actually had to end a number of relationships with ad partners in order to make the move (but we felt it was worth it). The really crazy part was that many of the ad network partners we spoke to clearly had absolutely no clue about HTTPS -- what it was or why it's important. But, over the past year, more and more attention has been placed on the value and importance of encrypting web traffic, so it's great to see that the internet ad industry is starting to wake up to this, even if it's pretty late in the process.

In fact, last year was the time to talk about security. From The New York Times to Google, the call went out for websites to encrypt communications with their users, protecting the integrity and privacy of information exchanged in both directions. Even the U.S. government heard this call, and is working to require HTTPS delivery of all publicly accessible Federal websites and web services.

This year, the advertising industry needs to finish catching up. Many ad systems are already supporting HTTPS - a survey of our membership late last year showed nearly 80% of member ad delivery systems supported HTTPS. That’s a good start, but doesn’t reflect the interconnectedness of the industry. A publisher moving to HTTPS delivery needs every tag on page, whether included directly or indirectly, to support HTTPS. That means that in addition to their ad server, the agency ad server, beacons from any data partners, scripts from verification and brand safety tools, and any other system required by the supply chain also needs to support HTTPS.

Let’s break that down a bit more - once a website decides to support HTTPS, they need to make sure that their primary ad server supports encryption. That ad server will sometimes need to include tags from brand safety, audience and viewability measurement, and other tools - all of which also need to support encryption. The publisher’s ad server will often direct to one of several agency ad servers, each of which will also need to serve over HTTPS. Each agency ad server also may include a variety of beacons or tags, depending on how the deal was set up, all of which similarly need to have encrypted versions available. That’s a lot of dependencies - and when one fails to support HTTPS, the website visitor’s experience is impacted, initiating a costly search for the failure point by the publisher.

While I question that 80% number -- given that we had difficulty finding many ad providers who supported HTTPS a year ago -- it's good to see the industry finally recognizing how important this is.
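The dependency problem described above is at least easy to check for mechanically: every resource a page pulls in has to be available over HTTPS, or the visitor gets a mixed-content failure. A minimal sketch of such a scanner -- hypothetical URLs, and checking only `src` attributes, where a real tool would also cover stylesheets, iframes, and tags loaded indirectly by other tags -- might look like:

```python
from html.parser import HTMLParser

class MixedContentChecker(HTMLParser):
    """Collect http:// resources embedded in a page that is meant to
    be served over HTTPS -- each one breaks the page for visitors."""

    def __init__(self):
        super().__init__()
        self.insecure = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            # Only `src` attributes are checked in this sketch.
            if name == "src" and value and value.startswith("http://"):
                self.insecure.append((tag, value))

page = """
<img src="https://cdn.example.com/banner.png">
<script src="http://ads.example.net/tag.js"></script>
<img src="http://beacon.example.org/pixel.gif">
"""

checker = MixedContentChecker()
checker.feed(page)
print(checker.insecure)  # flags the two http:// tags
```

The catch, as the quoted piece notes, is that a publisher can only scan what's on the page at one moment -- ad tags routinely chain in further tags at serve time, so one non-HTTPS partner deep in the chain can break things unpredictably.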

from the keeping-you-safe...-or-keeping-you-vulnerable dept

Back in October, we highlighted the contradiction of FBI Director James Comey raging against encryption and demanding backdoors, while at the very same time the FBI's own website was suggesting mobile encryption as a way to stay safe. Sometime after that post went online, all of the information on that page about staying safe magically disappeared, though thankfully I screenshotted it at the time:

If you really want, you can still see that information over at the Internet Archive or in a separate press release the FBI apparently hasn't yet tracked down and memory-holed. Still, it's no surprise that the FBI quietly deleted that original page recommending that you encrypt your phones "to protect the user's personal data," because the big boss man is going around spreading a bunch of scare stories about how we're all going to be dead or crying if people actually encrypted their phones:

Calling the use of encrypted phones and computers a “huge problem” and an affront to the “rule of law,” Comey painted an apocalyptic picture of the world if the communications technology isn’t banned.

“We’re drifting to a place where a whole lot of people are going to look at us with tears in their eyes,” he told the House Appropriations Committee, describing a hypothetical in which a kidnapped young girl’s phone is discovered but can’t be unlocked.

So, until recently, the FBI was actively recommending you encrypt your data to protect your safety -- and yet, today it's "an affront to the rule of law." Is this guy serious?

More directly, this should raise serious questions about what Comey thinks his role is at the FBI (or the FBI's role is for the country)? Is it to keep Americans safe -- or is it to undermine their privacy and security just so it can spy on everyone?

Not surprisingly, Comey pulls out the trifecta of FUD in trying to explain why it needs to spy on everyone: pedophiles, kidnappers and drug dealers:

“Tech execs say privacy should be the paramount virtue,” Comey continued, “When I hear that I close my eyes and say try to image what the world looks like where pedophiles can’t be seen, kidnapper can’t be seen, drug dealers can’t be seen.”

Except we know exactly what that looks like -- because that's the world we've basically always lived with. And yet, law enforcement folks like the FBI and various police departments were able to use basic detective work to track down criminals.

If you want to understand just how ridiculous Comey's arguments are, simply replace his desire for unencrypted devices with video cameras in every corner of your home that stream directly into the FBI. Same thing. Would that make it easier for the FBI to solve some crimes? Undoubtedly. Would it be a massive violation of privacy and put many more people at risk? Absolutely.

It's as if Comey has absolutely no concept of a cost-benefit analysis. All "bad people" must be stopped, even if it means destroying all of our freedoms, based on what he has to say. That's insane -- and raises serious questions about his competence to lead a government agency charged with protecting the Constitution.

The American people expect government websites to be secure and their interactions with those websites to be private. Hypertext Transfer Protocol Secure (HTTPS) offers the strongest privacy protection available for public web connections with today’s internet technology. The use of HTTPS reduces the risk of interception or modification of user interactions with government online services.

This proposed initiative, “The HTTPS-Only Standard,” would require the use of HTTPS on all publicly accessible Federal websites and web services.

In a statement that clashes with the NSA's activities and the FBI's push for pre-compromised encryption, the CIO asserts that when people engage with government websites, these interactions should be no one's business but their own.

All browsing activity should be considered private and sensitive.

The proposed standard would eliminate agencies' options, forcing them to move to HTTPS, both for their safety and the safety of their sites' visitors. To be sure, many cats will still need to be herded if this goes into effect, but hopefully there won't be too many details to quibble over. HTTPS or else is the CIO Council's goal -- something that shouldn't be open to too much interpretation.
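What "HTTPS or else" means in practice is fairly mechanical: redirect any plaintext request to its HTTPS equivalent, and tell browsers (via an HSTS header) never to come back over plain HTTP. A hypothetical sketch as WSGI middleware -- invented hostnames, and not the CIO Council's actual implementation guidance:

```python
def https_only(app):
    """Wrap a WSGI app: bounce plain-HTTP requests to HTTPS and add
    an HSTS header so browsers remember to stay encrypted."""
    def wrapper(environ, start_response):
        if environ.get("wsgi.url_scheme") != "https":
            # Permanent redirect to the HTTPS version of the same path.
            host = environ.get("HTTP_HOST", "agency.example.gov")
            location = "https://" + host + environ.get("PATH_INFO", "/")
            start_response("301 Moved Permanently", [("Location", location)])
            return [b""]

        def add_hsts(status, headers, exc_info=None):
            headers.append(("Strict-Transport-Security",
                            "max-age=31536000; includeSubDomains"))
            return start_response(status, headers, exc_info)
        return app(environ, add_hsts)
    return wrapper

def site(environ, start_response):
    # Stand-in for an agency's actual site.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"public records"]

secure_site = https_only(site)
```

The hard part for agencies isn't this redirect logic; as noted below, it's certificates, legacy infrastructure, and pages that pull in third-party resources.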

As the Council points out, failing to do so places both ends of the interaction at risk. If government sites are thought to be unsafe, it has the potential to harm citizens along with the government's reputation.

Federal websites that do not use HTTPS will not keep pace with privacy and security practices used by commercial organizations, or with current and upcoming Internet standards. This leaves Americans vulnerable to known threats, and reduces their confidence in their government. Although some Federal websites currently use HTTPS, there has not been a consistent policy in this area. The proposed HTTPS-only standard will provide the public with a consistent, private browsing experience and position the Federal government as a leader in Internet security.

The CIO's short, but informative, explanatory page lists the pros of this proposed move, as well as spells out what HTTPS doesn't protect against. It also notes that while most sites should actually see a performance boost from switching to HTTPS, sites that gather elements from other parties will be the most difficult to migrate. And, it notes, the move won't necessarily be inexpensive.

The administrative and financial burden of universal HTTPS adoption on all Federal websites includes development time, the financial cost of procuring a certificate and the administrative burden of maintenance over time. The development burden will vary substantially based on the size and technical infrastructure of a site. The proposed compliance timeline provides sufficient flexibility for project planning and resource alignment.

But, it assures us (at least as much as any government entity can...), the money will be well-spent.

The tangible benefits to the American public outweigh the cost to the taxpayer. Even a small number of unofficial or malicious websites claiming to be Federal services, or a small amount of eavesdropping on communication with official US government sites could result in substantial losses to citizens.

A very encouraging -- if rather belated -- sign that the government is still making an effort to take privacy and security seriously, rather than placing those two things on the scales for intelligence and law enforcement agencies to shift around as they see fit when weighing their desires against Americans' rights and privileges.

from the good-move! dept

Back in 2012 (pre-Snowden!), we wrote about why Google should encrypt everyone's emails using end-to-end encryption (inspired by a post by Julian Sanchez saying the same thing). Since then, securing private communications has become increasingly important. That's why we were happy to see Google announce that it was, in fact, working on a project to enable end-to-end encryption on Gmail, though it was still in the early stages. In December of last year, Google moved that project to Github, showing that it was advancing nicely. As we noted at the time, one interesting sidenote on this was that Yahoo's Chief Security Officer, Alex Stamos, was contributing to the project as well.

Thus it's not surprising, but still great to see, that Stamos has now announced the availability of an end-to-end encryption extension for Yahoo Mail (also posted to Yahoo's Github repository). It appears to function similarly to existing third-party extensions (like Mailvelope), but it's still good to see the big webmail providers like Yahoo and Google taking this issue more seriously. It's still not ready for prime time, and it's unlikely that either provider is going to make this a default option any time soon, but offering more, better (and more user friendly) options to give everyone at least the option of doing end-to-end encryption is a very good sign.

It also raises a separate issue that I think is important: many have argued that companies like Yahoo and especially Google would never actually push for end-to-end encryption of emails, because it takes away the ability of those companies to do contextual advertising within those emails. But that's an exceptionally short-sighted view. If Google, Yahoo and others don't do enough to protect their users' privacy, those users will go elsewhere, and then it won't matter whether or not the emails are encrypted, because they won't see them anyway. Focusing on the user first is always going to be the right solution, and that includes encrypting emails, even if it means slightly less ad revenue in the short term. Hopefully, Google, Yahoo and others remember this simple fact.
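For the curious, the end-to-end property itself is easy to demonstrate with textbook RSA. This toy -- tiny primes, no padding, trivially breakable, and nothing like the OpenPGP machinery the actual extensions use -- shows why a provider relaying end-to-end encrypted mail sees only ciphertext, and so has nothing to scan for ads in the first place:

```python
# Toy textbook RSA, for illustration of the end-to-end property only.
p, q = 61, 53
n = p * q                           # public modulus (3233)
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent, held by recipient

def encrypt(m: int) -> int:
    # Anyone holding the recipient's public key (n, e) can encrypt...
    return pow(m, e, n)

def decrypt(c: int) -> int:
    # ...but only the recipient, holding d, can decrypt.
    return pow(c, d, n)

message = 42
ciphertext = encrypt(message)
# The mail provider relays only `ciphertext`: it can't read `message`,
# so it can't target ads against it either.
assert decrypt(ciphertext) == message
```

The crucial design point is that the private exponent never leaves the recipient's device, which is exactly what distinguishes end-to-end encryption from the transport encryption (HTTPS) both providers already use.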

from the going-from-bad-to-worse dept

Techdirt has been charting for a while France's descent from a bastion of enlightenment values to a country that seems willing to give up any freedom in the illusory hope of gaining some security. According to a story in Le Figaro, even worse is to come in the shape of a new law (original in French, found via @gchampeau):

[the proposed law] wants to force intermediaries to "detect, using automatic processing, suspicious flows of connection data". Internet service providers as well as platforms like Google, Facebook, Apple and Twitter would themselves have to identify suspicious behavior, according to instructions they have received, and pass the results to investigators. The text does not specify, but this could mean frequent connections to monitored pages.

As well as being extremely vague, none of this "automatic detection" will require a warrant, which means that the scope for abuse and errors will be huge. And then there's this:
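To make that vagueness concrete, here's a hypothetical sketch of what "automatic processing" of connection data could amount to -- the bill itself specifies no algorithm, no threshold, and no watch list, so every name and number below is invented:

```python
from collections import Counter

# Invented watch list and threshold: the bill specifies neither.
WATCHED = {"monitored.example.org", "blocked.example.net"}
THRESHOLD = 3

def flag_suspicious(connection_log):
    """connection_log: iterable of (subscriber, host) pairs, i.e. pure
    metadata. Flag subscribers who repeatedly reach watched hosts."""
    hits = Counter(sub for sub, host in connection_log if host in WATCHED)
    return {sub for sub, count in hits.items() if count >= THRESHOLD}

log = ([("alice", "monitored.example.org")] * 3
       + [("bob", "news.example.com")])
print(flag_suspicious(log))  # alice is reported -- no warrant, no review
```

Note that nothing here needs message content at all: a few lines of counting over connection metadata is enough to put someone on a list, which is precisely why warrantless "automatic detection" is so ripe for abuse and error.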

The Intelligence bill also addresses the obligations placed on operators and platforms "concerning the decryption of data." More than ever, France is keen to have the [encryption] keys necessary to read intercepted conversations, even if they are protected.

As we've noted before, there is a global push to demonize encryption by presenting it as a "dark place" where bad people can safely hide. What's particularly worrying is that the measures proposed by France are easy to circumvent using client-side encryption. The fear has to be that once the French government realizes that fact, it will then seek to control or ban this form too.

from the meh dept

We just had a story about the Intercept's revelation that the CIA holds an annual hackathon (the CIA calls it a "Jamboree") to come up with new ways to hack secure systems, inviting in various contractors and government agencies. Much of the work is focused on hacking Apple's security, inserting backdoors and generally degrading security and encryption for everyone.

The CIA refused to comment on the Intercept's original story, but the reporters got former FTC official Steven Bellovin to sum it up as:

“Spies gonna spy,” says Steven Bellovin, a former chief technologist for the U.S. Federal Trade Commission and current professor at Columbia University. “I’m never surprised by what intelligence agencies do to get information. They’re going to go where the info is, and as it moves, they’ll adjust their tactics. Their attitude is basically amoral: whatever works is OK.”

"That's what we do," the official said. "CIA collects information overseas, and this is focused on our adversaries, whether they be terrorists or other adversaries."

Except, of course, they don't just spy overseas. The CIA has done domestic spying as well, and the descriptions of the projects don't just impact people overseas. And then there's this one:

"There's a whole world of devices out there, and that's what we're going to do," the official said. "It is what it is."

It is what it is. That's someone who clearly doesn't care one bit about the negative consequences of attacking security and inserting backdoors that can harm everyone, just so long as they can also spy on people they don't like. You know, like the US Senate.

from the the-ijamboree dept

The latest big report from the Intercept is about an annual hackathon, put on by the CIA (which the NSA and others participate in) where they try to hack encrypted systems, with a key focus on Apple products. The CIA calls this its annual "Trusted Computing Base Jamboree." The whole point: how can the CIA undermine trusted computing systems.

As the announcement for the event notes:

As in past years, the Jamboree will be an informal and interactive conference with an emphasis on presentations that provide important information to developers trying to circumvent or exploit new security capabilities.

In other words, rather than seeking to better protect Americans by making sure the security products they use remain secure, this event was about making everyone less safe -- in particular Apple users. The report notes how researchers have undermined Xcode so that the intelligence community can inject backdoors into lots of apps and to reveal private keys (apparently not caring how that makes everyone less secure):

A year later, at the 2012 Jamboree, researchers described their attacks on the software used by developers to create applications for Apple’s popular App Store. In a talk called “Strawhorse: Attacking the MacOS and iOS Software Development Kit,” a presenter from Sandia Labs described a successful “whacking” of Apple’s Xcode — the software used to create apps for iPhones, iPads and Mac computers. Developers who create Apple-approved and distributed apps overwhelmingly use Xcode, a free piece of software easily downloaded from the App Store.

The researchers boasted that they had discovered a way to manipulate Xcode so that it could serve as a conduit for infecting and extracting private data from devices on which users had installed apps that were built with the poisoned Xcode. In other words, by manipulating Xcode, the spies could compromise the devices and private data of anyone with apps made by a poisoned developer — potentially millions of people.

The risks for nearly anyone using an Apple product should become pretty clear when you realize what this "whacked" Xcode can do:

“Entice” all Mac applications to create a “remote backdoor” allowing undetected access to an Apple computer.

“Force all iOS applications” to send data from an iPhone or iPad back to a U.S. intelligence “listening post.”

Disable core security features on Apple devices.
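The toolchain compromise described above can be sketched in a few lines. This toy -- invented names, and not how Xcode or the actual "Strawhorse" attack works in any detail -- shows why the developer never notices anything wrong:

```python
# Toy illustration of a poisoned build tool: the developer's source is
# clean, but every binary the tool produces is not.

BACKDOOR = "# implant: report to listening post\n"

def honest_compile(source: str) -> str:
    # Stand-in for real compilation, which transforms source to a binary.
    return source

def poisoned_compile(source: str) -> str:
    # Same interface, same apparent behavior -- but the implant rides
    # along in every app this tool ever builds.
    return honest_compile(source) + BACKDOOR

app_source = "def main():\n    print('hello')\n"
shipped = poisoned_compile(app_source)

# The developer's source contains no backdoor; the shipped app does.
assert BACKDOOR not in app_source
assert BACKDOOR in shipped
```

That leverage is the point: compromise one widely used development tool and you've compromised every app built with it, and every user of those apps, without touching any of them directly.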

While the Jamboree appears mostly focused on Apple products, that's not all. Microsoft's BitLocker encryption was also a target:

Also presented at the Jamboree were successes in the targeting of Microsoft’s disk encryption technology, and the TPM chips that are used to store its encryption keys. Researchers at the CIA conference in 2010 boasted about the ability to extract the encryption keys used by BitLocker and thus decrypt private data stored on the computer. Because the TPM chip is used to protect the system from untrusted software, attacking it could allow the covert installation of malware onto the computer, which could be used to access otherwise encrypted communications and files of consumers.

Again, this suggests a serious problem when you have the same government that's supposed to "protect us" in charge of also hacking into systems. With today's modern technology, the communications technologies that "bad people" use are the same ones that everyone uses. The intelligence community has two choices: protect everyone, or undermine the security of everyone. It has chosen the latter.

“The U.S. government is prioritizing its own offensive surveillance needs over the cybersecurity of the millions of Americans who use Apple products,” says Christopher Soghoian, the principal technologist at the American Civil Liberties Union. “If U.S. government-funded researchers can discover these flaws, it is quite likely that Chinese, Russian and Israeli researchers can discover them, too. By quietly exploiting these flaws rather than notifying Apple, the U.S. government leaves Apple’s customers vulnerable to other sophisticated governments.”

There's been a lot of talk lately about the growing divide between the intelligence community and Silicon Valley. As more stories come out of projects to undermine those companies and the trust they've built with the public, it's only going to get worse.

from the don't-travel-to-canada dept

I've traveled to many different countries in my life and the only time I've ever had any trouble at all at a border crossing was flying into Canada for a conference one time. I was pulled out of the line and sent to a special side room where I was quizzed about the real reasons I was coming to Canada. They couldn't believe I was speaking at a conference, because I didn't have a paper invite, and I had to dig through my email to show them the invitation (thankfully, I stored my email locally and didn't need internet access). When I tell that story it shocks some people, as Canada has always had a reputation as a fairly easy border to cross -- especially for Americans.

But apparently the Canadians are stepping up their crazy antagonism at the border. The latest story involves Alain Philippon, a Canadian citizen who was returning from a trip to the Dominican Republic. Upon landing in Halifax he was ordered to cough up the password to his smartphone, and upon refusing, was charged with obstructing border officials:

A Quebec man charged with obstructing border officials by refusing to give up his smartphone password says he will fight the charge.

[....]

Philippon had arrived in Halifax on a flight from Puerto Plata in the Dominican Republic. He's been charged under section 153.1 (b) of the Customs Act for hindering or preventing border officers from performing their role under the act.

According to the CBSA, the minimum fine for the offence is $1,000, with a maximum fine of $25,000 and the possibility of a year in jail.

In the US, there have been a number of cases concerning searches of computers and electronic devices at the border, with an unfortunately large number saying that you really don't have privacy rights at the border. Of course, it's not universal, as at least one important court has ruled otherwise. Up in Canada, however, there apparently hasn't been much caselaw on this issue, so assuming Philippon fights this, it could make for a very interesting case.