Microsoft and Google are to sue the US government to win the right to reveal more information about official requests for user data. The companies announced the lawsuit on Friday, escalating a legal battle over the Foreign Intelligence Surveillance Act (Fisa), the mechanism used by the National Security Agency (NSA) and other US government agencies to gather data about foreign internet users.

Each company filed a suit in June arguing that it should be allowed to reveal the details under the first amendment, which guarantees freedom of speech, and in the process defend a corporate reputation battered by Edward Snowden’s revelations. Critics had accused the companies of collaborating in the snooping.

In a move that could hurt online ad and e-commerce sales, while adding to the personal privacy and security of many people who use the internet, Americans have started to mask their online identities.
According to a new survey by the Pew Internet and American Life Project:
• 86% of internet users have taken steps online to remove or mask their digital footprints – ranging from clearing cookies to encrypting their email.
• 55% of internet users have taken steps to avoid observation by specific people, organizations, or the government.

Independent security experts have long suspected that the NSA has been introducing weaknesses into security standards, a fact confirmed for the first time by another secret document. It shows the agency worked covertly to get its own version of a draft security standard issued by the US National Institute of Standards and Technology approved for worldwide use in 2006.

“Eventually, NSA became the sole editor,” the document states.

The NSA’s codeword for its decryption program, Bullrun, is taken from a major battle of the American civil war. Its British counterpart, Edgehill, is named after the first major engagement of the English civil war, more than 200 years earlier.

• A 10-year NSA program against encryption technologies made a breakthrough in 2010 which made “vast amounts” of data collected through internet cable taps newly “exploitable”.

• The NSA spends $250m a year on a program which, among other goals, works with technology companies to “covertly influence” their product designs.

• The secrecy of their capabilities against encryption is closely guarded, with analysts warned: “Do not ask about or speculate on sources or methods.”

• The NSA describes strong decryption programs as the “price of admission for the US to maintain unrestricted access to and use of cyberspace”.

• A GCHQ team has been working to develop ways into encrypted traffic on the “big four” service providers, named as Hotmail, Google, Yahoo and Facebook.

Analysts on the Edgehill project were working on ways into the networks of major webmail providers as part of the decryption project. A quarterly update from 2012 notes the project’s team “continue to work on understanding” the big four communication providers, named in the document as Hotmail, Google, Yahoo and Facebook, adding “work has predominantly been focused this quarter on Google due to new access opportunities being developed”.

To help secure an insider advantage, GCHQ also established a Humint Operations Team (HOT). Humint, short for “human intelligence”, refers to information gleaned directly from sources or undercover agents.

This GCHQ team was, according to an internal document, “responsible for identifying, recruiting and running covert agents in the global telecommunications industry.”

“This enables GCHQ to tackle some of its most challenging targets,” the report said. The efforts made by the NSA and GCHQ against encryption technologies may have negative consequences for all internet users, experts warn.

• A 10-year NSA program against encryption technologies made a breakthrough in 2010 which made “vast amounts” of data collected through internet cable taps newly “exploitable”.

PGP seems the most likely?

Nah, PGP is kind of a sideshow. TLS (ie https) is what they really want.

I suspect that’s also what they mean when they talk about agreements with technology providers – I suspect the implication is that they get copies of root certs from Verisign etc. If so, lots of Very Bad Things are happening.

Analysis Every year or so, a crisis or three exposes deep fractures in the system that’s supposed to serve as the internet’s foundation of trust. In 2008, it was the devastating weakness in SSL, or secure sockets layer, certificates issued by a subsidiary of VeriSign. The following year, it was the minting of a PayPal credential that continued to fool Internet Explorer, Chrome and Safari browsers more than two months after the underlying weakness was exposed.

And in 2010, it was the mystery of a root certificate included in Mac OS X and Mozilla software that went unsolved for four days until RSA Security finally acknowledged it fathered the orphan credential.

This year, it was last month’s revelation that unknown hackers broke into the servers of a reseller of Comodo, one of the world’s most widely used certificate authorities, and forged documents for Google Mail and other sensitive websites. It took two, seven and eight days for the counterfeits to be blacklisted by Google Chrome, Mozilla Firefox and IE respectively, meaning users of those browsers were vulnerable to unauthorized monitoring of some of their most intimate web conversations during that time.

…

It’s hard to overstate the reliance that websites operated by Google, PayPal, Microsoft, Bank of America and millions of other companies place on SSL. And yet, the repeated failures suggest that the system in its current state is hopelessly broken.

“Right now, it’s just an illusion of security,” said Moxie Marlinspike, a security researcher who has repeatedly poked holes in the technical underpinnings of SSL. “Depending on what you think your threat is, you can trust it on varying levels, but fundamentally, it has some pretty serious problems.”

Although SSL’s vulnerabilities are worrying, critics have reserved their most biting assessments for the business practices of Comodo, VeriSign, GoDaddy and the other so-called certificate authorities, known as CAs for short. Once their root certificates are included in Internet Explorer, Firefox and other major browsers, they can’t be removed without creating disruptions on huge swaths of the internet.

In that sense, they are like Citigroup, American International Group and other investment companies that received billion-dollar bailouts from taxpayers because the US government deemed them “too big to fail.”

“The current security of SSL depends on these external entities and there’s no reason for us to trust them,” Marlinspike said. “They don’t have a strong incentive to behave well because they’re not accountable.”

I suspect that’s also what they mean when they talk about agreements with technology providers – I suspect the implication is that they get copies of root certs from Verisign etc. If so, lots of Very Bad Things are happening.

Can you elaborate?

Man-in-the-middle attacks at will, to intercept apparently secure communication?
However, if you set up your own infrastructure/web of trust then you may not suffer from this specifically.

• A 10-year NSA program against encryption technologies made a breakthrough in 2010 which made “vast amounts” of data collected through internet cable taps newly “exploitable”.

PGP seems the most likely?

Nah, PGP is kind of a sideshow. TLS (ie https) is what they really want.

I suspect that’s also what they mean when they talk about agreements with technology providers – I suspect the implication is that they get copies of root certs from Verisign etc. If so, lots of Very Bad Things are happening.

Fair enough, though I wouldn’t really consider that a “breakthrough”. It’s just business as usual.

I suspect that’s also what they mean when they talk about agreements with technology providers – I suspect the implication is that they get copies of root certs from Verisign etc. If so, lots of Very Bad Things are happening.

Can you elaborate?

Lengthy oversimplification follows:

So, much of what’s encrypted on the web is done using SSL – sites that show as “https” in your browser and give you the infamous padlock icon use SSL to encrypt traffic between you and them, and also to prove to you that they’re who they say they are (encryption and authentication). Your banking, your email, some Revenue services, Amazon, etc all depend on SSL (more correctly called TLS) to secure their stuff. If the NSA could decrypt SSL on the fly, they could read all your stuff. Even more interesting, they could grab your authentication cookie as it went past and impersonate you later, at their leisure.
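As a rough illustration of the encryption-plus-authentication point (using Python’s standard library here, purely to show the client side of the bargain): a modern TLS client refuses to talk to a server whose certificate doesn’t check out.

```python
import ssl

# A default client context enforces both halves of what's described above:
# the channel is encrypted, and the peer must prove its identity.
ctx = ssl.create_default_context()

print(ctx.check_hostname)                     # True: the cert must match the hostname
print(ctx.verify_mode == ssl.CERT_REQUIRED)   # True: unverifiable certs are rejected
```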

So if you’re the NSA, how do you do that? One way is to attack the underlying encryption that SSL uses. They most certainly are trying to do this, but it’s difficult and requires lots of resources. It’s probably not feasible to decrypt all the SSL traffic on a submarine cable in real time. Another way is to attack the implementation of TLS in the browser (your end) or server software (Amazon/Gmail/etc’s end). They almost certainly are trying to do this, but it’s hard to do without being noticed. A vulnerability in a similar protocol was snuck into the NetBSD server operating system some years ago by persons unknown and was undetected for years, but it’s not easy to do.

The “simplest” way is to get copies of the encryption “keys” used by the server. Every SSL connection to a public website is verified using a public/private key pair. If you have the private key, you can pretend to be Gmail, Amazon, AIB, or the Revenue. Why would you want to? Because you sit in the middle of the conversation and pretend to be AIB to mr_anderson’s computer and mr_anderson’s computer to AIB. This is called a man-in-the-middle attack, and is as old as people writing secrets on clay tablets. Again it allows you to intercept the communication and also the session cookie, so you can impersonate mr_anderson later.

So how do you get the private key? You can hack into the server and copy it, but that might be too much work. But there’s another link in the chain that you can attack. If you’re Amazon, you get a 3rd party to “sign” your key, verifying that it’s yours – otherwise I could just pretend to be the Amazon server. There are a dozen or so of these key signing companies operating around the world. Their top-level keys get installed on your computer as part of Internet Explorer/Chrome/Firefox/Safari etc. Your browser explicitly trusts these keys and whatever they have signed (slight crypto oversimplification for clarity).
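You can inspect this pre-installed trust yourself; for example, Python’s ssl module will load whatever root certificates your operating system ships (the exact list varies by platform and vendor):

```python
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.load_default_certs()      # pull in the OS's trusted root certificates

roots = ctx.get_ca_certs()    # one dict per trusted root CA
print(len(roots), "trusted roots on this machine")
for cert in roots[:3]:
    print(cert.get("subject"))
```

Every one of those entries can vouch for any website on the internet, which is the structural weakness being discussed.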

These “root certificates” and their owners are the weak link I’m talking about. If I control Verisign, for example, I can sign any cert I want. I can sign a cert saying I’m Google and spy on all your Google traffic (assuming I have access to your traffic). In fact, someone hacked into one of the root cert signers recently and issued themselves certs for Google and a few other global companies before someone noticed and revoked the certs. If you’re the NSA, you might be able to compel the signers to do this for you with just a letter, and no hacking or subterfuge at all, let alone millions of dollars of kit.
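A toy model of that trust chain (with the CA’s real asymmetric signature replaced by an HMAC purely for brevity – a deliberate oversimplification): the “browser” accepts anything the CA key has signed, so whoever holds that key can mint a valid-looking cert for any name at all.

```python
import hashlib
import hmac

CA_SECRET = b"root-key-material"   # stand-in for a CA's private signing key

def ca_sign(name: str) -> bytes:
    """The CA vouches for a hostname. Anyone holding CA_SECRET can do this."""
    return hmac.new(CA_SECRET, name.encode(), hashlib.sha256).digest()

def browser_verify(name: str, signature: bytes) -> bool:
    """The browser trusts the CA key unconditionally, as browsers do."""
    return hmac.compare_digest(ca_sign(name), signature)

# Legitimate issuance to the real site owner:
assert browser_verify("mail.google.com", ca_sign("mail.google.com"))

# A compelled or compromised CA can issue for any name it likes, and the
# verifier has no way to tell a coerced cert from a legitimate one.
assert browser_verify("aib.ie", ca_sign("aib.ie"))

# A signature for one name doesn't validate another, so forgery requires
# the CA key itself -- which is exactly why that key is the target.
assert not browser_verify("example.com", ca_sign("mail.google.com"))
```

In the real system the CA uses an asymmetric key pair – anyone can verify, only the CA can sign – but the trust consequence is the same.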

This is not news – there was a huge furore when a Chinese crypto company wanted to become a trusted root and have their cert included with the major browsers – there was a widespread assumption that they would provide fake certs to the Chinese government, allowing them to spy on Chinese citizens’ emails and browsing. Ironically, there wasn’t much discussion that I recall about the dangers of having a US-based root signer…

The National Security Agency is winning its long-running secret war on encryption, using supercomputers, technical trickery, court orders and behind-the-scenes persuasion to undermine the major tools protecting the privacy of everyday communications in the Internet age, according to newly disclosed documents.

The agency has circumvented or cracked much of the encryption, or digital scrambling, that guards global commerce and banking systems, protects sensitive data like trade secrets and medical records, and automatically secures the e-mails, Web searches, Internet chats and phone calls of Americans and others around the world, the documents show.

Many users assume — or have been assured by Internet companies — that their data is safe from prying eyes, including those of the government, and the N.S.A. wants to keep it that way. The agency treats its recent successes in deciphering protected information as among its most closely guarded secrets, restricted to those cleared for a highly classified program code-named Bullrun, according to the documents, provided by Edward J. Snowden, the former N.S.A. contractor.

Beginning in 2000, as encryption tools were gradually blanketing the Web, the N.S.A. invested billions of dollars in a clandestine campaign to preserve its ability to eavesdrop. Having lost a public battle in the 1990s to insert its own “back door” in all encryption, it set out to accomplish the same goal by stealth.

The agency, according to the documents and interviews with industry officials, deployed custom-built, superfast computers to break codes, and began collaborating with technology companies in the United States and abroad to build entry points into their products. The documents do not identify which companies have participated.

I’ve always wondered this about public key encryption - the pairs are unique, right? So if you had a sufficiently large database, you could generate all the possible pairs and then you’d have every private key to go with the public key? Or is there a random element that I’m not figuring in to it?

Yes, you could calculate every possible key pair, thus rendering public key cryptography useless. In the case of a crypto system using 1024-bit keys you’d only have to calculate on the order of 2^1024 of them.
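The scale involved is easy to put in perspective, and it also answers the question about randomness: keys are generated from randomly chosen large numbers, drawn from a space far too big to ever enumerate.

```python
import random

# The 1024-bit keyspace: why "generate all the pairs" is a non-starter.
digits = len(str(2 ** 1024))
print(digits)   # 309 -- a 309-digit count, vs the ~80-digit count of atoms in the universe

# The random element: key generation starts from numbers like this one,
# chosen at random (real RSA then searches near them for large primes).
candidate = random.getrandbits(1024)
print(candidate.bit_length() <= 1024)   # True
```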

It’s a very positive development that the latest revelations are the result of a co-production between the Guardian, the NYT and Pro Publica. Good to see some investigative stuff on the front pages. Also some great coverage from DN

I saw a documentary on quantum encryption recently.
It exploits wave-particle duality: if someone intercepts the message, the quantum state changes, alerting the communicating parties to the interception.
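That intercept-detection property can be sketched in a toy simulation of the BB84 quantum key distribution protocol (an idealised model in plain Python, not real quantum hardware): measuring a photon in the wrong basis randomises its value, so an eavesdropper who intercepts and re-sends leaves a statistical fingerprint.

```python
import random

random.seed(1)  # fixed seed so the example is repeatable

def measure(bit, prep_basis, meas_basis):
    """Matching bases read the bit faithfully; mismatched bases randomise it."""
    return bit if prep_basis == meas_basis else random.randint(0, 1)

def bb84_error_rate(n, eavesdrop):
    errors = kept = 0
    for _ in range(n):
        bit = random.randint(0, 1)          # Alice's raw key bit
        a_basis = random.randint(0, 1)      # Alice's preparation basis
        send_bit, send_basis = bit, a_basis
        if eavesdrop:                       # Eve measures, then re-sends
            e_basis = random.randint(0, 1)
            send_bit = measure(send_bit, send_basis, e_basis)
            send_basis = e_basis            # the re-sent photon carries Eve's basis
        b_basis = random.randint(0, 1)      # Bob's measurement basis
        bob_bit = measure(send_bit, send_basis, b_basis)
        if a_basis == b_basis:              # positions kept after public sifting
            kept += 1
            errors += bob_bit != bit
    return errors / kept

print(bb84_error_rate(4000, eavesdrop=False))  # 0.0: clean channel, no errors
print(bb84_error_rate(4000, eavesdrop=True))   # roughly 0.25: Eve is visible
```

In practice Alice and Bob sacrifice a sample of their sifted bits to estimate exactly this error rate; anything well above the channel’s noise floor means the key is discarded.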