Crypto

12/01/2014

When I began working on PGPy back in April, the decision to do so was not made lightly. Another software engineer here needed a Python OpenPGP library that could fulfill some requirements the library he was using could not. To help take some work off his plate, I volunteered to evaluate as many existing libraries as I could find, in hopes that a good fit existed somewhere. Most of the existing options fell into one of two categories:

A direct Popen wrapper around the gpg command-line binary

A wrapper around GPGME, a C library that itself wraps the gpg command-line binary

There were also a handful of utility libraries that could read and dump OpenPGP packet information but could not actually take any actions with them, and a couple of what appeared to be early starts on real OpenPGP implementations suffering largely from a lack of documentation and missing some parts of the OpenPGP specification that we either needed immediately, or an iteration or two down the line.

Among our requirements, the most important was to avoid a wrapper around a separate binary. The primary reason was that we wanted to keep all key management tasks within a single memory address space, and to avoid the problems of securely sending passphrases to other processes. We also wanted to avoid storing keys in on-disk keyring files, so that we could protect them further with other means of access control while also reducing some of the system’s disk I/O.

During my searches, I found what seemed to be a real demand for a robust OpenPGP implementation for Python that was platform agnostic, well documented, and, most importantly, easy to use. I recognized that there was a greater need outside this office that, for at least some other people, was going unfulfilled. So, I cracked open a copy of RFC 4880 and got to work.

Beyond fulfilling my coworkers’ most immediate needs, my primary goal while designing and implementing PGPy’s API has been to make it as easy as possible to use correctly. In particular, it should be simple and natural to do the “right” thing from a security perspective, easy to remember without constantly referencing the documentation, and difficult to do anything egregiously insecure. I have spent a lot of time writing and rewriting documentation and unit tests to help ensure that those three goals are met.

The end result is a package that can accomplish a lot in very few lines of code. Consider the following example:
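Something along these lines (a sketch rather than the original snippet: the calls follow PGPy’s current documentation and may differ slightly from the 0.3.0 API, and the file names and passphrase are placeholders):

    import pgpy

    # Load the private key from an ASCII-armored key file.
    key, _ = pgpy.PGPKey.from_file('signing_key.asc')

    with open('document.txt') as f:
        document = f.read()

    # Unlock the passphrase-protected key for the duration of the block and sign.
    with key.unlock('correct horse battery staple'):
        signature = key.sign(document)

    # Save the ASCII-armored detached signature to disk.
    with open('document.txt.asc', 'w') as f:
        f.write(str(signature))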

In just 5 lines of code, we have loaded a private key, unlocked it using a passphrase, signed a document, and then saved the new signature to the disk. The document can now be verified with that signature using any compliant OpenPGP implementation, such as GnuPG.

PGPy 0.3.0 is not yet a complete implementation of the OpenPGP specification. Most notably, it cannot yet be used to generate keys, and it does not yet support the legacy v3 key and signature formats at all. It does, however, support signing and signature verification using RSA and DSA, asymmetric encryption and decryption using RSA, and symmetric encryption using passphrases with a variety of algorithms.

If your appetite has been whetted, a wealth of additional information about PGPy can be found in the documentation. The latest version can always be installed from PyPI using pip, and I am also working on getting packages for several Linux distributions into their repositories. The codebase itself lives on our GitHub, and any and all feedback and bug reports are welcome, encouraged, and appreciated.

10/21/2014

Researchers at the University of New South Wales have developed two new types of qubits (quantum bits) that can perform operations with greater than 99% accuracy. This brings the reality of quantum computers one step closer. So what does that mean for you? A sufficiently large quantum computer could break the two most popular asymmetric encryption algorithms, RSA and ECC, defeating the encryption used on nearly every website today.

Quantum computers are still several years away, and for most companies that long-term horizon means they are comfortable with the status quo. But they are missing an important fact: Internet communications are already being collected and stored by the NSA (in its PRISM surveillance program), and presumably by other nations and international organizations, so that they can be decrypted as soon as the technology is available. Can companies afford to wait before paying more attention to how their sensitive data is protected?

If you are comfortable with your sensitive data remaining secret for only the next few years, then there is no need for action. But if you would like the data you encrypt today to remain protected into the next decade and beyond, then the time to re-evaluate your encryption strategy is NOW.

Lattice-based crypto algorithms, including Security Innovation’s NTRU, are resistant to all known quantum computing attacks, ensuring that your secrets won’t be revealed once quantum computers become a reality.

08/28/2013

Here’s a crazy thought – designing and implementing a crypto system that remains secure even if one of the most popular crypto algorithms is broken. Overkill, you say? Imagine if all data flowing over the Internet were in the clear – banking transactions, purchases, and other information you want to keep secret.

I’ve been in the crypto field for 20 years. This is what we do: contemplate ways to secure our communications and then build those systems. The fact that RSA has all but been brought to its knees scares the crap out of me. As a crypto expert, I feel a bit of shame thinking about how we as a collective group got to this vulnerable situation. I get it: the rapid pace of technology change and the high cost of changing a system that is free and “good enough” kept our attention on other areas. The three pillars of security are confidentiality, availability, and integrity. The system we have today flies in the face of this mantra: it relies on an old algorithm with known vulnerabilities that, in just a matter of time, will create the most widespread loss of confidentiality and integrity we’ve ever seen.

What I would like to see is the Internet moving to a system with better crypto algorithm agility. It seems to me that a system where you use two or more public key crypto algorithms in parallel, XORing together the two different pre-master secrets that you send, is going to be much more robust against advances than any system with a single point of failure. There are good candidate algorithms out there that you might be reluctant to rely on independently but would make good sense as an insurance policy. We have several good lattice-based encryption schemes, a few good lattice-based signature schemes, some hash-tree-based signature schemes, and some multivariate quadratic signature schemes. All of these could be supported by browsers.
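To make the combining step concrete, here is a minimal sketch of XORing two pre-master secrets (the random byte strings stand in for the values each algorithm would actually transport; this shows only the combining step, not a full handshake):

    import os
    from hashlib import sha256

    def combine_premaster_secrets(s1: bytes, s2: bytes) -> bytes:
        # XOR two equal-length secrets; the result stays secret as long as either input does.
        assert len(s1) == len(s2)
        return bytes(a ^ b for a, b in zip(s1, s2))

    # Stand-ins for the 48-byte pre-master secrets transported under, say, RSA and a lattice scheme.
    secret_rsa = os.urandom(48)
    secret_lattice = os.urandom(48)

    premaster = combine_premaster_secrets(secret_rsa, secret_lattice)
    # The combined value then feeds the usual key-derivation step.
    master_secret = sha256(premaster).digest()

An attacker who breaks one of the two algorithms and recovers one share still learns nothing about the combined secret.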

In my dream world, all SSL handshakes would be based on ECC and NTRU encryption keys, and transported in certificates signed by ECDSA, PASS, or hash trees. The certificates would have short lifetimes, and all Certificate Authorities (CAs) would support these algorithms. There would be a well-defined process for adding more algorithms to the mix and for disabling the use of specific algorithms in web browsers, and all signed code would be signed with two or three different algorithms. That way, if one algorithm breaks, organizations don’t need to migrate to a new one in a panic, because the system is as strong as the strongest algorithm, not the weakest.

Some have lobbied for ECC to replace RSA, but as I wrote in a previous blog, there are two issues there: it’s still a single point of failure, and ECC is proven to be vulnerable to quantum computers. If we are going to endure a few years of disruption to move to ECC, we’re just going to have to repeat that effort 5 or 10 years later. So the question remains: should we be architecting the Internet to cope with the quantum computing threat now, in the event that quantum computers arrive sooner than expected, and avoid a second migration headache in the same decade? Come to think of it, this isn’t that crazy of a thought after all.

08/07/2013

Last week at the Black Hat Conference, there was an insightful talk titled “The Factoring Dead: Preparing for the Cryptopocalypse”. The presentation discussed the inevitable fall of King RSA crypto because of its weakness and the growing number of effective attacks on that particular public-key encryption system. It’s fantastic that as an industry, we are taking this imminent threat seriously. However, the talk also encouraged the industry to move toward ECC (elliptic curve crypto). I find this troubling for two reasons:

The notion of a reigning Crypto King is dangerous. Relying on ANY single algorithm is a risk we cannot afford to take on the Internet. Whatever happened to the security industry’s sacred risk-mitigation mantra of defense in depth?

ECC is doomed when quantum computers arrive, which experts agree will happen within the lifetime of systems being deployed today.

Talk about going from the frying pan to the fire.

ECC issues aside, it’s dangerous as an industry to have a knee-jerk reaction and say “the algorithm we all depend on 100% today (RSA) is looking shaky - let’s all go depend solely on a different algorithm!” To a first approximation, processing is free and channel capacity is free these days. We should be taking advantage of that to think about how we build an Internet that is cryptographically robust even if a widely trusted algorithm gets broken. If we don’t do that, we end up going through the same cycle over and over again. And it’s exponentially crazy to rely entirely on ECC when we already know that quantum computers break ECC. The correct way to go is to build in support for multiple algorithms running in parallel, and a good place to start is ECC running in parallel with NTRU. Wait, you’ve never heard of the fastest quantum-computer-proof crypto algorithm?! How is that possible, you ask, when the industry is starving for secure communications?

NTRU has been around for quite some time and has been waiting patiently for the industry to finally stop living the glory days of RSA and ECC as viable long-term options. It has done its time: it’s an IEEE standard and an X9 standard, and it has been endorsed by NIST as the most practical of the lattice-based crypto algorithms for withstanding quantum-computing attacks. If we have a solution that has been tested by the governing bodies, shouldn’t the industry as a whole get on the same page, consider it in the mix as the future of crypto, and move collectively toward adoption?

I’d like to introduce you to NTRU:

Future-proof

No known quantum computing attacks

As the required security level goes up, the performance advantage of NTRU over other algorithms grows even faster

RSA is a black dwarf star, no longer emitting the energy needed to secure our communications. However, the larger question is: where do we go next? First, we need to come to a consensus that relying on a Crypto King creates a risky single point of failure and agree on an alternative approach. Then, we need to take a close look at the encryption algorithms available today and determine which two (or possibly three) are the viable options for years to come.

I’ve made my business case for NTRU, and at some level the industry already has as well. Let’s encourage others to do the same. This is a big decision, folks; let’s do it right.

02/29/2012

There's a new preprint on the IACR ePrint server (http://eprint.iacr.org/2012/094) by Jintai Ding and Peter Schmidt that presents an attack on NTRU that uses “additional information”. The abstract claims “In the case of the NTRU cryptosystem, if we assume the additional information on the modular operations, we can break the NTRU cryptosystems completely by getting the secret key.” Is NTRU broken?

Well, no.

The paper investigates the relationship f*h = p*g mod q, where f and g are small polynomials, h is an arbitrary mod q polynomial, and p and q are integers. It starts by rewriting the equation as

f*h = p*g + q*G.

The statement is then that if G is known, f and g can be found using a new algorithm over the reals.

So, a few observations about this. First, two reasons why the attack isn't a break. Then, two observations about interesting points arising from the paper. Finally, a note about presentation.

First, the assumption that an attacker knows G is very strong (which means that it's very unlikely to be true). A straightforward implementation of NTRU key generation doesn't leak G; in fact, the precise quantity G isn't even calculated during normal key generation. There's no particular reason to think that an implementation would leak G as a result of a side-channel or fault attack any more than it would leak f or g directly. As the paper itself shows, G has a wider range of coefficients than do f and g, and so it is harder to guess G than it is to guess f or g directly. So this is clearly not a threat to a correctly implemented version of NTRU.

Second, there are other ways to obtain f and g, given G, than the algorithm in the paper. For example, if everything but f and g are known, then f*h - p*g = q*G gives N equations in 2N variables (the coefficients of f and g). However, since this equation is also an equation about polynomials, an attacker can calculate [f*h - p*g](x) = [q*G](x) at more than 2N different values of x. This gives > 2N equations that are linear in their 2N variables, allowing f and g to be recovered trivially.

Third, note that although G as defined above is unlikely to leak because it would only be used during keygen and, in fact, would not be used by natural implementations of keygen, there are other examples of NTRU operations for which the attack might be more realistic. For example, in encryption the quantity e = r*h + m mod q is calculated. If r*h is calculated one coefficient at a time, and if reduction mod q is noticeably slower than not reducing mod q, an attacker might be able to count the reductions mod q on the individual coefficients and so recover r*h + m over the integers, from which recovering m should be straightforward. Likewise, on decryption, the quantity m = (f*e mod q) mod p is calculated. If the attacker knows m (which is possible with a known-plaintext attack) and can count the reductions on decryption, they could potentially recover f. With current NTRU parameter sets, this is unlikely to happen: q is a power of 2, so reduction mod q is fast, and q is 2^11 so the natural time to reduce is when coefficients reach 2^16 or 2^32; in the first case, the attacker will get a very small amount of information, and the second case will never happen with current NTRU parameter sets.
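To make the reduction-counting idea concrete, here is a toy simulation (tiny made-up parameters, not a real NTRU parameter set, and the per-coefficient reduction counts are simply handed to the attacker, which real implementations would never do): the ciphertext coefficients together with those counts pin down r*h + m over the integers.

    import numpy as np

    # Toy illustration only: not real NTRU parameters.
    N, q = 11, 2048
    rng = np.random.default_rng(1)

    r = rng.integers(-1, 2, N)   # small blinding polynomial, coefficients in {-1, 0, 1}
    m = rng.integers(-1, 2, N)   # message polynomial
    h = rng.integers(0, q, N)    # public key: an arbitrary mod-q polynomial

    # Cyclic convolution r*h in Z[x]/(x^N - 1), computed over the integers.
    rh = np.array([sum(int(r[j]) * int(h[(i - j) % N]) for j in range(N)) for i in range(N)])
    e_int = rh + m               # r*h + m over the integers
    e = e_int % q                # the ciphertext coefficients, reduced mod q

    # Hypothetical side channel: how many multiples of q separate each reduced
    # coefficient from its value over the integers.
    reduction_counts = e_int // q

    # Ciphertext plus counts recover r*h + m over the integers exactly.
    assert np.array_equal(e + q * reduction_counts, e_int)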

Fourth, the paper points in an interesting direction for new research by suggesting that it would be interesting to investigate how many coefficients of G need to be known in order for the attack to work.

One way to look at this is to consider substituting for x (as in the second point above) and brute-force searching the unknown coefficients. The paper suggests that the coefficients of G are normally distributed with a standard deviation of about 5.3. In that case, the entropy of each coefficient is about 2.13 bits, and so for k-bit security, about (N - k/2) coefficients of G need to be known for the attack to be better than known attacks. This is clearly an upper bound and it's possible that a more sophisticated attack would allow more unknown coefficients of G. Note here that the attack in the paper was carried out on N = 167, and the higher values of N for recommended NTRU parameter sets would increase the width of the distribution of the coefficients of G somewhat (as well as increasing the absolute number of coefficients that must be known).
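As a back-of-the-envelope check of that bound, take the paper’s figure of roughly 2.13 bits of entropy per coefficient at face value (the security level k below is an assumed, illustrative value, not one tied to the N = 167 parameter set):

    # Illustrative arithmetic only; the entropy figure comes from the paper and
    # the security level is an assumption for the sake of the example.
    entropy_per_coeff = 2.13
    N, k = 167, 80

    max_unknown = k / entropy_per_coeff      # roughly k/2 coefficients can stay unknown
    min_known = N - max_unknown
    print(f"coefficients of G that must be known: about {min_known:.0f} of {N}")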

It's also interesting to generalize. The work of this paper clarifies why the reduction mod q is important. Can similar work be done about the reduction mod X^N-1?

Finally, to make a quick comment about presentation: it's unfortunate that the abstract of the paper simply says "In the case of the NTRU cryptosystem, if we assume the additional information on the modular operations, we can break the NTRU cryptosystems completely by getting the secret key." This presentation is a bit misleading on its own, as it implies that NTRU is broken. As this post should make clear, NTRU is not broken. It's better not to overclaim in papers.

10/12/2011

This is the first of a series of three blog posts about the Connected Vehicle program being run by the US Department of Transportation (DoT).

32,000 people lost their lives in car accidents in 2010 alone – an actuarial loss to the economy of hundreds of billions of dollars, before you even take into account the costs of accident recovery, vehicle repair, and travel delay.

For more than ten years, technology experts have been working on ways to reduce accidents and save lives. It’s been an effort that’s involved car companies like Ford, General Motors, Daimler, BMW, Nissan, Toyota, Honda; equipment manufacturers like Kapsch, Denso, Delphi, and Raytheon; government agencies, especially the US Department of Transportation; and smaller companies like Security Innovation, who created the IEEE 1609.2 standard for secure communications and a software implementation of the protocol.

The U.S. Department of Transportation and several related associations, including the Connected Vehicle Trade Association, the Collision Avoidance Metrics Program, and the Vehicle Safety Consortium, started exploring secure vehicle-to-vehicle and vehicle-to-infrastructure communications more than 5 years ago. A lot of progress has been made, and this month a 5,000-car pilot program was started with actual drivers and cars equipped with this wireless transmission equipment.

If the technologies involved can be shown to work, and if they can be installed in every vehicle, then the potential gains are staggering. As a baseline, USDoT estimates that 6,500 lives were saved by seatbelts in 2000, reducing deaths by about 13%. If secure vehicle communication is widely deployed, it could potentially prevent 80% of all accidents in which the drivers aren’t drunk or otherwise impaired. This wouldn’t necessarily reduce deaths by 80%, but it’s clear that we’re talking about a safety and survivability improvement even greater than the impact of seatbelts.

But before the technology can be deployed, it has to be effective, reliable and secure. Security Innovation is working with other companies to secure it and preserve the privacy of drivers in the system. I’ll discuss how in my upcoming blogs.

08/05/2011

Neal Koblitz and Alfred Menezes are two pioneers in the field of Elliptic Curve Cryptography. In recent years, they’ve teamed up to write a series of papers (available at http://anotherlook.ca/) questioning some current practices in academic cryptography. The papers are stimulating and worth a look, and I’ll be posting some more about them. For this post, I’m most interested in the section on safety margins in their most recent paper, “Another look at security definitions” (warning -- f-bomb on page 9).

There’s a school of thought in cryptographic research that says that when you’re designing a scheme or protocol, you should determine the security requirements, design a protocol that exactly meets those requirements, and then make sure you eliminate all elements of the protocol that aren’t necessary to meet those requirements. This gives you the simplest, easiest to implement correctly, and most efficient protocol.

Koblitz and Menezes argue for a different position: unless you are truly resource-constrained, you should be biased towards including techniques that you can see an argument for, even if those techniques seem to be unnecessary within your security model. The reason is simple: your security model may be wrong. (Or it may be incomplete, which can amount to the same thing).

This attitude seems very wise to me. For a while we at Security Innovation have been arguing that there is one basic assumption underlying almost all Internet protocols: the assumption that it’s okay to use a single public-key algorithm, because it won’t get broken. But that assumption isn’t necessarily right. It’s been right up till now, but if quantum computers come along, or if a mathematical breakthrough that we weren’t expecting happens, RSA could be made insecure almost overnight. And if RSA goes, most current implementations of SSL go too, and all Internet activities that use SSL will be seriously disrupted.

We don’t have to operate with these narrow safety margins. It’s easy to design a variant “handshake” for SSL that uses both RSA and NTRU to exchange parts of the symmetric key, each part as secure as the whole. This would remain secure even if either RSA or NTRU were broken, and the additional cost of doing NTRU alongside RSA is negligible in terms of processing. Menezes himself, speaking at the recent ECC conference in Toronto, described this approach as extremely sensible.

Yes, there are some places where efficiency really is paramount, and naturally, we’d recommend using the highest-performance crypto there, which is NTRU. For most devices, though, pure efficiency is no reason to avoid doing something that makes perfect security sense. We’re encouraged by the fact that researchers of the stature of Koblitz and Menezes seem to agree with us, and we’re going to look for ways to spread the word further.

04/08/2011

This past week has yielded a veritable treasure trove of head-shaking security stories, all related to my favorite security soft spot: people. The shimmer from our technological advances blinds us to the damage people can do, and we remain so easily fooled:

Wired reported that Albert Gonzalez, the record-setting hacker of Heartland Payment Systems, TJX, and a range of other companies, said the Secret Service (SS) asked him to do it. The government admitted using Gonzalez to gather intelligence and help it seek out international cyber criminals, but says it didn’t ask him to commit any crimes. Uh, yeah… ok.

Storefront Backtalk and others reported on a Gucci engineer who was fired for "abusing his employee discount," but then really got even (and then some) by creating a fictitious employee account (with admin rights!) and then using that account to delete a series of virtual servers, shut down a storage area network (SAN), and delete a bunch of corporate mailboxes… allegedly.

TechAmerica wrote about HP suing a former executive who took a job at Oracle. Apparently, he downloaded and stole hundreds of files and thousands of emails containing trade secrets before quitting.

You might ask, “How can a company as advanced and large as HP not have protections on its digital trade secrets?” It’s not like DLP (data leak prevention) solutions don’t exist. And how about Gucci? I guess this is a double whammy around policy and people, which are so often intertwined. Was there no policy flag or checkpoint in place to verify that this newly created employee was authorized with such privileges that he could delete entire virtual servers and mailboxes? Did nobody bother to check that this was a legitimate employee? Worst of all, this non-existent employee’s accounts were created by a fired network engineer! And then there’s Mr. Gonzalez (hacking community) and the SS (intel community): which group do you trust less to be honest with the public? Both communities have long engaged ethically questionable people to do their bidding. If it’s true that the SS hired him to hack, shame on him for not getting protection for himself in advance. You have to wonder what else he hacked into to merit an actual arrest.

And here we are in 2011, putting our lives on display with Facebook, Twitter, LinkedIn, Yammer, et al., broadcasting our whereabouts on vacation (or, more specifically, the fact that we’re not home for an extended period), meeting up with strangers who have similar tastes, and making our personal details and history available for anyone to view. It’s not always technology that will get us into security trouble… it’s the people.

02/16/2011

As the recent buzz around Firesheep demonstrated, while SSL is proven, it is not deployed everywhere by default. Some of the reasons behind this include slow performance and difficulty of implementation.

Slow SSL

One important reason why SSL isn't everywhere is that the initial SSL handshake, which involves public/private key operations, takes many CPU cycles. With SSL enabled, rather than spending server CPU cycles on functionality for the end user, CPU cycles are spent on cryptographic operations… and functionality usually trumps security. Organizations running web applications must choose between:

Slower performance: SSL-enabling a page will cause that page to load more slowly in the browser. This is a difficult choice, since there is a direct correlation between page-load speed and user satisfaction or conversion rate down the funnel.

Higher infrastructure costs: handling peak load with SSL enabled requires a corresponding increase in the number of servers in the datacenter. The hardware cost (or increased monthly fee in a hosted or cloud setting), coupled with increased maintenance and software costs, can be prohibitive.

Likewise, organizations building embedded devices or software for RTOS must choose between:

Slower performance. The more SSL is relied upon for connections to other devices or servers, the more CPU cycles it will take, and the greater impact this will have on the speed with which the device performs its intended function.

Greater battery drain.

Higher cost of goods sold, higher weight: mitigating SSL's performance impact can require more expensive and heavier components in an embedded system.

The concern about SSL performance is only growing as NIST and others are recommending organizations increase the key size used with the RSA algorithm in most SSL implementations from 1024 bits to 2048 bits. The increase in key size will make SSL handshakes take five times as long – for organizations that do a lot of handshakes, this is a significant performance hit.
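For a rough sense of that cost, here is an illustrative micro-benchmark of the RSA private-key operation at both key sizes, using Python's cryptography package (absolute timings depend entirely on the hardware; only the relative slowdown is the point):

    import time
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    for bits in (1024, 2048):
        key = rsa.generate_private_key(public_exponent=65537, key_size=bits)
        message = b"x" * 32
        start = time.perf_counter()
        for _ in range(200):
            # The private-key operation dominates the server's share of an SSL handshake.
            key.sign(message, padding.PKCS1v15(), hashes.SHA256())
        elapsed = time.perf_counter() - start
        print(f"RSA-{bits}: {elapsed / 200 * 1000:.2f} ms per private-key operation")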

Fast SSL

There aren’t a lot of choices in public-key algorithms embedded in SSL libraries. Security Innovation has recently partnered with yaSSL to deliver CyaSSL+, an OpenSSL-compatible SSL library that incorporates the very fast, very small NTRU algorithm. Using this in place of RSA-enabled SSL can improve performance dramatically, and we are encouraging users to try it for free under the GPL open-source license model.

CyaSSL fully implements SSL 3 and TLS 1.0, 1.1, and 1.2, with SSL client libraries, an SSL server, APIs, and an OpenSSL-compatibility interface. It is optimized for embedded and RTOS environments; however, it is also widely used in standard operating environments. By itself it’s fast: CyaSSL’s optimized code runs standard RSA asymmetric crypto 4 times faster than OpenSSL. As mentioned above, CyaSSL+ adds an additional asymmetric algorithm, NTRU. NTRU is an alternative asymmetric algorithm to RSA and ECC. It is based on different math (the approximate closest vector problem on lattices, if you’re interested) that makes it much faster and resistant to attacks by quantum computers, something both RSA and ECC are susceptible to.