Maintainers of the OpenSSL cryptographic code library have fixed a high-severity vulnerability that made it possible for attackers to obtain the key that decrypts communications secured in HTTPS and other transport layer security channels.

While the potential impact is high, the vulnerability can be exploited only when a variety of conditions are met. First, it's present only in OpenSSL version 1.0.2. Applications that rely on it must use groups based on the digital signature algorithm (DSA) to generate ephemeral keys for the Diffie-Hellman key exchange. By default, servers that do this will reuse the same private Diffie-Hellman exponent for the life of the server process, and that makes them vulnerable to the key-recovery attack. DSA-based Diffie-Hellman configurations that rely on a static Diffie-Hellman ciphersuite are also susceptible.

Fortunately, the requirements don't appear to be met by many mainstream applications that rely on OpenSSL and use DSA-based Diffie-Hellman. The Apache Web server, for instance, turns on the SSL_OP_SINGLE_DH_USE option, which causes a different private exponent to be used for each handshake. The OpenSSL-derived BoringSSL code library, meanwhile, got rid of SSL_OP_SINGLE_DH_USE support a few months ago, and LibreSSL deprecated it earlier this week. The applications and libraries may still be vulnerable when using a static ciphersuite, however.

When the edge conditions are met, attackers can send a large number of handshake requests to a vulnerable server or end-user computer. When enough calculations are completed, the attacker can obtain partial secret values and combine the results with the Chinese remainder theorem to eventually deduce the decryption key. For a much more technical description of the vulnerability, which is indexed as CVE-2016-0701, see this blog post published Thursday by Antonio Sanso, the Adobe Systems researcher who discovered and privately reported it. OpenSSL officials have additional details here. Among other things, the OpenSSL advisory warns that the fix may compromise performance.

The time it took OpenSSL maintainers to fix the flaw is impressive. Sanso said he reported the bug on January 12, which means it took just over two weeks for the maintainers to complete and distribute a fix. Interestingly, when the researcher reported the vulnerability, a fix preventing the reuse of Diffie-Hellman exponents had already been committed, though not yet released. That existing partial fix may have contributed to the speed of the patch release.

Remember Logjam?


Thursday's release also contained additional hardening against an HTTPS-crippling vulnerability that threatened tens of thousands of servers when it was first disclosed last May. Dubbed Logjam, it allowed attackers to downgrade Diffie-Hellman-generated encrypted connections to use extremely weak 512-bit key material. From there, attackers could use pre-computed data prepared ahead of time to deduce the key negotiated between the two parties.

OpenSSL will now reject all key negotiations with Diffie-Hellman parameters shorter than 1,024 bits. A previous OpenSSL patch had increased the limit to 768 bits.

People using OpenSSL version 1.0.2 should upgrade to 1.0.2f, while those still using version 1.0.1 should install 1.0.1r. Thursday's OpenSSL advisory also reminded users that support for version 1.0.1 will end at the end of this year, after which no security fixes will be available. Support for versions 0.9.8 and 1.0.0 ended in December.

Can we now admit that open source does not ensure secure code? Let's just not make that argument anymore in light of empirical evidence.

However, open source is the only way to have any idea how secure the code actually is. Which is worse: a system known to be insecure from time to time, or a system that could be wholly insecure all along without anybody knowing?

Can we now admit that open source does not ensure secure code? Let's just not make that argument anymore in light of empirical evidence.

Can we admit now that this question is a red herring?

The issue is with accountability. OSS means you can see for yourself what is happening. Closed source means you are subject to the whim of the devs or hackers when it comes to finding out what is happening under the hood.

I want to root for OpenSSL, but damn, this thing is like Swiss cheese. I'm beginning to look at it like Adobe, with all the exploits I heard about through 2015.

Wasn't the problem about money? Basically a lot of high-profile users, but little financial contributions.

They have not been well-funded, but there are a number of people out there (e.g. the LibreSSL developers) who think the bigger problem is that the OpenSSL code base is not the greatest in the world. I'm not an experienced C programmer myself, nor have I examined the code, so I'm not really qualified to judge. But what I've heard is that there's a very large amount of code, OpenSSL often re-implements functionality that should be provided by OS or library calls, and there are other stylistic shortcomings which make it hard to figure out what the code is *supposed* to do, let alone what it actually does. The (allegedly unnecessary) complexity of the code makes it hard for external developers to either audit or contribute to the project, because there's such a steep initial learning curve to understanding the existing code base. This is why bugs like Heartbleed occur ... and take *years* to get noticed.

But, in this particular case, it sounds like someone actually implemented an insecure algorithm, which is a different kind of mistake that you can chalk up to "cryptography is hard".

But wait, Google and the Internet tell me that all blog sites, and newspaper sites, and any old website out there should be secure because fearmongering.

It's quite the opposite, in fact: they push it because it's a basic level of security that prevents people from attacking in the easiest possible way. When the NSA taps major trunks, you don't want almost everything to be unencrypted.

Only marginally related, but the headline image makes me want to offer this piece of advice to everyone:

Learn how to pick/rake a deadbolt lock.

Once you've seen that the vast majority of doors that "protect" you on a daily basis can be breached with minimal training in under a minute, your idea of "safe" really changes. I think that's also why people often under-react to stories such as this one. When "It's a locked door, and you'd have to really kick it down to get in" is replaced with "Oh shit, anybody who can follow basic command line instructions could be seeing my bank transactions," you gain a whole new appreciation. Physical lock picking provides a vital real-world metaphor for cryptography: if your non-savvy friends don't believe they are in danger, offer to show them you can pick their front door. Watch as their tune changes.

Can we now admit that open source does not ensure secure code? Let's just not make that argument anymore in light of empirical evidence.

Can we admit now that this question is a red herring?

The issue is with accountability. OSS means you can see for yourself what is happening. Closed source means you are subject to the whim of the devs or hackers when it comes to finding out what is happening under the hood.

You overstate the effect. For one, if the codebase is large enough, it's effectively the same thing. Secondly, what customer of OSS routinely audits the code they are using? I mean, if you have the means and the ability to review every piece of software coming your way (and every update), well, then you could pretty much be rolling your own. The kind of scrutiny we are talking about IS NOT CHEAP. It is NOT CHEAPER than writing the original software. It's the majority of the cost that should go into the original process.

Can we now admit that open source does not ensure secure code? Let's just not make that argument anymore in light of empirical evidence.

Can we admit now that this question is a red herring?

The issue is with accountability. OSS means you can see for yourself what is happening. Closed source means you are subject to the whim of the devs or hackers when it comes to finding out what is happening under the hood.

Just to play devil's advocate, but how many people are really qualified to actually look for bugs like this in OpenSSL?

Open or closed source, you need people actually capable of finding the bugs to look for them, and since researchers tend not to work for free, that basically means someone needs to hire them. OpenSSL is a bit different due to its prevalence (so it's good for publishing a paper). But, for example, I do not doubt that Microsoft's Schannel implementation has had a lot of cryptography research input, even though it's closed source (the difference is that MS had to pay some high-profile cryptographers to work for them). The truth is, OSS was never about accountability at all; it's about a more decentralised development process. So never assume someone has done any meaningful auditing on it for you just because anyone could read it.

But wait, Google and the Internet tell me that all blog sites, and newspaper sites, and any old website out there should be secure because fearmongering.

It's quite the opposite, in fact: they push it because it's a basic level of security that prevents people from attacking in the easiest possible way. When the NSA taps major trunks, you don't want almost everything to be unencrypted.

Can we now admit that open source does not ensure secure code? Let's just not make that argument anymore in light of empirical evidence.

Can we admit now that this question is a red herring?

The issue is with accountability. OSS means you can see for yourself what is happening. Closed source means you are subject to the whim of the devs or hackers when it comes to finding out what is happening under the hood.

Just to play devil's advocate, but how many people are really qualified to actually look for bugs like this in OpenSSL?

If you can read math, you can read code. If you can't read math, go take a class.

Quote:

Open or closed source, you need people actually capable of finding the bugs to look for them, and since researchers tend not to work for free, that basically means someone needs to hire them. OpenSSL is a bit different due to its prevalence (so it's good for publishing a paper). But, for example, I do not doubt that Microsoft's Schannel implementation has had a lot of cryptography research input, even though it's closed source (the difference is that MS had to pay some high-profile cryptographers to work for them). The truth is, OSS was never about accountability at all; it's about a more decentralised development process. So never assume someone has done any meaningful auditing on it for you just because anyone could read it.

You realize this story is proof positive that people read it and fix problems with it, right?

The beauty of OSS: Not everyone needs to read the code, it only takes one person to read code, ID the issue, and disclose it.

You realize this story is proof positive that people read it and fix problems with it, right?

Thanks, I wanted to reply to the prior comment but couldn't come up with a way to NOT sound like a complete and utter jerk. (Meaning I sometimes have trouble making what I say come across clearly in what I type.)

Quote:

The beauty of OSS: Not everyone needs to read the code, it only takes one person to read code, ID the issue, and disclose it.

With emphasis on disclose - as you get this part of the process far more often with OSS than with closed source, and that's the important piece in my mind.

Can we now admit that open source does not ensure secure code? Let's just not make that argument anymore in light of empirical evidence.

Can we admit now that this question is a red herring?

The issue is with accountability. OSS means you can see for yourself what is happening. Closed source means you are subject to the whim of the devs or hackers when it comes to finding out what is happening under the hood.

Just to play devils advocate but how many people are really qualified to actually look for bug like this in OpenSSL?

If you can read math, you can read code. If you can't read math, go take a class.

Bull. I and most other programmers I have ever spoken to utterly loathe debugging another's code. If there are not a shitload of comments, it is often quicker to rewrite. Go read the mess that is OpenSSL's and come back to us on that. I have...

Quote:

Quote:

Open or closed source, you need people actually capable of finding the bugs to look for them, and since researchers tend not to work for free, that basically means someone needs to hire them. OpenSSL is a bit different due to its prevalence (so it's good for publishing a paper). But, for example, I do not doubt that Microsoft's Schannel implementation has had a lot of cryptography research input, even though it's closed source (the difference is that MS had to pay some high-profile cryptographers to work for them). The truth is, OSS was never about accountability at all; it's about a more decentralised development process. So never assume someone has done any meaningful auditing on it for you just because anyone could read it.

You realize this story is proof positive that people read it and fix problems with it, right?

The beauty of OSS: Not everyone needs to read the code, it only takes one person to read code, ID the issue, and disclose it.

None of which is any different from someone vetting a closed source application - which is my point. And while OpenSSL does get a lot of attention, there are a lot of open source projects out there and most don't get reviewed like this. So if there are no disclosures, is it bug free or just not looked at?

Can we now admit that open source does not ensure secure code? Let's just not make that argument anymore in light of empirical evidence.

Can we admit now that this question is a red herring?

The issue is with accountability. OSS means you can see for yourself what is happening. Closed source means you are subject to the whim of the devs or hackers when it comes to finding out what is happening under the hood.

Just to play devil's advocate, but how many people are really qualified to actually look for bugs like this in OpenSSL?

Bull. I and most other programmers I have ever spoken to utterly loathe debugging another's code. If there are not a shitload of comments, it is often quicker to rewrite. Go read the mess that is OpenSSL's and come back to us on that. I have...

Quote:

If you can read math, you can read code. If you can't read math, go take a class.

Quote:

Open or closed source, you need people actually capable of finding the bugs to look for them, and since researchers tend not to work for free, that basically means someone needs to hire them. OpenSSL is a bit different due to its prevalence (so it's good for publishing a paper). But, for example, I do not doubt that Microsoft's Schannel implementation has had a lot of cryptography research input, even though it's closed source (the difference is that MS had to pay some high-profile cryptographers to work for them). The truth is, OSS was never about accountability at all; it's about a more decentralised development process. So never assume someone has done any meaningful auditing on it for you just because anyone could read it.

You realize this story is proof positive that people read it and fix problems with it, right?

The beauty of OSS: Not everyone needs to read the code, it only takes one person to read code, ID the issue, and disclose it.

None of which is any different from someone vetting a closed source application - which is my point. And while OpenSSL does get a lot of attention, there are a lot of open source projects out there and most don't get reviewed like this. So if there are no disclosures, is it bug free or just not looked at?

None of which is any different from someone vetting a closed source application - which is my point. And while OpenSSL does get a lot of attention, there are a lot of open source projects out there and most don't get reviewed like this. So if there are no disclosures, is it bug free or just not looked at?

It is completely different. With OpenSSL, I don't have to jump through a million hoops to get a vendor to disclose source code before I can review it; I just need a text editor and an internet connection. Try that with Forefront.

None of which is any different from someone vetting a closed source application - which is my point. And while OpenSSL does get a lot of attention, there are a lot of open source projects out there and most don't get reviewed like this. So if there are no disclosures, is it bug free or just not looked at?

It is completely different. With OpenSSL, I don't have to jump through a million hoops to get a vendor to disclose source code before I can review it; I just need a text editor and an internet connection. Try that with Forefront.

And you really think you can? Why didn't you see this then?

Most researchers don't go around validating products (OSS or not) for free - they get hired to do it. So we don't need you to look at Forefront. And errors like the one above don't need source code access; if that were true, closed source would never be hacked....

The TLS protocol is complex. OpenSSL is complex because it supports almost everything in TLS (and SSLv3 and SSLv2...).

Quote:

But what I've heard is that there's a very large amount of code, OpenSSL often re-implements functionality that should be provided by OS or library calls, and there are other stylistic shortcomings which make it hard to figure out what the code is *supposed* to do, let alone what it actually does.

What I've seen (since I work on OpenSSL for my employer) is that the "re-implementation" is for security.

For instance, memcmp() can leak information based on how long it takes to determine equality. A cryptographically safe version will always take the same amount of time to compare n bytes. Normal operating system/compiler versions of memcmp() exit early, once inequality is known, for performance reasons.

The codebase has also undergone complete re-formatting to improve readability. Parts of the code are being removed if they are no longer of use. SSLv2 support is finally being removed in 1.1. Data structures are becoming opaque to ensure future ABI compatibility.

The groups that took OpenSSL and created LibreSSL and BoringSSL all started from the same place and are likely sharing bug fixes. But LibreSSL and BoringSSL are quickly diverging from the OpenSSL codebase, and while the groups share serious fixes with each other now, once the code diverges significantly that won't be possible.

None of which is any different from someone vetting a closed source application - which is my point. And while OpenSSL does get a lot of attention, there are a lot of open source projects out there and most don't get reviewed like this. So if there are no disclosures, is it bug free or just not looked at?

It is completely different. With OpenSSL, I don't have to jump through a million hoops to get a vendor to disclose source code before I can review it; I just need a text editor and an internet connection. Try that with Forefront.

And you really think you can?

Can, and have. Submitting your first bug report is an awesome experience.

Quote:

Why didn't you see this then?

Because I don't rely on openssl to the point of needing to do this, and there are plenty of researchers like the one that resulted in this disclosure doing that job for me.

Quote:

Most researchers don't go around validating products (OSS or not) for free - they get hired to do it. So we don't need you to look at Forefront.

By that definition, closed source would be unhackable, which is provably false. Getting paid doesn't somehow make you better at a job.

Quote:

And errors like the one above don't need source code access; if that were true, closed source would never be hacked....

I addressed that in my first post, and I'll restate it here again: Closed source means you are subject to the whim of the devs or hackers when it comes to finding out what is happening under the hood.

It's anyone's guess how many vulnerabilities Windows still has. I think OpenSSL isn't doing all that bad, honestly. It could definitely improve though, and it would seem LibreSSL and BoringSSL are doing so.

Can we now admit that open source does not ensure secure code? Let's just not make that argument anymore in light of empirical evidence.

Can we admit now that this question is a red herring?

The issue is with accountability. OSS means you can see for yourself what is happening. Closed source means you are subject to the whim of the devs or hackers when it comes to finding out what is happening under the hood.

Just to play devil's advocate, but how many people are really qualified to actually look for bugs like this in OpenSSL?

If you can read math, you can read code. If you can't read math, go take a class.

As with many platitudes, this understates the problem by several orders of magnitude. Reading is not the difficult part. Understanding is. And it takes much more than "a class" to understand what code does, to know what side effects it may have, and to know that it does the correct thing. To audit cryptography code you also must understand in what ways the algorithm may be attacked and in what ways the number and order of operations impacts the potential for information leakage. It's substantially more complicated than "math." This doesn't even go into the time it takes to become an expert on the subject, or the time required to go through a codebase and understand it. As much as I prefer Chrome over IE or Edge, it's not because I can audit the Chromium codebase - I never have and likely never will.

Can we now admit that open source does not ensure secure code? Let's just not make that argument anymore in light of empirical evidence.

Can we admit now that this question is a red herring?

The issue is with accountability. OSS means you can see for yourself what is happening. Closed source means you are subject to the whim of the devs or hackers when it comes to finding out what is happening under the hood.

Well said, and I would add that with closed source you are also subject to the whims of the surveillance establishment and their secret agreements/orders with those closed source vendors (as we've seen numerous times).

Can we now admit that open source does not ensure secure code? Let's just not make that argument anymore in light of empirical evidence.

Can we admit now that this question is a red herring?

The issue is with accountability. OSS means you can see for yourself what is happening. Closed source means you are subject to the whim of the devs or hackers when it comes to finding out what is happening under the hood.

Just to play devil's advocate, but how many people are really qualified to actually look for bugs like this in OpenSSL?

If you can read math, you can read code. If you can't read math, go take a class.

As with many platitudes this understates the problem by several orders of magnitude.

Because this is a front page internet argument, not a dissertation on reading source code vs disassembly and why anyone working with any crypto needs to have a copy of all of Bruce's books on their shelf that they actually read.

Based on the fact that I think Snowden hinted that VPNs were of no consideration to the NSA, and HTTPS wasn't much either, I find all this identification and fixing of vulnerabilities a good step forward (considering how limited its dev resources were before). SSH is getting "hardened" at a firehose rate, and that is a good thing IMHO... I would expect more.

I want to root for OpenSSL, but damn, this thing is like Swiss cheese. I'm beginning to look at it like Adobe, with all the exploits I heard about through 2015.

Wasn't the problem about money? Basically a lot of high-profile users, but little financial contributions.

Opinions differ. Some would say the code was poorly written, poorly maintained, and tried to maintain too much compatibility with deprecated operating systems. Only some of those problems can be blamed on money.