Critical new bug in crypto library leaves Linux, apps open to drive-by attacks

A recently discovered bug in the GnuTLS cryptographic code library puts users of Linux and hundreds of other open source packages at risk of surreptitious malware attacks until they incorporate a fix developers quietly pushed out late last week.

Maliciously configured servers can exploit the bug by sending malformed data to devices as they establish encrypted HTTPS connections. Devices that rely on an unpatched version of GnuTLS can then be remotely hijacked by malicious code of the attacker's choosing, security researchers who examined the fix warned. The bug wasn't patched until Friday, with the release of GnuTLS versions 3.1.25, 3.2.15, and 3.3.4. While the patch has been available for three days, it will protect people only when the GnuTLS-dependent software they use has incorporated it. With literally hundreds of packages and multiple operating systems dependent on the library, that may take time.

"A flaw was found in the way GnuTLS parsed session IDs from ServerHello messages of the TLS/SSL handshake," an entry posted Monday on the Red Hat Bug Tracker explained. "A malicious server could use this flaw to send an excessively long session ID value, which would trigger a buffer overflow in a connecting TLS/SSL client application using GnuTLS, causing the client application to crash or possibly execute arbitrary code."

A separate technical analysis here showed how the vulnerability can be exploited to execute malicious code of an attacker's choosing. There don't appear to be any obvious signs that an attack is under way, making it possible to exploit the vulnerability in surreptitious "drive-by" attacks. There are no reports that the vulnerability is actively being exploited in the wild. Still, readers should take steps immediately to ensure they're not using vulnerable GnuTLS versions.

The patch comes two months after the public disclosure of Heartbleed, a critical bug in the widely used OpenSSL crypto library. Unlike the GnuTLS flaw, Heartbleed allowed attackers to pluck passwords, cryptographic keys, and other sensitive data out of servers and client devices running a vulnerable version of the software. Almost immediately after Heartbleed became widely known, attackers were exploiting it to steal user passwords from Yahoo and other websites. In March, GnuTLS was patched against a separate bug that made it trivial for attackers to create counterfeit certificates for high-profile websites that would be accepted by GnuTLS as valid.

The latest GnuTLS bug, which is formally cataloged as CVE-2014-3466, is the result of failures to properly check the boundary of session ID sizes. As a result, attackers can corrupt the memory contents of vulnerable devices by sending values that overflow the buffer. GnuTLS maintainers made passing reference to it on their site's security advisory page. It's not yet clear how long the bug has resided in GnuTLS, an open source program that anyone can download and probe for serious flaws just like this one.
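The class of bug described above can be sketched in C. The following is a hypothetical, simplified parser, not GnuTLS's actual code: a fixed 32-byte session ID buffer (the TLS maximum), an attacker-supplied length byte, and the boundary check whose absence triggers the overflow.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define MAX_SESSION_ID 32  /* TLS session IDs are at most 32 bytes */

struct session {
    uint8_t id[MAX_SESSION_ID];
    size_t id_len;
};

/* Parse a ServerHello session ID field: one length byte followed by
 * that many bytes of ID. Returns 0 on success, -1 on malformed input.
 * The boundary check is the crucial line; without it, a length byte
 * of up to 255 would overflow the 32-byte id buffer. */
int parse_session_id(struct session *s, const uint8_t *msg, size_t msg_len)
{
    if (msg_len < 1)
        return -1;
    size_t id_len = msg[0];
    if (id_len > MAX_SESSION_ID)   /* the missing boundary check */
        return -1;
    if (msg_len < 1 + id_len)      /* don't read past the record */
        return -1;
    memcpy(s->id, msg + 1, id_len);
    s->id_len = id_len;
    return 0;
}
```

A server that sends a length byte of 255 followed by 255 bytes of shellcode would, in the unchecked version, write 223 bytes past the end of the buffer into adjacent memory.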

82 Reader Comments

Even if you have software using GnuTLS, couldn't you just download the precompiled files for your corresponding version and overwrite the existing GnuTLS files? Wasn't the whole point of the LGPL to allow people to modify a library's source code and use their modified versions in place of the old library? Why would you need to wait for the downstream projects to update their copies of the GnuTLS executables and libraries? (Yes, a project could theoretically statically link various GnuTLS DLLs, but that would make any such project LGPL itself, so you'd be able to get the source code and recompile using the newest GnuTLS source.)

A million eyes on the code, and not a damn one of them understands it.

That, right there, is probably the single biggest weakness in the open-source community.

What's the point of conflating the open-source developer community and the general open-source user community this way? In this context, it's just a flagrant cheap shot (a.k.a. trolling) -- it adds nothing to the discussion but some pointless snark.

Well, I try to promote the use of open source software at my company as much as possible, despite serious opposition from our CTO. But every story like this, and TrueCrypt, and Heartbleed just undermines my case.

Never mind that MS and Apple regularly find issues in their code; the burden in most people's heads rests on the open source world to prove itself. It's a sad state of affairs, but that's how it plays out here at least.

In hindsight the Heartbleed bug was so obvious that you wouldn't need source code to find it, so closed source wouldn't have protected you from stuff like this.

Open source has a problem with fixing security holes because you can't fix them silently. Microsoft can issue a critical security patch after a security hole is found but not tell exactly what it was. This gives you time to apply the patch: closed source application users get a head start on fixing the problem. With open source everyone can immediately see what was fixed, so they know what the vulnerability was and can start working on an attack. Now it's an even race between users applying the fix and attackers exploiting it. It's a tradeoff and doesn't make open source inherently bad; both models have their strengths and weaknesses.

Well, I try to promote the use of open source software at my company as much as possible, despite serious opposition from our CTO. But every story like this, and TrueCrypt, and Heartbleed just undermines my case.

Never mind that MS and Apple regularly find issues in their code; the burden in most people's heads rests on the open source world to prove itself. It's a sad state of affairs, but that's how it plays out here at least.

In hindsight the Heartbleed bug was so obvious that you wouldn't need source code to find it, so closed source wouldn't have protected you from stuff like this.

Open source has a problem with fixing security holes because you can't fix them silently. Microsoft can issue a critical security patch after a security hole is found but not tell exactly what it was. This gives you time to apply the patch: closed source application users get a head start on fixing the problem. With open source everyone can immediately see what was fixed, so they know what the vulnerability was and can start working on an attack. Now it's an even race between users applying the fix and attackers exploiting it. It's a tradeoff and doesn't make open source inherently bad; both models have their strengths and weaknesses.

On the plus side, open source developers are far less inclined to let it slide on some theory/wishful thinking that no one will learn about it. Closed source developers have been known to sit on known significant vulnerabilities for months, even years.

Are people fuzzing all these TLS libraries and servers now? Were they not doing it a year ago?

Speaking as someone who doesn't know that much about coding, it can't be that hard to recompile these things so the user-determined parameters are stored in specific memory addresses just before a constant, and then double-check the value of the constant at every step to see if the value overflowed. Or just feed it various garbage parameters and see when it crashes.
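The idea in the comment above is essentially a canary: a known constant placed right after the buffer and checked after every write. Compilers do this automatically for stack buffers (e.g. gcc/clang's -fstack-protector); here is a hand-rolled illustration with hypothetical names, kept within well-defined C by copying through the struct's byte representation.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define BUF_SIZE 8
#define CANARY   0xDEADBEEFu

/* A buffer with a known constant ("canary") stored immediately after
 * it. An overflowing write clobbers the canary before it reaches
 * anything important, so checking the canary detects the overflow. */
struct guarded {
    uint8_t  buf[BUF_SIZE];
    uint32_t canary;
};

/* Copy len bytes into the buffer WITHOUT checking len against
 * BUF_SIZE (the class of bug under discussion), then report whether
 * the canary survived. Returns 1 if intact, 0 if overflowed. */
int copy_and_check(struct guarded *g, const uint8_t *src, size_t len)
{
    g->canary = CANARY;
    if (len > sizeof *g)              /* clamp only to keep the demo safe */
        len = sizeof *g;
    memcpy((uint8_t *)g, src, len);   /* unchecked with respect to BUF_SIZE */
    return g->canary == CANARY;
}
```

Feeding it garbage longer than the buffer, as the comment suggests, flips the check to 0 instead of silently corrupting whatever lies beyond.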

It's time for big business to stop treating Open Source software as 'free as in beer' software. If your multi-billion-dollar business is reliant on anything LAMP or LAMP-related (and frankly, unless you're a "Microsoft... 100% Microsoft... that's all we use or think" business, that's pretty much every business) then you should start throwing a small fraction of your IT budget back at these projects.

To paraphrase Isaac Newton, if you see further it's only because you're standing on the shoulders of giants.

The Linux Foundation's "Core Infrastructure Initiative" is a nice start but, for an organization of sufficient size, it's probably smarter to have a developer or two on-staff. Then you can focus them to work on Open Source projects that are core to your own business needs.

It's a shame it takes calamities like this & Heartbleed to make people realize how important this stuff is.

Still, it's clear there's a lot of focus on revising SSL/TLS libraries right now.

That and SSL/TLS is complex.

Security software is notoriously complex. There are several reasons for that:

- the math is difficult;
- there are many people actively trying to break it;
- the good guys need to anticipate and prevent every problem, but the bad guys need to find only one;
- the standard protocols are often very complex (because they want to accommodate every possible configuration and encryption method).

Yet this error, as well as the Heartbleed and GoToFail errors, had nothing to do with those types of complexity. The only complexity that possibly came into play was that there was a lot of code. All large code bases have errors, but you can make an effort to check systematically for the most common ones. This error, and the Heartbleed error, both involved not properly checking the user input and allocating wrong buffer sizes. Both are programming 101. The GoToFail error should have been caught by testing (if you have security software, it is elementary that you don't limit testing to only good input).

The real problem may be that most people working on security software are security experts but not programming professionals.

Heartbleed was something that needed to happen. It was such a rookie mistake (using custom code to do something the language itself handles), all because of some (lame) performance reasons, all in an effort to support systems that shouldn't be supported.

That's not an accurate description of Heartbleed. The bug was that there were two lengths and the wrong one was used for a copy. There was no in-language way to do the copy safely. The actual patch fixed the issue by discarding those heartbeats with an incorrect length.

That wasn't a rookie mistake. It was a pretty easy mistake to make and overlook given the protocol had a redundant length field.
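The two-lengths mistake being debated here can be sketched as follows. This is an illustrative reconstruction, not OpenSSL's actual code: the record carries its real length, the heartbeat payload carries a claimed length, and the bug was trusting the claimed one.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* A heartbeat record starts with a two-byte claimed payload length,
 * followed by the payload. record_len is how many bytes actually
 * arrived. The vulnerable code trusted `claimed` and copied that many
 * bytes, reading past the record into adjacent memory; the fix
 * discards any record where the two lengths disagree. Returns the
 * number of payload bytes echoed, or -1 for a discarded record. */
long heartbeat_reply(uint8_t *out, size_t out_len,
                     const uint8_t *record, size_t record_len)
{
    if (record_len < 2)
        return -1;
    size_t claimed = ((size_t)record[0] << 8) | record[1];
    if (claimed > record_len - 2)   /* the fix: lengths must agree */
        return -1;                  /* silently discard */
    if (claimed > out_len)          /* never overrun the reply buffer */
        return -1;
    memcpy(out, record + 2, claimed);
    return (long)claimed;
}
```

The attack was simply a record claiming a 65535-byte payload while sending almost nothing; the unchecked copy then echoed back up to 64 KB of whatever happened to sit next to the record in memory.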

I have to strongly disagree.

First, not checking the length was a rookie mistake. Second, I am talking about the custom code to clear the contents; for performance reasons, they created their own method to do that. It being an "easy" mistake is, in my book, the definition of a "rookie" mistake.

What gets me is that TLS/SSL is a standard. It should be easy enough to test these encryption and SSL library functions and compare whether the results we receive are the results we expected. For Apple's testing-procedure failure there is no excuse, but at least that is easier to fix than the quality of the code of a library.

They're tested for operation under normal circumstances every time they communicate with another server.

But they're not generally aggressively fuzz-tested as to how they handle maliciously invalid input: that's moderately difficult to do, remarkably boring, and a horribly unending task. You can convince yourself that the code is right, but it's not clear even in principle how you write a document which would let someone else convince themselves that the code is right without doing as much work as you did.
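Fuzz testing of the kind described here can start out as simply as throwing random bytes at a parser and requiring that it always fails cleanly. A toy harness is sketched below; parse_record is a stand-in for the library function under test, and a real project would use a coverage-guided fuzzer such as AFL or libFuzzer rather than plain rand().

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Toy record parser standing in for the code under test:
 * [1-byte length][payload], payload at most 32 bytes.
 * Returns 0 on success, -1 on malformed input. */
int parse_record(const uint8_t *buf, size_t len, uint8_t *out)
{
    if (len < 1)
        return -1;
    size_t plen = buf[0];
    if (plen > 32 || len < 1 + plen)
        return -1;
    memcpy(out, buf + 1, plen);
    return 0;
}

/* Minimal fuzz loop: feed the parser random lengths and random
 * contents and require only a clean success or a clean rejection.
 * Memory errors would be surfaced by running this under
 * AddressSanitizer or valgrind. Returns 0 if every input was
 * handled cleanly. */
int fuzz_parser(unsigned iterations, unsigned seed)
{
    uint8_t in[512], out[32];
    srand(seed);
    for (unsigned i = 0; i < iterations; i++) {
        size_t len = (size_t)(rand() % (int)sizeof in);
        for (size_t j = 0; j < len; j++)
            in[j] = (uint8_t)(rand() & 0xFF);
        int rc = parse_record(in, len, out);
        if (rc != 0 && rc != -1)
            return -1;   /* parser misbehaved */
    }
    return 0;
}
```

The boring part the comment alludes to is that a loop like this has to run continuously, against every code path, forever; passing it once proves very little.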

The fact these libraries are not fuzz-tested is a huge concern.

I have huge concerns; it seems they are not even tested, and my evidence is the GoToFail bug. I believe that bug was found by accident, because invalid data was being sent and it was accepted. The fact it was open source only allowed somebody to confirm what the bug in the code was exactly, which makes me livid to no end when people say closed source can't be tested.

The simple fact is invalid certificate data can be sent, we know what should be returned, SSL is a standard outside of the control of Apple, Microsoft, OpenSSL, and any other SSL library developers.

A million eyes on the code, and not a damn one of them understands it.

That, right there, is probably the single biggest weakness in the open-source community.

What's the point of conflating the open-source developer community and the general open-source user community this way? In this context, it's just a flagrant cheap shot (a.k.a. trolling) -- it adds nothing to the discussion but some pointless snark.

I'm not conflating anything. One of the big arguments in favor of open-source software is "anyone can review the source code", followed by its corollary that "with many eyes reviewing the source code, any problems will be found and fixed promptly". What we've seen with Heartbleed, and now this issue, is that those assumptions aren't necessarily true.

Well, I try to promote the use of open source software at my company as much as possible, despite serious opposition from our CTO. But every story like this, and TrueCrypt, and Heartbleed just undermines my case.

Never mind that MS and Apple regularly find issues in their code; the burden in most people's heads rests on the open source world to prove itself. It's a sad state of affairs, but that's how it plays out here at least.

Heartbleed was bad, but it was addressed pretty rapidly in our shop, where only three servers (out of 120+) had to be patched. GnuTLS has had two vulnerabilities identified in the space of three months, and this is supposed to destroy the public perception of open source software?

Give me a break. We have to install anywhere from eight to more than a dozen patches to our Microsoft servers every goddamned month! Some are "critical," others are "important," but all are absolutely necessary to maintain the security and integrity of the datacenter. Open Source software vulnerabilities are patched regularly as well, but most are not flagged as such. Most are minor updates that improve performance rather than repair broken packages. Oh, and these updates rarely ever require me to re-boot the machines affected. Windows machines won't even apply the patches until they're re-booted. The fact that FLOSS projects have as few demonstrated vulnerabilities as we've seen this year only makes them appear more reliable to my manager.

Wait... you are comparing patching one library twice in three months, vs patching an entire OS?

Nice try.

Get the latest Linux distro with a built in auto-updater. Any distro. Your choice which. Latest ISO download.

Install it. Hit the update.

If you get less than 100 updates I will eat my boots.

Software updates are WELCOME. It means the vendor is supporting their creation. Complex software such as an OS with bundled e-mail clients and media players and web browsers and a thousand other things, they will need 10-15 updates a month.

A million eyes on the code, and not a damn one of them understands it.

That, right there, is probably the single biggest weakness in the open-source community.

What's the point of conflating the open-source developer community and the general open-source user community this way? In this context, it's just a flagrant cheap shot (a.k.a. trolling) -- it adds nothing to the discussion but some pointless snark.

It was a cheap shot, but it gets to a real point. Open source advocates have been saying for a long time that it's better code because anyone can review it. Well, out of the many users, a small number are programmers, a small number of them can understand the code used and a tiny proportion are interested in that specific area. "Many eyes" doesn't mean a thing if only one or two actually look at the code.

Some of these open source libraries really need proper code auditing, but who pays to audit free software?

What's the point of conflating the open-source developer community and the general open-source user community this way? In this context, it's just a flagrant cheap shot (a.k.a. trolling) -- it adds nothing to the discussion but some pointless snark.

You're a typical open source luser, claiming "trolling" because someone said something you didn't want to hear. It's neither pointless discussion, nor snark. It's a fact that very few people in the open source development community are competent enough and knowledgeable enough to not make mistakes like this.

It's a sad state of affairs, but unless it's a sexy problem, open source developers don't give a shit unless they are being paid and even then they only care enough to do the minimum possible to resolve the problem so they can get back to whatever sexy code they were writing before the bug was assigned to their queue.

A million eyes on the code, and not a damn one of them understands it.

That, right there, is probably the single biggest weakness in the open-source community.

What's the point of conflating the open-source developer community and the general open-source user community this way? In this context, it's just a flagrant cheap shot (a.k.a. trolling) -- it adds nothing to the discusion but some pointless snark.

People wouldn't be making these snarky comments ("A million eyes on the code, and not a damn one of them understands it.") if it weren't the case that EVERY FSCKING TIME we hit a security alert involving MS or (especially) Apple we get a million haters rushing to tell us that this is payback to these companies for their evil closed source ways, and that if everyone just adopted Linux none of these problems would ever happen.

Are people fuzzing all these TLS libraries and servers now? Were they not doing it a year ago?

I guarantee you that these libraries were subject to fuzzing years ago. Just that those who likely discovered the holes back then kept quiet about it. Crypto libraries are an obvious target: the crown jewels. That this sort of vulnerability has taken so long to come out in any of these libraries is an embarrassment. Evidently, the people writing/testing the software either

- don't understand how attacks can happen or how to test for or mitigate them,
- don't care to test the security of their security software, OR
- are complicit.

Just because the vulnerability was publicly disclosed this week, it doesn't mean that others have not known about it for months or years.

Fuzzing should be a standard part of the GnuTLS/OpenSSL/whateverSSL test suite, done on every release candidate before release.

Well, I try to promote the use of open source software at my company as much as possible, despite serious opposition from our CTO. But every story like this, and TrueCrypt, and Heartbleed just undermines my case.

Never mind that MS and Apple regularly find issues in their code; the burden in most people's heads rests on the open source world to prove itself. It's a sad state of affairs, but that's how it plays out here at least.

Look at it this way: new issues and bugs in open source code still seem relevant enough for Ars to dedicate an article to them every other week (or more). Issues found in Apple's or MS's code are so much of a cliche that they stopped being news long ago.

You're obviously new here, in case anyone missed the "new poster" tag. Articles on issues in MS's code are so common that even when there aren't any new issues, someone out there digs out an old one just to write about it. Case in point: the recent series of articles everywhere on how you should stop using Silverlight 4 because of a seriously dangerous bug that had been patched more than one year ago, never mind that the latest version of Silverlight (version 5) never had that issue. Then there's the stream of articles every Patch Tuesday...

Well, everybody knows security is hard. Using secure development processes is hard even if you're getting paid to do it; expecting people to use them when they're working for free is much harder. I don't think this is directly related to open source, but I do think it is related to the for-free mentality. You just can't get quality products for free. There's a reason why the "It can be either cheap, good, or pretty. Pick two" saying exists. Hopefully, things like the Roslyn compiler, which can use far more sophisticated rules that developers or teams can create themselves, and a move to programming languages that automate many of the common problems, will alleviate this.

Well, I try to promote the use of open source software at my company as much as possible, despite serious opposition from our CTO. But every story like this, and TrueCrypt, and Heartbleed just undermines my case.

Never mind that MS and Apple regularly find issues in their code; the burden in most people's heads rests on the open source world to prove itself. It's a sad state of affairs, but that's how it plays out here at least.

Heartbleed was bad, but it was addressed pretty rapidly in our shop, where only three servers (out of 120+) had to be patched. GnuTLS has had two vulnerabilities identified in the space of three months, and this is supposed to destroy the public perception of open source software?

Give me a break. We have to install anywhere from eight to more than a dozen patches to our Microsoft servers every goddamned month! Some are "critical," others are "important," but all are absolutely necessary to maintain the security and integrity of the datacenter. Open Source software vulnerabilities are patched regularly as well, but most are not flagged as such. Most are minor updates that improve performance rather than repair broken packages. Oh, and these updates rarely ever require me to re-boot the machines affected. Windows machines won't even apply the patches until they're re-booted. The fact that FLOSS projects have as few demonstrated vulnerabilities as we've seen this year only makes them appear more reliable to my manager.

No, nobody says it has to "destroy the public perception of open source software". What does need to happen though is for people to stop treating open source as if it is some magical fairy fountain of guaranteed security.

Both open and closed source software is regularly patched, sometimes for security issues, sometimes for just bugs.

And, for the millionth time, when updates are applied on a Linux box, you absolutely do have to restart any application, daemon or other bit of code that has been running the affected library. Otherwise it will still be running the old, out of date version (unlike Windows, Unix systems will allow you to delete an executable that is loaded, but the copy in memory remains). And the easiest way to guarantee that is usually a full restart. I've lost count of the number of Linux servers I've seen that have been "patched" and yet are still vulnerable to exploits because the user assumed they didn't need to restart anything (after all, it never told them they needed to). This is by far and away the worst myth of the open source world, because it means that even people trying to do the right thing are getting it wrong far too often.

Anyone know if this affects Android devices? Android is Linux based. I know most of Android and its apps are written in Java, and use Java libs for most stuff, but I think there are some native apps, and of course some parts of the operating system have to be written in native code, so is there any exposure for Android users?

Anyone know if this affects Android devices? Android is Linux based. I know most of Android and its apps are written in Java, and use Java libs for most stuff, but I think there are some native apps, and of course some parts of the operating system have to be written in native code, so is there any exposure for Android users?

This is not a bug in the Linux kernel. This is a bug in an obscure SSL library, hardly used by Linux software. Android, as well as most Linux software, uses OpenSSL.

A million eyes on the code, and not a damn one of them understands it.

That, right there, is probably the single biggest weakness in the open-source community.

What's the point of conflating the open-source developer community and the general open-source user community this way? In this context, it's just a flagrant cheap shot (a.k.a. trolling) -- it adds nothing to the discussion but some pointless snark.

<rant>After years (bordering on over a decade) of reading boards for Linux/OSS geeks, MacOS geeks, MS geeks, I've always noted a common thread. That is, the fan boys of any of them are the first to say "My s*it don't stink, but theirs does."

While that comment may be a trolling comment, it is not untrue. Each fan base takes cheap shots at one another all the damn time and it's never going to end. No one wants to change the culture from within.

People have spent years stating that OSS had inherent weaknesses but whenever this subject is brought up, Katy bar the door because you're about to get verbally raped for speaking such madness. Then when the fears come true it becomes, "why don't you just help and stop trolling!"

Want to change the perception? Change the damn culture or at least become the change. Otherwise yelling about it solves nothing on either end...

A million eyes on the code, and not a damn one of them understands it.

That, right there, is probably the single biggest weakness in the open-source community.

What's the point of conflating the open-source developer community and the general open-source user community this way? In this context, it's just a flagrant cheap shot (a.k.a. trolling) -- it adds nothing to the discussion but some pointless snark.

Why is this an editor's choice? It's a whine because someone has pointed out the lack of clothes on the emperor. The problem, so visible in this thread, is hubris, and that is generally exhibited by the OSS community as a whole. The open-source community had a field day with Apple's goto fail issues, though there wasn't as much fallout with that as with the OSS security fails. This is the 3rd such issue that OSS security projects have seen in as many months, so there clearly is a flaw in the methodology.

While that comment may be a trolling comment, it is not untrue. Each fan base takes cheap shots at one another all the damn time and it's never going to end. No one wants to change the culture from within.

The most healthy attitude is the alt.sysadmin.recovery philosophy: "It all sucks". All software stinks. Open source code stinks. Closed source stinks. All hardware has quirks, gotchas, and idiocies. Accept that, and you're in a better place.

You're obviously new here, in case anyone missed the "new poster" tag. Articles on issues in MS's code are so common that even when there aren't any new issues, someone out there digs out an old one just to write about it. Case in point: the recent series of articles everywhere on how you should stop using Silverlight 4 because of a seriously dangerous bug that had been patched more than one year ago, never mind that the latest version of Silverlight (version 5) never had that issue. Then there's the stream of articles every Patch Tuesday...

As a matter of fact, I've been reading Ars through RSS for years; that being the main reason I have very few comments. The other is that Ars is not immune to its comment sections becoming pointless flamewars (like this one, which degenerated into a closed vs. open discussion) and those are better left alone as much as possible. But thank you for the unnecessarily and wrongly condescending remark.

About your actual point: I don't know which sites you follow but, from the ones I do, articles like this are more common on Ars (which my post was specific about). And, for what it's worth, the last article about a security issue in, for example, MS in here that I remember was the one about the hole in every version of IE, and that was high-profile enough to be commented on everywhere, not just in here.

Also, I never said that Ars covering issues like this was a bad thing; for that matter, it's one of the main reasons I read this site. Given that I prefer to use open source software, I like being aware of these things.

Well, everybody knows security is hard. Using secure development processes is hard even if you're getting paid to do it; expecting people to use them when they're working for free is much harder. I don't think this is directly related to open source, but I do think it is related to the for-free mentality. You just can't get quality products for free.

Sorry but that's a cop-out.

There is plenty of quality free software out there that demonstrates that it is possible.

Yes, it's hard. But when your software is the core component of implementing secure device-to-device encryption, you had better be paranoid. If it isn't secure, it is entirely pointless.

No one wants an SSL/TLS library that is "mostly OK", as the entire reason for it existing is to protect data.

Yes, I get that mistakes happen and that security is hard. But the number and nature of security problems in these SSL/TLS libraries is just insane.

If you don't know how to write secure C code (and I would argue that would be 99% of developers out there) then don't use C. Use something type-safe with protections against the sort of security problem C is well renowned for.

Is it slower? Sure. Fuck performance. Cleartext is faster than any SSL implementation, why not just use that if you don't care about security?

The amazing thing is that the original developer(s), and all who have worked on the project since decided to choose a language renowned for buffer overflows and other security problems when plenty of alternatives are available, and then didn't bother to integrate such checks into their test suite.

A million eyes on the code, and not a damn one of them understands it.

That, right there, is probably the single biggest weakness in the open-source community.

What's the point of conflating the open-source developer community and the general open-source user community this way? In this context, it's just a flagrant cheap shot (a.k.a. trolling) -- it adds nothing to the discussion but some pointless snark.

Why is this an editor's choice? It's a whine because someone has pointed out the lack of clothes on the emperor. The problem, so visible in this thread, is hubris, and that is generally exhibited by the OSS community as a whole. The open-source community had a field day with Apple's goto fail issues, though there wasn't as much fallout with that as with the OSS security fails. This is the 3rd such issue that OSS security projects have seen in as many months, so there clearly is a flaw in the methodology.

I agree that it probably didn't merit an Editor's Pick. (That's OK, some of my worthier posts somehow escaped that honor.)

But it's a funny turn in the discussion... Aren't we, ironically enough, actually discussing a significant flaw, a vulnerability that might have been the basis of a Zero-Day exploit -- which was actually found out, and fixed, precisely because anyone can look at the code?