New submitter williamyf writes "According to this article at Ars Technica, '[A] bug in the GnuTLS library makes it trivial for attackers to bypass secure sockets layer (SSL) and Transport Layer Security (TLS) protections available on websites that depend on the open source package. Initial estimates included in Internet discussions such as this one indicate that more than 200 different operating systems or applications rely on GnuTLS to implement crucial SSL and TLS operations, but it wouldn't be surprising if the actual number is much higher. Web applications, e-mail programs, and other code that use the library are vulnerable to exploits that allow attackers monitoring connections to silently decode encrypted traffic passing between end users and servers.' The coding error may have been present since 2005."

The GPL has an exemption for linking with libraries that are part of the operating system.

OpenSSL is not part of the "system libraries" on all platforms. Programs would need some sort of shim layer to use OpenSSL on platforms that have it and something else on platforms that don't. If you distribute a client-side application designed for POSIX-like systems, most people aren't going to be willing to switch to a Mac or install Linux or FreeBSD in VirtualBox just to run it. (Windows is still preinstalled on the vast majority of desktop PCs sold in industrialized English-speaking countries.)

Looking across more of their APIs, I see that the code makes liberal use of strlen and strcat, when it needs to be using counted-length data blobs everywhere. In short, the code is fundamentally broken; most of its external and internal APIs are incapable of passing binary data without mangling it. The code is completely unsafe for handling binary data, and yet the nature of TLS processing is almost entirely dependent on secure handling of binary data.

Incredible that GnuTLS is used anywhere at all. It's just mind boggling.

Since all machine code is potentially brittle, the argument for using "safety aware languages" is itself brittle. For instance, Ada is safe because it doesn't allow deallocation unless you use ada.unchecked_deallocation(), or, in the alternative, you build nothing on the heap, or just hope that the Ada implementation has garbage collection, or..., or... etc.

_Someone_ has to do the work to protect against whatever brittleness is at issue.

For years I have used "struct Buffer { char *start; char *end; };" instead of just

Damnit, it sounds like you are saying that software development is hard. And requires diligence. And time.

That is NOT what my pointy-haired boss wants to hear.

He wants to hear that we can whip out software using cheap graduates of questionable schools, while distracting these developers with inane meetings, stupid corporate requirements (have you filled out your quarterly performance objectives?), and also making them the first-line software helpdesk and general IT support.

The bug requires a carefully-crafted certificate. That certificate will verify as valid and trusted when it should not be. The connection will still be secure, it will just be with an untrusted person.

So basically it allows a very dedicated attacker to forge a cert and mount a MitM attack.

We all know governments have done this for years. It is widely known that root CA certificates have been violated by spy agencies. A few searches on Google will show bunches of news stories where attackers (all types, government attackers, ID theft attackers, etc) have made fake certificates, abused the CA model, and engaged in similar MitM attacks to what this allows.

SSL/TLS communications are just as secure as they always were. If you have personally verified and trusted the certificates, the attack wouldn't work; it only succeeds when your trust model allows a cert that you don't personally trust to be used in authentication. Even then it still yields a secure connection, just to a wrongly-trusted individual.

The flaw is the trust model and using a cert that you don't personally trust to be valid, which is a well-known issue.

What are you smoking? A connection with a MITM is not "secure". This is WORSE than sending data in plaintext.

No, he is right. If the NSA is the man-in-the-middle, then you created a secure connection to the NSA, and the NSA will be friendly enough to create another secure connection to your original destination. It's not the secure connection you wanted, but it is secure. Nobody but the man-in-the-middle can listen in.

The CA model is much more important than the public CA "trust". Nothing stops an application designer from using private CAs for their application. This bug breaks trust in any CA, including private ones.

Let's think through, as a thought experiment, what would be required for this to be an effective attack.

SSL spoofing is already a common attack. Not just France [zdnet.com] and the NSA [cnet.com] but also regular old password-sniffers [blogspot.com]. This vulnerability falls under the same class of attack as SSL spoofing; a trusted certificate is secretly replaced by an untrusted certificate.

There were some common examples right after unicode was allowed in domain names, when people came up with similar-looking links for major companies using unicode.

I'm not sure that "many eyes make bugs shallow" is enough on its own; professional, thorough code audits (like OpenBSD does) may also be needed to produce the most secure open source software. Any comments?

Only, GnuTLS is not a default part of Linux; it's an optional library used by some packages. Most packages seem to use OpenSSL instead; some offer a choice at compile time, but most distros build against OpenSSL by default.

An awful lot of stuff links to it. Browsers, flash, everything that dials out uses it on xubuntu. Are you saying that they are linking to it but not using it? Or are they linking to it and then using a wrapper around OpenSSL?

Might it also be that the "coding error" was not an error at all, but a deliberately introduced bug? Government agencies have always wanted to read our communications (and each other's). Sometimes even for legitimate reasons...

So when Apple's proprietary encryption software suffered a problem, Apple users could do nothing but wait for Apple to deliver a fix; nobody but Apple is allowed to fix Apple's proprietary software. And when that fix ostensibly arrived, Apple users had to hope it wasn't bundled with some malware too (as is often the case with proprietary software [gnu.org]).

This bug was caught during an audit [gnutls.org]—"The vulnerability was discovered during an audit of GnuTLS for Red Hat." Nobody but the proprietor can audit proprietary software. But with free software, users have the freedom to audit the code they run, patch that code, and run their patched code; users can choose to fix bugs themselves or get someone else to fix bugs for them. And users don't always have to trust the same people to do work on their behalf. Users can also choose to wait for a fix to be distributed, and then check that fix to make sure it doesn't contain malware. For all we know, some users long ago spotted and fixed this bug in GnuTLS. Since all complex software has bugs, bugs are unavoidable. We're better off depending on people we choose to trust. Software freedom is better for its own sake.

The Apple library itself was open source, right (although rebuilding the OS files would be precarious in OS X and outright impossible in iOS)? The mess with libraries like this (proprietary or not) is all the other code (proprietary or not) that not only links to shared objects provided with the OS, but also rolls its own, sometimes even modified, build of the library. Now, thanks to the fact that it's GPL, it cannot be hidden in a blob without at least a license notice, but tracking it down everywhere will be a mess

Apple's code was based on something "open source", but that does Apple's users no good because of what I already said: the code Apple distributes to its users is proprietary. Better to have the alleged "mess" to track down than to know there's no point in tracking down anything, because what you'll find is something you're not allowed to inspect, modify, or share. Here you're really highlighting the difference between free software and open source: open source advocates don't want to talk about how people ough

Apple may have known about the issue for a while and not talked about it until it could release whatever proprietary blob alleges to be a fix. Apple's users might have known Apple's software was buggy too, but not been able to do anything about fixing Apple's code, since that's the nature of proprietary software. Apple has sat on exploitable security issues before [telegraph.co.uk]; in that case, governments used that iTunes security hole to invade people's computers (as RMS points out [stallman.org]). So in that case, apparently multiple

I think it was MS who had a bug in the past where, if I got a certificate issued for "google.com\0.attacker.com", I could present that certificate for a request to "google.com" (via DNS hijacking or a MitM attack) and it would pass validation, because the CN was handled as a C-style string and the null byte was treated as a terminator. Fixed long ago, but still. People have been messing up cert validation for as long as it's been around.

The scary thing is how many mobile apps just don't *do* cert validation. Either it's completely disabled, or they crippled it in some way (I've seen both not checking the trust chain and not checking that the cert is valid for the target site). The usual reasons are "oh, we just did that for testing" (but I'm looking at your release version...) or "yeah, one of the servers it connects to uses a self-signed cert" (fine, add explicit trust *for that cert*, but don't just disable chain-of-trust checks!) Another common problem is leaving completely broken or outdated options enabled (export ciphers - 40-bit symmetric crypto, easily breakable with a home PC - or SSLv2, or other similarly stupid things). Even if your platform/framework/library has a perfectly bug-free TLS implementation, few people ever seem to actually use it correctly.

Yeah, force people to write a big pile of nested bracket spaghetti and manually back their way out of every case. Make them introduce a bunch of otherwise useless flag variables and extra conditional statements to keep track of it all.

The best part of it all: When all that extra obfuscation causes bugs, it would be harder to pin the root cause on a simplistic generalization like "goto === bad".

And the program eats the function/method/message call overhead, the overhead of passing all local variables as arguments, and the overhead of constructing and destroying an object through which to return multiple values from each function call.

I think you need to be introduced to a modern optimizing compiler. It will handle the first two for you just fine, as long as you are in the same compilation unit (or doing fancier global optimization). Since you just refactored this from a single function, you are presumably still in the same compilation unit. If you pack the data in something like a stack-allocated struct, even the last one will be reduced or avoided entirely.

Yes, this is the usual way of doing it. I usually see people using goto when they have to clean up some resource in the error-handling part of the code, e.g. deallocating memory or closing files. I prefer to use a helper function for that and replicate the cleanup call each time. Sometimes the cleanup code isn't the same. Using goto labels isn't any better than calling a function in terms of programming complexity.

Sure, but the reason for goto:cleanup specifically is ease of code review. You want to make it easy to demonstrate that every open has a matching close, every alloc has a matching free, and so on. When the code base ends up with 1000 allocs and 999 frees, the faster and easier you can spot the matching bookends, the better.

I've actually used an oddball pattern where Foo() is nothing but the error checking, allocs, and frees, and in the middle it calls _Foo(...) which can then return from the middle. But

It also makes the case for using at least a minimum of C++ over bare C, just for the RAII capabilities constructors/destructors afford you, even if you don't want to take advantage of templates or the expanded (but limited) library.

I called it spaghetti because the resulting mass of brackets looks just like a big steaming dish of spaghetti, and the extraneous control statements are almost as annoying as gotos to more than a single "error" label.

Nested blocks are refactorable into smaller functions. That's the way to cut them down to size, not to use gotos.

Some are, some not so much. Many situations call for a long list of sequential checks, which can be cleanly and clearly coded as a bunch of if ... return statements. If you put each case in a function you still have the following problems: if you do it the obvious way, you still need a deeply

1. "nested brackets" (blocks) are by definition not spaghetti. Spaghetti is exclusively the result of gotos and their control equivalents (like the early return).

Bullshit. One of the projects at my last job had a single function in C++ that was over 50 printed pages: 5-deep nested loops, not even counting conditionals. On a 1280p resolution monitor, 8pt font, 4-space tabbing and properly indented code, the start of the deepest nested blocks was 4/5s or more across the screen. A lot of the crap was due to avoiding gotos. That is spaghetti. By using a few judicious gotos, I was able to reduce the code by a third alone. Gotos are not evil. Like any language construc

People should remember that "Go To Considered Harmful" was written in the times of FORTRAN, when GO TO and a DO-LOOP were the only ways to do control flow. A simple if / else / endif required two GO TOs. So at that time, programmers _had_ to use GO TO in a harmful way. Nowadays they don't.

C++ makes using goto very hard. The replacement pattern: do {... } while (0); with break statements in the right places. It's equivalent to using goto in a structured way, and it compiles as C++ code.

On a 1280p resolution monitor, 8pt font, 4-space tabbing and properly indented code, the start of the deepest nested blocks was 4/5s or more across the screen.

Sorry to be pedantic, but why would you give only the number of vertical lines (1280)? Since 2276x1280 is such an unusual resolution (I can only assume 16:9 when using the ???p notation), it would be clearer to give the number of pixels in both directions. Another piece of info missing is the DPI, without which one can't relate "pt" to pixels. [at least we know it's a progressive scan monitor, thank god you don't have to code on an interlaced display]

The linux kernel is full of gotos. Assembly is bereft of blocks and that sort of structure. So "goto" isn't the source of all evil.

Consider this example of the Linux goto paradigm below. When taking locks and establishing component preconditions, you can write an optimal routine that does the stepwise creation and includes the non-conditional cleanup, then skips the cleanup if all the parts succeed. The example below is trivial, but when it comes to preserving locking orders it solves a hard problem very sim

Both these bugs are caused by people using 'goto' like morons. Using 'goto' should start throwing compile-time errors to start forcing people off this relic of flow control.

Problem there is that it would break very old programs that just need to be recompiled, thus requiring a rewrite. A better way would be to have it disabled in the compiler by default, so you have to enable a flag to override it and are aware that it is there.

Proper C-style code depends heavily on the "goto:cleanup" pattern. You either write all exception-safe, all the time (not in C, obviously), or every function "allocates at the top and frees at the bottom".

The poster that you replied to mistakenly believed that using "goto" was the problem. It wasn't. The problem was code of the form

    if (condition)
        statement;
        statement;

which duplicated a line of code by mistake, and which would be a problem with almost any statement: statement is executed conditionally once and then executed unconditionally. Whether it is "goto fail;" (which is very

Snowden (v): Adding a bit of code, hardware, or operation you know you shouldn't because an authority requires you to do so. "Hey honey, I'll be late for dinner, I have to snowden the latest release of Firefox."

At least they are rare enough that they're newsworthy. Compared to Windows, where new exploits hardly ever get any attention because they are so frilling common as to be passé.

Well, Slashdot seems to report on every vulnerability popping up on my Apple watchlist (often more than once), but not on all popping up on the RedHat watchlist. Draw your own conclusions from what you just said.

"Open Source Software is more secure because the code can be reviewed."

That's why this bug has existed since 2005. gg, guys. Thumbs up.

What do you mean? The many eyes found said bug; that is why we are reading about it. If they had not, it would still be sitting there undiscovered. Ever wonder how many bugs go completely unnoticed in proprietary software because no one actually reads said code? Like, for example, a Windows bug affecting all 32-bit Windows OSes for 17 years: http://www.computerworld.com/s... [computerworld.com].

That may be, but once the behavior was observed, the observer didn't have to find the owner of the code to get it diagnosed. They may have, but the point is that anybody who found this behavior could've gone into the code and found out what caused the problem. Of course, if a black hat happened to be the one that found the bad behavior, they could've gone into the code to figure out how best to exploit it. So, the situation's not perfect, but still, it's probably a good thing that there were lots of eyes allowed to diagnose and fix the problem once it displayed itself.

The many eyes found said bug; that is why we are reading about it. If they had not, it would still be sitting there undiscovered.

This bug wasn't found from being open source. Those "many eyes" missed this bug for nearly a decade. Security testing tools uncovered incorrect validation behavior in the compiled library, just like they would with a closed source library. The only difference is that the public can see the incorrect code and correct it immediately; that is what you should be citing as an advantage of

Um no, code review didn't find this, at least not by the people who are supposed to do it. The bad guys apparently found and have been using this bug for quite some time. So obviously the black hats are more motivated to review the code than the white hats.

It was only a couple of years ago that someone found a significant bug in Unix that had been around since 1986: a 32-bit by 32-bit multiply routine that returned a 32-bit answer. It had been in Linux since the start in the early 90s and nobody had noticed.

How is this insightful? The only way this could be insightful is if the OP had said "This bug has existed since 2005, clearly we need greater adoption of open source software, to get more people interested in testing for bugs", because the alternative is closed software that has bugs no one can look at or fix.

I already have the security update for this bug on all my machines, but if I had closed source, who knows when, if ever, a patch would have come.

How is this insightful? The only way this could be insightful is if the OP had said "This bug has existed since 2005, clearly we need greater adoption of open source software, to get more people interested in testing for bugs", because the alternative is closed software that has bugs no one can look at or fix.

Well, that's not true. Apple had a rather bad and embarrassing security bug, and someone could look at it and fix it - just had to be an Apple employee, who was paid for it.

Well, if your starting point is that "open source doesn't lead to bugs being identified and disclosed", then those very posters you are complaining about are partially right. Consider: Open source: anyone can read the code, but (based on our premise) this doesn't lead to identification and disclosure of problems. It can allow a prospective attacker to identify problems and not disclose them. Closed source: only internal staff can read the code, but (based on our premise) having many eyes looking doesn't

If you can't find a remotely on-topic way to undo your moderation, you deserve to be modded into oblivion. And why do you think the Offtopic mod exists in the first place? Only to mod down offtopic posts that don't admit to being un-mod posts?