
Was this badly translated from another language, or have I been out of system administration too long?

Allow me to translate from buzz-ard to sysopian:

SSL-Ping Data Exfiltration Exploit: Detection and mitigation even a flaming lamer that can't patch OpenSSL can use

"Since this 0-day vuln was published skiddies have been exploiting it to leak data available to OpenSSL 64KB at a time via running one of the pre-written exploit proof-of-concept sources (as skiddies are wont to do) against a bunch of affected Internet facing services. This SNAFU is particularly FUBAR since all the distros that noobs use are building an ancient OpenSSL ver so they can't even push out a simple patch, obviously. We fingered the exploit in use and have a signature so your punk-buster scripts can detect the crackers and ATH0 before your cipher keys get the five-finger discount."

It is some folks trying to drag IDS back out from the grave. The issue is that, generally, IDS works extremely poorly and creates an extreme operations burden (somebody has to look at all the alerts). For this specific thing, for once, IDS can actually be used to detect the problem, and the whole story revolves around that. Of course the approach is fundamentally flawed: if your patch management is so bad that you cannot fix all affected OpenSSL installations fairly fast, then you are doomed security-wise anyway.

To be fair, nobody knows whether this was already exploited in the wild - so the "mess" was going to happen anyway (unless you planned to patch your server, assume your certificate was still good, and not tell any of your users that their passwords may have been exposed over the last couple of years).

For people who didn't follow the link chain [seacat.mobi], it has since been updated:

Important update (10th April 2014): The original content of this blog entry stated that one of our SeaCat servers detected a Heartbleed attack prior to its actual disclosure. EFF correctly pointed out that there are other tools that can produce the same pattern in the SeaCat server log (see http://blog.erratasec.com/2014... [erratasec.com]). I don't have any hard evidence to support or reject this statement. Since there is a risk that our finding is a false positive, I have modified this entry to a neutral tone, removing any conclusions. There are real honeypots on the Internet that should provide final evidence of when Heartbleed was first broadly exploited.


News about a vulnerability should never be delayed once a workaround is known. That is, if there is a way to defend your servers, you need to let people know about it so they can defend their servers. Attackers don't wait for disclosure.

In this case there was a simple fix (recompiling OpenSSL with the proper flag), so letting people know as soon as possible was the best option. Those who are serious about security don't wait for Ubuntu to update their apt servers.
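For what it's worth, the "proper flag" is OPENSSL_NO_HEARTBEATS, which compiles the vulnerable heartbeat support out entirely. Roughly (a sketch; paths, parallel flags, and install steps vary by system):

```shell
# Rebuild OpenSSL 1.0.1 with heartbeats compiled out
./config -DOPENSSL_NO_HEARTBEATS
make && make test
sudo make install
# ...then restart every service linked against the library
```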

But the problem here is that no downstream distributions (either Linux or *BSD) were notified in advance. As a result there was no patch available for older versions that just had bugfixes backported, and no binary updates.

Yes, there are some people who are incapable of compiling their own software who will have to wait until the patch comes through. Those people shouldn't be managing security for a large website (or any website really, in an ideal world).


Nonsense. I'd want only vendor-supplied fixes applied, unless the vendor is so slow as to be incompetent (but then, why would you be using them?)

Why? Because user applied fixes tend to be forgotten, and if the library isn't managed by the package system (you've uninstalled the package you're overwriting, right?) you might miss subsequent important updates.

Ubuntu did provide apt patches for all affected versions, including those not supported anymore (12.10 comes to mind). They did it right. If you had configured your security patches to install automatically, it was even transparent. I don't see a problem there.

> On Tue, Apr 08, 2014 at 15:09, Mike Small wrote:
> > nobody@gmail.com writes:
> >
> >> "read overrun, so ASLR won't save you"
> >
> > What if malloc's "G" option were turned on? You know, assuming the
> > subset of the world's programs you use is good enough to run with that.
>
> No. OpenSSL has exploit mitigation countermeasures to make sure it's
> exploitable.

What Theo is saying may sound like a joke...

So years ago we added exploit mitigation countermeasures to libc malloc and mmap, so that a variety of bugs can be exposed. Such memory accesses will cause an immediate crash, or even a core dump, then the bug can be analyzed, and fixed forever.

Some other debugging toolkits get them too. To a large extent these come with almost no performance cost.

But around that time OpenSSL adds a wrapper around malloc & free so that the library will cache memory on its own, and not free it to the protective malloc.

You can find the comment in their sources...

#ifndef OPENSSL_NO_BUF_FREELISTS
/* On some platforms, malloc() performance is bad enough that you can't just

OH, because SOME platforms have slow performance, it means even if you build protective technology into malloc() and free(), it will be ineffective. On ALL PLATFORMS, because that option is the default, and Ted's tests show you can't turn it off because they haven't tested without it in ages.

So then a bug shows up which leaks the content of memory mishandled by that layer. If the memory had been properly returned via free, it would likely have been handed to munmap, and triggered a daemon crash instead of leaking your keys.

Back in my day this wouldn't have been an issue since we ran a host of different custom interfaces and clients. We had to organize our own cross country backhaul via overlapping local calling networks, and orchestrated email routing networks using outdials. Probably only hackers used clients with encrypted links for their BBSs.

I don't know what you're talking about with that fed-speak. I never heard of any crazy lossy crap like duct-taping payphones together neither, but there may have been a few railroa

I'm running Linux Mint Olivia (the next-to-current version) and no openssl patch is available yet as of this afternoon. I imagine there are quite a few similar distros. Since I have actual work to do, and can't risk wasting two hours on a potentially borked upgrade, I'm stuck trying not to use programs affected by the exploit for the duration.

While something tells me this exploit is somewhat overblown, what really ticks me off is that this is all the result of delegating memory management to C pointers and basically mmap. As far as I'm concerned, in this day and age, that amounts to spaghetti code and I can't say it endears me to the reliability of openssl.

Please, we need SSL to be secure, not fast. Just use a less efficient method to make things more secure.

There is well-written C, and there is poorly written C. I've been through the bowels of OpenSSL, and there are parts of it that frighten me. Ninety percent of the issues in OpenSSL could be solved by adopting a modern coding style and using better static analysis. While static analysis tools can't find every vulnerability, they can root out the code smell that hides vulnerabilities. If, for instance, I had followed the advice of two of the quality commercial static analyzers that I ran against the OpenSSL code base, I would have been forced to refactor the code in such a way that this bug would either have been obvious to anyone casually reviewing it, or the refactor would have eliminated the bug altogether.

C and C++ are not necessarily the problem. It's true that higher level languages solve this particular kind of vulnerability, but they are not safe from other vulnerabilities. To solve problems like these, we need better coding style in critical open source projects.

In my experience, focusing on "coding style" makes code quality drop since it creates a culture where "review" is simply making sure you dotted the i's and crossed the t's without actually reading the sentence.

If there is one common belief held by all developers it is that their style is "correct" while everyone else's is "wrong". The only difference is how they define "wrong": "ugly", "inconsistent", "unclear", "confusing", "hard to maintain", "brittle", etc. If you want to see what they actually mean, ask t

Style, or the lack thereof, is absolutely related to this issue. It created the festering environment that this bug hid in for two years before it was discovered.

Style is about more than pretty print formatting. It's about avoiding the god-awful raw pointer math found in this function. It's about properly bounding values. It's about enforcing the sorts of checks that come naturally to programmers with more experience and less bravado. You may not appreciate the need for good style yet, but I bet you that the OpenSSL team is rethinking this now. To know that such a sophomoric mistake lingered for two years, even though hundreds of eyes passed over that code, is the epitome of why good programming style matters. The people who looked at this code are likely much smarter than you or I. They could not follow the logic of this code, because their eyes glossed right over this glaring bug. That's bad style. Everything else is window dressing.

C and C++ are not necessarily the problem. It's true that higher level languages solve this particular kind of vulnerability, but they are not safe from other vulnerabilities. To solve problems like these, we need better coding style in critical open source projects.

It's better to remove a very large class of bugs by the language making them impossible rather than insisting that a certain coding style will save you, "This time for sure!"

I meant that the refactor would make the bug obvious. However, as is the case with any bit of refactoring, one often finds bugs, writes test cases to capture these bugs, and then comes back to eliminate them. While the pedantic would argue that refactoring keeps functionality the same, refactoring is just one step in a larger process of code stewardship that includes the isolation and elimination of bugs. When a refactor makes a bug obvious, I contend that the refactor helps to eliminate that bug.

While something tells me this exploit is somewhat overblown, what really ticks me off is that this is all the result of delegating memory management to C pointers and basically mmap. As far as I'm concerned, in this day and age, that amounts to spaghetti code and I can't say it endears me to the reliability of openssl.

It has nothing to do with mmap or C pointers per se. The issue is simply bad programming. Someone wrote code that trusted unvalidated user input and they got bit in the ass. Whoever performed the code review should have known better, even if the developer didn't.

It was Robin Seggelmann that submitted this bit of buggy openssl code. He either works for the NSA or is grossly incompetent...

Or he made a dumb mistake, as 100% of programmers have done and will do again in the future. Anyone who expects programmers (even the best programmers) to never make mistakes is guaranteed to be disappointed.

The real issue here is that the development process did not detect the mistake and correct it in a timely manner. Code as security-critical as OpenSSL should really be code-reviewed and tested out the wazoo before it is released to the public. So either that didn't happen, or it did happen and the process didn't detect this fault; either way, a process-failure analysis and process improvements are called for.

This is not a memory management issue per se, and has nothing to do with mmap or malloc. In fact, the malloc succeeds just fine. Rather than just explaining in text, it might be easier if I simplify the issue in C parlance (this would look neater if Slashdot allowed better code formatting):
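The code was presumably along these lines: a heavily simplified sketch, not the actual tls1_process_heartbeat(), with made-up names (rec_p appears in the replies downthread) and with the missing length check shown for contrast:

```c
#include <stdlib.h>
#include <string.h>

/*
 * Heavily simplified sketch of the heartbeat handler. A record here is
 * one type byte, a two-byte claimed payload length, then the payload.
 * Returns a malloc'd echo of the payload, or NULL for a bogus request.
 */
unsigned char *handle_heartbeat(const unsigned char *rec_p, size_t rec_len)
{
    if (rec_len < 3)
        return NULL;

    /* Attacker-controlled length field, read straight off the wire. */
    size_t payload_len = ((size_t)rec_p[1] << 8) | rec_p[2];

    /*
     * THE BUG: OpenSSL omitted this check, so the memcpy below would
     * copy payload_len bytes (up to 64KB) even when the record carried
     * far fewer, echoing adjacent heap contents back to the peer.
     */
    if (payload_len > rec_len - 3)
        return NULL;

    unsigned char *resp = malloc(payload_len ? payload_len : 1);
    if (resp != NULL)
        memcpy(resp, rec_p + 3, payload_len); /* safe only thanks to the check */
    return resp;
}
```

With the check removed, a request claiming a 64KB payload while carrying none makes memcpy() read 64KB past the end of the record, which is the whole exploit.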

Because this code works more or less exactly as designed, the exploit functions across architectures and operating systems. This bug is so amateurish, I almost find it difficult to believe that it was unintentional.

This is not a memory management issue per se, and has nothing to do with mmap or malloc.

But what the grandparent post said still applies. It's how C treats memory via pointers. The issue, from looking at the code you posted, is that memcpy() copies from beyond the length of rec_p. In a sane language that doesn't treat memory as free-for-all, this isn't possible.

Because this code works more or less exactly as designed, the exploit functions across architectures and operating systems. This bug is so amateurish, I almost find it difficult to believe that it was unintentional.

It's the kind of mistake programmers make all the time in C. Sure, you can tell me battle-hardened, conscientious, professional programmers wouldn't make this mistake. Whatever, we've seen this kind of thing too many times for this sent

This is not a memory management issue per se, and has nothing to do with mmap or malloc.

But what the grandparent post said still applies. It's how C treats memory via pointers. The issue, from looking at the code you posted, is that memcpy() copies from beyond the length of rec_p. In a sane language that doesn't treat memory as free-for-all, this isn't possible.

No, that's not the issue; in fact there really isn't any significant pointer arithmetic used here. Yeah, it does use a bit of it to pull the size field out of the incoming request, but there's nothing wrong with that part of the code.

The issue is that the code allocates a buffer of a size specified by the user, without validating it, and doesn't zero the allocated memory. Yes, many languages automatically zero heap-allocated arrays, which is good, but it's also a performance cost which is often unnecessary and

This has little to do with anything C-specific. If you were re-using a buffer in some managed runtime, you would still see the same problem.

The problem is related to a missing check on a user-provided value. It is a pretty common kind of bug, really, since it isn't often obvious which level of the stack is supposed to check it (hence why assertions are helpful; this would have been a crash rather than a security hole).

The unfortunate thing is that this kind of bug detection isn't easily automated (since
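The assertion the parent has in mind might look like this (a sketch with hypothetical names; note that in production builds assert() compiles away under NDEBUG, so real code would want an always-on check):

```c
#include <assert.h>
#include <string.h>

/*
 * Copy a heartbeat payload into a response buffer. With assertions
 * enabled, a claimed length that exceeds the record dies right here
 * with a core dump instead of silently leaking heap memory.
 */
void copy_payload(unsigned char *resp, const unsigned char *rec_p,
                  size_t payload_len, size_t rec_len)
{
    assert(payload_len <= rec_len);  /* a crash, not a security hole */
    memcpy(resp, rec_p, payload_len);
}
```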

This has little to do with anything C-specific. If you were re-using a buffer in some managed runtime, you would still see the same problem.

Most managed runtimes perform bounds checks, C does not. As a result, the same bug couldn't happen in Java or C#. Of course, bounds checks come with a cost, and one that most people wouldn't want from low level code, which means that C/C++ developers must be extra vigilant.

This has nothing to do with unmanaged languages. It has to do with somebody actively sidestepping security devices that are already in place because they don't grok the way the world works outside of their test bench.

What do you think Python was written in? Here's a hint, it wasn't another managed language.

JVMs are written in C and C++, and the CLR is the same. Which managed language do you suggest we use that was not built with C?

The point isn't to eliminate C code entirely, but to minimize the number of lines of C code that are executed.

If (statistically speaking) there are likely to be N memory-error bugs per million lines of C code, then the number of memory-error bugs in a managed language will be proportional to the size of the interpreter, rather than to the size of the program as a whole.

Add to that the fact that interpreters are generally written by expert programmers, and then they receive lots and lots of testing and debugging, and then (hopefully) become mature/stable shortly thereafter; whereas application code is often written by mediocre programmers and often receives only minimal testing and debugging.

Conclusion: Even if the underlying interpreter is written in C, using a managed language for security-critical applications is still a big win.

Add to that the fact that interpreters are generally written by expert programmers, and then they receive lots and lots of testing and debugging, and then (hopefully) become mature/stable shortly thereafter; whereas application code is often written by mediocre programmers and often receives only minimal testing and debugging.

I'd wager that most of those writing/maintaining OpenSSL are not only expert programmers but, overall, more security-conscious than the authors/maintainers of interpreters. Your point would be completely valid if the topic were some bulletin board / wiki / chat program / etc. Sadly, that's not the case at hand.

I agree 100%, since there have never been bugs in languages like Java.

Also, managed languages like Java and .NET are written in other managed languages running bytecode, making them extra secure. At no time do any of these languages use libraries or environments written in lower-level languages such as C++, C, or assembler. So, to the GP's credit, programmers who know those languages are okay to die off, since we don't need them anyway.

To be fair, not many of the security bugs in Java are caused by Java code. Off the top of my head the only recent one was an early version of Java 7 that allowed untrusted code to bypass the security manager.

Most of it comes from the Java Browser plugin, which is written in C++, and why you should never run Java code in a browser.

There are scripts that can scan for the vulnerability. I'm amused that many major banks, credit card companies, and a certain well-known pay-your-friend site (at least a couple of their URLs, not all their services) have neither acknowledged the bug nor patched it.

There have been a number of sites. SSLLabs' scanner has been updated to check for Heartbleed, and will also report when the cert validity starts (handy if you want to see whether they're using a new cert): https://www.ssllabs.com/ssltes... [ssllabs.com]
LastPass has a pretty decent scanner that just focuses on Heartbleed (without all the other info that you get from SSLLabs): https://lastpass.com/heartblee... [lastpass.com]
There are some others out there as well, of course.

There's even one for client-side testing (almost as critical): Pacemaker is an awesome little PoC script (Python 2.x) for testing whether a *client* is vulnerable (many that use OpenSSL are...). https://github.com/Lekensteyn/... [github.com]

The only client side tool I've encountered is at http://filippo.io/Heartbleed/ [filippo.io]
Can't speak to the implementation or even if it actually checks. But it purports to check in real time and if you trust it you can check sites prior to changing passwords.
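For a quick manual probe from the command line, you can at least see whether a server advertises the heartbeat TLS extension at all; no extension means not vulnerable, though its presence alone doesn't prove the server is (it may be patched). This assumes your local openssl binary is new enough to know about the extension:

```shell
# Print the TLS extensions seen in the handshake and look for heartbeat
echo | openssl s_client -connect example.com:443 -tlsextdebug 2>&1 \
  | grep heartbeat
```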

One of my current roles is to provide technical support/advice for a group of project managers and business analysts. This morning a few of them had watched the Crash News Network over breakfast and came in convinced that privacy, as we know it, had come to an end. My job is to talk them off the ledge (and I actually enjoy it, they're smart people and as long as I explain it correctly, they get it... I've found that's pretty rare).

1. The issue only exposes 64k at a time. Let's assume that the average enterprise application has at least a 1G footprint (and that's actually on the low end of most applications I work with). That's 1,048,576k. At best, this means that this exploit can access about 0.006% of an application's memory at one time.

Ahh, you say, I will simply make 16,384 requests and retrieve all the memory used by the application.

2. The entire basis of this issue is that programs reuse memory blocks. The function loadAllSecrets may allocate a 64k block, free it, and then that same block is used by the heartbeat code in question. However, this code will also release this same block, which means the block is free for use again. Chances are very good (with well-optimized code) that the heartbeat will be issued the same 64k block of memory on the next call. Multi-threaded/multi-client apps perturb this, but the upshot is that it's NOT possible to directly walk all of the memory used by an application with this exploit. You can make a bazillion calls and you will never get the entire memory space back. (You're thinking of arguments to the contrary; you're wrong... you won't.)

Congratulations, much success... you have 64k internet.

3. Can you please tell me where the passwords are in this memory dump:

There will be contextual clues (obvious email addresses, usernames, etc) but unless you know the structure of the data, a lot of time will be spent with brute force deciphering. Even if you knew for a fact that they were using Java 7 build 51 and Bouncy Castle 1.50, you still don't know if the data you pulled down is using a BC data structure or a custom defined one and you aren't sure where the boundaries start and end. The fact that data structures may or may not be contiguous complicates matters. A Java List does not have to store all members consecutively or on set boundaries (by design, this is what distinguishes it from a Vector).

Long story short. Yes, there is a weakness here. However, it's very hard to _practically_ exploit... especially on a large scale (no one is going to use this to walk away with the passwords for every gmail account... they'd be very, very lucky to pull a few dozen).

This guy has retracted part of his analysis based on comments, but tries to make a case that passwords and cookies in the http headers are more likely to be exposed than keys. Remember, http-auth is still used a lot. http://blog.erratasec.com/2014... [erratasec.com]

Can you guess where the password is, now? (And those didn't even take that many tries)

I have not seen actual SSL private keys floating around just yet, but given that the original researchers said [heartbleed.com] they managed to get private keys from their own servers, I think it is reasonable to assume that some production servers must have already leaked them.

What is the point of IDS? If you detect an attack, your private keys are compromised and the game is over.

And then you try to recover, you make new keys, renew certificates, revoke the old one... but since certificate revocation is quite broken, you never recover. An attacker that stole your old private key will still be able to masquerade as the legitimate server.

Follow the proposed specification at http://heartbleedheader.com [heartbleedheader.com] to tell your users when you've patched your servers. This eliminates the guessing: "is it OK to update my password now? Do I even need to? Can I trust that I'm not being MITMed with their old SSL key that an attacker stole?" It's bad enough using the tools at hand to detect that information from a single site, let alone the hundreds you might have in your password manager.

You can revoke keys, change passwords, and patch the software, but you can't revoke the data that was already sent with them (and can now be decoded), any more than you can revoke the bits of data that could have been stolen.

You can't unsend that data, but perfect forward secrecy [wikipedia.org] means that old data can't be decrypted even if the SSL key leaks, and new data can only be decrypted with an active MITM.

...if only people would actually turn it on.
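Turning it on mostly means preferring ephemeral (ECDHE/DHE) key exchange. For Apache's mod_ssl, for instance, something along these lines (a sketch; exact cipher strings vary by OpenSSL version and taste):

```
# Prefer ephemeral key exchange so a stolen long-term key can't
# retroactively decrypt recorded traffic
SSLProtocol all -SSLv2 -SSLv3
SSLHonorCipherOrder on
SSLCipherSuite EECDH:EDH:HIGH:!aNULL:!MD5:!RC4
```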

Of course, this particular vulnerability is even worse than just exposure of on-wire traffic. It also exposed potentially anything in memory for the past two years, including the things you didn't even want to send to other people -- and it exposed them to anybody on the internet, not just

I think you completely missed my point. The hand-wringing is useless. Fix it, mitigate it, and try to move on. Any damage that has been done is done. All that can be done now is to patch and mitigate. All the wrangling going on on the 'net is amusing. The past can't be changed. We can learn from it and move on. There are plenty of ways to stop the bleeding. People are acting like the sky is falling. It's truly sad that you're one of them.

I work for a large financial organization. While fixing the hole itself was easy, having to tell a bunch (I can't even legally give you a ballpark, but it's a lot) of customers to change their passwords (or forcing them to change) is very bad PR. Plus, we don't know if any financial data was accessed. The data could literally bankrupt very large companies, or my own. This is no small problem!

There are many organizations that not only can't patch, do not know how to patch, or simply haven't completed patching, but also don't _have_ an IPS or IDS in place. In fact, even if a company is in a position (and has the know-how) to install one, using either one of these options may come with what is perceived as an unacceptable performance impact.

I managed to write an exploit for this issue within about 30 minutes. The bug is almost trivial to exploit. In my meager tests, I gathered usernames, passwo

Trivial? Excellent. Then you can show us how to trivially identify what data has been leaked/exposed and what needs to be reported to the various authorities that require reports on exposed privacy data.

What if you work for an organization that has hundreds or thousands of users who connect to a SSL VPN? Re-issuing a single certificate isn't so bad, but re-issuing many certs (and working with end users to roll them out) sounds like a nightmare.
Many businesses are also responsible for more than one website, and / or are heavily regulated. Just getting lots of users to change their passwords is bad enough, but if you have to tell them that their credit card number or medical information may have been com

Well, Microsoft's CAPI (CryptoAPI) actually, not IIS. IIS uses CAPI, but IIS is no more a crypto toolkit than Apache or lighttpd are. A vuln in CAPI (they've happened before) could also affect clients (IE, Outlook, anything else using the platform APIs...).

Besides, we're still waiting on a NSS issue. NSS isn't so much *broadly* used - I know of only a few product families that use it - as it is *heavily* used. The product families in question are Mozilla anything (Firefox, mostly; the N stands for "Netscape

0.9.8 doesn't support any protocol newer than TLS 1.0, so while it's safe from heartbleed it's also old and verging on deprecated.

Also, it's not that rare for software to use its own copy of OpenSSL, either as a bundled library or statically compiled into the program. I don't actually know of any Mac software that I'm sure does this, but that's not saying much since I don't use a Mac. Things I would expect to find it in are cross-platform programs that use OpenSSL but want a newer branch than 0.9.8 (Python