Is this a remote exploit? Does this mean my client can be overrun if a server throws me a bad packet or two?
I guess my other question is: how can the most utilized utility on a system still have unchecked overflows? It has to have been audited about a trillion times, right?
Please help, half-assed Linux admins want to know!

Truly, the OpenSSL code is quite hideous. I mean, from what I saw they basically wrap calls to realloc and that was part of the problem because they did it half-assedly.

But even the API is a horrible nightmare to use and documentation is scant. Sure you can find a billion *examples* that have all copied from the examples given in the source distribution (which were third-party contributed) but actually finding out what's necessary or important is almost impossible from the documentation. You literally have to either copy the (pretty much undocumented) example, or roll your own and hope for the best.

I wrote some code using OpenSSL a while back to verify two PEM certificates (one signing the other). God, was that a trek through uncharted territory. In the end, I just made a "broken" certificate chain with fake/broken certificates in every way I could imagine and just kept testing my code until I'd taken account of every problem with the certs that I could reasonably generate myself. Even getting the plain certificate name out of a certificate can be an exercise in guesswork.

I'm not at all surprised, to be honest. And that it was something as simple and obvious (and hidden behind deliberate casts that would stop the compiler warnings) is hardly a shock, once you've tried to plough through their code.

Right. I've posted elsewhere that the documentation, what there is of it, is obscure and minimal. I'd probably get the O'Reilly book if I had to work with it again - not sure how good that is but it has to be better than the docs.

Have you ever looked at the OpenSSL code? It could have the Ark of the Covenant hidden in all that mess somewhere for all we know and we'd never find it.

That's one reason OpenSSH has been moving towards more restricted/careful use of OpenSSL, and I believe in this case it actually makes OpenSSH not vulnerable, because this is (yet another) bug in the ASN.1 parser, and OpenSSH doesn't use the OpenSSL ASN.1 parser anymore. Sometime a few years ago they replaced it with a minimal, special-cased, audited internal version, which can't handle full ASN.1, but can handle the subset used in OpenSSH. See section 3.2 of this paper (pdf) [openbsd.org] for a bit more.

This is because ASN.1 is an insanely complex, wasteful and redundant standard that should have never been adopted for security related standards where simplicity is an important contributor to writing secure code.

ASN.1 has been the bane of all the standards that ever adopted it (SNMP anyone?) and should have been shot years ago.

ASN.1 has been the bane of all the standards that ever adopted it (SNMP anyone?) and should have been shot years ago.

So what is the alternative? While I grasp the main concepts of network security, I'm relatively ignorant of the nitty-gritty details of endpoint security negotiation. What, if anything, is a better alternative to ASN.1, and is there a URL that might hint at how one might configure it (indicate a preference) for SSH or SSL connections?

There's not much to do about it as an end user; it's part of the protocol. I think the parent poster is just arguing that adopting ASN.1 in the definition of your protocol is a bad idea, so future protocols should avoid doing so.

The advisory [openssl.org] says that SSL/TLS code is not affected, and only software that parses ASN.1/DER structures from BIO or FILE streams could be impacted. Parsing ASN.1 from memory is also not affected. That would appear to restrict the vulnerable software quite a bit.

Whether you have a remote vulnerability or not would seem to depend very highly on what software you're running.

Those claims are correct. While an SSL BIO could be a network socket, the ASN1 code never talks directly to the SSL BIO code. The SSL protocol has to be parsed first to find the ASN1 structures, and by that time, they're not in the SSL BIO any more.

My assumption (and no, I'm not looking at the code) would be that the SSL/TLS handshake would involve parsing certificates (which contain ASN.1) supplied by the remote host, which you'd be reading from a socket wrapped in a BIO stream. Is that not the case?

I guess it's not unlikely that by happenstance, the SSL/TLS handshake reads the entire certificate into memory then parses that. But is that confirmed?

I guess my other question is, how can the most utilized utility on a system still have unchecked overflows?

Have you ever looked at the OpenSSL code? It could have the Ark of the Covenant hidden in all that mess somewhere for all we know and we'd never find it.

No kidding. I've seen a lot of horrible messes in my career, but OpenSSL tops them all. There have to be hundreds of serious security bugs lurking in there... the only thing saving us is that it's so nasty not even the black hats want to dig in there to find them. Good security code should be as simple and straightforward as possible, to make it easy to verify. The authors of OpenSSL took a... different approach.

There have to be hundreds of serious security bugs lurking in there... the only thing saving us is that it's so nasty that the black hats who put in the effort to find the bugs for their governments or corporate espionage clients get paid damn well for their work and wouldn't dream of disclosing their findings.

Have you ever looked at the OpenSSL code? It could have the Ark of the Covenant hidden in all that mess somewhere for all we know and we'd never find it.

Yeah. OpenSSL has a problem -- it's good enough.

It's poorly documented. It triggers all kinds of compiler warnings if you turn them on. Valgrind throws up all kinds of complaints. The code is really hard to grok.

As a user of the API, there are all kinds of gotchas. Best practice isn't reflected in the defaults; you have to pick up the best flags to pass in by examining how other people have done it, or asking around.

But, it's good enough that so far nobody's thought it's worth the effort to write a new SSL library from scratch. It's good enough that so far nobody's thought it's worth the effort to really firm up the free documentation (so, buy the O'Reilly book instead).

After all, you go through a bunch of pain understanding enough of OpenSSL to put it in your app -- but you only go through that pain once, and after that, it works.

But, it's good enough that so far nobody's thought it's worth the effort to write a new SSL library from scratch.

Err, yes, apart from GnuTLS, Mozilla's NSS, Gutmann's cryptlib, yaSSL (there are enough that one of them is named "yet another"...), PolarSSL, and probably more -- and that's only counting C libraries available under an open source license.

OK, let me revise that. It's good enough that so far no replacement SSL library has been anywhere near as widely adopted.

It's "Nobody ever got fired for using OpenSSL" territory.

Years ago when I worked for IBM, we used IBM's internal GSKIT SSL libraries -- at around the time I left, they were bringing OpenSSL code into GSKIT, and many of their products were adopting OpenSSL instead of GSKIT.

Thanks for drawing NSS to my attention. I had always assumed Netscape and Mozilla used OpenSSL.

I'm guessing it's the fact a random hacker looking to "add some security" to their project has heard of OpenSSL and already has it on their system.

Most developers are not security experts, so they will assess a library on awareness, features, reputation, etc. Assessing security is not easy, so the choice is almost certainly made on the other factors, and it's rational not to trust a library you've not heard of before for security. The other libraries have their own limitations, missing features, or other warts.

hehe, I have trouble looking at the changelogs, let alone the code :).
I guess ignorance is bliss; I was thinking OpenSSL was a nice simple protocol that made everything 100% secure. It's nice living in my bubble where nothing can harm me! :)
P.S. I always thought the Ark of the Covenant was in AtheOS. :P

Does this mean my client can be overrun if a server throws me a bad packet or two?

Yes.

Based on the advisory, I can't fully agree with either of these statements. The advisory states:

Any application which uses BIO or FILE based functions to read untrusted DER format data is vulnerable.

DER is a binary encoding used for certificates and keys. For the most part it's relatively rare to handle untrusted certificates or keys in that form. I suppose it's possible if you're doing some form of authenticating the client end as well as the server end via SSL.

Please correct me if I'm wrong, but I don't see much evidence this vulnerability is anything worth worrying about for the vast majority of people.

For the most part it's relatively rare to handle untrusted certificate keys. I suppose it's possible if you're doing some form of authenticating the client end as well as the server end via SSL.

It really depends. If you're using OpenSSL purely as an SSL server and never use client certs then you should be OK (there are some weird-ass things involving OCSP response pinning where you can still get the server if you can impersonate the CA that it gets the OCSP info from, but that's getting a bit esoteric and I'm not sure how far OpenSSL supports that stuff yet). OTOH if you use client certs, or use it to run an OCSP server, or a CA, or do any kind of cert processing (including the relatively common

For some applications, it will be; please see the advisory [openssl.org]:

Any application which uses BIO or FILE based functions to read untrusted DER
format data is vulnerable. Affected functions are of the form d2i_*_bio or
d2i_*_fp, for example d2i_X509_bio or d2i_PKCS12_fp.
Applications using the memory based ASN1 functions (d2i_X509, d2i_PKCS12 etc)
are not affected. In particular the SSL/TLS code of OpenSSL is *not* affected.
Applications only using the PEM routines are not affected.

"Only" a problem for systems where size_t is different from int. So the 15% of you still running in a 32 bit world can rest easy.
This also means that on a mixed 32/64 bit system, you could use 32bit libraries until you get around to patching everything.
Remember, a whole bunch of stuff uses ssl. Have fun fixing your Java jars.

I don't think that's accurate. According to the incident report, the problem is passing a signed int to a function expecting an unsigned int. That means passing unsigned values > 2^(n-1)-1 will cause unexpectedly large allocations leading to a heap overflow regardless of whether n is 32, 64, or 8.

According to the incident report: Producing DER data to demonstrate this is relatively easy for both x86 and x64 architectures.

"Only" a problem for systems where size_t is different from int. So the 15% of you still running in a 32 bit world can rest easy. This also means that on a mixed 32/64 bit system, you could use 32bit libraries until you get around to patching everything. Remember, a whole bunch of stuff uses ssl. Have fun fixing your Java jars.

This advice is wrong. The problem exists in both 32- and 64-bit libraries. Using 32-bit binaries will NOT protect you from this problem.

The issue is a signed/unsigned mismatch: when the unsigned number reaches 2^31 and gets passed to a signed variable, it is treated as a negative number, with catastrophic results.

If you handle on-disk certificates using a program (e.g. Apache, which reads them from /etc/ssl), there's a potential for arbitrary code execution (literally, the attacker writing what they want to the heap).

Now think about browser's cached certificates, or a browser that might write them to disk and then read them from there rather than the network, or utilities that "do things" with PEM certificates, or basically anything that uses SSL with an on-disk certificate that could come from a malicious source.

No, your browser's SSL session is probably still quite safe, but it's far from being a non-issue from a security standpoint.

"The old data is always copied over, regardless of whether the new size will beenough. This allows us to turn this truncation into what is effectively:

memcpy(heap_buffer, , );"

Letting the attacker write to arbitrary/unexpected memory is always a security issue... [I guess it might not be easily exploitable in all cases based on system setup/random memory allocation, etc though]