How To Protect Against Heartbleed And Other Vulnerabilities

posted in Mozilla, Whatever by kumar on Wednesday Apr 9th, 2014 at 12:01a.m.

The OpenSSL heartbleed bug was a serious kick to the Internet's collective
ass.
This video provides a quick overview if you want the details.
In summary, an attacker could send a heartbeat request claiming a payload size
of up to 64 KB and trick OpenSSL into replying with that much adjacent server
memory. WTF?!
To understand how bad this was I spent a minute hacking on this script
that was going around.
I pointed it at login.yahoo.com (which is no longer vulnerable) and tried to see
if I could catch a username and password flying by. I had one within 30 seconds.
That's how bad it was; you could read random parts of the server's memory which
may contain passwords, private keys, or whatever else OpenSSL was
processing for current site visitors.
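The core of the bug can be sketched in a few lines (a simplified illustration of the TLS heartbeat record layout from RFC 6520, not a working exploit):

```python
import struct

# Simplified sketch of a TLS heartbeat message (RFC 6520 layout).
# The bug: OpenSSL trusted the attacker-supplied payload length instead
# of checking it against the number of bytes actually received.

def build_heartbeat(payload: bytes, claimed_length: int) -> bytes:
    # type (1 = heartbeat_request), claimed payload length, payload
    return struct.pack('>BH', 1, claimed_length) + payload

honest = build_heartbeat(b'ping', 4)          # length matches the payload
malicious = build_heartbeat(b'ping', 0xFFFF)  # claims 64 KB, sends 4 bytes

# A vulnerable server would echo back claimed_length bytes -- the extra
# ~64 KB being whatever happened to sit next to the payload in memory.
print(len(honest), len(malicious))  # both are 7 bytes on the wire
```

Both messages are identical except for the claimed length, which is why the fix was simply to discard heartbeats whose stated length exceeds the bytes received.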

I had stolen someone's credentials. Game over, right? How do you protect
yourself against something as bad as this?

So, I logged into Yahoo (just for
research purposes) and first I saw a captcha. Ah, nice, Yahoo detected that
the user was logging in from somewhere new. Naturally I entered the captcha (just doing
research!) but next it presented me with one of the user's security questions. Cool.
Yahoo had prevented the attack.

This is how you protect yourself against security vulnerabilities.
Put up as many barriers as possible (within reason). Heartbleed wasn't the first
serious SSL/TLS vulnerability and it won't be the last. Developers make stupid
mistakes. Software is written by humans. It's easy to forget to check
something or to miscalculate some other detail when designing the security of a system.
You need to
protect yourself from your own stupidity. BTW, that links to a good article from Ben
Adida on the same topic which is worth a read.

I've been working on an internal API at Mozilla that
signs APKs (Android packages) for the Firefox Marketplace.
It would be very bad if someone compromised this because
they could spread malware like wildfire. With help from our security team
we arrived at a loosely coupled system where each service has a specific role.
You'd have to compromise several parts of the system to start doing real damage.
Also, not all developers work on all systems which is important since a typical
way to break into a system is to break into employee machines.

Even though the signer is an internal API that we can firewall off we
decided to use Hawk (based on OAuth 1.0) for communication; it has a lot of
useful security properties.
(Out of this grew Mohawk for Python if you're interested.)
In Hawk, two parties who want to communicate use a shared
private key to sign requests.
When you make a request you have to sign it; the receiving end will
ignore it if the signature doesn't match. Someone asked me, well, how is that different from
logging in and passing a session token around over SSL? It's safer because there's no
long-lived secret on the wire: an intercepted signature covers only that one request
and is useless for forging another.
Besides the signature, which prevents tampering, there are a timestamp and a nonce to protect
against replay attacks. If you were to somehow intercept a username, a password, or even
a session token you could do a lot of damage. Hawk also lets the client verify the
signature of the response, making it a two-way secure channel.
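The signature-plus-timestamp-plus-nonce idea can be sketched with plain stdlib HMAC (a simplified illustration of Hawk-style signing, not the actual Hawk wire format; the key, path, and field layout are made up for the example — use mohawk for real requests):

```python
import hashlib
import hmac
import os
import time

# Hypothetical shared key for illustration; both parties hold it.
SECRET = b'shared-private-key'

def sign_request(method: str, path: str, body: bytes) -> dict:
    # MAC over the method, path, body, a timestamp, and a one-time nonce.
    ts = str(int(time.time()))
    nonce = os.urandom(8).hex()
    msg = '\n'.join([method, path, ts, nonce]).encode() + b'\n' + body
    mac = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return {'ts': ts, 'nonce': nonce, 'mac': mac}

seen_nonces = set()  # server-side replay cache (a real one needs expiry)

def verify_request(method, path, body, auth, max_skew=60):
    if abs(time.time() - int(auth['ts'])) > max_skew:
        return False  # stale timestamp
    if auth['nonce'] in seen_nonces:
        return False  # replayed request
    msg = '\n'.join([method, path, auth['ts'], auth['nonce']]).encode() + b'\n' + body
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, auth['mac']):
        return False  # tampered request or wrong key
    seen_nonces.add(auth['nonce'])
    return True

auth = sign_request('POST', '/sign-apk', b'{"apk": "..."}')
print(verify_request('POST', '/sign-apk', b'{"apk": "..."}', auth))  # True
print(verify_request('POST', '/sign-apk', b'{"apk": "..."}', auth))  # False (replay)
```

Notice that capturing `auth` off the wire buys an attacker nothing: replaying it trips the nonce cache, and changing the body invalidates the MAC.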

Of course, something could always go wrong at each step of the way. A bug like
Heartbleed would still be catastrophic.
It just helps to have many barriers in many different places because most
vulnerabilities only allow partial access to a system.
Putting up many barriers like this is known as defense in depth.

It's easy to go overboard and add complexity to your project though.
In our payments system we made a dedicated service whose sole responsibility
is to handle sensitive payment API credentials (i.e. it makes the money flow).
This is good. It's isolated; only a few other internal servers can talk to it.
For the few moments when the service actually sends login credentials to payment APIs
we added a feature where the service
proxies itself, parses the result, and responds to the client as if no proxying
occurred.
Did that even make sense? Probably not because it's confusing. This
self-proxying feature makes the
system a bit hard to work with. I guess it's nice because if some
vulnerability pops up in a running server then the proxy might
stand more of a chance since it runs less frequently. I'm still not sure the
added complexity is worth the defense. However, for something like Heartbleed it
would have been nice, since the ultra-sensitive credentials are NOT held in the main
server's memory. They are in another server entirely. Hmmm.
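If it helps, the self-proxying idea can be sketched roughly like this (hypothetical names throughout; the real service is more involved and talks over HTTP, not a subprocess):

```python
import json
import subprocess
import sys

# Rough sketch of the self-proxying idea: the main server never loads
# the payment credentials. When they're needed, it hands the request to
# a separate short-lived process that holds them, parses the result,
# and responds to the client as if no proxying occurred.

# Inlined stand-in for the credential-holding helper process; in the
# real system this would be a separate, firewalled service.
HELPER = '''
import json, os, sys
# Only this short-lived process ever reads the sensitive credential.
cred = os.environ.get("PAYMENT_API_KEY", "demo-key")  # hypothetical env var
req = json.load(sys.stdin)
print(json.dumps({"status": "ok", "item": req["item"]}))
'''

def charge_via_helper(request: dict) -> dict:
    # The credential lives only in the helper's memory, so a bug that
    # leaks the main server's memory (a la Heartbleed) can't expose it.
    proc = subprocess.run(
        [sys.executable, '-c', HELPER],
        input=json.dumps(request).encode(),
        capture_output=True, check=True)
    return json.loads(proc.stdout)

print(charge_via_helper({'item': 'premium-app'}))
```

The trade-off is exactly the one described above: the indirection is confusing to work with, but the secret spends far less time in any long-running process's address space.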