Heartbleed, the Branding of a Bug, and the Internet of Things

One week later, and the Heartbleed Bug news cycle is winding down without any known reports of catastrophic damage. A case of security wonks crying wolf? No, says cryptographer and security expert Bruce Schneier, who is known for measured, thoughtful responses to vulnerabilities and called this one “catastrophic.” HBR spoke with Schneier about what he considers the surprisingly effective response to Heartbleed, why security is so difficult because of the humans involved, and why he’s glad Heartbleed surfaced now rather than a few years from now, when the Internet of Things will make bugs much harder to fix.

You’re not known for hyperbole, but on your blog you called Heartbleed “catastrophic” and said that on a scale of 1 to 10, it’s an 11. What makes it so bad? Heartbleed is a vulnerability that affected an enormous number of servers on the Internet, and affected them in unpredictable but potentially disastrous ways. Turning the vulnerability into viable attack code was trivial — a few lines of scripting code are all you need — and the attack could be executed without leaving a trace. Stealing the SSL key of a site is an enormous deal, and one that affects all of the site’s users. Fixing it was hard, and required multiple steps and coordination between people. In that way, the fix was both technical and procedural. Basically, it was so bad because there was so much uncertainty. We didn’t even know how to quantify the risk.
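The mechanics behind that triviality can be sketched in a few lines. This is an illustrative simulation only, not real OpenSSL code: the names, the fake “server memory,” and the secret string are all invented for the example. The essence of the bug was that a heartbeat request carried a claimed payload length that the server echoed back without checking it against the actual payload size, leaking whatever happened to sit in adjacent memory.

```python
# Simplified simulation of the Heartbleed over-read (NOT real OpenSSL code).
# The fake "memory" holds a 7-byte heartbeat payload followed by data
# that a correct server should never reveal.
SERVER_MEMORY = b"hb:ping" + b"SECRET_KEY_MATERIAL"

def heartbeat_vulnerable(memory: bytes, claimed_len: int) -> bytes:
    # BUG: trusts the attacker-supplied length and echoes that many bytes,
    # leaking memory beyond the actual payload.
    return memory[:claimed_len]

def heartbeat_patched(memory: bytes, claimed_len: int, actual_len: int) -> bytes:
    # FIX: discard any request whose claimed length exceeds the real payload.
    if claimed_len > actual_len:
        return b""
    return memory[:claimed_len]

# The attacker sends a 7-byte payload but claims it is 26 bytes long.
leak = heartbeat_vulnerable(SERVER_MEMORY, 26)   # echoes past the payload
safe = heartbeat_patched(SERVER_MEMORY, 26, 7)   # returns nothing
```

The real exploit worked the same way over a TLS connection, which is why it left no trace in server logs: to the server, each request looked like an ordinary heartbeat.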

Has anything changed in your opinion about how bad it is? Yes and no. One site suggested it may not be as easy to get private SSL keys as we thought, which would make it less dire. And the process of patching the vulnerability and regenerating keys and certificates is going more smoothly than anticipated. But we’re finding the vulnerability in unpatchable hardware systems, and we haven’t yet seen how criminals have taken advantage of this.

It appears that the introduction of this bug into the OpenSSL encryption system was an honest mistake. Can we afford to have honest mistakes when coding encryption? Unfortunately, everything will always have the risk of mistakes. People are fallible, and everything we do involves people.

But we ought to come as close as we can to eliminating such mistakes. When websites say they are secure, what can we expect that to mean? We can expect it to be more marketing than anything else. Secure isn’t an on-off binary property. It’s relative and situational. I feel secure in my home, even though it’s vulnerable. I feel secure on airplanes, even though they occasionally crash. Websites are no different.

Do people understand this risk the way they do those others? Are they aware of the hazards of being so ubiquitously connected? People definitely don’t understand SSL and what it does and does not protect. But, in general, Internet security is pretty good. The Internet is surprisingly safe. We’re able to work and play on the Internet without many problems. Of course there’s a lot of cybercrime, but it’s minor.

The social Internet seems like the perfect medium to create overreaction and hysteria about a bug like this. Surprisingly it hasn’t happened. It has all felt rather orderly and measured. Why is that? We in the security community are generally terrible about communicating information about vulnerabilities to the general public. Heartbleed has been an exception; the researchers did an excellent job explaining the problem and the fix. They had a slick and informative website. And they gave the vulnerability a cool name and a logo. That logo worked; all the news outlets used it, and it gave people a visual reminder of the story. It created broad awareness in a smart way.

In other words, it was branded. Yes. There’s a risk that we’re going to be accused of “crying wolf.” If there isn’t blood on the streets or planes colliding in midair, people are going to wonder what all the fuss was about — like Y2K. If you slap logos on every vulnerability, people will ignore them and distrust your motives. But it’s like storms. The bad ones get names for a reason.

What else are we learning from Heartbleed? We’ve learned how hard the human aspects of a security system are to coordinate. We’re learning that we don’t have the infrastructure necessary to quickly revoke millions of certificates and issue new ones. We’re learning that some of our critical open-source software is maintained by volunteers who have busy lives, and that often no one else is evaluating that software’s security. We’re learning how complicated the process of disclosing a vulnerability of this magnitude is. Some larger companies got advance warning so they could fix their sites. Those that didn’t get advance warning are understandably annoyed, but if everyone gets advance warning then it isn’t advance warning anymore. We’re learning how difficult it is to build security involving people.

On a distributed system like the Internet, how can we ensure near-total eradication of vulnerable systems? We can’t, but we can monitor progress. We can scan the entire Internet and compile a list of vulnerable sites in less than half an hour. Many groups are doing this, and we’re learning that most sites have patched and re-secured their systems. I worry less about them, and more about the embedded systems — like cable modems and routers — that don’t have a means of upgrading. With devices like those, fixing the vulnerability involves a trash can, a credit card, and a trip to the computer store.
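The kind of monitoring described above relies on massive parallelism: checking one host is fast, so scanning millions is mostly a matter of running checks concurrently. Here is a hedged sketch of that structure; the `check_host` function is a stand-in stub (a real scanner would send a crafted heartbeat over TLS and look for extra echoed data), and the hostnames are invented.

```python
# Sketch of a parallel vulnerability scan. check_host is a stub:
# a real Heartbleed scanner would open a TLS connection and send a
# malformed heartbeat. Here, hosts ending in ".example" stand in
# for "still vulnerable".
from concurrent.futures import ThreadPoolExecutor

def check_host(host: str) -> bool:
    # Stub check standing in for a real network probe.
    return host.endswith(".example")

def scan(hosts: list[str], workers: int = 100) -> list[str]:
    # Probe many hosts concurrently and return those still vulnerable.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = zip(hosts, pool.map(check_host, hosts))
    return [host for host, vulnerable in results if vulnerable]

vuln = scan(["a.example", "b.org", "c.example"])
```

Because each probe is independent and network-bound, throughput scales with the worker count, which is how groups could cover the public Internet in under half an hour.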

Beyond cable modems and routers, there’s the Internet of Things. Should we be thinking about Heartbleed in the context of that phenomenon? Yes. I recently wrote an essay that talked about the difficulty of securing all of the low-cost embedded computer systems that are going to become common in our lives over the next few years. These are devices that are made cheaply with very low margins, and the companies that make them don’t have the expertise to secure them. Heartbleed would have been much worse in a world of Internet-enabled thermostats, refrigerators, cars, and everything else, and that’s the world we’re headed toward.

It sounds like we’re going to need some way to classify infrastructure as critical and non-critical. Or we’ll need to license the people who are allowed to tinker with critical code like OpenSSL? Are we moving toward a more deeply regulated environment? Should we be? A lot of our critical computer infrastructure is in private hands, both corporate and community. There’s value in having regulations surrounding this code, but there are risks as well. A better approach is to build resilient systems that can survive things like Heartbleed. And remember, this is a singular event. It’s not like this kind of thing has been happening every month, or even every year. This is the worst vulnerability the Internet has had to weather in a long time.

When you hear that OpenSSL, which is considered critical infrastructure, is being developed by four underfunded developers, are you surprised? Should we be shocked to know that this critical piece of security is an economically challenged, somewhat neglected coding project? Yes, it was surprising. And again, this is where resilience is important. It’s going to be a long time before we are sophisticated enough to prevent these kinds of vulnerabilities. We need to learn how to thrive despite them.
