Debugging The Myths Of Heartbleed

Does Heartbleed really wreak havoc without a trace? The media and many technical sites seemed convinced of this, but some of us were skeptical.

Now that IT organizations across the globe have had time to recover from the recent Heartbleed flaw, what can we learn from this incident? The vulnerability was discovered in an OpenSSL library used by thousands of websites on public and private networks and had gone unnoticed for years. Attackers could force a web server to reveal data from inside an SSL session, completely bypassing encryption. As if that weren't bad enough, initial reports claimed that the Heartbleed attack on the TLS/DTLS heartbeat extension occurred "without a trace."

It would be no exaggeration to characterize Heartbleed as one of the creepiest security vulnerabilities ever to lurk across the Internet. Here's a quick summary of its timeline:

2011 -- A German developer accidentally creates a security vulnerability in an OpenSSL extension with a single line of code

2011 to 2014 -- Years go by and no one notices the vulnerability; despite the code being open source, it quietly puts millions of users at risk

March 21, 2014 -- The vulnerability is discovered independently by Google engineer Neel Mehta and the Finnish security firm Codenomicon

March 21 to April 7 -- Google, CloudFlare, Akamai, Red Hat, and Facebook complete unannounced patching of their OpenSSL libraries

April 7 -- MITRE officially reports the Heartbleed bug as CVE-2014-0160, and the OpenSSL Project immediately issues version 1.0.1g, which fixes the vulnerable code

April 7 until now -- Vendors of products that use OpenSSL scramble to identify, diagnose, and update their products.

What's in this bug and what's at stake

Let's be clear: this isn't a flaw in SSL/TLS or in the heartbeat extension itself (RFC 6520). The vulnerability exists within the OpenSSL implementation of the extension. Heartbleed exploit code allows attackers to force a web server to reveal up to 64 KB of memory per request via a buffer over-read: the attacker claims a larger heartbeat payload than was actually sent, and a vulnerable server echoes back adjacent memory to make up the difference. While it isn't possible to predict what might be revealed, successful attacks have obtained session keys, passwords, and other information that should normally remain confidential.
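To make the over-read concrete, here is a minimal sketch (in Python, using the field layout from RFC 6520) of how a heartbeat request is framed, and how a malicious request can claim a payload length far larger than the payload it actually carries. This is an illustration of the message structure, not working exploit code:

```python
import struct

def build_heartbeat(claimed_len: int, payload: bytes) -> bytes:
    """Frame a TLS heartbeat request (RFC 6520 layout).

    Heartbeat message: 1-byte type (0x01 = request), 2-byte payload
    length, then the payload itself. The message is wrapped in a TLS
    record with content type 0x18 (heartbeat).
    """
    message = struct.pack(">BH", 0x01, claimed_len) + payload
    # TLS record header: content type, protocol version (TLS 1.1), record length
    return struct.pack(">BHH", 0x18, 0x0302, len(message)) + message

# A benign request claims exactly the bytes it sends:
benign = build_heartbeat(len(b"ping"), b"ping")

# A Heartbleed probe claims up to 64 KB while sending nothing. A
# vulnerable server trusts claimed_len and echoes back that many bytes,
# leaking whatever sits next to the real payload in process memory.
probe = build_heartbeat(0xFFFF, b"")
```

A patched server (OpenSSL 1.0.1g and later) checks the claimed length against the record's actual size and silently discards mismatched requests.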

Finding the hidden evidence

Does Heartbleed really wreak its havoc while leaving nary a trace? The attacks surely leave some evidence behind: packets. Packets almost always tell a detailed story of what has really happened, including in the case of Heartbleed. The trick, of course, is to have the packets.

It's true that a server attacked with a Heartbleed exploit is unlikely to reveal any evidence. Stored packets, meanwhile, do tell the story of a successful Heartbleed exploit even after the adversary has stopped an active attack.

Detecting a prior Heartbleed exploit

Continuous monitoring of a network can reveal active Heartbleed attacks. More importantly, with a sufficiently large rolling buffer of packet capture data, it becomes possible to look back in time to before the public disclosure of the Heartbleed vulnerability. Investigating this data can reveal whether vulnerable servers were actually exploited.

A Berkeley Packet Filter (BPF) applied to traffic in the network can automatically flag larger-than-normal TLS heartbeat responses from servers. Wireshark, tcpdump, and other tools can then analyze the captured packets to confirm an attack.

Why BPF? BPF engines are fast -- something of a requirement given the sheer amount of traffic passing through modern networks. Just as important for this use case, BPF filter syntax is understood by the majority of operating systems and packet-analyzing software. That wide availability is the main reason Heartbleed attacks can become easy to detect.

BPF engines are available for Linux, Mac OS, and Windows (via WinPcap), and can run on cloud computing instances. BPF and tcpdump are present on most network appliances (such as firewalls, load balancers, and application delivery controllers) and are often accessible through an administrative console for troubleshooting. And, of course, the packet analysis and storage engines in products designed for network performance management almost all support BPF.

As a result, many individuals in the technical community have published BPF filters appropriate for detecting Heartbleed attacks. Most of these filters examine traffic on port 443/tcp (the default HTTPS port). The usual size threshold is 69 bytes; this can be adjusted upward or downward to reduce false positives or false negatives as needed.
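As an illustration of the detection logic these filters encode (a sketch, not a drop-in production tool), the snippet below walks a reassembled server-to-client TLS byte stream and flags any heartbeat record whose length exceeds the 69-byte threshold. The offsets assume the standard 5-byte TLS record header:

```python
import struct

HEARTBEAT = 0x18  # TLS content type for heartbeat records

def flag_large_heartbeats(tls_stream: bytes, threshold: int = 69):
    """Walk concatenated TLS records and return (offset, record_length)
    for every heartbeat record larger than the threshold -- the telltale
    sign of a server bleeding memory back to a requester."""
    hits = []
    offset = 0
    while offset + 5 <= len(tls_stream):
        # Record header: 1-byte content type, 2-byte version, 2-byte length
        ctype, _version, rlen = struct.unpack(">BHH", tls_stream[offset:offset + 5])
        if ctype == HEARTBEAT and rlen > threshold:
            hits.append((offset, rlen))
        offset += 5 + rlen  # skip header plus record body
    return hits

# Synthetic stream: one normal-sized heartbeat, then one oversized reply.
normal = struct.pack(">BHH", HEARTBEAT, 0x0302, 32) + b"\x00" * 32
bleed = struct.pack(">BHH", HEARTBEAT, 0x0302, 16384) + b"\x00" * 16384
print(flag_large_heartbeats(normal + bleed))  # -> [(37, 16384)]
```

The published BPF filters express the same comparison directly in tcpdump's filter syntax, matching heartbeat records whose length bytes exceed 0x45 (69).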

Final thoughts

Having a nimble awareness of the data on your network, a basic understanding of how secure services should normally operate, and the ability to investigate anomalies can inoculate you against the unavoidable hype. Packets do not lie -- but you have to capture them to reveal their truths.

A large rolling buffer of packet capture data provides an ideal basis for forensics. From it, you can determine what actually occurred while vulnerability exploits were running wild. Certainty is always better than mere speculation over hypothetical security breaches.

Steve actively works to raise awareness of the technical and business benefits of Riverbed's performance optimization solutions, particularly as they relate to accelerating the enterprise adoption of cloud computing. His specialties include information security, compliance, ...

This is a very good rundown of Heartbleed with great information about how to uncover possible breaches. I was surprised to hear it was used against CHS, given the amount of publicity and the fact that they are not some small business.

Reading a related article about Heartbleed and Community Health Systems by Sara Peters, "Heartbleed Not Only Reason For Health Systems Breach," also on Dark Reading, I see many things that added to what was taken through Heartbleed. According to the detailed analysis of the attack, one user's credentials were taken through Heartbleed, and the patient data was then taken using those credentials.

The expanded attack was aided by weaknesses in their security posture, namely:

They only watched what came in, and ignored outgoing traffic;

Strange behavior by a user logging in from a new remote location should have raised a flag, but did not for the duration of the attack;

VPN logins that only used user names and passwords, when there are many ways to add security to a VPN (like trusted-device information added to user credentials);

A monolithic data structure, with the same level of security applied to the entire structure.

To me it appears that they threw up a firewall, installed spyware and virus scanning, and called it secure. The problem is that once a way is found through the firewall, almost all the data is there for the taking. While no health or payment data was taken this time, I have seen nothing indicating that data was any more secure... and personal experience tells me it most likely was not.

I am not the biggest fan of encryption, but it helps secure data at rest and in flight, especially when coupled with two-factor authentication -- such as user credentials coupled with trusted-device information. While it is not impossible to emulate a trusted machine, if basic security processes are followed and the machine info is double hashed before it is sent, it becomes much harder.
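As a hypothetical sketch of that double-hashing idea (not CHS's or any vendor's actual scheme -- the function name and key handling here are illustrative assumptions), the trusted-device info can be hashed once on the client and then hashed again with a server-held secret, so the raw machine details never cross the wire:

```python
import hashlib
import hmac

def device_fingerprint(machine_info: str, server_key: bytes) -> str:
    """Double-hash trusted-device info before it accompanies a login.

    First hash: computed from the raw hardware details, so those details
    themselves never leave the machine. Second hash: keyed with a
    server-side secret, so a captured fingerprint alone cannot simply
    be recomputed or forged without that secret.
    """
    first = hashlib.sha256(machine_info.encode("utf-8")).digest()
    return hmac.new(server_key, first, hashlib.sha256).hexdigest()

key = b"per-user-secret"  # hypothetical server-held secret

# The same device always yields the same fingerprint for a given key...
fp1 = device_fingerprint("laptop-serial-XYZ|nic-00:11:22", key)
fp2 = device_fingerprint("laptop-serial-XYZ|nic-00:11:22", key)
# ...while a different machine produces a different one.
fp3 = device_fingerprint("desktop-serial-ABC|nic-33:44:55", key)
```

The server then compares the submitted fingerprint against the one stored at enrollment, alongside the user's normal credentials.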

CHS is not a non-profit, so it seems like they should be able to afford better security.

Sloppiness? Inattention? Lack of awareness or concern for risk? It perplexes me that people don't take this stuff seriously. Heartbleed was publicly disclosed on April 7. Juniper issued security advisory JSA10623 and knowledgebase article KB29004. We don't know when these were first issued -- all we can see are "last updated" dates of April 30 and May 6, respectively. Some reports indicate Juniper issued patches only three days after April 7. Regardless, it's now late August. The hospital didn't update its gear after more than three (or four) months? That's inexcusable, especially for something like a VPN server, a device intended to deposit users on the inside of a corporate network.

I understand that IT departments have processes and change windows. But when a vendor issues an out-of-band patch for a flaw the vendor labels as critical, it's time to throw the change window, well, out the window. Install the patch right away. Emergency but controlled downtime dealing with a patch is much preferable to the disastrous and devastating downtime caused by an attack!
