The Heartbleed Bug


We often see connectedness and the capacity to transfer information from one party to another within seconds as a major benefit to society, especially when it is free; however, this may not always be the case. Using OpenSSL, a widely deployed web encryption library, as a case study, we can understand the implications that arise from Open Source software. OpenSSL is meant to provide companies, and by extension their users, with a service: facilitating the secure transmission of data from one end to the other. Unfortunately, when bugs exist within software code, such as the Heartbleed bug, social, economic and political issues arise.

Background: The Implications of OpenSSL

When users of the Internet participate in transactions of any sort, the safety and security of personal information is paramount. Companies are aware of this, and thus turn to software to provide this assurance; it is often OpenSSL, an "open source implementation of SSL and TLS protocol," that is employed (Gujrathi, 2014). To put this into context, when Internet users arrive at a website served over https://, they are using a secure connection. When they venture online and engage in transactions, a few qualifications must be met to ensure secure data transmission: the vendor and the customer must be who they say they are, the messages transferred between the user and server cannot be manipulated, and an impostor cannot gain access to sensitive information. Both SSL and TLS secure transmissions between the user and the server at the transport layer, and are often used in tandem with a reliable transport protocol such as TCP (Transmission Control Protocol) (Gujrathi, 2014).

What is Open Source Software?

Open Source software is unique. The source code for the software can be viewed, copied, learned from, enhanced, modified and shared by any member of the public, and passed from one party to another (What Is Open Source?).

In 2014, Google engineer Neel Mehta discovered a bug, "Heartbleed," that existed within OpenSSL; however, this was after the technology had been implemented and utilized by major companies such as Google, Yahoo, Facebook and Twitter (Gujrathi, 2014). In fact, OpenSSL was, and continues to be, used by many institutions that rely on connecting with their users, such as eCommerce, Internet Banking and eGovernment. According to the Standard for Information Security Vulnerability Names maintained by MITRE, the Heartbleed bug is also known as CVE-2014-0160, where CVE stands for Common Vulnerabilities and Exposures. This bug caused serious problems, as it gave hackers the ability to access passwords, usernames and cryptographic keys that were meant to stay secure (The Heartbleed Bug). Ultimately, when bugs exist in software, the benefits the software offers can not only be rendered insignificant, but the negative effects can become quite serious.

The story of the Heartbleed Bug begins with Dr. Seggelman, a German developer who, while fixing other bugs in OpenSSL, created the Heartbeat function, which did away with the handshake delay. Instead of having to log off and then re-establish an encrypted session later to transfer more data, a user could "maintain an active link at the remote end of the session" (Rash, 2015).

Technical Description

As mentioned, transactions occur between users and servers at almost any instant in the day, and when information is passed from the user to the server, we hope that this happens in a secure manner. To illustrate, Amazon shoppers purchasing a book would want to do so safely, as they are transferring personal information including credit card and bank data. Since ordinary web traffic travels in the open, they would want to send their credit card information in a protected way, which involves encryption as well as public and private keys. The very act of buying a book and paying for it online is an example of a transaction (Cooper).

Image 1a: An Overview of the SSL or TLS Handshake

The "Heartbeat" is an extension to this secure communication between the server and the user. Users and servers are involved in sessions in which information is encrypted and transmitted whenever users engage with encrypted web pages. The Heartbeat extension allows a TLS session to continue running even if data has not been sent from the user to the server in a while. The overall process begins with a Handshake, which involves public and private keys as well as encryption. Once that negotiation has occurred, heartbeat requests can be sent to confirm connectivity and allow the TLS client and server to continue communicating with one another (An Overview of the SSL or TLS Handshake).

What Is a Handshake?

When clients and servers communicate with one another, they use keys to do so, and the handshake establishes these keys. The keys are used in the encryption, both symmetric and asymmetric, that allows sensitive and private information to be sent from the client to the server. For more information on the handshake process, visit the IBM webpage (An Overview of the SSL or TLS Handshake).
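To make key establishment concrete, here is a toy sketch (a hypothetical Python illustration, not the actual TLS handshake, which additionally authenticates the server with certificates): a bare Diffie-Hellman exchange in which each side publishes a value derived from a private number, and both independently arrive at the same shared session key without ever transmitting it.

```python
import secrets

# Toy Diffie-Hellman exchange: each side combines its own private value
# with the other's public value and arrives at the same shared secret,
# which would then seed the symmetric cipher for the session. The prime
# and generator here are illustrative toys, far too small for real use.
p, g = 4294967291, 5  # toy modulus (a prime) and generator: NOT secure

client_priv = secrets.randbelow(p - 2) + 1   # kept secret by the client
server_priv = secrets.randbelow(p - 2) + 1   # kept secret by the server

client_pub = pow(g, client_priv, p)   # sent to the server in the clear
server_pub = pow(g, server_priv, p)   # sent back to the client

client_key = pow(server_pub, client_priv, p)
server_key = pow(client_pub, server_priv, p)

print(client_key == server_key)  # True: both ends hold the same key
```

Real TLS layers certificate verification and key derivation on top of this idea; the sketch only shows why neither the session key nor the private values ever cross the wire.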

Each heartbeat request is composed of the following: a one-byte type field, a two-byte payload length field, a payload, and at least 16 bytes of random padding. When the request is received, the endpoint sends back a response containing the HeartbeatRequest payload and its own padding. The bug in the OpenSSL implementation of the Heartbeat extension essentially allowed the endpoint to gain access to data from the peer's memory beyond the payload message. To do this, hackers had to specify a payload length larger than the amount of data actually in the HeartbeatRequest message (Durumeric, 2015). This length field is never checked, which means that the server returns exactly the number of bytes the caller specifies, even if the actual payload is shorter than the n bytes claimed. In fact, the peer can respond with up to 2^16 bytes (64 KB) of memory, much of which can contain private data, including information that has been transferred over a secure channel, in turn doing the opposite of what OpenSSL was meant to do in the first place (Cooper).
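This over-read can be sketched in a few lines (a hypothetical Python model; the real bug lives in OpenSSL's C code, and the padding bytes are omitted here for brevity): the server trusts the claimed length and echoes whatever happens to sit in memory after the real payload.

```python
import struct

# Hypothetical model of the vulnerable handler. Server memory is modeled
# as the request's payload followed by adjacent "private" data that was
# never meant to be sent back to the peer.

SECRET_NEARBY = b"user=alice&password=hunter2"  # stand-in heap contents

def vulnerable_heartbeat(request, adjacent_memory):
    # Parse the one-byte type field and two-byte payload length field.
    msg_type, claimed_len = struct.unpack("!BH", request[:3])
    # Echo back as many bytes as the CLAIMED length, with no check that
    # the payload is actually that long: the Heartbleed flaw.
    return (request[3:] + adjacent_memory)[:claimed_len]

# The attacker sends a 4-byte payload but claims it is 40 bytes long.
evil_request = struct.pack("!BH", 1, 40) + b"ping"
leaked = vulnerable_heartbeat(evil_request, SECRET_NEARBY)
print(leaked)  # b"ping" followed by bytes of the adjacent secret
```

The names `vulnerable_heartbeat` and `SECRET_NEARBY` are illustrative inventions; the point is only that the response length is taken from the attacker's claim rather than from the data that actually arrived.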

To detail the process of accessing secure information, the hacker would engage in the following steps:

1. The hacker sends a heartbeat request to the target, with a payload and an erroneous size that is too large for the information being requested.

2. When the target machine receives the heartbeat request, it must respond with the same payload, at the claimed size.

3. To fill the remaining bytes, information is extracted from the heap memory adjacent to OpenSSL's buffer.

In a sense, this "pads" the payload. This is a massive issue because it can let hackers access data on servers that was never meant to be shared, such as cryptographic keys and login credentials (Durumeric, 2015). It occurred because Seggelman failed to check the payload length field against the actual payload, which gave leeway for up to 64 kilobytes of data in the memory of the server at the remote end to be exposed per request. Once this bug was discovered, the code was patched. There are now checks that discard the HeartbeatRequest if the payload length field exceeds the length of the payload (Gujrathi, 2014).
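The patched logic can be sketched as follows (again a hypothetical Python model of the check described above, not OpenSSL's actual C fix, which also accounts for the padding bytes): the handler compares the claimed length with the data that actually arrived, and silently discards the request on a mismatch.

```python
import struct
from typing import Optional

# Hypothetical model of the patched handler: validate the claimed payload
# length before echoing anything back.

def patched_heartbeat(request) -> Optional[bytes]:
    msg_type, claimed_len = struct.unpack("!BH", request[:3])
    payload = request[3:]
    if claimed_len > len(payload):   # claimed length exceeds real data:
        return None                  # discard the request, send nothing
    return payload[:claimed_len]     # echo only bytes that really exist

honest_request = struct.pack("!BH", 1, 4) + b"ping"
evil_request = struct.pack("!BH", 1, 40) + b"ping"

print(patched_heartbeat(honest_request))  # b'ping'
print(patched_heartbeat(evil_request))    # None: malicious request dropped
```

Silently dropping the malformed request, rather than answering with an error, matches the behavior the text describes: the attacker simply gets nothing back.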

Cost of the Heartbleed Bug

The Heartbleed bug's effect was quite pervasive due to the nature of Open Source software and the extensive implementation of OpenSSL, for the reasons stated above. Attention was mainly directed towards large companies affected by the bug, including Google, Yahoo, and Dropbox, to name a few public HTTPS web services.

The scope of the bug was, in fact, quite shocking, covering an estimated 22-55% of HTTPS servers in the Alexa Top 1 Million. This posed major issues for users and for companies housing sensitive data not meant to be shared, and so some companies, such as Google and other Alexa Top 100 sites, were quick to respond to the discovery of the bug. These companies were at risk of having valuable information leaked and stolen by hackers (Durumeric, 2015).

It is important to note that these attacks were not centered only on the US, but reached other parts of the world as well, with major social and economic implications. For example, the Canada Revenue Agency released a statement informing the public that around 900 people's private information had been stolen, including their social insurance numbers (SIN). Although the agency's website did not necessarily deal with financial data, people may use the same passwords and usernames on several websites, and hackers may end up using these credentials later on. This illustrates how information retrieved with the bug can be leveraged for further hacking (Heartbleed Hacking Spree).

Not only did the bug affect the security of many websites and services, it also increased the vulnerability of many devices and products. Embedded systems such as security cameras and video conferencing systems were affected: communication servers such as Polycom and Cisco videoconference products, as well as IceWarp messaging systems, were compromised. Additionally, Heartbleed's impact spanned mail servers, the Tor network, Bitcoin, Android and wireless networks (Durumeric, 2015).

Although it is difficult to define the exact scope and cost of this bug, there exist economic, social and potentially legal implications for many of the companies affected by Heartbleed. For victims, much of their personal information can no longer be deemed personal or secure. This leads us to the question of who exactly is to blame for these costs (Durumeric, 2015).

Who Was Responsible?

There has been plenty of speculation regarding who is at fault for this bug. Some blame Dr. Seggelman for failing to test his code and making a simple yet disastrous mistake in the process of coding, but others take a more global approach and, en route, address the implications of the Open Source community and how its climate was not conducive to developing software without bugs. Thus, assessing and assigning responsibility takes some consideration of the community itself and how it functions.

Ideally, Open Source software would have a cohesive, cooperative community in which people check each other's code and collaborate, increasing productivity and decreasing the number of errors; however, this is not entirely the case. In fact, with Open Source software there is not a lot of checking that goes on, because few people in the community understand the nature of the software. When people simply "assume that open source is magically validated and must be OK," we see issues arise (Rash, 2015). So we begin to wonder where exactly the blame can be placed, if the institutions that would typically let us place fair blame on the engineer are non-existent.

We then consider the users of Open Source software, and turn to companies such as Yahoo, Google and Dropbox, which have a high concentration of users and money. The argument is often made that such companies should have contributed time and resources to improving this technology, creating a two-way relationship, instead of simply leeching off these resources. In fact, it seems these institutions have taken note and assumed economic responsibility: post-Heartbleed, technology giants took part in a 3.6 million dollar effort to support open source projects, which in many cases are underfunded although they are integral to Internet security (Rayman, 2014).

To Conclude

Now that the Heartbleed Bug has been dissected and analyzed, we can appreciate the educational value that it, too, presents. Software bugs can be dangerous, and they will not be found unless enough checks are carried out by the developers themselves as well as by peers in their community. We have identified that, unfortunately, this sense of collaboration and community is lacking in Open Source software. This does not mean all hope is lost; instead, users of Open Source software must consider the allocation of their resources and ensure that enough is contributed so that such bugs are caught, not just for a few months after a scare, but through a consistent, sustained effort. These dangers can permeate the barriers, checks and other safeguards that have been instituted, which, as we have seen, can result in personal and sensitive information being stolen or tampered with. We must make an effort to safeguard Open Source software so that we can continue collaborating and reaping the benefits that come from communities such as this.