Andrew Koenig

Dr. Dobb's Bloggers

Social Processes And Heartbleed, Part 2

Heartbleed is the most recent in a long string of security problems that come from buffer overruns. Probably the most common kind of buffer overrun happens when

One part of a system passes a pointer to another part, perhaps along with a length.

The second part of the system ignores or otherwise misuses the length when it figures out how much data it can store in the memory to which the pointer points.

An important reason why this kind of program structure tends to cause buffer overruns is that the caller allocates the memory, but it is not until the data have been read that the program can know how much memory to allocate. In other words, the idea that a function should allocate memory and then call another function that fills that memory is intrinsically dangerous.

Even if this danger is successfully avoided by checking bounds correctly, that check is likely to have negative consequences of its own. As an example, a former colleague once created a text file that comprised a single line with tens of thousands of characters. He tried giving that file as input to a wide variety of compilers, text-processing programs, and other utilities. Almost all of these programs misbehaved in one way or another, typically either by crashing or by quietly ignoring the last part of the input.

I think that there is a simple solution to such problems: Any part of a program that reads variable-length input should be responsible for allocating enough memory to contain that input. In C++, of course, this strategy is trivial to implement by using the standard library. In C, however, there is no easy way to write code that reads a single line of input, regardless of length, and returns a pointer to memory containing that input. Any such attempt is likely to have at least some negative consequences.

Once upon a time, I set out to add a solution to this problem to the C library used by the organization at which I worked at the time. My thought was that if anyone wanted to send code elsewhere that used my function, they would be free to distribute a copy of that function as part of their code. My function was called readline, and was intended to be simple to use: Give it a file pointer (e.g., stdin) as input, and it would read an entire line of input, regardless of length, and return a pointer to the initial character of a null-terminated string representing that line. If it reached end of file, it would return a null pointer.

Of course, any C function that allocates memory and returns a pointer to that memory has a problem: When is the memory freed? I thought about making the caller responsible for freeing the memory, but felt that doing so would make it likely that some callers would simply forget, turning programs with buffer overruns into programs with memory leaks.

Ultimately, I decided on a strategy that I had seen elsewhere: readline would return a pointer to memory that was guaranteed to remain unchanged until the next time someone called readline. This strategy not only saved users a lot of worry, but also allowed for an easy implementation: The program stored a static pointer to a (dynamically allocated) buffer, which would grow in size as needed to contain the line being read. This approach made the most common use case both simple and safe:

Of course, this approach had its problems. For example, using readline twice in a single expression would cause undefined behavior, as would forgetting to copy the data out of the returned memory if the programmer wished to save the data after the next call to readline. Moreover, this code would waste at least as much memory as was needed to contain the last line of input. In practice, it wasted enough memory to contain the longest line of input, because although I reallocated the buffer whenever it was shorter than an input line, I never made the buffer smaller. I felt that the extra speed from avoiding reallocations would compensate for what would be a small amount of wasted memory under what I thought would be rare circumstances.

Apparently I overestimated people's willingness to tolerate memory-allocation overhead: When I went back to look at the code a few months later, I learned that someone had replaced my version of readline entirely with one that used a fixed 4096-character buffer. As far as I know, his motivation was to avoid the overhead of runtime memory allocation completely. In other words, to avoid what in most circumstances would be a single call to the memory allocator, he quietly broke every program that used readline and might encounter a line longer than 4096 characters.

I have gone into this story at such length because it shows several points that I think are important:

Buffer overruns often come about when one part of a program allocates memory to contain an amount of data known only to another part.

Allocating memory in the same part of the program that fills it solves the allocation problem at the cost of forcing memory to be allocated in one part of the program and freed in another part.

This separation between allocation and freeing can cause usability problems that are hard to circumvent without support as part of the programming language.

Even if users are willing to accept the usability problems as the price for safety and versatility, they may not be willing to accept the perceived overhead of dynamic memory allocation.

I think that programmers' reluctance to accept runtime overhead as a price for safety is a major reason why security bugs are so widespread. We shall learn more about this phenomenon next week.
