
dwheeler (321049) writes "Heartbleed was a bad vulnerability in OpenSSL. My article How to Prevent the next Heartbleed explains why so many tools missed it... and what could be done to prevent the next one. Are there other ways to detect these vulnerabilities ahead of time? What did I miss?"

It could have been discovered with static analysis if anyone had the foresight to implement a specific check ahead of time (although it's unknown whether anyone could have thought of checking this specific case before Heartbleed was discovered):

OpenSSL was statically analyzed with Coverity. However, Coverity did not discover this, as it is a parametric bug, one that depends on variable content.

The reaction from Coverity was to issue a patch to find this kind of problem, but in my opinion, the "fix" throws the baby out with the bath water. The fix causes all byte swaps to mark the content as tainted. That surely would have detected this bug, but it also leads to an enormous number of false positives in development where swabs are common, such as cross- or multi-platform development. And while it finds "defects" this way, they are not the real problem. So in my opinion, 0 out of 10 points to Coverity for this knee-jerk reaction.
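To see why taint-on-every-swap floods cross-platform code with warnings, here is a minimal sketch (names are my own, not Coverity's or OpenSSL's): a byte swap like this appears on nearly every field read from a network or file header, whether the data is attacker-controlled or not.

```c
#include <stdint.h>

/* A completely ordinary byte swap, as used when converting between
 * big-endian wire format and little-endian host order. Flagging every
 * call site of code like this as "tainted" flags most of a portable
 * codebase. */
static uint16_t swap16(uint16_t v) {
    return (uint16_t)((v << 8) | (v >> 8));
}
```

Swapping twice round-trips the value, which is why such helpers are sprinkled liberally through portable code rather than being rare, suspicious operations.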

In my opinion, what's wrong here is someone with a high-level language background submitting patches in a lower-level language than what he's used to. The problems that can cause are never going to be 100% (or even 50%) caught by static analysis. Lower-level languages do give you enough rope to hang yourself with. It's the nature of the beast. In return, you have more control over the details of what you do. That you're allowed to do something stupid also means you are allowed to do something brilliant. But it requires far more discipline - you cannot make assumptions; you have to actually understand what is done to variables at a low level. Unit tests and fuzzing help. But even that is no substitute for thinking low level when using a low-level language.

There are also other static analysis tools like splint [splint.org]. The catch is that it produces a large volume of output which is tedious to sift through, but once that's done you will have found the majority of the bugs in your code.

However, the root cause is that the language itself permits illegal and bad constructs. It's of course a performance trade-off, but writing most of the code in a high-level language and leaving the performance-critical parts to a low-level one may lower the exposure and force focus on the problems t

Like I fixed my fusebox that always blew by putting a nail in across the contacts. It never blows a fuse anymore.

(disclaimer: I didn't really. don't do this)

When it comes to highbrow bugs like this, everyone jumps up and down and demands to know what you're doing to stop the next one - i.e. stopping this bug from ever occurring again. What they really need to worry about is the next unknown bug that we will find. They are out there, we will find one in production one day, it will bite us, and no I don't think

The reaction from Coverity was to issue a patch to find this kind of problem, but in my opinion, the "fix" throws the baby out with the bath water. The fix causes all byte swaps to mark the content as tainted. That surely would have detected this bug, but it also leads to an enormous number of false positives in development where swabs are common, such as cross- or multi-platform development.

Yes, that solution is complete and utter crap. Claiming that marking all byte swaps as tainted will help you find thi

You just called kernel and base library developers full retards. Which goes to show that a little knowledge is dangerous.

When you write low-level code, yes, you often do. You may have to be frugal with both memory and cycles. Or you may require guarantees that an allocation request will succeed no matter what. Or you may need to take alignment and endianness into account. On NUMA systems, you may try to ensure that memory is assigned from a bank reachable by another CPU without copying/invalidatin

Every industry goes through this. At one point it was aviation, and the "hot shot pilot" was the Real Deal. But then they figured out that even the Hottest Shot pilots are human and sometimes forget something critical and people die, so now, pilots use checklists all the time for safety. No matter how awesome they might be, they can have a bad day, etc. And this is also why we have two pilots in commercial aviation, to cross check each other.

In programming something as critical as SSL it's long past time for "macho programming culture" to die. First off, it needs many eyes checking. Second, there needs to be an emphasis on using languages that are not susceptible to buffer overrunning. This isn't 1975 any more. No matter how macho the programmer thinks s/he is, s/he is only human and WILL make mistakes like this. We need better tools and technologies to move the industry forward.

Last, in other engineering professions there is licensing and engineers are held accountable for mistakes they make. Maybe we don't need that for some $2 phone app, but for critical infrastructure it is also past time, and programmers need to start being held accountable for the quality of their work.

These are things the "brogrammer" culture will complain BITTERLY about - their precious playground being held to professional standards. But it's the only way forward. It isn't the wild west any more. The world depends on technology and we need to improve the quality and the processes behind it.

Yes, I'm prepared to be modded down by those cowboy programmers who don't want to be accountable for the results of their poor techniques... But that is exactly the way of thinking that our industry needs to shed.

I actually agree with both of you. The OpenSSL guys gave out their work for free for anybody to use. Anybody should be free to do that without repercussions. Code is a kind of literature and thus should be protected by free speech laws.

However, if you pay peanuts (or nothing at all) then likewise you shouldn't expect anything other than monkeys. The real fault here is big business using unverified (in the sense of correctness!) source for security critical components of their system.

"businesses with a turn over $x million dollars should be required to use software developed only by the approved organisations."

That would just lead to regulatory capture. The approved organisations would use their connections and influence to make it very hard for any other organisations to become approved - and once this small cabal has thus become the only option, they can charge as much as they like.

This problem was caused by a simple missed parameter check, nothing more. Stop acting like the cultural problem is with the developers when it is with the leeches who consume their work.

I do not believe you. If this were an isolated case, then you'd be right. But no, this kind of "oops, well now it is fixed" thing happens all the time, over and over again. The programming culture never improves due to the error - no matter how simple, no matter that it should have been noticed earlier, no matter what.

I am willing to bet that after the next hole the excuses will be the same: "it was simple, now it is fixed, shut up" and "why don't you make something better, shut up" or just "you don't understand, sh

Shit happens to the best programmers. The only way to prevent such things is to check the code. Therefore, you need another person testing the code, and you need a specification for the code so you can really check the code against another artifact. But obviously nobody bothered. That's why in housing the architect plans the building and at least two structural designers check the design (at least in Germany, that is).

Depends on the amount of auditing. C has huge problems, but OpenBSD shows it can be safe.

How so? OpenBSD says they audit their operating system (which includes code that they did not write). OpenBSD was affected [openbsd.org] by Heartbleed, which means OpenBSD's audit did not catch this bug, and they were affected just like everybody else.

Also, most of the bugs on their advisory page are for typical C memory problems, such as use after free and buffer overruns.

> programmers need to start being held accountable for the quality of their work.

They are.

But I guess you mean that people who aren't paying for your work, and companies which aren't paying for the processes and professional services necessary for some level of quality, should hold programmers who don't have any kind of engineering or financial relationship with them accountable.

In programming something as critical as SSL it's long past time for "macho programming culture" to die.

Yeah, but it's kind of going the other way, with more and more companies moving to continuous deployment. Facebook is just a pit of bugs.

programmers need to start being held accountable for the quality of their work.

OK, I'm with you that quality needs to improve, but if I have a choice between working where I get punished for my bugs and where I don't, I'm working for a place where I don't get punished for my bugs. I test my code carefully, but sometimes they slip through anyway.

* THIS SOFTWARE IS PROVIDED BY THE OpenSSL PROJECT ``AS IS'' AND ANY
* EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE OpenSSL PROJECT OR
* ITS CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
* NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GO

Umm... yes it did. Details here [existentialize.com]. It is a classic buffer overread. The client sends some heartbeat data, plus the length of the data. The server copies as many bytes from the payload as the client specified, even if the payload is only one byte long.
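The pattern can be sketched in a few lines. This is not the actual OpenSSL code, just a hedged, simplified illustration with hypothetical names (`build_heartbeat_response`, `claimed_len`, `record_len`); the real fix in OpenSSL also accounts for header and padding bytes.

```c
#include <stdint.h>
#include <string.h>

/* Simplified sketch of the Heartbeat flaw: the attacker controls
 * claimed_len, but the record actually holds only record_len bytes.
 * Without the first comparison below, memcpy reads up to 64 KiB past
 * the end of the payload - that is Heartbleed. */
int build_heartbeat_response(const uint8_t *payload, uint16_t claimed_len,
                             size_t record_len, uint8_t *out, size_t out_cap)
{
    /* The missing check: reject requests whose claimed length exceeds
     * what the record actually contains (RFC 6520 says to discard them). */
    if ((size_t)claimed_len > record_len || (size_t)claimed_len > out_cap)
        return -1;
    memcpy(out, payload, claimed_len);  /* now bounded by the real record */
    return (int)claimed_len;
}
```

The vulnerable version simply had no equivalent of that first `if`; the copy length came straight from the wire.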

You have missed the malloc call. Look at what is being passed as the size to that malloc call. That will show you that the read does not cross the size allocated by that malloc call (the malloc for this call - not everything allocated by malloc).

Unless you are trying to switch to Pascal-style strings instead of null-terminated ones, you have limited ways to automatically check buffer overruns, just as you have limited ways to do garbage collection or, for that matter, almost anything automagical with pointers. The compiler alone cannot enforce that policy; one could try to enforce it in the standard library or a framework. The difference between low- and middle-level languages and high-level languages is the magic that happens behind the language. C

But they do give the programmer control of where the checking happens.

If you have a function CalculatePasswordHash(char *pass, int len) that in turn calls the functions sha1, memcpy, rotatebit and xor fifty times each, passing that len parameter along, then you can check that it is <= the space allocated for *pass just once, rather than doing it in every function and thus needing two hundred and one checks minimum.
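The "check once at the boundary" idea might look like this sketch (the names `calculate_password_hash` and `mix_round` and the 50-round loop are illustrative, not any real hash):

```c
#include <stddef.h>

/* Internal helper: trusts that len was validated by the caller, so it
 * carries no bounds check of its own. */
static void mix_round(char *buf, size_t len) {
    for (size_t i = 0; i < len; i++)
        buf[i] ^= (char)(i & 0xff);
}

/* Public entry point: the single boundary check lives here. Every
 * helper called below inherits the guarantee instead of re-checking. */
int calculate_password_hash(char *pass, size_t len, size_t allocated) {
    if (pass == NULL || len > allocated)
        return -1;                 /* the one check, at the trust boundary */
    for (int round = 0; round < 50; round++)
        mix_round(pass, len);      /* no per-call re-validation needed */
    return 0;
}
```

The design choice is where the trust boundary sits: validation happens exactly where untrusted values enter, and everything inside that boundary is allowed to assume the invariant.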

So there's nothing inherently unsafe about C. Its just that most implementations haven't bothered to deal with the problem.

C is inherently unsafe because the default mode is unsafe. History has shown that expecting implementations to add security after the fact does not lead to secure programs. C's built-in strings, which are null-terminated and prone to security flaws, are a perfect example of C's insecure defaults.
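One common workaround for the insecure default is a counted-buffer type, roughly Pascal-style. A minimal sketch (the `bstr` type and `bstr_append` are hypothetical, not a standard API):

```c
#include <stddef.h>
#include <string.h>

/* Carrying the capacity and length with the buffer makes the bounds
 * check unavoidable at the API level, unlike a bare char* where the
 * caller may simply forget it. */
typedef struct {
    char data[64];   /* fixed capacity for the sketch */
    size_t len;      /* bytes currently in use */
} bstr;

int bstr_append(bstr *s, const char *src, size_t n) {
    if (n > sizeof s->data - s->len)   /* check forced by the interface */
        return -1;
    memcpy(s->data + s->len, src, n);
    s->len += n;
    return 0;
}
```

With bare null-terminated strings, nothing in the language stops a `strcpy` from writing past the destination; here, the overflow path simply cannot be expressed without going around the API.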

Also, the Heartbleed over-read could have happened in Java. Plenty of high-performance Java projects use buffer pools that look identical to what OpenSSL was doing. They do it to cut down on garbage churn.

Could have, yes, but you have to go out of your way in Java to fall to this kind of bug. There's a huge difference.

Maybe deep inside some kernel routine we cannot afford the 0.5nanosecond it takes to check the buffer size,

Speaking of weird remnants of the past, I've seen claims about the kernel needing to be uber-efficient before, but does that really make any sense? How much time does the average machine spend executing kernel code, besides the idle loop? If the kernel were 10 times as slow, would it become a significant amount?

and we can always have a pragma that disables the checking for that piece of code.

A lot of kernel stuff is very time-sensitive. Got to get the next block of sound to the audio device before the ring buffer catches up, got to get the display memory updated before the screen refresh kicks in, got to calculate the next LBA address read before the disc spins around to whereever it may lie.

Because of Amdahl's law, a 1% increase in time could cause an unbounded amount of slowdown. You may go from having a cap of 32 cpus of performance to 4 cpus of performance because context switching takes longer, which causes some threads to hold locks longer. In the case of multi-threading, 1% can turn into 10,000%.

This. It is high time that C compilers did buffer overrun checks by default.

It has been claimed that due to OpenSSL's own memory management, this wasn't actually a buffer overrun. If you allocate 10 bytes for X, 5,000 bytes for Y, and 50,000 bytes for Z, but your proprietary allocator puts all these items into a 1MB malloc block, then copying 50,000 bytes from X isn't a buffer overrun to the compiler.
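A toy sketch of such a pool allocator makes the masking effect concrete (this is an illustrative bump allocator, not OpenSSL's actual freelist code):

```c
#include <stddef.h>
#include <stdlib.h>

/* Bump allocator sketch: all sub-allocations live inside one malloc'd
 * block, so an overread from one sub-allocation into its neighbor
 * never leaves the block and is invisible to tools (valgrind, ASan,
 * the compiler) that only track malloc boundaries. */
typedef struct {
    unsigned char *base;   /* the single big malloc'd region */
    size_t used;           /* bytes handed out so far */
    size_t cap;            /* total region size */
} pool;

void *pool_alloc(pool *p, size_t n) {
    if (n > p->cap - p->used)
        return NULL;               /* region exhausted */
    void *out = p->base + p->used; /* no per-object redzones or headers */
    p->used += n;
    return out;
}
```

If X (10 bytes) and Y (5,000 bytes) come from adjacent `pool_alloc` calls, reading 4,000 bytes "from X" walks straight into Y without tripping any malloc-level detector, which is exactly the claim about OpenSSL's allocator.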

The real problem was that the code tried to respond to requests that it shouldn't have responded to. In this particular case, trying to respond could have triggered a buffer overflo

Point taken, I've heard the same thing. This is also a problem with ancient languages: they have really primitive malloc routines that call the kernel every time there is an allocation. The consequence is that people roll their own memory management routines.

Don't get me wrong, I used C heavily and really liked it, back in the late 70s and early 80s. Thirty years later it is long in the tooth, with very little progress in between. The original version was released in 1973, the first revision took place 16 years

This was not a bug that would have been found in testing. It doesn't _attack_ the software. The software was totally unaffected. You could have a very specific test for this problem, but if you thought of that test, you might as well have looked at the code and immediately spotted the problem.

Let's remember the good old bug that plagued (and probably still does) many libraries that read graphics files such as TIFF. The classic scenario was that the programmer read the expected image size from the TIFF file header, allocated memory for that size, then read the remainder of the file into said buffer until end of file, instead of reading only as many bytes as he had allocated. For a correct file this works; however, if the file is maliciously crafted to indicate one size in the header while having a much larger real size, you get a classic buffer overrun. This is pretty much what the SSL programmer did. And no tools were ever able to predict this type of error, whether TIFF or SSL.
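The loader pattern described above can be sketched as follows. This is a made-up toy format (4-byte size header followed by the body), not actual TIFF parsing; the point is that the copy must be bounded by the size the buffer was allocated with, never by how many bytes the possibly malicious file happens to contain.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Toy loader: the header's declared size drives the allocation, so it
 * must also drive (and be validated against) the copy. The classic bug
 * is allocating `declared` bytes but copying `file_len - 4` bytes. */
int load_image(const uint8_t *file, size_t file_len, uint8_t **out) {
    if (file_len < 4)
        return -1;
    uint32_t declared;                  /* size claimed by the header */
    memcpy(&declared, file, 4);
    size_t body = file_len - 4;         /* size actually present */
    if (body != declared)
        return -1;                      /* reject the mismatch outright */
    uint8_t *buf = malloc(declared);
    if (buf == NULL)
        return -1;
    memcpy(buf, file + 4, declared);    /* bounded by the allocation */
    *out = buf;
    return 0;
}
```

Dropping the `body != declared` check and copying `body` bytes instead reproduces the overrun: a file claiming 16 bytes but carrying 64 KiB would write past the 16-byte buffer.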

BTW the last famous one with TIFF files was pretty recent: http://threatpost.com/microsoft-to-patch-tiff-zero-day-wait-til-next-year-for-xp-zero-day-fix/103117

Considering how many times you need to do this (read the length of a block of data, then the data) it's strange that we haven't implemented a standard variable length encoding like with UTF-8. Example:
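One way to sketch such an encoding (my own illustration of the idea, LEB128-style, since the comment's example was cut off): the high bit of each byte means "more bytes follow", so, as in UTF-8, the data itself says where the field ends and a separately transmitted length can't disagree with it.

```c
#include <stddef.h>
#include <stdint.h>

/* Encode v as a little-endian base-128 varint; returns bytes written.
 * out must have room for up to 10 bytes (enough for any uint64_t). */
size_t varint_encode(uint64_t v, uint8_t *out) {
    size_t n = 0;
    do {
        uint8_t b = v & 0x7f;
        v >>= 7;
        out[n++] = b | (v ? 0x80 : 0);  /* set continuation bit if more */
    } while (v);
    return n;
}

/* Decode from at most cap bytes; returns bytes consumed, or 0 on
 * truncated/overlong input - the length can't lie, unlike Heartbeat. */
size_t varint_decode(const uint8_t *in, size_t cap, uint64_t *v) {
    *v = 0;
    for (size_t i = 0; i < cap && i < 10; i++) {
        *v |= (uint64_t)(in[i] & 0x7f) << (7 * i);
        if (!(in[i] & 0x80))
            return i + 1;               /* last byte reached */
    }
    return 0;
}
```

Small values cost one byte, and a truncated message is detectable by construction, which is exactly the self-delimiting property the comment is asking for.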

Buffer overruns can be statically prevented at compile time without any runtime penalty.

All that is required is that the type system of the target programming language enforces a special type for array indexes and that any integer can be statically promoted to such an array index type by a runtime check that happens outside of an array access loop.

Array indexes are essentially pointer types that happen to be applicable to a specific memory range we call an array. Memory itself is just an array, but for tha
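C's type system can't truly enforce this, but the shape of the idea can be sketched by convention: an index value is only ever produced by a checking "promotion" function, so accesses inside the loop need no per-element test. The names here (`arr_index`, `bound_check`, `get`) are hypothetical.

```c
#include <stddef.h>

/* "Checked index" sketch: by convention, an arr_index is only created
 * via bound_check(), which performs the single runtime check the
 * comment describes, outside any access loop. */
typedef struct { size_t i; } arr_index;

int bound_check(size_t i, size_t len, arr_index *out) {
    if (i >= len)
        return -1;        /* the one runtime check */
    out->i = i;
    return 0;
}

/* Accessors accept only the promoted type, so (by convention) the
 * bound has already been established. */
int get(const int *arr, arr_index idx) {
    return arr[idx.i];
}
```

A language with a stronger type system (Ada subtypes, dependent-typed indexes) can make this guarantee real at compile time; in C it remains a discipline, which is rather the thread's point.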

Rigorous coding should be held to approximately the same standard as engineering and math. Code should be both proven correct and tested for valid and invalid inputs. It has not happened yet because in many cases code is seen as less critical (patching is cheap, people usually don't die from software bugs etc). As soon as bugs start costing serious money, the culture will change.

Anyway, I'm not a pro coder but I do write code for academic purposes, so I am not subjected to the same constraints. Robust code

Rigorous testing is helpful, but I think it's the wrong approach. The problem here was lack of requirements and/or rigorous design. In the physical engineering disciplines, much effort is done to think about failure modes of designs before they are implemented. In software, for some reason, the lack of pre-implementation design and analysis is endemic. This leads to things like Heartbleed - not language choice, not tools, not lack of static testing.

I would also go as far as saying if you're relying on testing to see if your code is correct (rather than verify your expectations), you're already SOL because testing itself is meaningless if you don't know the things you have to test - which means up-front design and analysis.

That said, tools and such can help mitigate issues associated with lack of design, but the problem is more fundamental than a "coding error."

The problem lies with how the "software industry" evolved over time, and the complete lack of user/consumer protection legislation regarding software products.
If physical product manufacturers ship a design fault, they have to fix those products, during the warranty period, at their own expense. If on top of that the defect is safety related, they have to fix it even beyond the standard warranty period. Whether the product is a car or a coffee grinder, they'll have to recall it, period.
Now contrast

Rigorous testing is helpful, but I think it's the wrong approach. The problem here was lack of requirements and/or rigorous design.

The real problem is the horrible OpenSSL code, where after reading 10 lines, or 20 lines if you're really hardcore, your eyes just go blurry and it's impossible to find any bugs.

There is the "thousands of open eyes" theory, where thousands of programmers can read the code and find bugs, and then they get fixed. If thousands of programmers tried to read the OpenSSL code with the degree of understanding necessary to declare it bug free, you wouldn't end up with any bugs found, but with thousands of progra

Wouldn't the best course of action be to zero important memory after its use, just like on disk? After something like a password is loaded in memory, it should always be followed by a memset with zeros in C/C++. That way, if an unchecked read follows, all that would be read is nulls.
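One wrinkle with the plain-memset approach: if nothing reads the buffer afterwards, the compiler is allowed to delete the memset as a dead store. A common portable workaround is to write through a volatile pointer (platform-specific alternatives like `explicit_bzero` and C11 Annex K's `memset_s` exist for exactly this reason); a sketch:

```c
#include <stddef.h>

/* Scrub a secret after use. The volatile-qualified pointer forces the
 * compiler to perform every store instead of optimizing the loop away
 * as a dead store, which can happen with a plain memset. */
void secure_zero(void *p, size_t n) {
    volatile unsigned char *vp = p;
    while (n--)
        *vp++ = 0;
}
```

Typical use: `char pw[32]; /* ... use the password ... */ secure_zero(pw, sizeof pw);` right before the buffer goes out of scope or back to the allocator.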

Of course, we should find ways to improve quality control in open source software. But the next Heartbleed is going to happen. It's like asking, "How can we prevent crime from happening?" Sure, you can and should take measures to prevent it, but there will always be unexpected loopholes in software that allow unwanted access.

No doubt. So why didn't YOU take steps to prevent the Heartbleed vulnerability? The same reason everybody else didn't: time. Finding bugs takes time. Sure, you can automate, but that automation also takes time. So we are caught between two desires: 1) the desire to add or improve functionality, and 2) the desire to avoid vulnerabilities. The two desires compete for the amount of time that is available, so it becomes a trade-off.

It's also an arms race. There is real financial incentive for finding vul

Don't use C and its variants like C++. C is an extremely unsafe, low-level language that is just one step above assembly language. This makes it great for low-level, performance sensitive programs like OSes, compilers, etc. but the low-levelness also increases bug count for general purpose applications.

Instead use safer languages like Pascal, Eiffel (design by contract), Ada, etc. These languages guard against buffer overflows and don't have the slowness and bloat associated with garbage collected languages

Or you just learn how to code properly. This particular vulnerability wasn't because there was a mistake, it was because they opted to bypass a function that was meant to keep people safe. It's a bit like bolting the fire escapes closed then wondering why everybody died after the fire.

It's astonishing to me that somebody would put code into a production environment that asked for a certain length of response without bothering to do any validation.

If that really worked, there would be no QA dept. for software. Unless you can formally prove your software is correct, you should assume there are bugs. And no one has the time, money or ability to formally prove hundreds of millions of lines of code.

It's astonishing to me that somebody would put code into a production environment that asked for a certain length of response without bothering to do any validation.

> If that really worked, there would be no QA dept. for software.

No, that's just poor reasoning.

Quality must be built-in, not added-on. QA expectations and improvement scope are largely imposed on any QA department, therefore the level of 'quality' reached can never be an absolute bar.

Developers in general need to minimise the vector product of bug count/severity that could be exposed before it gets to QA. This allows the bar to be raised, and focus to be spent on where it should be rather than catching

Instead use safer languages like Pascal, Eiffel (design by contract), Ada, etc. [...] The problem usually is, few people know these languages and they are not portable from one platform to another.

Agreed regarding both the solution and the problem with the solution.

It's probably reasonable to use [insert-super-secure-machine-verifiable-language-here] to develop libraries that are as security-critical as OpenSSL. However, it's unlikely that such libraries will be widely used if they aren't easily callable from the more popular languages (C/C++/ObjectiveC/etc).

Given that, I wonder how difficult it would be to write a library in (e.g.) Ada, but have the Ada compiler compile the code in such a way that

I have personally ported OpenSSL to at least 6 embedded systems, one of which was so proprietary they wrote their own C/C++ compiler. Good luck finding an Ada compiler for that.

This makes it great for low-level, performance sensitive programs like OSes, compilers,

Aaand... performance sensitive like, say... crypto? There isn't much code more performance sensitive than crypto libraries, which is one of OpenSSL's main uses. In fact, there are a whole bunch of native assembler implementations for x86, MIPS, ARM, PPC, etc to achieve that low level performance. Clearly you have never actually looked at the OpenSSL code base...

Adacore [adacore.com] has a perfectly good implementation of a high-security Ada compiler, which produces executables for multiple platforms. There's nothing difficult about finding such tools. What's difficult is finding programmers and developers who are willing to take the time to actually develop their code to take advantage of the strict typing which is one of Ada's strengths.

John Barnes, author of one of the most-used Ada texts, outlined the meanings of "safe" and "secure" software in a very straightforward manner

Performance sensitive? Really? Most crypto is NOT performance sensitive at all, and you could easily sacrifice some performance for more secure/reviewed code. I would imagine there are very few, mostly fringe, cases where performance is more critical, in which case they should be using modified versions rather than having hacks put into the main code stream.

First: how do YOU know whether crypto is performance sensitive or not "at all"? It's entirely dependent on the use of it.

Second: yes, it's absolutely performance sensitive because the trend is becoming to use HTTPS for everything. On a server that means the whole front end can greatly benefit from faster crypto, and on client side one of the most popular current Internet applications - video streaming - often uses crypto for DRM so the entire video stream needs to be decrypted in real time. Sorr

The US Army will swear that I was once, many moons ago, officially certified in Ada, whether that means anything or not. I never liked it much, even though I did turn in successful code a few times, and I really have a problem with Ada for open source applications - Yes, in theory, Ada has some very strong security functions by design, but it's definitely not going to result in the 'many eyes make all bugs shallow' effect. I actually read your post as deliberately tongue in cheek at first, what with phrases such as 'extremely unsafe'.
But as I think more about it, one of the problems revealed by Heartbleed is that open sourcing the target code didn't result in a lot of properly trained eyes passing over that code. I never thought I'd encourage anyone to learn Ada after I got out of the service (just as I never thought I'd encourage anyone to start a cult worshipping many-tentacled, eldritch, blasphemous horrors from beyond space-time as we delusionally try to limit our conceptions of them to preserve our fundamental human sanity, and for much the same reasons), but I have to admit, you may have a damned good argument for Ada there. I don't know if the extensive compile-time checking of Ada 2012 could have automatically caught the bug that made Heartbleed possible - the last version of Ada I've really used is 95 - but I'd be really interested to hear from someone who's current whether they think Ada is just about totally bulletproof against this sort of bug, because even the older versions I recall had some features that would have made it hard to make this sort of mistake.

But as I think more about it, one of the problems revealed by Heartbleed is open sourcing the target code didn't result in a lot of properly trained eyes passing over that code.

My experience is that reading code isn't a very good way to catch bugs, mainly because reviewers tend not to read it as carefully as the person who wrote it. If you want to find bugs, it's more effective to do white/black box testing of some sort.

My experience is that reading code isn't a very good way to catch bugs, mainly because reviewers tend not to read it as carefully as the person who wrote it. If you want to find bugs, it's more effective to do white/black box testing of some sort.

That depends. Your reading of code can have three possible results:
1. "There are no bugs."
2. "There are bugs A, B, C and D; go and fix them."
3. "I can't understand the code well enough to say it is bug free."

In case 3, the code should be rejected unless it is code handling some really hard problem that needs a better reviewer. The area where the Heartbleed bug happened was in no way difficult, so code that is hard to understand should have been rejected. If that happens, reviews reduce the num

I never liked it much, even though I did turn in successful code a few times, and I really have a problem with Ada for open source applications

Can you express what you didn't like and why? Perhaps it's a bit verbose and overly strict. But the strictness means you find many bugs during compilation and basic testing. Of course, compiler and runtime errors frustrate many programmers, which is why many prefer C -- fewer warnings and errors. Let the customers deal with the errors.

My experience is that people who trust in their language to keep their code bug free inevitably have more bugs in their code. It's amazing how many memory leaks I've found in Java, by programmers who swore such things were impossible. Another entertaining situation is people who manage to get around deadlock detection by creating live-locks.

I think it's clear to everyone who's actually looked at the situation that the problem here wasn't the language, it was the people who were using the language. They w

A quote from the "Insane Coding" blog, which in turn quotes from the book Cryptography Engineering:

The issues with higher level languages being used in cryptography are:
- Ensuring data is wiped clean, without the compiler optimizations or virtual machine ignoring what they deem to be pointless operations.
- The inability to use some high-level languages because they lack a way to tie in forceful cleanup of primitive data types, and their error handling mechanisms may end up leaving no way to wipe data, or data is duplicated without permission.
- Almost every single thing which may be the right way of doing things elsewhere is completely wrong where cryptography is concerned.

Yes. Also, the problem with rap is that it is in English. If they just wrote their misogynistic statements in a different language all would be well!

Seriously, please stop with the ridiculous claim that the language is the problem. The problem is that nobody is perfect, no process is perfect, and mistakes will always happen. They will happen far more often when the system is implemented by people who understand so little about software development that they thi

We cannot write complex bug-free software. PERIOD. OpenSSL is not Windows. Headlines about OpenSSL bugs are not a common occurrence. One bug happened at the wrong time, in the wrong place. This could have happened even if the world had opted for a proprietary library for this critical role. The only difference is that there would have been somebody to sue. Big consolation.

New theories come out of IT faculties around the world at regular intervals that promise, if strictly followed, the holy grail of bug-free software. All of them eventually prove ineffective.

The only concrete effect of all these tactics is that the job of the programmer becomes more tedious and less interesting. One thing I can tell you from direct experience is that the lower the programmer's level of interest, the higher the chance that bugs will slip into his or her code.

Actually, it's possible to remove all errors and imperfections, if you would be satisfied with being boring. That's one thing I got from Douglas Crockford's Programming Style and Your Brain. [youtube.com] Sometimes, especially for security-related software, "boring" is exactly what you want.

Unfortunately, SSL is anything but boring. It's barely standardized, and it's prone to getting new features. But just because the standard is exciting, doesn't mean the code has to be exciting. The OpenSSL developers may have received

Actually, it's possible to remove all errors and imperfections, if you would be satisfied with being boring.

No. Software for which you can guarantee that no errors exist is not only boring: it is useless.

To prevent the next Heartbleed, it's more productive to donate to LibreSSL.

You do not get my point. You may succeed in rendering it less probable. But you cannot prevent it.

I do get your point, and I disagree. Perhaps my point is not so clear, so I'll rephrase it: For a protocol as complicated as SSL, it's difficult to guarantee that a program is free of bugs, but it is possible to create a program free of exploits. With sufficient discipline [microsoft.com] in specific domains, it's also possible to create bug-free specifications. Computer programs are just math, and a lot of math can be proved. The key is to decompose programs into pieces that humans can reason about. That's what Crockford

The LLVM static analyzer finds this bug. So would warning about dead code, since the code past the point of the second goto...

Um, no. You're talking about the Apple "goto fail; goto fail;" vulnerability. That's a different vulnerability in a different program. They're both vulnerabilities in TLS/SSL implementations, but they are different programs.

I'm really glad you're trying to think of alternatives. However, when you say "1) Initialize all allocated memory, routinely and automatically" - they did. But the Heartbleed bug let you see currently-active memory. In particular, you have to have the private key available somewhere so you can use it.

Some of the weirdness was due to the spec itself (RFC 6520). I agree that error avoidance is better than parameter-checking, but it's not clear that parameter-checking could have been avoided in this

> Profiling w/ 100% code coverage would have caught this bug.

No, code coverage would not have worked in this case. Since the problem was that code was missing, you can run every line or branch without triggering the vulnerability. For more, see: http://www.dwheeler.com/essays... [dwheeler.com]

> Input fuzzing in the unit tests under memtest could have located this bug even faster.

No, not in this case. Fuzzers were countered because OpenSSL had its own set of memory allocators. When fuzzing you often are looking for