In essence, I created a text file containing just "hello" and asked the fuzzer to keep feeding it to a program that expects a JPEG image (djpeg is a simple utility bundled with the ubiquitous IJG jpeg image library; libjpeg-turbo should also work). Of course, my input file does not resemble a valid picture, so it gets immediately rejected by the utility:
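The whole setup takes two commands (paths are illustrative; djpeg reads the image from standard input, so no `@@` file-argument placeholder is needed):

```shell
mkdir -p in_dir
echo 'hello' > in_dir/hello

# Sanity check - djpeg rejects the seed outright with:
#   Not a JPEG file: starts with 0x68 0x65
# ./djpeg < in_dir/hello

# Kick off the fuzzer against the instrumented binary:
# ./afl-fuzz -i in_dir -o out_dir ./djpeg
```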

Such a fuzzing run would normally be completely pointless: there is essentially no chance that a "hello" could ever be turned into a valid JPEG by a traditional, format-agnostic fuzzer, since the probability that dozens of random tweaks would align just right is astronomically low.

Luckily, afl-fuzz can leverage lightweight assembly-level instrumentation to its advantage - and within a millisecond or so, it notices that although setting the first byte to 0xff does not change the externally observable output, it triggers a slightly different internal code path in the tested app. Equipped with this information, it decides to use that test case as a seed for future fuzzing rounds.

At this point, the fuzzer has managed to synthesize a valid file header - and actually realized its significance. Using this output as the seed for the next round of fuzzing, it quickly starts getting deeper and deeper into the woods. Within several hundred generations and several hundred million execve() calls, it figures out more and more of the essential control structures that make up a valid JPEG file - SOFs, Huffman tables, quantization tables, SOS markers, and so on.

The first image, hit after about six hours on an 8-core system, looks very unassuming: it's a blank grayscale image, 3 pixels wide and 784 pixels tall. But the moment it is discovered, the fuzzer starts using the image as a seed - rapidly producing a wide array of more interesting pics for every new execution path.

Of course, synthesizing a complete image out of thin air is an extreme example, and not necessarily a very practical one. But more prosaically, fuzzers are meant to stress-test every feature of the targeted program. With instrumented, generational fuzzing, lesser-known features (e.g., progressive, black-and-white, or arithmetic-coded JPEGs) can be discovered and locked onto without requiring a giant, high-quality corpus of diverse test cases to seed the fuzzer with.

The cool part of the libjpeg demo is that it works without any special preparation: there is nothing special about the "hello" string, the fuzzer knows nothing about image parsing, and is not designed or fine-tuned to work with this particular library. There aren't even any command-line knobs to turn. You can throw afl-fuzz at many other types of parsers with similar results: with bash, it will write valid scripts; with giflib, it will make GIFs; with the file utility, it will create inputs that get flagged as ELF files, Atari 68xxx executables, x86 boot sectors, and UTF-8 with BOM. In almost all cases, the performance impact of instrumentation is minimal, too.

Of course, not all is roses; at its core, afl-fuzz is still a brute-force tool. This makes it simple, fast, and robust, but also means that certain types of atomically executed checks with a large search space may pose an insurmountable obstacle to the fuzzer; a good example of this may be:

In practical terms, this means that afl-fuzz won't have as much luck "inventing" PNG files or non-trivial HTML documents from scratch - and will need a starting point better than just "hello". To consistently deal with code constructs similar to the one shown above, a general-purpose fuzzer would need to understand the operation of the targeted binary on a wholly different level. There is some progress on this in academia, but frameworks that can pull this off across diverse and complex codebases in a quick, easy, and reliable way are probably still years away.

PS. Several folks asked me about symbolic execution and other inspirations for afl-fuzz; I put together some notes in this doc.

15 comments:

Interesting post! I had not heard of afl until today; it seems pretty interesting. I released Binspector as open source a month or so back, and one of its features is the ability to analyze a known-good file for weak points that might be fuzzed/exploited. If there's something of value here, please get in touch: http://binspector.github.io/blog/2014/10/13/a-hairbrained-approach-to-security-testing/

Does the starting point "matter"? That is, could I expect similar results using the same string and/or should I expect different results using a different string, or is it likely the first valid jpg would always be like your first one?

In this experiment, the starting point doesn't matter in any fundamental sense, since the final output contains no remaining bits from the original file. It helps a bit that the file is small.

In normal fuzzing, you would want to start with initial test cases that actually make sense to the tested library, just don't necessarily stress-test every possible code path. When doing that, the choice of starting files matters more; there are some tips on that in the README.

This is fascinating stuff, the sort I've wondered about from time to time, but never quite knew where to start with.

Thanks for writing it up with such an accessible example.

I couldn't wait to recreate the experiment, so I went ahead and wrote up a Dockerfile to build and run it. It's all available on GitHub and Docker Hub as an automated build in case anyone else has the same idea.

I'm not really a big fan of fuzzing despite how effective it can be at triggering bugs in huge codebases... But the results of this experiment - generating valid JPEGs seeded from completely invalid garbage - are super awesome.

Also, the tool's lack of impractical complexities such as symbolic execution, its minimal configuration, and its performance give it just the right amount of elegance. Mad props. :)

Think the same type of instrumentation can be applied to a closed-source library, perhaps? Maybe after rewriting some export addresses, updating some relocations, promoting short branches every once in a while in order to move some code blocks around? Although... I suppose if one is okay with throwing perf out the window, you can probably duplicate the instrumentation you're performing with any old debugging API, right? Or maybe one could even stash the branch via IA32_DEBUGCTL_MSR's BTF (single-step-on-branch) or something...

Well, the simplest approach is just to use something like DynamoRIO, since if you do that, you don't really have to do any disassembly or assembly rewriting yourself. But yeah, there's plenty of ways to do it, it shouldn't be very difficult for non-obfuscated binaries.

Very nice post. Could you clarify how you derived which inputs were valid images? Did you just re-run everything that afl-fuzz put in the queue/ folder through djpeg? Is there a simple way to tell afl-fuzz to stop as soon as it has found an input on which the instrumented binary exits with 0 instead of 1?

In other words, can I use afl-fuzz as a tool to find invariant violations in a program? Essentially I'd add a "return (invariant_holds() ? 1 : 0)" at the end of my program, and I'd tell afl_fuzz to stop as soon as it has found a test case in which the return value is 0 (in which case I'd get an input where the invariant doesn't hold).

For your other question - the simplest way to find failed assertions with afl-fuzz is just to use assert() or abort(); this way, the program will effectively crash when an assertion is violated, and AFL will de-dupe and flag that for your review.

Why can't the fuzzer get the long string in strcmp? If strcmp is implemented byte by byte, the fuzzer's binary instrumentation should be able to see that a different code path is taken if the first character is correct, then the second, and so on, until it gets the whole string.

That's basically the answer: you need to special-case and reimplement such builtins. This is doable, but actually doesn't yield clear-cut benefits in typical fuzzing jobs, since the vast majority of strcmp() calls are not relevant to anything of note; and plenty of other "hard" comparisons are happening without using these builtins.

Instead of doing this, AFL takes a somewhat different approach:
https://lcamtuf.blogspot.com/2015/01/afl-fuzz-making-up-grammar-with.html
https://lcamtuf.blogspot.com/2015/04/finding-bugs-in-sqlite-easy-way.html