LibFuzzer is linked with the library under test, and feeds fuzzed inputs to the
library via a specific fuzzing entrypoint (aka “target function”); the fuzzer
then tracks which areas of the code are reached, and generates mutations on the
corpus of input data in order to maximize the code coverage.
The code coverage
information for libFuzzer is provided by LLVM’s SanitizerCoverage
instrumentation.


The libFuzzer code resides in the LLVM repository, and requires a recent Clang
compiler to build (and is used to fuzz various parts of LLVM itself).
However the fuzzer itself does not (and should not) depend on any part of LLVM
infrastructure and can be used for other projects without requiring the rest
of LLVM.

The first step in using libFuzzer on a library is to implement a
fuzz target – a function that accepts an array of bytes and
does something interesting with these bytes using the API under test.
Like this:
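The following is a minimal sketch of a fuzz target; `DoSomethingInterestingWithMyAPI` is a hypothetical stand-in for calls into the library under test:

```cpp
#include <stddef.h>
#include <stdint.h>

// Hypothetical stand-in for the API under test; replace with real library calls.
static void DoSomethingInterestingWithMyAPI(const uint8_t *Data, size_t Size) {
  if (Size > 0 && Data[0] == 'H') {
    // Exercise some code path in the library.
  }
}

// The fuzz target: libFuzzer calls this repeatedly with mutated inputs.
extern "C" int LLVMFuzzerTestOneInput(const uint8_t *Data, size_t Size) {
  DoSomethingInterestingWithMyAPI(Data, Size);
  return 0;  // Values other than 0 are reserved for future use.
}
```

Note that the target must not modify the input data and should return 0 on every input it accepts.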

Then build the fuzzing target function and the library under test using
the SanitizerCoverage option, which instruments the code so that the fuzzer
can retrieve code coverage information (to guide the fuzzing). Linking with
the libFuzzer code then gives a fuzzer executable.
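As a sketch (the file names fuzz_target.cc, my_library.a, and my_fuzzer are placeholders, and the exact SanitizerCoverage flag depends on your Clang version):

```shell
# Build libFuzzer itself (one-time step; assumes a checkout of the Fuzzer
# sources from the LLVM repository).
clang++ -c -g -O2 -std=c++11 Fuzzer/*.cpp -IFuzzer
ar ruv libFuzzer.a Fuzzer*.o

# Build the fuzz target and the library under test with ASAN and coverage
# instrumentation, then link in libFuzzer to get a fuzzer executable.
clang++ -g -fsanitize=address -fsanitize-coverage=trace-pc-guard \
    fuzz_target.cc my_library.a libFuzzer.a -o my_fuzzer
```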

You should also enable one or more of the sanitizers, which help to expose
latent bugs by making incorrect behavior generate errors at runtime:

AddressSanitizer (ASAN) detects memory access errors such as buffer overflows
and use-after-free. Use -fsanitize=address.

UndefinedBehaviorSanitizer (UBSAN) detects the use of various features of C/C++ that are explicitly
listed as resulting in undefined behavior. Use -fsanitize=undefined -fno-sanitize-recover=undefined,
or any individual UBSAN check, e.g. -fsanitize=signed-integer-overflow -fno-sanitize-recover=undefined.
You may combine ASAN and UBSAN in one build.

MemorySanitizer (MSAN) detects uninitialized reads: code whose behavior relies on memory
contents that have not been initialized to a specific value. Use -fsanitize=memory.
MSAN cannot be combined with other sanitizers and should be used in a separate build.

Coverage-guided fuzzers like libFuzzer rely on a corpus of sample inputs for the
code under test. This corpus should ideally be seeded with a varied collection
of valid and invalid inputs for the code under test; for example, for a graphics
library the initial corpus might hold a variety of different small PNG/JPG/GIF
files. The fuzzer generates random mutations based around the sample inputs in
the current corpus. If a mutation triggers execution of a previously-uncovered
path in the code under test, then that mutation is saved to the corpus for
future variations.

LibFuzzer will work without any initial seeds, but will be less
efficient if the library under test accepts complex,
structured inputs.

The corpus can also act as a sanity/regression check, to confirm that the
fuzzing entrypoint still works and that all of the sample inputs run through
the code under test without problems.

If you have a large corpus (either generated by fuzzing or acquired by other means)
you may want to minimize it while still preserving the full coverage. One way to do that
is to use the -merge=1 flag:
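For example (assuming a fuzzer binary ./my_fuzzer and a large existing corpus in FULL_CORPUS_DIR):

```shell
mkdir NEW_CORPUS_DIR  # Will hold the minimized corpus.
./my_fuzzer -merge=1 NEW_CORPUS_DIR FULL_CORPUS_DIR
```

Only the inputs from FULL_CORPUS_DIR that trigger new coverage are copied into NEW_CORPUS_DIR.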

To run the fuzzer, first create a Corpus directory that holds the
initial “seed” sample inputs:

mkdir CORPUS_DIR
cp /some/input/samples/* CORPUS_DIR

Then run the fuzzer on the corpus directory:

./my_fuzzer CORPUS_DIR # -max_len=1000 -jobs=20 ...

As the fuzzer discovers new interesting test cases (i.e. test cases that
trigger coverage of new paths through the code under test), those test cases
will be added to the corpus directory.

By default, the fuzzing process will continue indefinitely – at least until
a bug is found. Any crashes or sanitizer failures will be reported as usual,
stopping the fuzzing process, and the particular input that triggered the bug
will be written to disk (typically as crash-<sha1>, leak-<sha1>,
or timeout-<sha1>).

Each libFuzzer process is single-threaded, unless the library under test starts
its own threads. However, it is possible to run multiple libFuzzer processes in
parallel with a shared corpus directory; this has the advantage that any new
inputs found by one fuzzer process will be available to the other fuzzer
processes (unless you disable this with the -reload=0 option).

This is primarily controlled by the -jobs=N option, which indicates that
N fuzzing jobs should be run to completion (i.e. until a bug is found or
time/iteration limits are reached). These jobs will be run across a set of
worker processes, by default using half of the available CPU cores; the count of
worker processes can be overridden by the -workers=N option. For example,
running with -jobs=30 on a 12-core machine would run 6 workers by default,
with each worker averaging 5 bugs by completion of the entire process.

To run the fuzzer, pass zero or more corpus directories as command line
arguments. The fuzzer will read test inputs from each of these corpus
directories, and any new test inputs that are generated will be written
back to the first corpus directory:

./fuzzer [-flag1=val1 [-flag2=val2 ...] ] [dir1 [dir2 ...] ]

If a list of files (rather than directories) is passed to the fuzzer program,
then it will re-run those files as test inputs but will not perform any fuzzing.
In this mode the fuzzer binary can be used as a regression test (e.g. on a
continuous integration system) to check the target function and saved inputs
still work.
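For example (assuming the fuzzer binary and corpus directory from above):

```shell
# Each argument is a file, so the target is run once per input and no
# fuzzing is performed.
./my_fuzzer CORPUS_DIR/*
```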

The most important command line options are:

-help

Print help message.

-seed

Random seed. If 0 (the default), the seed is generated.

-runs

Number of individual test runs, -1 (the default) to run indefinitely.

-max_len

Maximum length of a test input. If 0 (the default), libFuzzer tries to guess
a good value based on the corpus (and reports it).

-timeout

Timeout in seconds, default 1200. If an input takes longer than this timeout,
the process is treated as a failure case.

-rss_limit_mb

Memory usage limit in Mb, default 2048. Use 0 to disable the limit.
If an input requires more than this amount of RSS memory to execute,
the process is treated as a failure case.
The limit is checked in a separate thread every second.
If running w/o ASAN/MSAN, you may use ‘ulimit -v’ instead.

-max_total_time

If positive, indicates the maximum total time in seconds to run the fuzzer.
If 0 (the default), run indefinitely.

-merge

If set to 1, any corpus inputs from the 2nd, 3rd etc. corpus directories
that trigger new code coverage will be merged into the first corpus
directory. Defaults to 0. This flag can be used to minimize a corpus.

-minimize_crash

If 1, minimizes the provided crash input.
Use with -runs=N or -max_total_time=N to limit the number of attempts.

-reload

If set to 1 (the default), the corpus directory is re-read periodically to
check for new inputs; this allows detection of new inputs that were discovered
by other fuzzing processes.

-jobs

Number of fuzzing jobs to run to completion. Default value is 0, which runs a
single fuzzing process until completion. If the value is >= 1, then this
number of jobs performing fuzzing are run, in a collection of parallel
separate worker processes; each such worker process has its
stdout/stderr redirected to fuzz-<JOB>.log.

-workers

Number of simultaneous worker processes to run the fuzzing jobs to completion
in. If 0 (the default), min(jobs,NumberOfCpuCores()/2) is used.

-only_ascii

If 1, generate only ASCII (isprint+isspace) inputs. Defaults to 0.

-artifact_prefix

Provide a prefix to use when saving fuzzing artifacts (crash, timeout, or
slow inputs) as $(artifact_prefix)file. Defaults to empty.

-exact_artifact_path

Ignored if empty (the default). If non-empty, write the single artifact on
failure (crash, timeout) as $(exact_artifact_path). This overrides
-artifact_prefix and will not use checksum in the file name. Do not use
the same path for several parallel processes.

-print_pcs

If 1, print out newly covered PCs. Defaults to 0.

-print_final_stats

If 1, print statistics at exit. Defaults to 0.

-detect_leaks

If 1 (the default) and LeakSanitizer is enabled, try to detect memory leaks
during fuzzing (i.e. not only at shutdown).

-close_fd_mask

Indicate output streams to close at startup. Be careful, this will
remove diagnostic output from target code (e.g. messages on assert failure).

LibFuzzer supports user-supplied dictionaries with input language keywords
or other interesting byte sequences (e.g. multi-byte magic values).
Use -dict=DICTIONARY_FILE. For some input languages using a dictionary
may significantly improve the search speed.
The dictionary syntax is similar to that used by AFL for its -x option:

# Lines starting with '#' and empty lines are ignored.

# Adds "blah" (w/o quotes) to the dictionary.
kw1="blah"
# Use \\ for backslash and \" for quotes.
kw2="\"ac\\dc\""
# Use \xAB for hex values
kw3="\xF7\xF8"
# the name of the keyword followed by '=' may be omitted:
"foo\x0Abar"

With an additional compiler flag -fsanitize-coverage=trace-cmp
(see SanitizerCoverageTraceDataFlow)
libFuzzer will intercept CMP instructions and guide mutations based
on the arguments of intercepted CMP instructions. This may slow down
the fuzzing but is very likely to improve the results.

EXPERIMENTAL.
With -fsanitize-coverage=trace-cmp
and extra run-time flag -use_value_profile=1 the fuzzer will
collect value profiles for the parameters of compare instructions
and treat some new values as new coverage.

The current implementation does roughly the following:

The compiler instruments all CMP instructions with a callback that receives both CMP arguments.

The callback computes (caller_pc&4095) | (popcnt(Arg1 ^ Arg2) << 12) and uses this value to set a bit in a bitset.

Every new observed bit in the bitset is treated as new coverage.
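The steps above can be sketched as follows; this is a hypothetical model of the computation, not libFuzzer's actual source:

```cpp
#include <bitset>
#include <cstdint>

// Bitset of observed features; 1 << 19 bits comfortably covers the maximum
// feature value of (4095 | (64 << 12)).
static std::bitset<1 << 19> Bits;

// Model of the CMP callback's computation: mix the low 12 bits of the caller
// PC with the popcount of Arg1 ^ Arg2 (the number of differing bits).
static uint32_t CmpFeature(uintptr_t CallerPC, uint64_t Arg1, uint64_t Arg2) {
  return (CallerPC & 4095) | (__builtin_popcountll(Arg1 ^ Arg2) << 12);
}

// Returns true when the feature sets a previously-unseen bit, i.e. counts
// as new coverage.
static bool ObserveCmp(uintptr_t CallerPC, uint64_t Arg1, uint64_t Arg2) {
  uint32_t Feature = CmpFeature(CallerPC, Arg1, Arg2);
  bool IsNew = !Bits.test(Feature);
  Bits.set(Feature);
  return IsNew;
}
```

Because the popcount of Arg1 ^ Arg2 shrinks as the arguments get closer, mutations that make a comparison "nearly succeed" set new bits and are kept in the corpus.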

This feature has a potential to discover many interesting inputs,
but there are two downsides.
First, the extra instrumentation may bring up to 2x additional slowdown.
Second, the corpus may grow by several times.

The target code uses a PRNG seeded e.g. by system time, and thus two
consecutive invocations may execute different code paths even if the end
result is the same. This will cause the fuzzer to treat two similar inputs
as significantly different, and the test corpus will blow up.
E.g. libxml uses rand() inside its hash table.

In many cases it makes sense to build a special fuzzing-friendly build
with certain fuzzing-unfriendly features disabled. We propose to use a common build macro
for all such cases for consistency: FUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION.

void MyInitPRNG() {
#ifdef FUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION
  // In fuzzing mode the behavior of the code should be deterministic.
  srand(0);
#else
  srand(time(0));
#endif
}

LibFuzzer can be used together with AFL on the same test corpus.
Both fuzzers expect the test corpus to reside in a directory, one file per input.
You can run both fuzzers on the same corpus, one after another:
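For example (directory and binary names are illustrative):

```shell
./afl-fuzz -i testcase_dir -o findings_dir /path/to/program @@
./llvm-fuzz testcase_dir findings_dir  # Will write new tests to testcase_dir
```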

Once you have implemented your target function LLVMFuzzerTestOneInput and fuzzed it to death,
you will want to know whether the function or the corpus can be improved further.
One easy-to-use metric is, of course, code coverage.
You can get the coverage for your corpus like this:

ASAN_OPTIONS=coverage=1 ./fuzzer CORPUS_DIR -runs=0

This will run all tests in the CORPUS_DIR but will not perform any fuzzing.
At the end of the process it will dump a single .sancov file with coverage
information. See SanitizerCoverage for details on querying the file using the
sancov tool.

You may also use other ways to visualize coverage,
e.g. using Clang coverage,
but those will require
you to rebuild the code with different compiler flags.

Binaries built with AddressSanitizer or LeakSanitizer will try to detect
memory leaks at the process shutdown.
For in-process fuzzing this is inconvenient
since the fuzzer needs to report a leak with a reproducer as soon as the leaky
mutation is found. However, running full leak detection after every mutation
is expensive.

By default (-detect_leaks=1) libFuzzer will count the number of
malloc and free calls when executing every mutation.
If the numbers don’t match (which by itself doesn’t mean there is a leak)
libFuzzer will invoke the more expensive LeakSanitizer
pass and if the actual leak is found, it will be reported with the reproducer
and the process will exit.

If your target has massive leaks and the leak detection is disabled
you will eventually run out of RAM (see the -rss_limit_mb flag).

This tool fuzzes the MC layer. Currently it is only able to fuzz the
disassembler, but it is hoped that assembly and round-trip verification will be
added in the future.

When run in disassembly mode, the inputs are opcodes to be disassembled. The
fuzzer will consume as many instructions as possible and will stop when it
finds an invalid instruction or runs out of data.

Please note that the command line interface differs slightly from that of other
fuzzers. The fuzzer arguments should follow --fuzzer-args and should have
a single dash, while other arguments control the operation mode and target in a
similar manner to llvm-mc and should have two dashes. For example:
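For example, to fuzz the AArch64 disassembler with inputs of at most four bytes (the exact flag spellings may vary between versions; see llvm-mc-fuzzer --help):

```shell
llvm-mc-fuzzer --triple=aarch64-linux-gnu --disassemble --fuzzer-args -max_len=4
```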

First, we want this library to be used outside of LLVM without users having to
build the rest of LLVM. This may sound unconvincing for many LLVM folks,
but in practice the need for building the whole LLVM frightens many potential
users – and we want more users to use this code.

Second, there is a subtle technical reason not to rely on the rest of LLVM, or
any other large body of code (maybe not even STL). When coverage instrumentation
is enabled, it will also instrument the LLVM support code which will blow up the
coverage set of the process (since the fuzzer is in-process). In other words, by
using more external dependencies we will slow down the fuzzer while the main
reason for it to exist is extreme speed.

Q. What about Windows then? The fuzzer contains code that does not build on Windows.

If the test inputs are validated by the target library and the validator
asserts/crashes on invalid inputs, in-process fuzzing is not applicable.

Bugs in the target library may accumulate without being detected. E.g. a memory
corruption that goes undetected at first and then leads to a crash while
testing another input. This is why it is highly recommended to run this
in-process fuzzer with all sanitizers to detect most bugs on the spot.

It is harder to protect the in-process fuzzer from excessive memory
consumption and infinite loops in the target library (still possible).

The target library should not have significant global state that is not
reset between the runs.

Many interesting target libraries are not designed in a way that supports
the in-process fuzzer interface (e.g. require a file path instead of a
byte array).

If a single test run takes a considerable fraction of a second (or
more) the speed benefit from the in-process fuzzer is negligible.

If the target library runs persistent threads (that outlive
execution of one test) the fuzzing results will be unreliable.

This fuzzer might be a good choice for testing libraries that have relatively
small inputs, each input takes < 10ms to run, and the library code is not expected
to crash on invalid inputs.
Examples: regular expression matchers, text or binary format parsers, compression,
network, crypto.