This document serves as a high-level summary of the optimization features that
LLVM provides. Optimizations are implemented as Passes that traverse some
portion of a program to either collect information or transform the program.
The table below divides the passes that LLVM provides into three categories.
Analysis passes compute information that other passes can use, or that is
useful for debugging or program visualization. Transform passes can use (or
invalidate) the analysis passes; they all mutate the program in some way.
Utility passes provide some utility but don't otherwise fit a category.
For example, passes to extract functions to bitcode or to write a module to
bitcode are neither analysis nor transform passes. The table of contents above
provides a quick summary of each pass and links to the more complete pass
description later in the document.

This is a simple N^2 alias analysis accuracy evaluator. For each function in
the program, it queries the alias analysis implementation with an alias query
for each pair of pointers in the function, and tallies how the implementation
answers.
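The evaluation strategy can be sketched in Python; `evaluate_alias_precision` and its toy oracle are hypothetical names invented for illustration, not part of LLVM:

```python
from itertools import combinations

def evaluate_alias_precision(pointers, alias_query):
    """Tally how a hypothetical alias_query(p, q) -> str oracle answers
    for every unordered pair of pointers in a function (the N^2 part)."""
    counts = {"NoAlias": 0, "MayAlias": 0, "MustAlias": 0}
    for p, q in combinations(pointers, 2):
        counts[alias_query(p, q)] += 1
    return counts

# Toy oracle: identical pointers must alias, everything else never does.
result = evaluate_alias_precision(
    ["a", "a", "b"],
    lambda p, q: "MustAlias" if p == q else "NoAlias",
)
```

A more precise alias analysis shifts counts from MayAlias toward NoAlias/MustAlias, which is what the evaluator is measuring.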

This is inspired and adapted from code by: Naveen Neelakantam, Francesco
Spadini, and Wojciech Stryjewski.

This pass, only available in opt, prints the control flow graph into a
.dot graph, omitting the function bodies. This graph can then be processed
with the dot tool to convert it to PostScript or some other suitable
format.

This pass, only available in opt, prints the dominator tree into a .dot
graph, omitting the function bodies. This graph can then be processed with the
dot tool to convert it to PostScript or some other suitable format.

This pass, only available in opt, prints the post dominator tree into a
.dot graph, omitting the function bodies. This graph can then be processed
with the dot tool to convert it to PostScript or some other suitable
format.

This simple pass provides alias and mod/ref information for global values that
do not have their address taken, and keeps track of whether functions read or
write memory (are “pure”). For this simple (but very common) case, we can
provide pretty accurate and useful information.

This pass statically checks for common and easily-identified constructs which
produce undefined or likely unintended behavior in LLVM IR.

This pass is not a guarantee of correctness, in two ways. First, it isn't
comprehensive: there are checks that could be done statically but are not yet
implemented. Some of these are indicated by TODO comments, though those aren't
comprehensive either. Second, many conditions cannot be checked statically;
this pass does no dynamic instrumentation, so it can't check for all possible
problems.

Another limitation is that it assumes all code will be executed. A store
through a null pointer in a basic block which is never reached is harmless, but
this pass will warn about it anyway.

Optimization passes may make conditions that this pass checks for more or less
obvious. If an optimization pass appears to be introducing a warning, it may
be that the optimization pass is merely exposing an existing condition in the
code.

This code may be run before instcombine. In many
cases, instcombine checks for the same kinds of things and turns instructions
with undefined behavior into unreachable (or equivalent). Because of this,
this pass makes some effort to look through bitcasts and so on.

This analysis is used to identify natural loops and determine the loop depth
of various nodes of the CFG. Note that the loops identified may actually be
several natural loops that share the same header node, not just a single
natural loop.

An analysis that determines, for a given memory operation, what preceding
memory operations it depends on. It builds on alias analysis information, and
tries to provide a lazy, caching interface to a common kind of alias
information query.

This pass, only available in opt, prints out call sites to external
functions that are called with constant arguments. This can be useful when
looking for standard library functions we should constant fold or handle in
alias analyses.

The RegionInfo pass detects single entry single exit regions in a function,
where a region is defined as any subgraph that is connected to the remaining
graph at only two spots. Furthermore, a hierarchical region tree is built.

The ScalarEvolution analysis can be used to analyze and categorize scalar
expressions in loops. It specializes in recognizing general induction
variables, representing them with the abstract and opaque SCEV class.
Given this analysis, trip counts of loops and other important properties can be
obtained.

This analysis is primarily useful for induction variable substitution and
strength reduction.

This pass promotes “by reference” arguments to be “by value” arguments. In
practice, this means looking for internal functions that have pointer
arguments. If it can prove, through the use of alias analysis, that an
argument is only loaded, then it can pass the value into the function instead
of the address of the value. This can cause recursive simplification of code
and lead to the elimination of allocas (especially in C++ template code like
the STL).

This pass also handles aggregate arguments that are passed into a function,
scalarizing them if the elements of the aggregate are only loaded. Note that
it refuses to scalarize aggregates which would require passing in more than
three operands to the function, because passing thousands of operands for a
large array or structure is unprofitable!

Note that this transformation could also be done for arguments that are only
stored to (returning the value instead), but that is not currently implemented.
This case would be best handled when and if LLVM starts supporting multiple
return values from functions.

This pass combines instructions inside basic blocks to form vector
instructions. It iterates over each basic block, attempting to pair compatible
instructions, repeating this process until no additional pairs are selected for
vectorization. When the outputs of some pair of compatible instructions are
used as inputs by some other pair of compatible instructions, those pairs are
part of a potential vectorization chain. Instruction pairs are only fused into
vector instructions when they are part of a chain longer than some threshold
length. Moreover, the pass attempts to find the best possible chain for each
pair of compatible instructions. These heuristics are intended to prevent
vectorization in cases where it would not improve the performance of the
resulting code.
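A minimal sketch of the chain-threshold heuristic, assuming a precomputed summary of which candidate pairs feed which; all names here are hypothetical, and real candidate selection is far more involved:

```python
def select_chains(pairs, feeds, min_len=3):
    """Greedily extend chains of candidate instruction pairs.
    `feeds[p]` lists the pairs consuming p's outputs (a hypothetical,
    acyclic use-def summary). Only chains of at least `min_len` pairs
    are kept for fusion into vector instructions."""
    selected = []
    for start in pairs:
        chain = [start]
        # Follow the use-def edges to grow the longest chain from `start`.
        while feeds.get(chain[-1]):
            chain.append(feeds[chain[-1]][0])
        if len(chain) >= min_len:
            selected.extend(chain)
    return selected
```

With the default threshold, a two-pair chain is rejected while a three-pair chain is accepted, mirroring the "chain longer than some threshold length" rule described above.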

This pass is a very simple profile guided basic block placement algorithm. The
idea is to put frequently executed blocks together at the start of the function
and hopefully increase the number of fall-through conditional branches. If
there is no profile information for a particular function, this pass basically
orders blocks in depth-first order.

Break all of the critical edges in the CFG by inserting a dummy basic block.
It may be “required” by passes that cannot deal with critical edges. This
transformation obviously invalidates the CFG, but can update forward dominator
(set, immediate dominators, tree, and frontier) information.

This pass munges the code in the input function to better prepare it for
SelectionDAG-based code generation. This works around limitations in its
basic-block-at-a-time approach. It should eventually be removed.

Merges duplicate global constants together into a single constant that is
shared. This is useful because some passes (e.g., TraceValues) insert a lot of
string constants into the program, regardless of whether or not an existing
string is available.

This pass deletes dead arguments from internal functions. Dead argument
elimination removes arguments which are directly dead, as well as arguments
only passed into function calls as dead arguments of other functions. This
pass also deletes dead return values in a similar way.

This pass is often useful as a cleanup pass to run after aggressive
interprocedural passes, which add possibly-dead arguments.

A simple interprocedural pass which walks the call-graph, looking for functions
which do not access or only read non-local memory, and marking them
readnone/readonly. In addition, it marks function arguments (of
pointer type) “nocapture” if a call to the function does not create any
copies of the pointer value that outlive the call. This more or less means
that the pointer is only dereferenced, and not returned from the function or
stored in a global. This pass is implemented as a bottom-up traversal of the
call-graph.

This transform is designed to eliminate unreachable internal globals from the
program. It uses an aggressive algorithm, searching out globals that are known
to be alive. After it finds all of the globals which are needed, it deletes
whatever is left over. This allows it to delete recursive chunks of the
program which are unreachable.

This transformation analyzes and transforms the induction variables (and
computations derived from them) into simpler forms suitable for subsequent
analysis and transformation.

This transformation makes the following changes to each loop with an
identifiable induction variable:

All loops are transformed to have a single canonical induction variable
which starts at zero and steps by one.

The canonical induction variable is guaranteed to be the first PHI node in
the loop header block.

Any pointer arithmetic recurrences are raised to use array subscripts.

If the trip count of a loop is computable, this pass also makes the following
changes:

The exit condition for the loop is canonicalized to compare the induction
value against the exit value. This turns loops like:

for (i = 7; i*i < 1000; ++i)

into

for (i = 0; i != 25; ++i)
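As a sanity check, the original and canonicalized loop forms execute the same number of iterations; a small Python sketch:

```python
def trip_count_original():
    # Original exit condition: i starts at 7, runs while i*i < 1000.
    i, n = 7, 0
    while i * i < 1000:
        i += 1
        n += 1
    return n

def trip_count_canonical():
    # Canonicalized: the induction variable starts at zero, steps by
    # one, and the exit compares against the precomputed trip count.
    i, n = 0, 0
    while i != 25:
        i += 1
        n += 1
    return n
```

Both loops run 25 times (i covers 7 through 31, since 31*31 = 961 < 1000 but 32*32 = 1024 is not), which is why the canonical exit value is 25.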

Any use outside of the loop of an expression derived from the indvar is
changed to compute the derived value outside of the loop, eliminating the
dependence on the exit value of the induction variable. If the only purpose
of the loop is to compute the exit value of some derived expression, this
transformation will make the loop dead.

This transformation should be followed by strength reduction after all of the
desired loop transformations have been performed. Additionally, on targets
where it is profitable, the loop could be transformed to count down to zero
(the “do loop” optimization).

Combine instructions to form fewer, simple instructions. This pass does not
modify the CFG. This pass is where algebraic simplification happens.

This pass combines things like:

%Y = add i32 %X, 1
%Z = add i32 %Y, 1

into:

%Z = add i32 %X, 2

This is a simple worklist driven algorithm.
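A toy worklist version of this combine, restricted to chains of integer adds with constant operands; the dictionary representation is invented for illustration and is nothing like LLVM's actual IR data structures:

```python
def combine_add_chains(instructions):
    """Worklist sketch: fold `x = add y, c1` feeding `z = add x, c2`
    into `z = add y, c1 + c2`. `instructions` maps a result name to a
    (base operand, constant) pair."""
    worklist = list(instructions)
    while worklist:
        name = worklist.pop()
        base, const = instructions[name]
        if base in instructions:            # the operand is itself an add
            inner_base, inner_const = instructions[base]
            instructions[name] = (inner_base, const + inner_const)
            worklist.append(name)           # revisit: it may fold further
    return instructions
```

Running it on the example above rewrites %Z from `add %Y, 1` to `add %X, 2` while leaving %Y untouched (dead-code elimination is a separate pass's job).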

This pass guarantees that the following canonicalizations are performed on the
program:

If a binary operator has a constant operand, it is moved to the right-hand
side.

Bitwise operators with constant operands are always grouped so that shifts
are performed first, then ors, then ands, then xors.

Compare instructions are converted from <, >, ≤, or ≥ to
= or ≠ if possible.

All cmp instructions on boolean values are replaced with logical
operations.

add X, X is represented as mul X, 2 ⇒ shl X, 1

Multiplies with a constant power-of-two argument are transformed into
shifts.

… etc.
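Two of the canonicalizations above, sketched in Python on a toy (op, lhs, rhs) tuple representation (hypothetical, not LLVM's API):

```python
def canonicalize(op, lhs, rhs):
    """Sketch two canonicalizations: move a constant operand of a
    commutative op to the right-hand side, and rewrite a multiply by
    a power of two as a left shift."""
    if op == "add" and isinstance(lhs, int):
        lhs, rhs = rhs, lhs                          # constant goes right
    if op == "mul" and isinstance(rhs, int) and rhs > 0 and rhs & (rhs - 1) == 0:
        return ("shl", lhs, rhs.bit_length() - 1)    # x * 2^k  ->  x << k
    return (op, lhs, rhs)
```

The shift rewrite is exact for any integer: for example, 5 * 8 == 5 << 3.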

This pass can also simplify calls to specific well-known functions (e.g.
runtime library functions). For example, a call to exit(3) that occurs within
the main() function can be transformed into simply return 3. Whether or
not library calls are simplified is controlled by the
-functionattrs pass and LLVM’s knowledge of
library calls on different targets.

This pass loops over all of the functions in the input module, looking for a
main function. If a main function is found, all other functions and all global
variables with initializers are marked as internal.

This pass implements an extremely simple interprocedural constant propagation
pass. It could certainly be improved in many different ways, like using a
worklist. This pass makes arguments dead, but does not remove them. The
existing dead argument elimination pass should be run after this to clean up
the mess.

Jump threading tries to find distinct threads of control flow running through a
basic block. This pass looks at blocks that have multiple predecessors and
multiple successors. If one or more of the predecessors of the block can be
proven to always cause a jump to one of the successors, we forward the edge
from the predecessor to the successor by duplicating the contents of this
block.

An example of when this can occur is code like this:

if () { ...
  X = 4;
}
if (X < 3) {

In this case, the unconditional branch at the end of the first if can be
revectored to the false side of the second if.

This is still valid LLVM; the extra phi nodes are purely redundant, and will be
trivially eliminated by InstCombine. The major benefit of this
transformation is that it makes many other loop optimizations, such as
LoopUnswitching, simpler.

This pass performs loop invariant code motion, attempting to remove as much
code from the body of a loop as possible. It does this by either hoisting code
into the preheader block, or by sinking code to the exit blocks if it is safe.
This pass also promotes must-aliased memory locations in the loop to live in
registers, thus hoisting and sinking “invariant” loads and stores.

This pass uses alias analysis for two purposes:

Moving loop invariant loads and calls out of loops. If we can determine
that a load or call inside of a loop never aliases anything stored to, we
can hoist it or sink it like any other instruction.

Scalar Promotion of Memory. If there is a store instruction inside of the
loop, we try to move the store to happen AFTER the loop instead of inside of
the loop. This can only happen if a few conditions are true:

The pointer stored through is loop invariant.

There are no stores or loads in the loop which may alias the pointer.
There are no calls in the loop which mod/ref the pointer.

If these conditions are true, we can promote the loads and stores in the
loop of the pointer to use a temporary alloca’d variable. We then use the
mem2reg functionality to construct the appropriate
SSA form for the variable.
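The effect of hoisting can be illustrated at the source level; the two Python functions below compute the same result, with the loop-invariant work moved to the "preheader" in the second:

```python
def sum_scaled(values, scale_num, scale_den):
    # Before LICM: the invariant division is recomputed on every iteration.
    total = 0
    for v in values:
        factor = scale_num / scale_den   # loop-invariant computation
        total += v * factor
    return total

def sum_scaled_hoisted(values, scale_num, scale_den):
    # After LICM: the invariant computation runs once, before the loop.
    factor = scale_num / scale_den
    total = 0
    for v in values:
        total += v * factor
    return total
```

LICM performs this rewrite on the IR only when alias analysis proves the hoisted value cannot change across iterations; here that is trivially true because the inputs are plain scalars.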

This file implements the Dead Loop Deletion Pass. This pass is responsible for
eliminating loops with non-infinite computable trip counts that have no side
effects or volatile instructions, and do not contribute to the computation of
the function’s return value.

A pass wrapper around the ExtractLoop() scalar transformation to extract
each top-level loop into its own new function. If the loop is the only loop
in a given function, it is not touched. This is a pass most useful for
debugging via bugpoint.

This pass performs a strength reduction on array references inside loops that
have as one or more of their components the loop induction variable. This is
accomplished by creating a new value to hold the initial value of the array
access for the first iteration, and then creating a new GEP instruction in the
loop to increment the value by the appropriate amount.

This pass performs several transformations to transform natural loops into a
simpler form, which makes subsequent analyses and transformations simpler and
more effective.

Loop pre-header insertion guarantees that there is a single, non-critical entry
edge from outside of the loop to the loop header. This simplifies a number of
analyses and transformations, such as LICM.

Loop exit-block insertion guarantees that all exit blocks from the loop (blocks
which are outside of the loop that have predecessors inside of the loop) only
have predecessors from inside of the loop (and are thus dominated by the loop
header). This simplifies transformations such as store-sinking that are built
into LICM.

This pass also guarantees that loops will have exactly one backedge.

Note that the simplifycfg pass will clean up blocks
which are split out but end up being unnecessary, so usage of this pass should
not pessimize generated code.

This pass obviously modifies the CFG, but updates loop information and
dominator information.

This pass lowers atomic intrinsics to non-atomic form for use in a known
non-preemptible environment.

The pass does not verify that the environment is non-preemptible (in general
this would require knowledge of the entire call graph of the program including
any libraries which may not be available in bitcode form); it simply lowers
every atomic intrinsic.

This transformation is designed for use by code generators which do not yet
support stack unwinding. This pass converts invoke instructions to
call instructions, so that any exception-handling landingpad blocks
become dead code (which can be removed by running the -simplifycfg pass
afterwards).

This file promotes memory references to be register references. It promotes
alloca instructions which only have loads and stores as uses. An alloca is
transformed by using dominance frontiers to place phi nodes, then traversing
the function in depth-first order to rewrite loads and stores as appropriate.
This is just the standard SSA construction algorithm to construct “pruned” SSA
form.

This file implements a simple interprocedural pass which walks the call-graph,
turning invoke instructions into call instructions if and only if the callee
cannot throw an exception. It implements this as a bottom-up traversal of the
call-graph.

This pass reassociates commutative expressions in an order that is designed to
promote better constant propagation, GCSE, LICM, PRE, etc.

For example: 4 + (x + 5) ⇒ x + (4 + 5)

In the implementation of this algorithm, constants are assigned rank = 0,
function arguments are rank = 1, and other values are assigned ranks
corresponding to the reverse post order traversal of current function (starting
at 2), which effectively gives values in deep loops higher rank than values not
in loops.
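A minimal sketch of rank-driven reassociation over the operands of a commutative add, with integers standing in for constants (rank 0) and strings for other values whose rank is supplied:

```python
def reassociate(operands, rank):
    """Order the operands of a commutative add by descending rank, so
    low-rank values (constants) group together, then fold the constants
    into a single one. Purely illustrative representation."""
    def r(v):
        return 0 if isinstance(v, int) else rank[v]
    ordered = sorted(operands, key=r, reverse=True)
    consts = sum(v for v in ordered if isinstance(v, int))
    non_consts = [v for v in ordered if isinstance(v, str)]
    return non_consts + [consts]
```

On the example above, the operands of 4 + (x + 5) become x + 9, exposing the folded constant to later passes such as GCSE and LICM.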

This file demotes all registers to memory references. It is intended to be the
inverse of mem2reg. By converting to load
instructions, the only values live across basic blocks are alloca
instructions and load instructions before phi nodes. It is intended
that this should make CFG hacking much easier. To make later hacking easier,
the entry block is split into two, such that all introduced alloca
instructions (and nothing else) are in the entry block.

The well-known scalar replacement of aggregates transformation. This transform
breaks up alloca instructions of aggregate type (structure or array) into
individual alloca instructions for each member if possible. Then, if
possible, it transforms the individual alloca instructions into nice clean
scalar SSA form.

Note that this transformation makes code much less readable, so it should only
be used in situations where the strip utility would be used, such as reducing
code size or making it harder to reverse engineer code.


This pass loops over all of the functions in the input module, looking for dead
declarations and removes them. Dead declarations are declarations of functions
for which no implementation is available (i.e., declarations for unused library
functions).


This file transforms calls of the current function (self recursion) followed by
a return instruction with a branch to the entry of the function, creating a
loop. This pass also implements the following extensions to the basic
algorithm:

Trivial instructions between the call and return do not prevent the
transformation from taking place, though currently the analysis cannot
support moving any really useful instructions (only dead ones).

This pass transforms functions that are prevented from being tail recursive
by an associative expression to use an accumulator variable, thus compiling
the typical naive factorial or fib implementation into efficient code.

TRE is performed if the function returns void, if the return returns the
result returned by the call, or if the function returns a run-time constant
on all exits from the function. It is possible, though unlikely, that the
return returns something else (like constant 0), and can still be TRE’d. It
can be TRE’d if all other return instructions in the function return the
exact same value.
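The accumulator transformation can be illustrated in Python; both definitions below compute the same factorial, but only the second has its recursive call in tail position, which is what lets TRE turn it into a loop:

```python
def fact_naive(n):
    # Not tail recursive: the multiply happens AFTER the recursive
    # call returns, so the call is not in tail position.
    if n <= 1:
        return 1
    return n * fact_naive(n - 1)

def fact_accumulated(n, acc=1):
    # After the accumulator transformation the pending multiply is
    # carried in `acc`, so the recursive call is the last thing done.
    if n <= 1:
        return acc
    return fact_accumulated(n - 1, acc * n)
```

(Python itself does not eliminate tail calls, so this only demonstrates the shape of the rewrite, not the resulting constant stack usage.)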

If it can prove that callees do not access their caller's stack frame, they
are marked as eligible for tail call elimination (by the code generator).

This is a little utility pass that gives instructions names; this is mostly
useful when diffing the effect of an optimization, because deleting an unnamed
instruction can change all other instruction numbering, making the diff very
noisy.

Verifies LLVM IR code. This is useful to run after an optimization which is
undergoing testing. Note that llvm-as verifies its input before emitting
bitcode, and also that malformed bitcode is likely to make LLVM crash. All
language front-ends are therefore encouraged to verify their output before
performing optimizing transformations.

Both of a binary operator’s parameters are of the same type.

Verify that the indices of mem access instructions match other operands.

Verify that arithmetic and other operations are only performed on first-class
types. Verify, for example, that shifts and logical operations only happen on
integral types.

All of the constants in a switch statement are of the correct type.

The code is in valid SSA form.

It is illegal to put a label into any other type (like a structure) or to
return one.

Only phi nodes can be self referential: %x = add i32 %x, %x is invalid.

PHI nodes must have an entry for each predecessor, with no extras.

PHI nodes must be the first thing in a basic block, all grouped together.

PHI nodes must have at least one entry.

All basic blocks should only end with terminator insts, not contain them.

The entry node to a function must not have predecessors.

All Instructions must be embedded into a basic block.

Functions cannot take a void-typed parameter.

Verify that a function’s argument list agrees with its declared type.

It is illegal to specify a name for a void value.

It is illegal to have an internal global value with no initializer.

It is illegal to have a ret instruction that returns a value that does
not agree with the function return value type.

Function call argument types match the function prototype.

All other things that are tested by asserts spread about the code.
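Two of the structural checks above can be sketched over a toy block representation (a list of instruction strings whose first word is the opcode); this is purely illustrative, not the verifier's real implementation:

```python
def verify_block(block):
    """Check that PHI nodes lead the block and are grouped together,
    and that the single terminator instruction comes last."""
    errors = []
    if not block:
        return ["block must not be empty"]
    ops = [inst.split()[0] for inst in block]
    # PHI nodes must all appear before the first non-PHI instruction.
    first_non_phi = next((k for k, op in enumerate(ops) if op != "phi"), len(ops))
    if "phi" in ops[first_non_phi:]:
        errors.append("PHI nodes must be grouped at the top of the block")
    # Terminators end the block and must not appear anywhere else.
    terminators = {"ret", "br", "switch"}
    if any(op in terminators for op in ops[:-1]) or ops[-1] not in terminators:
        errors.append("block must end with exactly one terminator")
    return errors
```

A well-formed block such as ["phi x", "add y", "ret z"] passes cleanly, while a PHI appearing after a non-PHI instruction is flagged.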

Note that this does not provide full security verification (like Java), but
instead just tries to ensure that code is well-formed.