The LLVM target-independent code generator is a framework that provides a
suite of reusable components for translating the LLVM internal representation to
the machine code for a specified target—either in assembly form (suitable
for a static compiler) or in binary machine code format (usable for a JIT
compiler). The LLVM target-independent code generator consists of five main
components:

Abstract target description interfaces which
capture important properties about various aspects of the machine, independently
of how they will be used. These interfaces are defined in
include/llvm/Target/.

Classes used to represent the machine code being
generated for a target. These classes are intended to be abstract enough to
represent the machine code for any target machine. These classes are
defined in include/llvm/CodeGen/.
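
Target-independent algorithms used to implement various phases of native
code generation (register allocation, scheduling, stack frame representation,
etc.). This code lives in lib/CodeGen/.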

Implementations of the abstract target description
interfaces for particular targets. These machine descriptions make use of
the components provided by LLVM, and can optionally provide custom
target-specific passes, to build complete code generators for a specific target.
Target descriptions live in lib/Target/.

The target-independent JIT components. The LLVM JIT is
completely target-independent (it uses the TargetJITInfo structure to
interface with target-specific code). The code for the target-independent
JIT lives in lib/ExecutionEngine/JIT.

The two pieces of the LLVM code generator are the high-level interface to the
code generator and the set of reusable components that can be used to build
target-specific backends. The two most important interfaces (TargetMachine and TargetData) are the only ones that are
required to be defined for a backend to fit into the LLVM system, but the others
must be defined if the reusable code generator components are going to be
used.

This design has two important implications. The first is that LLVM can
support completely non-traditional code generation targets. For example, the C
backend does not require register allocation, instruction selection, or any of
the other standard components provided by the system. As such, it only
implements these two interfaces, and does its own thing. Another example of a
code generator like this is a (purely hypothetical) backend that converts LLVM
to the GCC RTL form and uses GCC to emit machine code for a target.

This design also implies that it is possible to design and
implement radically different code generators in the LLVM system that do not
make use of any of the built-in components. Doing so is not recommended at all,
but could be required for radically different targets that do not fit into the
LLVM machine description model: FPGAs for example.

The LLVM target-independent code generator is designed to support efficient and
quality code generation for standard register-based microprocessors. Code
generation in this model is divided into the following stages:

Instruction Selection - This phase
determines an efficient way to express the input LLVM code in the target
instruction set.
This stage produces the initial code for the program in the target instruction
set, using virtual registers in SSA form and physical registers that
represent any required register assignments due to target constraints or calling
conventions. This step turns the LLVM code into a DAG of target
instructions.

SSA-based Machine Code Optimizations - This
optional stage consists of a series of machine-code optimizations that
operate on the SSA-form produced by the instruction selector. Optimizations
like modulo-scheduling or peephole optimization work here.

Register Allocation - The
target code is transformed from an infinite virtual register file in SSA form
to the concrete register file used by the target. This phase introduces spill
code and eliminates all virtual register references from the program.

Prolog/Epilog Code Insertion - Once the
machine code has been generated for the function and the amount of stack space
required is known (used for LLVM alloca's and spill slots), the prolog and
epilog code for the function can be inserted and "abstract stack location
references" can be eliminated. This stage is responsible for implementing
optimizations like frame-pointer elimination and stack packing.

Late Machine Code Optimizations - Optimizations
that operate on "final" machine code can go here, such as spill code scheduling
and peephole optimizations.

Code Emission - The final stage actually
puts out the code for the current function, either in the target assembler
format or in machine code.

The code generator is based on the assumption that the instruction selector
will use an optimal pattern matching selector to create high-quality sequences of
native instructions. Alternative code generator designs based on pattern
expansion and aggressive iterative peephole optimization are much slower. This
design permits efficient compilation (important for JIT environments) and
aggressive optimization (used when generating code offline) by allowing
components of varying levels of sophistication to be used for any step of
compilation.

In addition to these stages, target implementations can insert arbitrary
target-specific passes into the flow. For example, the X86 target uses a
special pass to handle the 80x87 floating point stack architecture. Other
targets with unusual requirements can be supported with custom passes as
needed.

The target description classes require a detailed description of the target
architecture. These target descriptions often have a large amount of common
information (e.g., an add instruction is almost identical to a
sub instruction).
In order to allow the maximum amount of commonality to be factored out, the LLVM
code generator uses the TableGen tool to
describe big chunks of the target machine, which allows the use of
domain-specific and target-specific abstractions to reduce the amount of
repetition.

As LLVM continues to be developed and refined, we plan to move more and more
of the target description to the .td form. Doing so gives us a
number of advantages. The most important is that it makes it easier to port
LLVM because it reduces the amount of C++ code that has to be written, and the
surface area of the code generator that needs to be understood before someone
can get something working. Second, it makes it easier to change things. In
particular, if tables and other things are all emitted by tblgen, we
only need a change in one place (tblgen) to update all of the targets
to a new interface.

The LLVM target description classes (located in the
include/llvm/Target directory) provide an abstract description of the
target machine independent of any particular client. These classes are
designed to capture the abstract properties of the target (such as the
instructions and registers it has), and do not incorporate any particular pieces
of code generation algorithms.

All of the target description classes (except the TargetData class) are designed to be subclassed by
the concrete target implementation, which implements their virtual methods. To
get to these implementations, the TargetMachine class provides accessors that
should be implemented by the target.

The TargetMachine class provides virtual methods that are used to
access the target-specific implementations of the various target description
classes via the get*Info methods (getInstrInfo,
getRegisterInfo, getFrameInfo, etc.). This class is
designed to be specialized by
a concrete target implementation (e.g., X86TargetMachine) which
implements the various virtual methods. The only required target description
class is the TargetData class, but if the
code generator components are to be used, the other interfaces should be
implemented as well.

The TargetData class is the only required target description class,
and it is the only class that is not extensible (you cannot derive a new
class from it). TargetData specifies information about how the target
lays out memory for structures, the alignment requirements for various data
types, the size of pointers in the target, and whether the target is
little-endian or big-endian.

The TargetRegisterInfo class is used to describe the register
file of the target and any interactions between the registers.

Registers in the code generator are represented by unsigned integers.
Physical registers (those that actually exist in the target
description) are unique small numbers, and virtual registers are generally
large. Note that register #0 is reserved as a flag value.

Each register in the processor description has an associated
TargetRegisterDesc entry, which provides a textual name for the
register (used for assembly output and debugging dumps) and a set of aliases
(used to indicate whether one register overlaps with another).

In addition to the per-register description, the TargetRegisterInfo
class exposes a set of processor specific register classes (instances of the
TargetRegisterClass class). Each register class contains sets of
registers that have the same properties (for example, they are all 32-bit
integer registers). Each SSA virtual register created by the instruction
selector has an associated register class. When the register allocator runs, it
replaces virtual registers with a physical register in the set.

The target-specific implementations of these classes are auto-generated from a TableGen description of the register file.

The TargetInstrInfo class is used to describe the machine
instructions supported by the target. It is essentially an array of
TargetInstrDescriptor objects, each of which describes one
instruction the target supports. Descriptors define things like the mnemonic
for the opcode, the number of operands, the list of implicit register uses
and defs, whether the instruction has certain target-independent properties
(accesses memory, is commutable, etc), and hold any target-specific
flags.

The TargetFrameInfo class is used to provide information about the
stack frame layout of the target. It holds the direction of stack growth,
the known stack alignment on entry to each function, and the offset to the
local area. The offset to the local area is the offset from the stack
pointer on function entry to the first location where function data (local
variables, spill locations) can be stored.

The TargetSubtarget class is used to provide information about the
specific chip set being targeted. A sub-target informs code generation of
which instructions are supported, instruction latencies and instruction
execution itinerary; i.e., which processing units are used, in what order, and
for how long.

The TargetJITInfo class exposes an abstract interface used by the
Just-In-Time code generator to perform target-specific activities, such as
emitting stubs. If a TargetMachine supports JIT code generation, it
should provide one of these objects through the getJITInfo
method.

At the high-level, LLVM code is translated to a machine specific
representation formed out of
MachineFunction,
MachineBasicBlock, and MachineInstr instances
(defined in include/llvm/CodeGen). This representation is completely
target agnostic, representing instructions in their most abstract form: an
opcode and a series of operands. This representation is designed to support
both an SSA representation for machine code, as well as a register allocated,
non-SSA form.

Target machine instructions are represented as instances of the
MachineInstr class. This class is an extremely abstract way of
representing machine instructions. In particular, it only keeps track of
an opcode number and a set of operands.

The opcode number is a simple unsigned integer that only has meaning to a
specific backend. All of the instructions for a target should be defined in
the *InstrInfo.td file for the target. The opcode enum values
are auto-generated from this description. The MachineInstr class does
not have any information about how to interpret the instruction (i.e., what the
semantics of the instruction are); for that you must refer to the
TargetInstrInfo class.

The operands of a machine instruction can be of several different types:
a register reference, a constant integer, a basic block reference, etc. In
addition, a machine operand should be marked as a def or a use of the value
(though only registers are allowed to be defs).

By convention, the LLVM code generator orders instruction operands so that
all register definitions come before the register uses, even on architectures
that are normally printed in other orders. For example, the SPARC add
instruction: "add %i1, %i2, %i3" adds the "%i1", and "%i2" registers
and stores the result into the "%i3" register. In the LLVM code generator,
the operands should be stored as "%i3, %i1, %i2": with the destination
first.

Keeping destination (definition) operands at the beginning of the operand
list has several advantages. In particular, the debugging printer will print
the instruction like this:

%i3 = add %i1, %i2

Also, if the first operand is a def, it is easier to create instructions whose only def is the first
operand.

Machine instructions are created by using the BuildMI functions,
located in the include/llvm/CodeGen/MachineInstrBuilder.h file. The
BuildMI functions make it easy to build arbitrary machine
instructions. Usage of the BuildMI functions looks like this:
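
// A representative sketch; BuildMI's exact signatures have varied across
// LLVM versions, and X86::MOV32ri is used here as an example opcode.

// Create a 'DestReg = mov 42' (rendered in X86 assembly as 'mov DestReg, 42')
// instruction.  The '1' specifies how many operands will be added.
MachineInstr *MI = BuildMI(X86::MOV32ri, 1, DestReg).addImm(42);

// Create the same instruction, but insert it at the end of a basic block.
MachineBasicBlock &MBB = ...
BuildMI(MBB, X86::MOV32ri, 1, DestReg).addImm(42);

// Create the same instruction, but insert it before a specified iterator.
MachineBasicBlock::iterator MBBI = ...
BuildMI(MBB, MBBI, X86::MOV32ri, 1, DestReg).addImm(42);

// Create a 'cmp Reg, 0' instruction; no destination register.
MI = BuildMI(X86::CMP32ri, 2).addReg(Reg).addImm(0);

// Create an 'sahf' instruction which takes no operands and stores nothing.
MI = BuildMI(X86::SAHF, 0);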

The key thing to remember with the BuildMI functions is that you
have to specify the number of operands that the machine instruction will take.
This allows for efficient memory allocation. Note that
operands default to be uses of values, not definitions. If you need to add a
definition operand (other than the optional destination register), you must
explicitly mark it as such:
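
// A minimal sketch; the exact def-flag spelling has varied across LLVM
// versions (e.g., RegState::Define):
MI.addReg(Reg, RegState::Define);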

One important issue that the code generator needs to be aware of is the
presence of fixed registers. In particular, there are often places in the
instruction stream where the register allocator must arrange for a
particular value to be in a particular register. This can occur due to
limitations of the instruction set (e.g., the X86 can only do a 32-bit divide
with the EAX/EDX registers), or external factors like calling
conventions. In any case, the instruction selector should emit code that
copies a virtual register into or out of a physical register when needed.

For example, consider this simple LLVM example:

define i32 @test(i32 %X, i32 %Y) {
%Z = sdiv i32 %X, %Y
ret i32 %Z
}

The X86 instruction selector produces this machine code for the div
and ret (use
"llc X.bc -march=x86 -print-machineinstrs" to get this):

By the end of code generation, the register allocator has coalesced
the registers and deleted the resultant identity moves, producing the
following code:

;; X is in EAX, Y is in ECX
mov %EAX, %EDX
sar %EDX, 31
idiv %ECX
ret

This approach is extremely general (if it can handle the X86 architecture,
it can handle anything!) and allows all of the target specific
knowledge about the instruction stream to be isolated in the instruction
selector. Note that physical registers should have a short lifetime for good
code generation, and all physical registers are assumed dead on entry to and
exit from basic blocks (before register allocation). Thus, if you need a value
to be live across basic block boundaries, it must live in a virtual
register.

MachineInstr's are initially selected in SSA-form, and
are maintained in SSA-form until register allocation happens. For the most
part, this is trivially simple since LLVM is already in SSA form; LLVM PHI nodes
become machine code PHI nodes, and virtual registers are only allowed to have a
single definition.

After register allocation, machine code is no longer in SSA-form because there
are no virtual registers left in the code.

The MachineBasicBlock class contains a list of machine instructions
(MachineInstr instances). It roughly
corresponds to the LLVM code input to the instruction selector, but there can be
a one-to-many mapping (i.e. one LLVM basic block can map to multiple machine
basic blocks). The MachineBasicBlock class has a
"getBasicBlock" method, which returns the LLVM basic block that it
comes from.

The MachineFunction class contains a list of machine basic blocks
(MachineBasicBlock instances). It
corresponds one-to-one with the LLVM function input to the instruction selector.
In addition to a list of basic blocks, the MachineFunction contains
a MachineConstantPool, a MachineFrameInfo, a
MachineFunctionInfo, and a MachineRegisterInfo. See
include/llvm/CodeGen/MachineFunction.h for more information.

Instruction Selection is the process of translating LLVM code presented to the
code generator into target-specific machine instructions. There are several
well-known ways to do this in the literature. LLVM uses a SelectionDAG based
instruction selector.

Portions of the DAG instruction selector are generated from the target
description (*.td) files. Our goal is for the entire instruction
selector to be generated from these .td files, though currently
there are still things that require custom C++ code.

The SelectionDAG provides an abstraction for code representation in a way
that is amenable to instruction selection using automatic techniques
(e.g. dynamic-programming based optimal pattern matching selectors). It is also
well-suited to other phases of code generation; in particular,
instruction scheduling (SelectionDAG's are very close to scheduling DAGs
post-selection). Additionally, the SelectionDAG provides a host representation
where a large variety of very-low-level (but target-independent)
optimizations may be
performed; ones which require extensive information about the instructions
efficiently supported by the target.

The SelectionDAG is a Directed-Acyclic-Graph whose nodes are instances of the
SDNode class. The primary payload of the SDNode is its
operation code (Opcode) that indicates what operation the node performs and
the operands to the operation.
The various operation node types are described at the top of the
include/llvm/CodeGen/SelectionDAGNodes.h file.

Although most operations define a single value, each node in the graph may
define multiple values. For example, a combined div/rem operation will define
both the quotient and the remainder. Many other situations require multiple
values as well. Each node also has some number of operands, which are edges
to the node defining the used value. Because nodes may define multiple values,
edges are represented by instances of the SDValue class, which is
a <SDNode, unsigned> pair, indicating the node and result
value being used, respectively. Each value produced by an SDNode has
an associated MVT (Machine Value Type) indicating what the type of the
value is.

SelectionDAGs contain two different kinds of values: those that represent
data flow and those that represent control flow dependencies. Data values are
simple edges with an integer or floating point value type. Control edges are
represented as "chain" edges which are of type MVT::Other. These edges
provide an ordering between nodes that have side effects (such as
loads, stores, calls, returns, etc). All nodes that have side effects should
take a token chain as input and produce a new one as output. By convention,
token chain inputs are always operand #0, and chain results are always the last
value produced by an operation.

A SelectionDAG has designated "Entry" and "Root" nodes. The Entry node is
always a marker node with an Opcode of ISD::EntryToken. The Root node
is the final side-effecting node in the token chain. For example, in a single
basic block function it would be the return node.

One important concept for SelectionDAGs is the notion of a "legal" vs.
"illegal" DAG. A legal DAG for a target is one that only uses supported
operations and supported types. On a 32-bit PowerPC, for example, a DAG with
a value of type i1, i8, i16, or i64 would be illegal, as would a DAG that uses a
SREM or UREM operation. The
legalize phase is responsible for turning
an illegal DAG into a legal DAG.

SelectionDAG-based instruction selection consists of the following steps:

Build initial DAG - This stage
performs a simple translation from the input LLVM code to an illegal
SelectionDAG.

Optimize SelectionDAG - This stage
performs simple optimizations on the SelectionDAG to simplify it, and
recognize meta instructions (like rotates and div/rem
pairs) for targets that support these meta operations. This makes the
resultant code more efficient and the select
instructions from DAG phase (below) simpler.

Legalize SelectionDAG - This stage
converts the illegal SelectionDAG to a legal SelectionDAG by eliminating
unsupported operations and data types.

Optimize SelectionDAG (#2) - This
second run of the SelectionDAG optimizes the newly legalized DAG to
eliminate inefficiencies introduced by legalization.

Select instructions from DAG - Finally,
the target instruction selector matches the DAG operations to target
instructions. This process translates the target-independent input DAG into
another DAG of target instructions.

SelectionDAG Scheduling and Formation
- The last phase assigns a linear order to the instructions in the
target-instruction DAG and emits them into the MachineFunction being
compiled. This step uses traditional prepass scheduling techniques.

After all of these steps are complete, the SelectionDAG is destroyed and the
rest of the code generation passes are run.

One great way to visualize what is going on here is to take advantage of a
few LLC command line options. The following options pop up a window displaying
the SelectionDAG at specific times (if you only get errors printed to the console
while using this, you probably
need to configure your system to
add support for it).

-view-dag-combine1-dags displays the DAG after being built, before
the first optimization pass.

-view-legalize-dags displays the DAG before Legalization.

-view-dag-combine2-dags displays the DAG before the second
optimization pass.

-view-isel-dags displays the DAG before the Select phase.

-view-sched-dags displays the DAG before Scheduling.

The -view-sunit-dags option displays the Scheduler's dependency graph.
This graph is based on the final SelectionDAG, with nodes that must be
scheduled together bundled into a single scheduling-unit node, and with
immediate operands and other nodes that aren't relevant for scheduling
omitted.

The initial SelectionDAG is naïvely peephole expanded from the LLVM
input by the SelectionDAGLowering class in the
lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp file. The intent of this
pass is to expose as many low-level, target-specific details to the SelectionDAG
as possible. This pass is mostly hard-coded (e.g. an LLVM add turns
into an SDNode add while a getelementptr is expanded into the
obvious arithmetic). This pass requires target-specific hooks to lower calls,
returns, varargs, etc. For these features, the
TargetLowering interface is used.

The Legalize phase is in charge of converting a DAG to only use the types and
operations that are natively supported by the target. This involves two major
tasks:

Convert values of unsupported types to values of supported types.

There are two main ways of doing this: converting small types to
larger types ("promoting"), and breaking up large integer types
into smaller ones ("expanding"). For example, a target might require
that all f32 values are promoted to f64 and that all i1/i8/i16 values
are promoted to i32. The same target might require that all i64 values
be expanded into i32 values. These changes can insert sign and zero
extensions as needed to make sure that the final code has the same
behavior as the input.

A target implementation tells the legalizer which types are supported
(and which register class to use for them) by calling the
addRegisterClass method in its TargetLowering constructor.

Eliminate operations that are not supported by the target.

Targets often have weird constraints, such as not supporting every
operation on every supported datatype (e.g. X86 does not support byte
conditional moves and PowerPC does not support sign-extending loads from
a 16-bit memory location). Legalize takes care of this by open-coding
another sequence of operations to emulate the operation ("expansion"), by
promoting one type to a larger type that supports the operation
("promotion"), or by using a target-specific hook to implement the
legalization ("custom").

A target implementation tells the legalizer which operations are not
supported (and which of the above three actions to take) by calling the
setOperationAction method in its TargetLowering
constructor.
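
As a sketch of how these hooks fit together, a hypothetical 32-bit target's
TargetLowering constructor might look like the following (the "MyTarget"
names are illustrative, not from any particular backend):

MyTargetLowering::MyTargetLowering(TargetMachine &TM) : TargetLowering(TM) {
  // Types with a register class are legal; all other types must be
  // promoted or expanded to these by the legalizer.
  addRegisterClass(MVT::i32, MyTarget::GR32RegisterClass);
  addRegisterClass(MVT::f64, MyTarget::F8RegisterClass);

  // SREM is not natively supported: open-code it ("expansion").
  setOperationAction(ISD::SREM, MVT::i32, Expand);

  // FSIN requires target-specific lowering ("custom").
  setOperationAction(ISD::FSIN, MVT::f64, Custom);
}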

Prior to the existence of the Legalize pass, we required that every target
selector supported and handled every
operator and type even if they were not natively supported. The introduction of
the Legalize phase allows all of the canonicalization patterns to be shared
across targets, and makes it very easy to optimize the canonicalized code
because it is still in the form of a DAG.

The SelectionDAG optimization phase is run twice for code generation: once
immediately after the DAG is built and once after legalization. The first run
of the pass allows the initial code to be cleaned up (e.g. performing
optimizations that depend on knowing that the operators have restricted type
inputs). The second run of the pass cleans up the messy code generated by the
Legalize pass, which allows Legalize to be very simple (it can focus on making
code legal instead of focusing on generating good and legal code).

One important class of optimizations performed is optimizing inserted sign
and zero extension instructions. We currently use ad-hoc techniques, but could
move to more rigorous techniques in the future. Here are some good papers on
the subject:
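
"Widening integer arithmetic"
Kevin Redwine and Norman Ramsey
International Conference on Compiler Construction (CC) 2004

"Effective sign extension elimination"
Motohiro Kawahito, Hideaki Komatsu, and Toshio Nakatani
Proceedings of the ACM SIGPLAN 2002 Conference on Programming Language Design
and Implementation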

The Select phase is the bulk of the target-specific code for instruction
selection. This phase takes a legal SelectionDAG as input, pattern matches the
instructions supported by the target to this DAG, and produces a new DAG of
target code. For example, consider the following LLVM fragment:
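
%t1 = fadd float %W, %X
%t2 = fmul float %t1, %Y
%t3 = fadd float %t2, %Z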

This LLVM code corresponds to a SelectionDAG that looks basically like
this:

(fadd:f32 (fmul:f32 (fadd:f32 W, X), Y), Z)

If a target supports floating point multiply-and-add (FMA) operations, one
of the adds can be merged with the multiply. On the PowerPC, for example, the
output of the instruction selector might look like this DAG:

(FMADDS (FADDS W, X), Y, Z)

The FMADDS instruction is a ternary instruction that multiplies its
first two operands and adds the third (as single-precision floating-point
numbers). The FADDS instruction is a simple binary single-precision
add instruction. To perform this pattern match, the PowerPC backend includes
the following instruction definitions:
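
// Definitions from the PowerPC backend's .td file (abbreviated):
def FMADDS : AForm_1<59, 29,
                    (ops F4RC:$FRT, F4RC:$FRA, F4RC:$FRC, F4RC:$FRB),
                    "fmadds $FRT, $FRA, $FRC, $FRB",
                    [(set F4RC:$FRT, (fadd (fmul F4RC:$FRA, F4RC:$FRC),
                                           F4RC:$FRB))]>;
def FADDS : AForm_2<59, 21,
                    (ops F4RC:$FRT, F4RC:$FRA, F4RC:$FRB),
                    "fadds $FRT, $FRA, $FRB",
                    [(set F4RC:$FRT, (fadd F4RC:$FRA, F4RC:$FRB))]>;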

The bracketed "set" list at the end of each instruction definition is the pattern used
to match the instruction. The DAG operators (like fmul/fadd)
are defined in the lib/Target/TargetSelectionDAG.td file.
"F4RC" is the register class of the input and result values.

The TableGen DAG instruction selector generator reads the instruction
patterns in the .td file and automatically builds parts of the pattern
matching code for your target. It has the following strengths:

At compiler-compiler time, it analyzes your instruction patterns and tells
you if your patterns make sense or not.

It can handle arbitrary constraints on operands for the pattern match. In
particular, it is straightforward to say things like "match any immediate
that is a 13-bit sign-extended value". For examples, see the
immSExt16 and related tblgen classes in the PowerPC
backend.

It knows several important identities for the patterns defined. For
example, it knows that addition is commutative, so it allows the
FMADDS pattern above to match "(fadd X, (fmul Y, Z))" as
well as "(fadd (fmul X, Y), Z)", without the target author having
to specially handle this case.

It has a full-featured type-inferencing system. In particular, you should
rarely have to explicitly tell the system what type parts of your patterns
are. In the FMADDS case above, we didn't have to tell
tblgen that all of the nodes in the pattern are of type 'f32'. It
was able to infer and propagate this knowledge from the fact that
F4RC has type 'f32'.

Targets can define their own (and rely on built-in) "pattern fragments".
Pattern fragments are chunks of reusable patterns that get inlined into your
patterns during compiler-compiler time. For example, the integer
"(not x)" operation is actually defined as a pattern fragment that
expands as "(xor x, -1)", since the SelectionDAG does not have a
native 'not' operation. Targets can define their own short-hand
fragments as they see fit. See the definition of 'not' and
'ineg' for examples.

In addition to instructions, targets can specify arbitrary patterns that
map to one or more instructions using the 'Pat' class. For example,
the PowerPC has no way to load an arbitrary integer immediate into a
register in one instruction. To tell tblgen how to do this, it defines:
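
def : Pat<(i32 imm:$imm),
          (ORI (LIS (HI16 imm:$imm)), (LO16 imm:$imm))>;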

If none of the single-instruction patterns for loading an immediate into a
register match, this will be used. This rule says "match an arbitrary i32
immediate, turning it into an ORI ('or a 16-bit immediate') and an
LIS ('load 16-bit immediate, where the immediate is shifted to the
left 16 bits') instruction". To make this work, the
LO16/HI16 node transformations are used to manipulate the
input immediate (in this case, take the high or low 16-bits of the
immediate).

While the system does automate a lot, it still allows you to write custom
C++ code to match special cases if there is something that is hard to
express.

While it has many strengths, the system currently has some limitations,
primarily because it is a work in progress and is not yet finished:

Overall, there is no way to define or match SelectionDAG nodes that define
multiple values (e.g. ADD_PARTS, LOAD, CALL,
etc). This is the biggest reason that you currently still have to
write custom C++ code for your instruction selector.

There is no great way to support matching complex addressing modes yet. In
the future, we will extend pattern fragments to allow them to define
multiple values (e.g. the four operands of the X86
addressing mode, which are currently matched with custom C++ code).
In addition, we'll extend fragments so that a
fragment can match multiple different patterns.

We don't automatically infer flags like isStore/isLoad yet.

We don't automatically generate the set of supported registers and
operations for the Legalizer yet.

We don't have a way of tying in custom legalized nodes yet.

Despite these limitations, the instruction selector generator is still quite
useful for most of the binary and logical operations in typical instruction
sets. If you run into any problems or can't figure out how to do something,
please let Chris know!

The scheduling phase takes the DAG of target instructions from the selection
phase and assigns an order. The scheduler can pick an order depending on
various constraints of the machines (i.e. order for minimal register pressure or
try to cover instruction latencies). Once an order is established, the DAG is
converted to a list of MachineInstrs and
the SelectionDAG is destroyed.

Note that this phase is logically separate from the instruction selection
phase, but is tied to it closely in the code because it operates on
SelectionDAGs.

Live Intervals are the ranges (intervals) where a variable is live.
They are used by some register allocator passes to
determine if two or more virtual registers which require the same physical
register are live at the same point in the program (i.e., they conflict). When
this situation occurs, one virtual register must be spilled.

The first step in determining the live intervals of variables is to
calculate the set of registers that are immediately dead after the
instruction (i.e., the instruction calculates the value, but it is
never used) and the set of registers that are used by the instruction,
but are never used after the instruction (i.e., they are killed). Live
variable information is computed for each virtual register and
register allocatable physical register in the function. This
is done in a very efficient manner because it uses SSA to sparsely
compute lifetime information for virtual registers (which are in SSA
form) and only has to track physical registers within a block. Before
register allocation, LLVM can assume that physical registers are only
live within a single basic block. This allows it to do a single,
local analysis to resolve physical register lifetimes within each
basic block. If a physical register is not register allocatable (e.g.,
a stack pointer or condition codes), it is not tracked.

Physical registers may be live in to or out of a function. Live in values
are typically arguments in registers. Live out values are typically return
values in registers. Live in values are marked as such, and are given a dummy
"defining" instruction during live intervals analysis. If the last basic block
of a function is a return, then it's marked as using all live out
values in the function.

PHI nodes need to be handled specially, because the calculation
of the live variable information from a depth first traversal of the CFG of
the function won't guarantee that a virtual register used by the PHI
node is defined before it's used. When a PHI node is encountered, only
the definition is handled, because the uses will be handled in other basic
blocks.

For each PHI node of the current basic block, we simulate an
assignment at the end of the current basic block and traverse the successor
basic blocks. If a successor basic block has a PHI node and one of
the PHI node's operands is coming from the current basic block,
then the variable is marked as alive within the current basic block
and all of its predecessor basic blocks, until the basic block with the
defining instruction is encountered.

We now have the information available to perform the live intervals analysis
and build the live intervals themselves. We start off by numbering the basic
blocks and machine instructions. We then handle the "live-in" values. These
are in physical registers, so the physical register is assumed to be killed by
the end of the basic block. Live intervals for virtual registers are computed
for some ordering of the machine instructions [1, N]. A live interval
is an interval [i, j), where 1 <= i <= j < N, for which a
variable is live.

The Register Allocation problem consists of mapping a program
Pv, which can use an unbounded number of virtual
registers, to a program Pp that contains a finite
(possibly small) number of physical registers. Each target architecture has
a different number of physical registers. If the number of physical
registers is not enough to accommodate all the virtual registers, some of
them will have to be mapped into memory. These virtuals are called
spilled virtuals.

In LLVM, physical registers are denoted by integer numbers that
normally range from 1 to 1023. To see how this numbering is defined
for a particular architecture, you can read the
GenRegisterNames.inc file for that architecture. For
instance, by inspecting
lib/Target/X86/X86GenRegisterNames.inc we see that the 32-bit
register EAX is denoted by 15, and the MMX register
MM0 is mapped to 48.

Some architectures contain registers that share the same physical
location. A notable example is the X86 platform. For instance, in the
X86 architecture, the registers EAX, AX and
AL share the first eight bits. These physical registers are
marked as aliased in LLVM. Given a particular architecture, you
can check which registers are aliased by inspecting its
RegisterInfo.td file. Moreover, the method
TargetRegisterInfo::getAliasSet(p_reg) returns an array containing
all the physical registers aliased to the register p_reg.

Physical registers, in LLVM, are grouped in Register Classes.
Elements in the same register class are functionally equivalent, and can
be interchangeably used. Each virtual register can only be mapped to
physical registers of a particular class. For instance, in the X86
architecture, some virtuals can only be allocated to 8 bit registers.
A register class is described by TargetRegisterClass objects.
To discover if a virtual register is compatible with a given physical,
this code can be used:
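
// Sketch from an illustrative register allocator: returns true if the
// virtual register v_reg can be assigned the physical register p_reg.
bool RegMapping_Fer::compatible_class(MachineFunction &mf,
                                      unsigned v_reg,
                                      unsigned p_reg) {
  assert(TargetRegisterInfo::isPhysicalRegister(p_reg) &&
         "Target register must be physical");
  const TargetRegisterClass *trc = mf.getRegInfo().getRegClass(v_reg);
  return trc->contains(p_reg);
}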

Sometimes, mostly for debugging purposes, it is useful to change
the number of physical registers available in the target
architecture. This must be done statically, inside the
TargetRegisterInfo.td file. Just grep for
RegisterClass, the last parameter of which is a list of
registers. Just commenting some out is one simple way to avoid them
being used. A more polite way is to explicitly exclude some registers
from the allocation order. See the definition of the
GR register class in
lib/Target/IA64/IA64RegisterInfo.td for an example of this
(e.g., numReservedRegs registers are hidden.)

Virtual registers are also denoted by integer numbers. Contrary to
physical registers, different virtual registers never share the same
number. The smallest virtual register is normally assigned the number
1024. This may change, so, in order to know which is the first virtual
register, you should access
TargetRegisterInfo::FirstVirtualRegister. Any register whose
number is greater than or equal to
TargetRegisterInfo::FirstVirtualRegister is considered a virtual
register. Whereas physical registers are statically defined in a
TargetRegisterInfo.td file and cannot be created by the
application developer, that is not the case with virtual registers.
In order to create new virtual registers, use the method
MachineRegisterInfo::createVirtualRegister(). This method will return a
new virtual register with the highest code yet assigned.
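
For example, a minimal sketch, assuming MF is the MachineFunction being
compiled and RC is the desired TargetRegisterClass:

// Allocate a fresh virtual register of class RC.  The returned number is
// >= TargetRegisterInfo::FirstVirtualRegister.
unsigned NewVReg = MF.getRegInfo().createVirtualRegister(RC);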

Before register allocation, the operands of an instruction are
mostly virtual registers, although physical registers may also be
used. In order to check if a given machine operand is a register, use
the boolean function MachineOperand::isRegister(). To obtain
the integer code of a register, use
MachineOperand::getReg(). An instruction may define or use a
register. For instance, ADD reg:1026 := reg:1025 reg:1024
defines register 1026, and uses registers 1025 and 1024. Given a
register operand, the method MachineOperand::isUse() informs
if that register is being used by the instruction. The method
MachineOperand::isDef() informs if that register is being
defined.
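
Putting these accessors together, a sketch of classifying an instruction's
register operands might look like this (markDefined and markUsed are
hypothetical helpers):

void classifyOperands(const MachineInstr *MI) {
  for (unsigned i = 0, e = MI->getNumOperands(); i != e; ++i) {
    const MachineOperand &MO = MI->getOperand(i);
    if (!MO.isRegister())
      continue;                 // immediate, basic block, etc.
    unsigned Reg = MO.getReg();
    if (MO.isDef())
      markDefined(Reg);         // note: an operand can be both a def and
    if (MO.isUse())
      markUsed(Reg);            // a use (see two-address instructions)
  }
}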

We will call physical registers present in the LLVM bitcode before
register allocation pre-colored registers. Pre-colored
registers are used in many different situations, for instance, to pass
parameters of functions calls, and to store results of particular
instructions. There are two types of pre-colored registers: the ones
implicitly defined, and those explicitly
defined. Explicitly defined registers are normal operands, and can be
accessed with MachineInstr::getOperand(int)::getReg(). In
order to check which registers are implicitly defined by an
instruction, use the
TargetInstrInfo::get(opcode)::ImplicitDefs, where
opcode is the opcode of the target instruction. One important
difference between explicit and implicit physical registers is that
the latter are defined statically for each instruction, whereas the
former may vary depending on the program being compiled. For example,
an instruction that represents a function call will always implicitly
define or use the same set of physical registers. To read the
registers implicitly used by an instruction, use
TargetInstrInfo::get(opcode)::ImplicitUses. Pre-colored
registers impose constraints on any register allocation algorithm. The
register allocator must make sure that none of them is
overwritten by the values of virtual registers while still alive.

There are two ways to map virtual registers to physical registers (or to
memory slots). The first way, that we will call direct mapping,
is based on the use of methods of the classes TargetRegisterInfo,
and MachineOperand. The second way, that we will call
indirect mapping, relies on the VirtRegMap class in
order to insert loads and stores that move values to and from
memory.

The direct mapping provides more flexibility to the developer of
the register allocator; however, it is more error prone, and demands
more implementation work. Basically, the programmer will have to
specify where load and store instructions should be inserted in the
target function being compiled in order to get and store values in
memory. To assign a physical register to a virtual register present in
a given operand, use MachineOperand::setReg(p_reg). To insert
a store instruction, use
TargetRegisterInfo::storeRegToStackSlot(...), and to insert a load
instruction, use TargetRegisterInfo::loadRegFromStackSlot.

The indirect mapping shields the application developer from the
complexities of inserting load and store instructions. In order to map
a virtual register to a physical one, use
VirtRegMap::assignVirt2Phys(vreg, preg). In order to map a
certain virtual register to memory, use
VirtRegMap::assignVirt2StackSlot(vreg). This method will
return the stack slot where vreg's value will be located. If
it is necessary to map another virtual register to the same stack
slot, use VirtRegMap::assignVirt2StackSlot(vreg,
stack_location). One important point to consider when using the
indirect mapping, is that even if a virtual register is mapped to
memory, it still needs to be mapped to a physical register. This
physical register is the location where the virtual register is
supposed to be found before being stored or after being reloaded.
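
A minimal sketch of the indirect mapping, assuming VRM is the function's
VirtRegMap:

// Map one virtual register directly to a physical register...
VRM.assignVirt2Phys(v_reg1, p_reg);

// ...and map another to memory.  The spiller will insert the actual
// load/store instructions later; the chosen stack slot is returned.
int Slot = VRM.assignVirt2StackSlot(v_reg2);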

If the indirect strategy is used, after all the virtual registers
have been mapped to physical registers or stack slots, it is necessary
to use a spiller object to place load and store instructions in the
code. Every virtual that has been mapped to a stack slot will be
stored to memory after being defined and will be loaded before being
used. The implementation of the spiller tries to recycle load/store
instructions, avoiding unnecessary instructions. For an example of how
to invoke the spiller, see
RegAllocLinearScan::runOnMachineFunction in
lib/CodeGen/RegAllocLinearScan.cpp.

With very rare exceptions (e.g., function calls), the LLVM machine
code instructions are three address instructions. That is, each
instruction is expected to define at most one register, and to use at
most two registers. However, some architectures use two address
instructions. In this case, the defined register is also one of the
used registers. For instance, an instruction such as ADD %EAX,
%EBX, in X86 is actually equivalent to %EAX = %EAX +
%EBX.

In order to produce correct code, LLVM must convert three address
instructions that represent two address instructions into true two
address instructions. LLVM provides the pass
TwoAddressInstructionPass for this specific purpose. It must
be run before register allocation takes place. After its execution,
the resulting code may no longer be in SSA form. This happens, for
instance, in situations where an instruction such as %a = ADD %b
%c is converted to two instructions such as:

%a = MOVE %b
%a = ADD %a %c

Notice that, internally, the second instruction is represented as
ADD %a[def/use] %c. I.e., the register operand %a is
both used and defined by the instruction.

An important transformation that happens during register allocation is called
the SSA Deconstruction Phase. The SSA form simplifies many
analyses that are performed on the control flow graph of
programs. However, traditional instruction sets do not implement
PHI instructions. Thus, in order to generate executable code, compilers
must replace PHI instructions with other instructions that preserve their
semantics.

There are many ways in which PHI instructions can safely be removed
from the target code. The most traditional PHI deconstruction
algorithm replaces PHI instructions with copy instructions. That is
the strategy adopted by LLVM. The SSA deconstruction algorithm is
implemented in lib/CodeGen/PHIElimination.cpp. In order to
invoke this pass, the identifier PHIEliminationID must be
marked as required in the code of the register allocator.
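
For example, a PHI instruction such as this one (machine pseudo-code,
with illustrative register numbers):

%reg1027 = PHI %reg1025, <bb#0>, %reg1026, <bb#1>

is replaced by one copy at the end of each predecessor block:

;; at the end of bb#0:          ;; at the end of bb#1:
%reg1027 = COPY %reg1025        %reg1027 = COPY %reg1026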

Instruction folding is an optimization performed during
register allocation that removes unnecessary copy instructions. For
instance, a sequence of instructions such as:

%EBX = LOAD %mem_address
%EAX = COPY %EBX

can be safely substituted by the single instruction:

%EAX = LOAD %mem_address

Instructions can be folded with the
TargetRegisterInfo::foldMemoryOperand(...) method. Care must be
taken when folding instructions; a folded instruction can be quite
different from the original instruction. See
LiveIntervals::addIntervalsForSpills in
lib/CodeGen/LiveIntervalAnalysis.cpp for an example of its use.

The LLVM infrastructure provides the application developer with
three different register allocators:

Simple - This is a very simple implementation that does
not keep values in registers across instructions. This register
allocator immediately spills every value right after it is
computed, and reloads all used operands from memory to temporary
registers before each instruction.

Local - This register allocator is an improvement on the
Simple implementation. It allocates registers on a basic
block level, attempting to keep values in registers and reusing
registers as appropriate.

Linear Scan - The default allocator. This is the
well-known linear scan register allocator. Whereas the
Simple and Local algorithms use a direct mapping
implementation technique, the Linear Scan implementation
uses a spiller in order to place loads and stores.

The type of register allocator used in llc can be chosen with the
command line option -regalloc=...:
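
$ llc -regalloc=simple file.bc -o sp.s
$ llc -regalloc=local file.bc -o lc.s
$ llc -regalloc=linearscan file.bc -o ln.s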

To support tail call optimization in situations where the callee has more arguments than the caller, a 'callee pops arguments' convention is used. This currently causes each fastcc call that is not tail call optimized (because one or more of the tail-call constraints are not met) to be followed by a readjustment of the stack, so performance might be worse in such cases.

On x86 and x86-64, one register is reserved for indirect tail calls (e.g., via a function pointer), so there is one less register available for integer argument passing. For x86 this means 2 registers (if the inreg parameter attribute is used) and for x86-64 this means 5 registers are used.

The X86 code generator lives in the lib/Target/X86 directory. This
code generator is capable of targeting a variety of x86-32 and x86-64
processors, and includes support for ISA extensions such as MMX and SSE.

LLVM follows the AIX PowerPC ABI, with two deviations. First, LLVM uses
PC-relative (PIC) or static addressing for accessing global values, so no TOC (r2)
is used. Second, r31 is used as a frame pointer to allow dynamic growth of the
stack frame. LLVM takes advantage of having no TOC to provide space to save
the frame pointer in the PowerPC linkage area of the caller's frame. Other
details can be found in the PowerPC ABI specification. Note: that document describes the 32 bit ABI. The
64 bit ABI is similar, except space for GPRs is 8 bytes wide (not 4) and r13 is
reserved for system use.

The size of a PowerPC frame is usually fixed for the duration of a
function's invocation. Since the frame is fixed size, all references into
the frame can be accessed via fixed offsets from the stack pointer. The
exception to this is when dynamic alloca or variable-sized arrays are present;
then a base pointer (r31) is used as a proxy for the stack pointer, and the
stack pointer is free to grow or shrink. A base pointer is also used if llvm-gcc is
not passed the -fomit-frame-pointer flag. The stack pointer is always aligned to
16 bytes, so that space allocated for Altivec vectors will be properly
aligned.

An invocation frame is laid out as follows (low memory at top):

Linkage

Parameter area

Dynamic area

Locals area

Saved registers area

Previous Frame

The linkage area is used by a callee to save special registers prior
to allocating its own frame. Only three entries are relevant to LLVM. The
first entry is the previous stack pointer (sp), aka link. This allows probing
tools like gdb or exception handlers to quickly scan the frames in the stack. A
function epilog can also use the link to pop the frame from the stack. The
third entry in the linkage area is used to save the return address from the lr
register. Finally, as mentioned above, the last entry is used to save the
previous frame pointer (r31). The entries in the linkage area are the size of a
GPR, thus the linkage area is 24 bytes long in 32 bit mode and 48 bytes in 64
bit mode.

32 bit linkage area:

  Offset   Contents
  ------   --------------
       0   Saved SP (r1)
       4   Saved CR
       8   Saved LR
      12   Reserved
      16   Reserved
      20   Saved FP (r31)

64 bit linkage area:

  Offset   Contents
  ------   --------------
       0   Saved SP (r1)
       8   Saved CR
      16   Saved LR
      24   Reserved
      32   Reserved
      40   Saved FP (r31)

The parameter area is used to store arguments being passed to a callee
function. Following the PowerPC ABI, the first few arguments are actually
passed in registers, with the space in the parameter area unused. However, if
there are not enough registers or the callee is a thunk or vararg function,
these register arguments can be spilled into the parameter area. Thus, the
parameter area must be large enough to store all the parameters for the largest
call sequence made by the caller. The size must also be minimally large enough
to spill registers r3-r10. This allows callees blind to the call signature,
such as thunks and vararg functions, enough space to cache the argument
registers. Therefore, the parameter area is minimally 32 bytes (64 bytes in 64
bit mode.) Also note that since the parameter area is a fixed offset from the
top of the frame, a callee can access its spilled arguments using fixed
offsets from the stack pointer (or base pointer.)

Combining the information about the linkage and parameter areas and alignment, a
stack frame is minimally 64 bytes in 32 bit mode and 128 bytes in 64 bit
mode.

The dynamic area starts out as size zero. If a function uses dynamic
alloca then space is added to the stack, the linkage and parameter areas are
shifted to the top of the stack, and the new space is available immediately below the
linkage and parameter areas. The cost of shifting the linkage and parameter
areas is minor since only the link value needs to be copied. The link value can
be easily fetched by adding the original frame size to the base pointer. Note
that allocations in the dynamic space need to observe 16 byte alignment.

The locals area is where the llvm compiler reserves space for local
variables.

The saved registers area is where the llvm compiler spills callee saved
registers on entry to the callee.

The llvm prolog and epilog are the same as described in the PowerPC ABI, with
the following exceptions. Callee saved registers are spilled after the frame is
created. This allows the llvm epilog/prolog support to be common with other
targets. The base pointer callee saved register r31 is saved in the TOC slot of
the linkage area. This simplifies allocation of space for the base pointer and
makes it convenient to locate programmatically and during debugging.