Generating Python Bytecode with peak.util.assembler

peak.util.assembler is a simple bytecode assembler module that handles most
low-level bytecode generation details like jump offsets, stack size tracking,
line number table generation, constant and variable name index tracking, etc.
That way, you can focus your attention on the desired semantics of your
bytecode instead of on these mechanical issues.

In addition to a low-level opcode-oriented API for directly generating specific
Python bytecodes, this module also offers an extensible mini-AST framework for
generating code from high-level specifications. This framework does most of
the work needed to transform tree-like structures into linear bytecode
instructions, and includes the ability to do compile-time constant folding.

Changes since version 0.5.2:

Symbolic disassembly with full emulation of the backward-compatible
JUMP_IF_TRUE and JUMP_IF_FALSE opcodes on Python 2.7 -- the tests now
run clean on Python 2.7.

Support for backward emulation of Python 2.7's JUMP_IF_TRUE_OR_POP and
JUMP_IF_FALSE_OR_POP instructions on earlier Python versions; these
emulations are also used in BytecodeAssembler's internal code generation,
for maximum performance on 2.7+ (with no change to performance on older
versions).

Changes since version 0.5.1:

Initial support for Python 2.7's new opcodes and semantics changes, mostly
by emulating older versions' behavior with macros. (0.5.2 is really just
a quick-fix release to allow packages using BytecodeAssembler to run on 2.7
without having to change any of their code generation; future releases will
provide proper support for the new and changed opcodes, as well as a test
suite that doesn't show spurious differences in the disassembly listings
under Python 2.7.)

Changes since version 0.5:

Fix incorrect stack size calculation for MAKE_CLOSURE on Python 2.5+

Changes since version 0.3:

New node types:

For(iterable,assign,body) -- define a "for" loop over iterable

UnpackSequence(nodes) -- unpacks a sequence that's len(nodes) long,
and then generates the given nodes.

LocalAssign(name) -- issues a STORE_FAST, STORE_DEREF or
STORE_NAME as appropriate for the given name.

Function(body,name='<lambda>',args=(),var=None,kw=None,defaults=())
-- creates a nested function from body and puts it on the stack.

Code objects are now iterable, yielding (offset,op,arg) triples,
where op is numeric and arg is either numeric or None.

Code objects' .code() method can now take a "parent" Code object,
to link the child code's free variables to cell variables in the parent.

Added Code.from_spec() classmethod, that initializes a code object from a
name and argument spec.

Code objects now have a .nested(name,args,var,kw) method, that
creates a child code object with the same co_filename and the supplied
name/arg spec.

Fixed incorrect stack tracking for the FOR_ITER and YIELD_VALUE
opcodes

Ensure that CO_GENERATOR flag is set if YIELD_VALUE opcode is used

Change tests so that Python 2.3's broken line number handling in dis.dis
and constant-folding optimizer don't generate spurious failures in this
package's test suite.

Changes since version 0.2:

Added Suite, TryExcept, and TryFinally node types

Added a Getattr node type that does static or dynamic attribute access
and constant folding

Fixed code.from_function() not copying the co_filename attribute when
copy_lineno was specified.

The repr() of AST nodes doesn't include a trailing comma for 1-argument
node types any more.

Added a Pass symbol that generates no code, a Compare() node type
that does n-way comparisons, and And() and Or() node types for doing
logical operations.

The COMPARE_OP() method now accepts operator strings like "<=",
"not in", "exception match", and so on, as well as numeric opcodes.
See the standard library's opcode module for a complete list of the
strings accepted (in the cmp_op tuple). "<>" is also accepted as an
alias for "!=".

The dis() function in Python 2.3 has a bug that makes it show incorrect
line numbers when the difference between two adjacent line numbers is
greater than 255. (To work around this, the test suite uses a later version
of dis(), but do note that it may affect your own tests if you use
dis() with Python 2.3 and use widely separated line numbers.)

If you find any other issues, please let me know.

Please also keep in mind that this is a work in progress, and the API may
change if I come up with a better way to do something.

Questions and discussion regarding this software should be directed to the
PEAK Mailing List.

To generate bytecode, you create a Code instance and perform operations
on it. For example, here we create a Code object representing lines
15 and 16 of some input source:

>>> from peak.util.assembler import Code
>>> c = Code()
>>> c.set_lineno(15) # set the current line number (optional)
>>> c.LOAD_CONST(42)
>>> c.set_lineno(16) # set it as many times as you like
>>> c.RETURN_VALUE()

You'll notice that most Code methods are named for a CPython bytecode
operation, but there are also some other methods like .set_lineno() to let you
set the current line number. There's also a .code() method that returns
a Python code object, representing the current state of the Code you've
generated:

Python's built-in disassembler can be verbose and hard to read when inspecting
complex generated code -- usually you don't care about bytecode offsets or
line numbers as much as you care about labels, for example.

So, BytecodeAssembler provides its own, simplified disassembler, which we'll
be using for more complex listings in this manual:

As you can see, the line numbers and bytecode offsets have been dropped,
making it easier to see where the jumps go. (This also makes doctests more
robust against Python version changes, as dump() has some extra code to
make conditional jumps appear consistent across the major changes that were
made to conditional jump instructions between Python 2.6 and 2.7.)

Code objects have methods for all of CPython's symbolic opcodes. Generally
speaking, each method accepts either zero or one argument, depending on whether
the opcode accepts an argument.

Python bytecode always encodes opcode arguments as 16- or 32-bit integers, but
sometimes these numbers are actually offsets into a sequence of names or
constants. Code objects take care of maintaining these sequences for you,
allowing you to just pass in a name or value directly, instead of needing to
keep track of what numbers map to what names or values.

The name or value you pass in to such methods will be looked up in the
appropriate table (see Code Attributes below for a list), and if not found,
it will be added:
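The same table-index scheme is visible on any ordinary CPython code object,
which may help make the mechanism concrete (this sketch uses only the
stdlib compile() builtin, not BytecodeAssembler):

```python
# A LOAD_CONST argument is an index into co_consts, and a STORE_NAME
# argument is an index into co_names; compile() shows both tables.
code = compile("result = 42", "<demo>", "exec")
print(code.co_consts)   # the constants table -- includes 42
print(code.co_names)    # the names table -- includes 'result'
```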

The CALL_FUNCTION(), CALL_FUNCTION_VAR(), CALL_FUNCTION_KW(),
and CALL_FUNCTION_VAR_KW() methods all take two arguments, both of which
are optional. (The _VAR and _KW suffixes in the method names indicate
whether a * argument, a ** argument, or both are also present on the
stack, in addition to the explicit positional and keyword arguments.)

The first argument of each of these methods is the number of positional
arguments on the stack, and the second is the number of keyword/value pairs on
the stack (to be used as keyword arguments). Both default to zero if not
supplied:
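Under the hood, CPython 2.x packs both counts into the single opcode
argument: the positional count in the low byte, and the keyword-pair count
in the high byte. A minimal sketch of the encoding (the helper name here is
ours, not part of the module):

```python
def call_function_oparg(npos=0, nkw=0):
    # CPython 2.x encoding: low byte = positional count,
    # high byte = number of keyword/value pairs on the stack.
    assert 0 <= npos < 256 and 0 <= nkw < 256
    return npos | (nkw << 8)

print(call_function_oparg(2, 1))  # 258: two positionals, one keyword pair
```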

Opcodes that perform jumps or refer to addresses can be invoked in one of
two ways. First, if you are jumping backwards (e.g. with JUMP_ABSOLUTE or
CONTINUE_LOOP), you can obtain the target bytecode offset using the
.here() method, and then later pass that offset into the appropriate
method:

But if you are jumping forward, you will need to call the jump or setup
method without any arguments. The return value will be a "forward reference"
object that can be called later to indicate that the desired jump target has
been reached:
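The idea behind these forward references can be sketched in a few lines of
plain Python (the names and the tuple-based "program" here are ours, purely
for illustration): emit a placeholder now, then patch in the real target
when the reference is called back:

```python
program = []

def emit_forward_jump():
    pos = len(program)
    program.append(("JUMP_FORWARD", None))  # target not yet known
    def resolve():
        # patch the placeholder with the current "address"
        program[pos] = ("JUMP_FORWARD", len(program))
    return resolve

fwd = emit_forward_jump()
program.append(("NOP", None))
fwd()            # the jump target is "here"
print(program)   # [('JUMP_FORWARD', 2), ('NOP', None)]
```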

The MAKE_CLOSURE method takes an argument for the number of default values
on the stack, just like the "real" Python opcode. However, it also has
an additional required argument: the number of closure cells on the stack.
The Python interpreter normally gets this number from a code object that's on
the stack, but Code objects need this value in order to update the
current stack size, for purposes of computing the required total stack size:

Typical real-life code generation use cases call for transforming tree-like
data structures into bytecode, rather than linearly outputting instructions.
Code objects provide for this using a simple but high-level transformation
API.

Code objects may be called, passing in one or more arguments. Each
argument will have bytecode generated for it, according to its type:

As you can see, the above creates code that references an actual tuple as
a constant, rather than generating code to recreate the tuple using a series of
LOAD_CONST operations followed by a BUILD_TUPLE.
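CPython's own compiler applies the same optimization to literal tuples of
constants, which you can verify with the stdlib compile() builtin:

```python
# The whole tuple is stored as one entry in the constants table,
# rather than being rebuilt element by element at runtime.
code = compile("(1, 2, 3)", "<demo>", "eval")
print(code.co_consts)  # contains (1, 2, 3) as a single constant
```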

If the value wrapped in a Const is not hashable, it is compared by identity
rather than value. This prevents equal mutable values from being reused by
accident, e.g. if you plan to mutate the "constant" values later:

If the code object is not using "fast locals" (i.e. CO_OPTIMIZED isn't
set), local variables will be referenced using LOAD_NAME and STORE_NAME
instead of LOAD_FAST and STORE_FAST, and if the referenced local name
is a "cell" or "free" variable, LOAD_DEREF and STORE_DEREF are used
instead:

The Call wrapper takes one to five arguments: the expression to be called, a
sequence of positional arguments, a sequence of keyword/value pairs for
explicit keyword arguments, an "*" argument, and a "**" argument. To omit any
of the optional arguments, just pass in an empty sequence in its place:

This approach has the advantage of being easy to use in complex trees.
Label objects have attributes corresponding to every opcode that uses a
bytecode address argument. Generating code for these attributes emits
the corresponding opcode, and generating code for the label itself defines
where the previous opcodes will jump to. Labels can have multiple jumps
targeting them, either before or after they are defined. But they can't be
defined more than once:

In Python 2.7, the traditional JUMP_IF_TRUE and JUMP_IF_FALSE
instructions were replaced with four new instructions that either conditionally
or unconditionally pop the value being tested. This was done to improve
performance, since virtually all conditional jumps in Python code pop the
value on one branch or the other.

To provide better cross-version compatibility, BytecodeAssembler emulates the
old instructions on Python 2.7 by emitting a DUP_TOP followed by a
POP_JUMP_IF_FALSE or POP_JUMP_IF_TRUE instruction.

However, since this decreases performance, BytecodeAssembler also emulates
Python 2.7's JUMP_IF_FALSE_OR_POP and JUMP_IF_TRUE_OR_POP opcodes
on older Pythons:

This means that you can immediately begin using the "or-pop" variations, in
place of a jump followed by a pop, and BytecodeAssembler will use the faster
single instruction automatically on Python 2.7+.

BytecodeAssembler also supports using Python 2.7's POP_JUMP_IF_TRUE and
POP_JUMP_IF_FALSE instructions (conditional jumps that pop the tested value
unconditionally), but currently cannot emulate them on older Python
versions, so at the moment you should use them only when your code requires
Python 2.7.

(Note: for ease in doctesting across Python versions, the dump() function
always shows the code as if it were generated for Python 2.6 or lower, so
if you need to check the actual bytecodes generated, you must use Python's
dis.dis() function instead!)

The YieldStmt node type generates the necessary opcode(s) for a yield
statement, based on the target Python version. (In Python 2.5+, a POP_TOP
must be generated after a YIELD_VALUE in order to create a yield statement,
as opposed to a yield expression.) It also sets the code flags needed to make
the resulting code object a generator:
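You can see the flag in question on any ordinary generator function, using
the stdlib inspect module:

```python
import inspect

# Any function containing a yield gets CO_GENERATOR set in co_flags.
def countdown(n):
    while n:
        yield n
        n -= 1

print(bool(countdown.__code__.co_flags & inspect.CO_GENERATOR))  # True
```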

The Call wrapper can also do simple constant folding, if all of its input
parameters are constants. (Actually, the args and kwargs arguments must be
sequences of constants and 2-tuples of constants, respectively.)

If a Call can thus compute its value in advance, it does so, returning a
Const node instead of a Call node:

>>> Call( Const(type), [1] )
Const(<type 'int'>)

Thus, you can also take the const_value() of such calls:

>>> const_value( Call( Const(dict), [], [('x',27)] ) )
{'x': 27}

Which means that constant folding can propagate up an AST if the result is
passed in to another Call:
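CPython's own compiler performs the same kind of eager propagation for
constant expressions, which you can observe with the stdlib compile()
builtin:

```python
# (2 * 3) + 4 is folded at compile time; only the final result
# appears in the constants table.
code = compile("(2 * 3) + 4", "<demo>", "eval")
print(code.co_consts)  # contains the folded result, 10
```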

Notice that this folding takes place eagerly, during AST construction. If you
want to implement delayed folding after constant propagation or variable
substitution, you'll need to recreate the tree, or use your own custom AST
types. (See Custom Code Generation, below.)

Note that you can disable folding using the fold=False keyword argument to
Call, if you want to ensure that even compile-time constants are computed
at runtime. Compare:

As you can see, Code.DUP_TOP() is called on the code instance, causing
a DUP_TOP opcode to be output. This is sometimes a handy trick for
accessing values that are already on the stack. More commonly, however, you'll
want to implement more sophisticated callables.

To make it easy to create diverse target types, a nodetype() decorator is
provided:

>>> from peak.util.assembler import nodetype

It allows you to create code generation target types using functions. Your
function should take one or more arguments, with a code=None optional
argument in the last position. It should check whether code is None when
called, and if so, return a tuple of the preceding arguments. If code
is not None, then it should do whatever code generating tasks are required.
For example:
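As a toy illustration of this two-mode protocol (Sum and the FakeCode
recorder are ours, not part of the module; a real node would be wrapped
with nodetype() and would emit onto a real Code object):

```python
def Sum(left, right, code=None):
    if code is None:
        return (left, right)   # construction mode: return the field tuple
    code(left)                 # generation mode: emit the operands...
    code(right)
    code.BINARY_ADD()          # ...then the operation

class FakeCode:
    """A stand-in recorder playing the role of a Code object."""
    def __init__(self):
        self.ops = []
    def __call__(self, arg):
        self.ops.append(("LOAD_CONST", arg))
    def BINARY_ADD(self):
        self.ops.append(("BINARY_ADD", None))

print(Sum(1, 2))   # (1, 2) -- the field tuple
c = FakeCode()
Sum(1, 2, c)       # generation pass
print(c.ops)       # three recorded operations
```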

Note: although the nodetype() decorator can be used above the function
definition in either Python 2.3 or 2.4, it cannot be done in a doctest under
Python 2.3, so this document doesn't attempt to demonstrate that. Under
2.4, you would do something like this:

@nodetype()
def TryFinally(...):

and code that needs to also work under 2.3 should do something like this:

def TryFinally(...):
    ...
TryFinally = nodetype()(TryFinally)

But to keep the examples here working with doctest, we'll be doing our
nodetype() calls after the end of the function definitions, e.g.:

The nodetype() decorator is virtually identical to the struct()
decorator in the DecoratorTools package, except that it does not support
*args, does not create a field for the code argument, and generates a
__call__() method that reinvokes the wrapped function to do the actual
code generation.

Note: hashing only works if all the values you return in your argument tuple
are hashable, so you should try to convert them if possible. For example, if
an argument accepts any sequence, you should probably convert it to a tuple
before returning it. Most of the examples in this document, and the node types
supplied by peak.util.assembler itself do this.

If you want to incorporate constant-folding into your AST nodes, you can do
so by checking for constant values and folding them at either construction
or code generation time. For example, this And node type (a simpler
version of the one included in peak.util.assembler) folds constants during
code generation, by not generating unnecessary branches when it can
prove which way a branch will go:

The fold_args() function tries to evaluate the node immediately, if all of
its arguments are constants, by creating a temporary Code object, and
running the supplied function against it, then doing an eval() on the
generated code and wrapping the result in a Const. However, if any of the
arguments are non-constant, the original arguments (less the function) are
returned. This causes a normal node instance to be created instead of a
Const.

This isn't a very fast way of doing partial evaluation, but it makes it
really easy to define new code generation targets without writing custom
constant-folding code for each one. Just return fold_args(ThisType, *args)
instead of return args, if you want your node constructor to be able to do
eager evaluation. If you need to, you can check your parameters in order to
decide whether to call fold_args() or not; this is in fact how Call
implements its fold argument and the suppression of folding when
the call has no arguments.

The simplest way to set up the calling signature for a Code instance is
to clone an existing function or code object's signature, using the
Code.from_function() or Code.from_code() classmethods. These methods
create a new Code instance whose calling signature (number and names of
arguments) matches that of the original function or code object:

Note that these constructors do not copy any actual code from the code
or function objects. They simply copy the signature, and, if you set the
copy_lineno keyword argument to a true value, they will also set the
created code object's co_firstlineno to match that of the original code or
function object:
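The signature information being cloned lives on ordinary code objects; in
standard CPython you can inspect the raw material that from_function()
works from:

```python
# co_argcount counts positional parameters; co_varnames begins with
# the positional names, then the * and ** argument names.
def example(a, b, *rest, **opts):
    pass

c = example.__code__
print(c.co_argcount)        # 2
print(c.co_varnames[:4])    # ('a', 'b', 'rest', 'opts')
```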

Although Python code objects want co_varnames to be a tuple, Code
instances use a list, so that names can be added during code generation. The
.code() method automatically creates tuples where necessary.

Here are all of the Code attributes you may want to read or write:

co_filename

A string representing the source filename for this code. If it's an actual
filename, then tracebacks that pass through the generated code will display
lines from the file. The default value is '<generated code>'.

co_name

The name of the function, class, or other block that this code represents.
The default value is '<lambda>'.

co_argcount

Number of positional arguments a function accepts; defaults to 0

co_varnames

A list of strings naming the code's local variables, beginning with its
positional argument names, followed by its * and ** argument names,
if applicable, followed by any other local variable names. These names
are used by the LOAD_FAST and STORE_FAST opcodes, and invoking
the .LOAD_FAST(name) and .STORE_FAST(name) methods of a code object
will automatically add the given name to this list, if it's not already
present.

co_flags

The flags for the Python code object. This defaults to
CO_OPTIMIZED|CO_NEWLOCALS, which is the correct value for a function
using "fast" locals. This value is automatically or-ed with CO_NOFREE
when generating a code object, if the co_cellvars and co_freevars
attributes are empty. And if you use the LOAD_NAME(),
STORE_NAME(), or DELETE_NAME() methods, the CO_OPTIMIZED bit
is automatically reset, since these opcodes can only be used when the
code is running with a real (i.e. not virtualized) locals() dictionary.

If you need to change any other flag bits besides the above, you'll need to
set or clear them manually. For your convenience, the
peak.util.assembler module exports all the CO_ constants used by
Python. For example, you can use CO_VARARGS and CO_VARKEYWORDS to
indicate whether a function accepts * or ** arguments, as long as
you extend the co_varnames list accordingly. (Assuming you don't have
an existing function or code object with the desired signature, in which
case you could just use the from_function() or from_code()
classmethods instead of messing with these low-level attributes and flags.)
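The flag bits described above are visible on any ordinary CPython function,
via the stdlib inspect module:

```python
import inspect

def plain(x):
    pass

def variadic(*args, **kw):
    pass

# plain() has no * or ** arguments, so neither flag is set:
print(bool(plain.__code__.co_flags & inspect.CO_VARARGS))         # False
# variadic() has both, so both flags are set:
print(bool(variadic.__code__.co_flags & inspect.CO_VARARGS))      # True
print(bool(variadic.__code__.co_flags & inspect.CO_VARKEYWORDS))  # True
```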

stack_size

The predicted height of the runtime value stack, as of the current opcode.
Its value is automatically updated by most opcodes, but if you are doing
something sufficiently tricky (as in the Switch demo, below) you may
need to explicitly set it.

The stack_size automatically becomes None after any unconditional
jump operations, such as JUMP_FORWARD, BREAK_LOOP, or
RETURN_VALUE. When the stack size is None, the only operations
that can be performed are the resolving of forward references (which will
set the stack size to what it was when the reference was created), or
manually setting the stack size.

co_freevars

A tuple of strings naming a function's "free" variables. Defaults to an
empty tuple. A function's free variables are the variables it "inherits"
from its surrounding scope. If you're going to use this, you should set
it only once, before generating any code that references any free or cell
variables.

co_cellvars

A tuple of strings naming a function's "cell" variables. Defaults to an
empty tuple. A function's cell variables are the variables that are
"inherited" by one or more of its nested functions. If you're going to use
this, you should set it only once, before generating any code that
references any free or cell variables.

These other attributes are automatically generated and maintained, so you'll
probably never have a reason to change them:

co_consts

A list of constants used by the code; the constant at index 0 is
always None. Normally, this is automatically maintained; the
.LOAD_CONST(value) method checks to see if the constant is already
present in this list, and adds it if it is not there.

co_names

A list of non-optimized or global variable names. It's automatically
updated whenever you invoke a method to generate an opcode that uses
such names.

co_code

A byte array containing the generated code. Don't mess with this.

co_firstlineno

The first line number of the generated code. It automatically gets set
if you call .set_lineno() before generating any code; otherwise it
defaults to zero.

co_lnotab

A byte array containing a generated line number table. It's automatically
generated, so don't mess with it.

co_stacksize

The maximum amount of stack space the code will require to run. This
value is updated automatically as you generate code or change
the stack_size attribute.

Code objects automatically track the predicted stack size as code is
generated, by updating the stack_size attribute as each operation occurs.
A history is kept so that backward jumps can be checked to ensure that the
current stack height is the same as at the jump's target. Similarly, when
forward jumps are resolved, the stack size at the jump target is checked
against the stack size at the jump's origin. If there are multiple jumps to
the same location, they must all have the same stack size at the origin and
the destination.
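The bookkeeping involved can be sketched with a toy tracker (the
stack-effect table here is ours and covers only a few opcodes; the real
module knows the effect of every opcode):

```python
# Each opcode adjusts the predicted depth; the running maximum
# becomes the eventual co_stacksize.
STACK_EFFECT = {"LOAD_CONST": +1, "BINARY_ADD": -1, "RETURN_VALUE": -1}

def predict(ops):
    depth = max_depth = 0
    for op in ops:
        depth += STACK_EFFECT[op]
        assert depth >= 0, "stack underflow at " + op
        max_depth = max(max_depth, depth)
    return depth, max_depth

print(predict(["LOAD_CONST", "LOAD_CONST", "BINARY_ADD", "RETURN_VALUE"]))
# (0, 2): empty stack at the end, two items at the high-water mark
```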

In addition, whenever any unconditional jump code is generated (i.e.
JUMP_FORWARD, BREAK_LOOP, CONTINUE_LOOP, JUMP_ABSOLUTE, or
RETURN_VALUE), the predicted stack_size is set to None. This
means that the Code object does not know what the stack size will be at
the current location. While the predicted stack size is None, issuing
any instruction raises an AssertionError:

Instead, you must resolve a forward reference (or define a previously jumped-to
label). This will propagate the stack size at the source of the jump to the
current location, updating the stack size:

>>> fwd()
>>> c.stack_size
0

Note, by the way, that this means it is impossible for you to generate static
"dead code". In other words, you cannot generate code that isn't reachable.
You should therefore check if stack_size is None before generating
code that might be unreachable. For example, consider this If
implementation:

The Python SETUP_FINALLY, SETUP_EXCEPT, and SETUP_LOOP opcodes
all create "blocks" that go on the frame's "block stack" at runtime. Each of
these opcodes must be matched with exactly one POP_BLOCK opcode -- no
more, and no less. Code objects enforce this using an internal block stack
that matches each setup with its corresponding POP_BLOCK. Trying to pop
a nonexistent block, or trying to generate code when unclosed blocks exist is
an error:

When you issue a SETUP_EXCEPT or SETUP_FINALLY, the code's maximum
stack size is raised to ensure that it's at least 3 items higher than
the current stack size. That way, there will be room for the items that Python
puts on the stack when jumping to a block's exception handling code:

In the case of SETUP_EXCEPT, the current stack size is increased by 3
after a POP_BLOCK, because the code that follows will be an exception
handler and will thus always have exception items on the stack:

When a POP_BLOCK() is matched with a SETUP_EXCEPT, it automatically
emits a JUMP_FORWARD and returns a forward reference that should be called
back when the "else" clause or end of the entire try/except statement is
reached:

In the example above, an empty block executes with an exception handler that
begins at offset 7. When the block is done, it jumps forward to the end of
the try/except construct at offset 10. The exception handler does nothing but
remove the exception information from the stack before it falls through to the
end.

Note, by the way, that it's usually easier to use labels to define blocks
like this:

When a POP_BLOCK() is matched with a SETUP_FINALLY, it automatically
emits a LOAD_CONST(None), so that when the corresponding END_FINALLY
is reached, it will know that the "try" block exited normally. Thus, the
normal pattern for producing a try/finally construct is as follows:

The END_FINALLY opcode will remove 1, 2, or 3 values from the stack at
runtime, depending on how the "try" block was exited. In the case of simply
"falling off the end" of the "try" block, however, the inserted
LOAD_CONST(None) puts one value on the stack, and that one value is popped
off by the END_FINALLY. For that reason, Code objects treat
END_FINALLY as if it always popped exactly one value from the stack, even
though at runtime this may vary. This means that the estimated stack levels
within the "finally" clause may not be accurate -- which is why POP_BLOCK()
adjusts the maximum expected stack size to accommodate up to three values being
put on the stack by the Python interpreter for exception handling.

For your convenience, the TryFinally node type can also be used to generate
try/finally blocks:

The POP_BLOCK for a loop marks the end of the loop body, and the beginning
of the "else" clause, if there is one. It returns a forward reference that
should be called back either at the end of the "else" clause, or immediately if
there is no "else". Any BREAK_LOOP opcodes that appear in the loop body
will jump ahead to the point at which the forward reference is resolved.

Here, we'll generate a loop that counts down from 5 to 0, with an "else" clause
that returns 42. Three labels are needed: one to mark the end of the overall
block, one that's looped back to, and one that marks the "else" clause:

The arguments are given in execution order: first the "in" value of the loop,
then the assignment to a loop variable, and finally the body of the loop. The
distinction between the assignment and body, however, is only for clarity and
convenience (to avoid needing to glue the assignment to the body with a
Suite). If you already have a suite or only need one node for the entire
loop body, you can do the same thing with only two arguments:

Notice, by the way, that For() does NOT set up a loop block for you, so if
you want to be able to use break and continue, you'll need to wrap the loop in
a labelled SETUP_LOOP/POP_BLOCK pair, as described in the preceding sections.

In order to generate correct list comprehension code for the target Python
version, you must use the ListComp() and LCAppend() node types. This
is because Python versions 2.4 and up store the list being built in a temporary
variable, and use a special LIST_APPEND opcode to append values, while 2.3
stores the list's append() method in the temporary variable, and calls it
to append values.

The ListComp() node wraps a code body (usually a For() loop) and
manages the creation and destruction of a temporary variable (e.g. _[1],
_[2], etc.). The LCAppend() node type wraps a value or expression to
be appended to the innermost active ListComp() in progress:

To implement closures and nested scopes, your code objects must use "free" or
"cell" variables in place of regular "fast locals". A "free" variable is one
that is defined in an outer scope, and a "cell" variable is one that's defined
in the current scope, but will also be used by nested functions.
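In standard CPython, the same distinction shows up directly on the code
objects involved in a closure:

```python
def outer():
    a = 1                  # 'a' is a cell variable of outer()...
    def inner():
        return a           # ...and a free variable of inner()
    return inner

print(outer.__code__.co_cellvars)     # ('a',)
print(outer().__code__.co_freevars)   # ('a',)
```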

The simplest way to set up free or cell variables is to use a code object's
makefree(names) and makecells(names) methods:

This means that you can defer the decision of which locals are free/cell
variables until the code is ready to be generated. In fact, by passing in
a "parent" code object to the .code() method, you can get BytecodeAssembler
to automatically call makefree() and makecells() for the correct
variable names in the child and parent code objects, as we'll see in the next
section.

To create a code object for use in a nested scope, you can use the parent code
object's nested() method. It works just like the from_spec()
classmethod, except that the co_filename of the parent is copied to the
child:

Notice that you must pass the parent code object to the child's .code()
method to ensure that free/cell variables are properly set up. When the
code() method is given another code object as a parameter, it automatically
converts any locally-read (but not written) variables to "free" variables in
the child code, and ensures that those same variables become "cell" variables
in the supplied parent code object:

Notice that the STORE_FAST in the parent code object was automatically
patched to a STORE_DEREF, with an updated offset if applicable. Any
future use of Local('a') or LocalAssign('a') in the parent or child
code objects will now refer to the free/cell variable, rather than the "local"
variable:

The Function(body,name='<lambda>',args=(),var=None,kw=None,defaults=())
node type creates a function object from the specified body and the optional
name, argument specs, and defaults. The Function() node generates code to
create the function object with the appropriate defaults and closure (if
applicable), and any needed free/cell variables are automatically set up in the
parent and child code objects. The newly generated function will be on top of
the stack at the end of the generated code:

As you can see, Function() not only takes care of setting up free/cell
variables in all the relevant scopes, it also chooses whether to use
MAKE_FUNCTION or MAKE_CLOSURE, and generates code for the defaults.

(Note, by the way, that the defaults argument should be a sequence of
generatable expressions; in the examples here, we used numbers, but they could
have been arbitrary expression nodes.)
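In standard CPython, the same thing happens for ordinary def statements:
the default expressions are evaluated when the function is created, and the
results end up attached to the function object:

```python
def f(n=10, label="x" * 2):   # the second default is an expression
    pass

print(f.__defaults__)   # (10, 'xx') -- evaluated at definition time
```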