NAME

DESCRIPTION

Various material about the internal Perl compilation representation during parsing and optimization, before the actual execution begins, represented as B objects: the "B" op tree.

The well-known perlguts.pod focuses on the internal representation of variables, but less on the structure, the sequence and the optimization of the basic operations, the ops.

And we have perlhack.pod, which shows e.g. ways to inspect the op tree structure from within the debugger. It focuses on getting people to start patching and hacking on the CORE, not on understanding or writing compiler backends or optimizations, which is what the op tree is mainly used for.

Brief Summary

When Perl parses the source code (via Yacc perly.y), the so-called op tree, a tree of basic perl OP structs pointing to simple pp_opname functions, is generated bottom-up. Those pp_ functions - "PP Code" (for "Push / Pop Code") - have the same uniform API as the XS functions, all arguments and return values are transported on the stack. For example, an OP_CONST op points to the pp_const() function and to an SV containing the constant value. When pp_const() is executed, its job is to push that SV onto the stack.

OPs are created by the newFOO() functions, which are called from the parser (in perly.y) as the code is parsed. For example the Perl code $a + $b * $c would cause the equivalent of the following to be called (oversimplifying a bit):
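Roughly, following the well-known example in perlguts (pseudocode; the flags and exact argument handling are omitted here):

```
addop = newBINOP(OP_ADD, flags,
            newSVREF($a),
            newBINOP(OP_MULTIPLY, flags,
                newSVREF($b), newSVREF($c)));
```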

The simplest type of op structure is OP, a "BASEOP": this has no children. Unary operators, "UNOP"s, have one child, pointed to by the op_first field. Binary operators ("BINOP"s) have not only an op_first field but also an op_last field. The most complex type of op is a "LISTOP", which has any number of children. In this case, the first child is pointed to by op_first and the last child by op_last. The children in between can be found by iteratively following the op_sibling pointer from the first child to the last.

There are also two other op types: a "PMOP" holds a regular expression, and has no children, and a "LOOP" may or may not have children. If the op_sibling field is non-zero, it behaves like a LISTOP. To complicate matters, if an UNOP is actually a null op after optimization (see "Compile pass 2: context propagation" below) it will still have children in accordance with its former type.
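The child links described above can be walked from Perl space through the B introspection module. This is my own minimal sketch (not from the original text): it recurses over op_first/op_sibling in basic order and prints each op's name and class.

```perl
use B qw(svref_2object class OPf_KIDS);

# Recursively print an op subtree in basic order.
sub walk {
    my ($op, $depth) = @_;
    return unless $$op;    # a B::NULL object has address 0
    printf "%s%s (%s)\n", "  " x $depth, $op->name, class($op);
    if ($op->flags & OPf_KIDS) {
        # op_first, then op_sibling chain up to op_last
        for (my $kid = $op->first; $$kid; $kid = $kid->sibling) {
            walk($kid, $depth + 1);
        }
    }
}

my $cv = svref_2object(sub { $_[0] + $_[1] * $_[2] });
walk($cv->ROOT, 0);
```

The root printed first is the leavesub op; the add and multiply ops appear below it in tree order.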

The beautiful thing about the op tree representation is that it is a strict 1:1 mapping to the actual source code, as demonstrated by the B::Deparse module, which generates readable source from the current op tree. Well, almost.

The Compiler

Perl's compiler is essentially a 3-pass compiler with interleaved phases:

1. A bottom-up pass
2. A top-down pass
3. An execution-order pass

Compile pass 1: check routines and constant folding

The bottom-up pass is represented by all the "newOP" routines and the ck_ routines. The bottom-upness is actually driven by yacc. So at the point that a ck_ routine fires, we have no idea what the context is, either upward in the syntax tree, or forward or backward in the execution order. The bottom-up parser builds that part of the execution order it knows about, but if you follow the "next" links around, you'll find it's actually a closed loop through the top level node.

So when creating the ops in the first step, still bottom-up, a check function (ck_foo()) is called for each op. It theoretically may destructively modify the whole tree, but because it knows almost nothing at this point, it mostly just nullifies the current op. Or it might set the "op_next" pointer. See "Check Functions" for more.

Compile pass 2: context propagation

The context determines the type of the return value. When a context for a part of the compile tree is known, it is propagated down through the tree. At this time the context can have 5 values (instead of 2 for runtime context): void, boolean, scalar, list, and lvalue. In contrast with pass 1, this pass proceeds from top to bottom: a node's context determines the context for its children.

Whenever the bottom-up parser gets to a node that supplies context to its components, it invokes that portion of the top-down pass that applies to that part of the subtree (and marks the top node as processed, so if a node further up supplies context, it doesn't have to take the plunge again). As a particular subcase of this, as the new node is built, it takes all the closed execution loops of its subcomponents and links them into a new closed loop for the higher level node. But it's still not the real execution order.

Todo: Sample where this context flag is stored

Additional context-dependent optimizations are performed at this time. Since at this moment the compile tree contains back-references (via "thread" pointers), nodes cannot be free()d now. To allow optimized-away nodes at this stage, such nodes are null()ified instead of being free()d (i.e. their type is changed to OP_NULL).

Compile pass 3: peephole optimization

The actual execution order is not known till we get a grammar reduction to a top-level unit like a subroutine or file that will be called by "name" rather than via a "next" pointer. At that point, we can call into peep() to do that code's portion of the 3rd pass. It has to be recursive, but it's recursive on basic blocks, not on tree nodes.

So finally, when the full parse tree is generated, the "peephole optimizer" peep() is run. This pass is neither top-down nor bottom-up, but follows the execution order (with additional complications for conditionals).

This examines each op in the tree and attempts to determine "local" optimizations by "thinking ahead" one or two ops and seeing if multiple operations can be combined into one (by nullifying and re-ordering the next pointers).

It also checks for lexical issues such as the effect of use strict on bareword constants. Note that after this last walk the earlier sibling pointers, used for recursive (bottom-up) meta-inspection, are no longer needed; the final exec order is guaranteed by the next and flags fields.

basic vs exec order

The highly recursive yacc parser generates the initial op tree in basic order. To save memory and compile time, the ops are not copied around into their final sequential execution order; instead just the next pointers are re-hooked in Perl_linklist() to form the so-called exec order. So the exec walk through the linked list of ops is not too cache-friendly.

In detail, Perl_linklist() traverses the op tree and sets the op_next pointers to give the execution order for that op tree. The op_sibling pointers are rarely needed after that.

Walkers can run in "basic" or "exec" order. "basic" is useful for inspecting the memory layout, as it preserves the parse history; "exec" is more useful for understanding the logic and program flow. The "B::Bytecode" section has an extensive example of the two orders.
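Both orders can be printed with B::Concise. A small sketch (my own example), using walk_output to capture each listing into a scalar:

```perl
use B::Concise qw(walk_output);

my $code = sub { $_[0] + $_[1] * $_[2] };

# Basic order: the tree as it sits in memory after parsing.
walk_output(\my $basic);
B::Concise::compile('-basic', $code)->();

# Exec order: the op_next chain as set up by Perl_linklist().
walk_output(\my $exec);
B::Concise::compile('-exec', $code)->();

print "BASIC:\n$basic\nEXEC:\n$exec";
```

In the exec listing the operand ops come strictly before the add/multiply ops that consume them, ending in leavesub.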

The class of an OP determines its size and the number of children. But the number and type of arguments is not as easy to declare as in C. opcode.pl tries to declare some XS-prototype-like arguments, but in Lisp terms most ops are "special" functions: context-dependent, with special parsing and precedence rules.

OP Class Declarations in opcode.pl

The full list of op declarations is defined as DATA in opcode.pl. It defines the class, the name, some flags, and the argument types, the so-called "operands". make regen (via regen.pl) regenerates the files opcode.h, opnames.h, pp_proto.h and pp.sym from this DATA table.

needs stack mark - m
needs constant folding - f
produces a scalar - s
produces an integer - i
needs a target - t
target can be in a pad - T
has a corresponding integer version - I
has side effects - d
uses $_ if no argument given - u

BASEOP

All op classes have a single character signifier for easier definition in opcode.pl. The BASEOP class signifier is 0, for no children.

Below are the BASEOP fields, which are reflected by the object B::OP since Perl 5.10. They are shared by all op classes. The parts after op_type and before op_flags have changed over Perl's history.

op_next

Pointer to next op to execute after this one.

The top-level pre-grafted op points to the first op, but this is replaced when the op is grafted in: this op will then point to the real next op, and the new parent takes over the role of remembering the starting op. Now, who wrote this prose? Anyway, that is why it is called guts.

op_sibling

Pointer to connect the children's list.

The first child is "op_first", the last is "op_last", and the children in between are interconnected by op_sibling. At run time this is only used for "LISTOP"s.

See http://www.xray.mpe.mpg.de/mailing-lists/perl5-porters/2006-09/msg00082.html for a 20% space-reduction patch to get rid of it at run-time.

op_ppaddr

Pointer to the current pp function, the so-called "opcode".

op_madprop

Pointer to the MADPROP struct. Only with -DMAD, and since 5.10. See "MAD" (Misc Attribute Decoration) below.

op_targ

PADOFFSET to "unnamed" op targets/GVs/constants, wasting no SV. For some ops it also has a different meaning.

op_type

The type of the operation.

Since 5.10 the following five fields replace the single U16 op_seq.

op_opt

"optimized"

Whether or not the op has been optimised by the peephole optimiser.

See the comments in S_clear_yystack() in perly.c for more details on the following three flags. They are just for freeing temporary ops on the stack. But we might have statically allocated ops in the data segment, esp. with the perl compiler's B::C module. Then we are not allowed to free those static ops. For a short time, from 5.9.0 until 5.9.4 (when the B::C module was removed from CORE), we had another field here for this reason: op_static. Set to 1, it prevented freeing the static op. Before 5.9.0 the "op_seq" field was used with the magic value -1 to indicate a static op not to be freed. Note: Trying to free a static struct is considered harmful.

op_latefree

Tell op_free() to clear this op (and free any kids) but not yet deallocate the struct. This means that the op may be safely op_free()d multiple times.

On static ops you just set this to 1, and after the first op_free() the op_latefreed flag is automatically set and further op_free() calls are simply ignored.

op_latefreed

If 1, an op_latefree op has been op_free()d.

op_attached

This op (sub)tree has been attached to the CV PL_compcv so it doesn't need to be free'd.

op_spare

Three spare bits in this bitfield above. At least they survived 5.10.

The last two fields have been present in all perls:

op_flags

Flags common to all operations. See OPf_* in op.h, or more verbose in B::Flags or dump.c

op_private

Flags peculiar to a particular operation (BUT, by default, set to the number of children until the operation is privatized by a check routine, which may or may not check number of children).

This field normally holds op-specific context hints, such as HINT_INTEGER. The hint is directly attached to each relevant op in the subtree of the context. Note that there is no general context or class pointer for each op; a typical functional language would hold this in the op's arguments. So we are limited to at most 32 lexical pragma hints. See "Lexical Pragmas".
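For instance, the use integer pragma works through HINT_INTEGER: inside the pragma's scope the compiler picks the integer variant of an op. A sketch with B::Concise (my own example) shows add becoming i_add:

```perl
use B::Concise qw(walk_output);

# Same expression, with and without the lexical integer pragma.
walk_output(\my $plain);
B::Concise::compile('-exec', sub { $_[0] + $_[1] })->();

walk_output(\my $int);
B::Concise::compile('-exec', sub { use integer; $_[0] + $_[1] })->();

print "plain uses ",   $plain =~ /\bi_add\b/ ? "i_add" : "add", "\n";
print "integer uses ", $int   =~ /\bi_add\b/ ? "i_add" : "add", "\n";
```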

The exact op.h "BASEOP" history for the parts after op_type and before op_flags is:

LOGOP

The LOGOP class signifier is |.

A LOGOP has the same structure as a "BINOP": two children, except that the second field is named op_other instead of op_last. But as you can see in the list below, the two arguments described above are optional, not strictly required.

cond_expr

LISTOP

The LISTOP class signifier is @.

struct listop {
    BASEOP
    OP *    op_first;
    OP *    op_last;
};

This is the most complex type; it may have any number of children. The first child is pointed to by op_first and the last child by op_last. The children in between can be found by iteratively following the op_sibling pointer from the first child to the last.

In all, 99 of the 366 ops are LISTOPs, because this is the least restrictive format.

PMOP

The PMOP "pattern matching" class signifier is / for matching. It inherits from the "LISTOP".

The internal struct changed completely with 5.10, as did the underlying regex engine. Starting with 5.11 the PMOP can even hold native REGEX objects (see "REGEX" in perlguts), not just SVs. So you have to use the PM macros to stay compatible.

So op_pmnext, op_pmpermflags and op_pmdynflags are gone. The op_pmflags are not the whole deal either; there is also op_pmregexp.extflags - interestingly called B::PMOP::reflags in B - for the new features. This is, btw., the only inconsistency in the B mapping.

SVOP

The SVOP class is very special and can even change dynamically. Whole SVs are costly and are now just used for GVs or RVs. The SVOP has no special signifier, as there are different subclasses. See "SVOP_OR_PADOP", "PVOP_OR_SVOP" and "FILESTATOP".

An SVOP holds an SV: in the case of a FILESTATOP it is the GV for the filehandle argument, and in the case of trans (a "PVOP") with utf8 it is a reference to a swash (i.e., an RV pointing to an HV).

struct svop {
    BASEOP
    SV *    op_sv;
};

Most old SVOP's were changed to "PADOP"'s when threading was introduced, to privatize the global SV area to thread-local scratchpads.

SVOP_OR_PADOP

The op aelemfast is a PADOP with threading and a simple SVOP without. This is thankfully known at compile time.

aelemfast constant array element ck_null s$ A S

PVOP_OR_SVOP

The only op here is trans, where the class is dynamically defined, dependent on the utf8 settings in the "op_private" hints.

Character translations (tr///) are usually a PVOP, keeping a pointer to a table of shorts used to look up translations. Under utf8, however, a simple table isn't practical; instead, the OP is an "SVOP", and the SV is a reference to a swash, i.e. a RV pointing to an HV.

PADOP

The PADOP class signifier is $, for temporary scalars.

A new PADOP allocates a slot in a temporary scratchpad, a PADLIST array: padop->op_padix = pad_alloc(type, SVs_PADTMP); SVs_PADTMP are targets/GVs/constants with undef names.

A PADLIST scratchpad is a special context stack, an array-of-arrays data structure attached to a CV (i.e. a sub), to store lexical variables, opcode temporaries and per-thread values. See "Scratchpads" in perlguts.

Only my/our variable (SVs_PADMY/SVs_PADOUR) slots get valid names. The rest are op targets/GVs/constants which are statically allocated or resolved at compile time. These don't have names by which they can be looked up from Perl code at run time through eval "" like my/our variables can be. Since they can't be looked up by "name" but only by their index allocated at compile time (which is usually in op_targ), wasting a name SV for them doesn't make sense.

COP

The struct cop, the "Control OP", changed a lot recently, as did the "BASEOP". Remember from perlguts what a COP is? Got you. A COP is described nowhere.

I would have naively called it "Context OP" rather than "Control OP". So why? We have a global PL_curcop, and then we have threads, so it cannot be global anymore. A COP can be seen as a helper context for debugging and error information, storing away file and line information. But since perl is a file-based compiler, not block-based, file-based pragmata and hints are also stored in the COP. So we have a separate COP for every source file. COPs are mostly not really block-level contexts, just file and line information. The block-level contexts are controlled not via COPs, but via global Cx structs.

cop.h says:

Control ops (cops) are one of the two ops OP_NEXTSTATE and OP_DBSTATE that (loosely speaking) are separate statements. They hold information for lexical state and error reporting. At run time, PL_curcop is set to point to the most recently executed cop, and thus can be used to determine our file-level current state.

But we need block context, eval context, subroutine context, loop context, and even format context. All these are separate structs defined in cop.h.

So the COPs are not really as important as the actual Cx context structs. Just the CopSTASH is, the current package symbol table hash ("stash").

Another famous COP is PL_compiling, which sets the temporary compilation environment.

struct cop {
    BASEOP
    line_t      cop_line;       /* line # of this command */
    char *      cop_label;      /* label for this construct */
#ifdef USE_ITHREADS
    char *      cop_stashpv;    /* package line was compiled in */
    char *      cop_file;       /* file name the following line # is from */
#else
    HV *        cop_stash;      /* package line was compiled in */
    GV *        cop_filegv;     /* file the following line # is from */
#endif
    U32         cop_hints;      /* hints bits from pragmata */
    U32         cop_seq;        /* parse sequence number */
    /* Beware. mg.c and warnings.pl assume the type of this is STRLEN *: */
    STRLEN *    cop_warnings;   /* lexical warnings bitmask */
    /* compile time state of %^H. See the comment in op.c for how this is
       used to recreate a hash to return from caller. */
    struct refcounted_he * cop_hints_hash;
};

NEXTSTATE is replaced by DBSTATE when you call perl with -d, the debugger. You can even patch the NEXTSTATE ops at runtime to DBSTATE as done in the module Enbugger.

For a short time there used to be three: SETSTATE was added in 1999 (pre Perl 5.6.0) to track line numbers correctly in optimized blocks, disabled in 1999 with change 4309 for Perl 5.6.0, and removed with 5edb5b2abb in Perl 5.10.1.

BASEOP_OR_UNOP

BASEOP_OR_UNOP has the class signifier %. As the name says, it may be a "BASEOP" or an "UNOP"; it may have an optional "op_first" field.

The list of % ops is quite large, it has 84 ops. Some of them are e.g.

FILESTATOP

The file stat OPs are created via UNI(OP_foo) in toke.c but use the OPf_REF flag to distinguish between OP types instead of the usual OPf_SPECIAL flag. As usual, if OPf_KIDS is set, then we return OPc_UNOP so that walkoptree can find our children. If OPf_KIDS is not set then we check OPf_REF. Without OPf_REF set (no argument to the operator) it's an OP; with OPf_REF set it's an SVOP (and the field op_sv is the GV for the filehandle argument).

LOOPEXOP

next, last, redo, dump and goto use OPf_SPECIAL to indicate that a label was omitted (in which case it's a "BASEOP") or else a term was seen. In this last case, all except goto are definitely "PVOP" but goto is either a PVOP (with an ordinary constant label), an "UNOP" with OPf_STACKED (with a non-constant non-sub) or an "UNOP" for OP_REFGEN (with goto &sub) in which case OPf_STACKED also seems to get set.

...

OP Definition Example

Let's take a simple example of an opcode definition in opcode.pl:

left_shift left bitshift (<<) ck_bitop fsT2 S S

The op left_shift has a check function ck_bitop (most ops have no real check function, just ck_null), and the options fsT2. The trailing S S describe the types of the two required operands: SV or scalar. This is similar to XS prototypes. The 2 in the options fsT2 denotes the class BINOP, with two args on the stack. Every binop takes two args, and this one produces a scalar, see the s flag. The other remaining flags are f and T.

The first IV arg is popped from the stack, the second arg is left on the stack (TOPi/TOPu), because it is used as the return value. (Todo: explain the opASSIGN magic check.) One IV or UV is produced, depending on HINT_INTEGER, set by the use integer pragma. So it has special signed/unsigned integer behaviour, which is not defined in the opcode declaration, because the API is indifferent to this, and it is also independent of the argument type. Whether the result is an IV or UV is entirely context-dependent, at compile time (use integer at BEGIN) or run time ($^H |= 1), and is stored only in the op.

What is left is the T flag, "target can be a pad". This is a useful optimization technique.

This is checked in the macro dATARGET: SV *targ = (PL_op->op_flags & OPf_STACKED ? sp[-1] : PAD_SV(PL_op->op_targ)); OPf_STACKED means "Some arg is arriving on the stack." (see op.h) So this reads: if the op has OPf_STACKED set, the magic targ ("target argument") is simply on the stack, but if not, op_targ points to an SV on a private scratchpad. "Target can be a pad", voila. For reference see "Putting a C value on Perl stack" in perlguts.

Check Functions

They are defined in op.c and not in pp.c, because they belong tightly to the ops and the newOP definitions, not to the actual pp_ opcode. That's why the actual op.c file is bigger than pp.c, where the real gore for each op begins. The name of each op's check function is defined in opcode.pl, as shown above.

So when the global PL_op_mask matches the type, the OP is nullified at once. If not, the type-specific check function is called via the PL_check array, which opcode.pl generates into opnames.h.

Constant Folding

In theory pretty easy: if all of an op's arguments in a sequence are constant and the op is side-effect free ("purely functional"), replace the op sequence with a constant op holding the result.

We do it like this: we define the f flag in opcode.pl, which tells the compiler in the first pass to call fold_constants() on this op. See "Compile pass 1: check routines and constant folding" above. If all args are constant, the result is also constant, and the op sequence is replaced by the constant.

But take care: every f op must be side-effect free.
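The folding is easy to observe from the outside with B::Deparse, which regenerates source from the already-folded op tree (my own example):

```perl
use B::Deparse;

# 2 * 3 + 4 is side-effect free, so pass 1 folds the whole expression
# into a single const op; Deparse renders it back as the constant.
my $body = B::Deparse->new->coderef2text(sub { return 2 * 3 + 4 });
print $body, "\n";   # the body now contains 10, not 2 * 3 + 4
```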

E.g. our newUNOP() calls at the end:

return fold_constants((OP *) unop);

OA_FOLDCONST ...

Lexical Pragmas

To implement user lexical pragmas, there needs to be a way at run time to get the compile time state of `%^H` for that block. Storing `%^H` in every block (or even COP) would be very expensive, so a different approach is taken. The (running) state of %^H is serialised into a tree of HE-like structs. Stores into %^H are chained onto the current leaf as a struct refcounted_he * with the key and the value. Deletes from %^H are saved with a value of PL_sv_placeholder. The state of %^H at any point can be turned back into a regular HV by walking back up the tree from that point's leaf, ignoring any key you've already seen (placeholder or not), storing the rest into the HV structure, then removing the placeholders. Hence memory is only used to store the %^H deltas from the enclosing COP, rather than the entire %^H on each COP.

To cause actions on %^H to write out the serialisation records, it has magic type 'H'. This magic (itself) does nothing, but its presence causes the values to gain magic type 'h', which has entries for set and clear. Perl_magic_sethint updates PL_compiling.cop_hints_hash with a store record, with deletes written by Perl_magic_clearhint. SAVEHINTS saves the current PL_compiling.cop_hints_hash on the save stack, so that it will be correctly restored when any inner compiling scope is exited.
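At the Perl level the recreated hash is visible as the hinthash entry of caller, as perlpragma describes. A small sketch, assuming Perl 5.10+ (the key name myhint is made up):

```perl
sub current_hints {
    # (caller 0)[10] is the %^H of the caller's COP, recreated from
    # the refcounted_he chain described above.
    my $hh = (caller 0)[10];
    return $hh && $hh->{myhint};
}

{
    BEGIN { $^H{myhint} = 42 }      # compile-time store into %^H
    print current_hints(), "\n";    # visible inside the scope
}
# outside the block the hint is gone again
```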

Examples

Call a subroutine

subname(args...) =>

pushmark
args ...
gv => subname
entersub
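The sequence can be checked with B::Concise in exec order; in this sketch the sub and its arguments are made up:

```perl
use B::Concise qw(walk_output);

sub subname { @_ }

# Exec-order listing of a plain sub call: pushmark, the args,
# the gv for the sub, then entersub.
walk_output(\my $out);
B::Concise::compile('-exec', sub { subname(1, 2) })->();
print $out;
```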

Call a method

Here we have several combinations to define the package and the method name, either compile-time (static as constant string), or dynamic as GV (for the method name) or PADSV (package name).

method_named holds the method name as an sv if it is known at compile time. If not, gv (of the name) and method are used. The package name is at the top of the stack. A call stack is added with pushmark.

Hooks

Special execution blocks BEGIN, CHECK, UNITCHECK, INIT, END

Perl keeps special arrays of subroutines that are executed at the beginning and at the end of a running Perl program and its program units. These subroutines correspond to the special code blocks: BEGIN, CHECK, UNITCHECK, INIT and END. (See basics at "basics" in perlmod.)

Such arrays belong to Perl's internals that you're not supposed to see. Entries in these arrays get consumed by the interpreter as it enters distinct compilation phases, triggered by statements like require, use, do, eval, etc. To play it as safe as possible, the only allowed operations are to add entries to the start and to the end of these arrays.

BEGIN, UNITCHECK and INIT are FIFO (first-in, first-out) blocks while CHECK and END are LIFO (last-in, first-out).
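A minimal demonstration of the two orderings (my own example):

```perl
# BEGIN blocks run as soon as they are compiled, in declaration
# order (FIFO); END blocks run at exit in reverse order (LIFO).
BEGIN { print "BEGIN 1\n" }
BEGIN { print "BEGIN 2\n" }
END   { print "END 1\n" }    # declared first, runs last
END   { print "END 2\n" }    # declared last, runs first
print "main\n";
```

Run as a script, this prints BEGIN 1, BEGIN 2, main, END 2, END 1.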

Devel::Hook allows adding code to the start or end of these blocks. Manip::END even tries to remove certain entries.

The BEGIN block

A special array of code at PL_beginav, executed before main_start, the first op to be called. E.g. use module; adds its require and import code into the BEGIN block.

The CHECK block

The B compiler starting block at PL_checkav. This hooks into the check function which is executed for every op created in bottom-up, basic order.

The UNITCHECK block

A new block since Perl 5.10, at PL_unitcheckav; it runs right after the CHECK block, to separate possible B compilation hooks from other checks.

The INIT block

At PL_initav.

The END block

The array contains an undef for each block that has been encountered. It's not really an undef though, it's a kind of raw coderef that's not wrapped in a scalar ref. This leads to funky error messages like Bizarre copy of CODE in sassign when you try to assign one of these values to another variable. See Manip::END for how to manipulate this array of values.

B and O module. The perl compiler.

Malcolm Beattie's B modules hooked into the early op tree stages to represent the internal ops as perl objects, and added the perl compiler backends. See B and perlcompile.

MAD

Larry Wall worked on a new MAD compiler backend outside of the B approach, dumping the internal op tree representation as XML or YAML, not as a tree of perl B objects.

The idea is that all the information needed to recreate the original source is stored in the op tree. To do this, the tokens for the ops are associated with the ops themselves. These madprops are a list of key-value pairs, where the key is a character as listed at the end of op.h, and the value normally is a string, but it might also be an op, as in the case of an optimized op ('O'). Special are the whitespace keys '_' (whitespace before) and '#' (whitespace after), which hold the whitespace or comment before/after the previous key-value pair.

Also, things that are normally compiled away, like a BEGIN block, which normally does not result in any ops, instead create a NULLOP with madprops used to recreate the original source.

Is there any documentation on this?

Why this awful XML and not the rich tree of perl objects?

Well, there's an advantage. The MAD XML can be seen as a kind of XML Storable/Freeze of the B op tree, and can therefore be converted outside of the CHECK block, which means you can debug the conversion (= compilation) process more easily. To debug the CHECK block in the B backends you have to use the B::Debugger Od or Od_o modules, which defer the CHECK to INIT. Debugging the highly recursive data is not easy, and often problems cannot be reproduced in the B debugger because the debugger itself influences the op tree.

Pluggable runops

The compile tree is executed by one of two existing runops functions, in run.c or in dump.c. Perl_runops_debug is used with DEBUGGING and the faster Perl_runops_standard is used otherwise (See below in "Walkers"). For fine control over the execution of the compile tree it is possible to provide your own runops function.

It's probably best to copy one of the existing runops functions and change it to suit your needs. Then, in the BOOT section of your XS file, add the line:

PL_runops = my_runops;

This function should be as efficient as possible to keep your programs running as fast as possible. See Jit for an even faster just-in-time compilation runloop.

Walkers or runops

The standard op tree walker or runops is as simple as the fast Perl_runops_standard() in run.c. It starts with main_start and walks the op_next chain until the end. No need to check other fields; it is strictly linear through the tree.
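The same exec-order walk can be reproduced from Perl space with the B module: start at main_start and follow next until the null op. A sketch (my own example), meant to be run as its own script:

```perl
use B qw(main_start);

# Collect the exec-order op names of this very program, exactly
# as a runops loop would visit them.
my $op = main_start;
my @names;
while ($$op) {             # a B::NULL object has address 0
    push @names, $op->name;
    $op = $op->next;
}
print "@names\n";
```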

Conclusion

So this is about 30% of the basic op tree information so far, not speaking about the guts. Simon Cozens and Scott Walters have another 30%, in the source there is another 10% to copy & paste, and the rest is in the compilers and run-time information. I hope with the help of some hackers we'll get it done, so that people will begin poking around in the B backends. And write the wonderful new dump/undump functionality (which actually worked in the early years on Solaris) to save-image and load-image at runtime as in LISP, analyse and optimize the output, output PIR (parrot code), emit LLVM or other JIT-optimized code or even write assemblers. I have a simple one at home. :)

Written 2008 on the perl5 wiki with socialtext and pod in parallel by Reini Urban, CPAN ID rurban.