This is the first part of a series of articles covering the parsing technique known as Parsing Expression Grammars.
This part introduces a support library and a parser generator for C# 3.0.
The support library consists of the classes PegCharParser and PegByteParser,
which parse text and binary sources respectively and which support user-defined error handling,
direct evaluation during parsing, parse tree generation and abstract syntax tree generation.
Using these base classes results in fast parsers that are easy to understand and to extend
and that integrate well into the hosting C# program.

The underlying parsing methodology, called Parsing Expression Grammar [1][2][3], is
relatively new (first described in 2004) but already has many implementations.
Parsing Expression Grammars (PEGs) can be easily implemented in any programming language,
but they fit especially well into languages with a rich expression syntax, such as functional languages
and functionally enhanced imperative languages (like C# 3.0),
because PEG concepts have a close relationship to mutually recursive function calls,
short-circuit boolean expressions and in-place defined functions (lambdas).

A new trend in parsing is the integration of parsers into a host language, so that the
semantic gap between the grammar notation and its implementation in the host language is as small as possible
(Perl 6 and boost::spirit are forerunners of this trend).
Parsing Expression Grammars are especially well suited to this goal.
Earlier grammar formalisms were not so easy to implement: one grammar rule could result
in dozens of code lines, and in some parsing strategies the relationship between a grammar rule
and its implementation code was even lost.
This is the reason that, until recently, generators were used to build parsers.

This article shows how the C# 3.0 lambda facility can be used to implement a support
library for Parsing Expression Grammars which makes parsing with the PEG technique easy.
When using this library, a PEG grammar is mapped to a C# grammar class which
inherits basic functionality from a PEG base class, and
each PEG grammar rule is mapped to a method in the C# grammar class.
Parsers implemented with this library should be fast
(provided the C# compiler inlines methods whenever possible)
and easy to understand and to extend.
Error diagnosis, generation of a parse tree and the addition of semantic actions
are also supported by this library.

The most striking property of PEG and especially this library is the small footprint and
the lack of any administrative overhead.

The main emphasis of this article is on explaining the PEG framework and on studying
concrete application samples. One of the sample applications is a PEG parser generator,
which generates C# source code. The PEG parser generator is the only sample parser which has
been written manually, all other sample parsers were generated by the parser generator.

Parsing Expression Grammars are a kind of executable grammar. Execution of
a PEG grammar means that grammar patterns matching the input string advance
the current input position accordingly.
Mismatches are handled by going back to
a previous input string position, where parsing possibly continues with an alternative.
The following subchapters explain PEGs in detail and introduce the basic PEG constructs,
which have been extended by the author in order to support error diagnosis, direct evaluation and
tree generation.

The grammar rule

EnclosedDigits: [0-9]+ / '(' EnclosedDigits ')' ;

introduces a so-called nonterminal EnclosedDigits and a right-hand side consisting
of two alternatives.

The first alternative ([0-9]+) describes a sequence of digits,
the second ( '(' EnclosedDigits ')') something enclosed in parentheses.
Executing EnclosedDigits with the string ((123))+5 as input would result in a match
and move the input position to just before +5.

This sample also shows the potential for recursive definitions,
since EnclosedDigits uses itself as soon as it recognizes an opening parenthesis.
The following table shows the outcome of applying the above grammar to some other input strings.
The | character is an artificial character which visualizes the input position
before and after the match.

Input          Match Position    Match Result
|((123))+5     ((123))|+5        true
|123           123|              true
|5+123         |5+123            false
|((1)]         |((1)]            false
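The execution model behind this table can be made concrete with a small hand-written matcher. The following sketch is independent of the library described later in this article; the class and method names are illustrative, and only the EnclosedDigits rule from above is implemented:

```csharp
using System;

// Hand-written matcher for:  EnclosedDigits: [0-9]+ / '(' EnclosedDigits ')' ;
// pos_ is the current input position; a failed alternative restores it,
// which is exactly the backtracking described in the text.
public class EnclosedDigitsMatcher
{
    readonly string src_;
    int pos_;
    public EnclosedDigitsMatcher(string src) { src_ = src; }

    public bool Match(out int endPos)
    {
        pos_ = 0;
        bool ok = EnclosedDigits();
        endPos = pos_;
        return ok;
    }

    bool EnclosedDigits()
    {
        int save = pos_;
        if (Digits()) return true;                        // first alternative: [0-9]+
        pos_ = save;
        if (Char('(') && EnclosedDigits() && Char(')'))   // second: '(' EnclosedDigits ')'
            return true;
        pos_ = save;                                      // backtrack
        return false;
    }

    bool Digits()
    {
        int start = pos_;
        while (pos_ < src_.Length && src_[pos_] >= '0' && src_[pos_] <= '9') ++pos_;
        return pos_ > start;
    }

    bool Char(char c)
    {
        if (pos_ < src_.Length && src_[pos_] == c) { ++pos_; return true; }
        return false;
    }
}
```

Running Match against ((123))+5 succeeds and leaves the input position just before +5, as in the first table row.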

For people familiar with regular expressions, it may help to think of a parsing expression
grammar as a generalized regular expression which always matches the beginning of
an input string (a regexp prefixed with ^). Whereas
a regular expression consists of a single expression, a PEG consists of a set of rules;
each rule can use other rules to help in parsing.
The starting rule matches the whole input and uses the other rules to match subparts of the input.
During parsing, there is always a current input position, and the input string starting
at this position must match against the rest of the PEG grammar.
Like regular expressions, PEG supports the postfix operators *, + and ?,
the dot (.) and character sets enclosed in [].

Unique to PEG are the prefix operators & (peek) and ! (not),
which are used to look ahead without consuming input.
Alternatives in a PEG are not separated by | but by / to indicate
that alternatives are strictly tried in sequential order.
What makes PEG grammars powerful, and at the same time a potential memory hog, is
unlimited backtracking, meaning that the input position can be set back to any of
the previously visited input positions in case an alternative fails.
A good and detailed explanation of PEG can be found in Wikipedia [2].
The following table gives an overview of the PEG constructs
(and some homegrown extensions) which are
supported by the library class described in this article.
The following terminology is used:

Notion: Nonterminal
Meaning: Name of a grammar rule. In PEG, there must be exactly one
grammar rule having a given nonterminal on the left-hand side;
the right-hand side of that rule provides its definition.
A nonterminal on the right-hand side of a grammar rule
must reference an existing grammar rule definition.

Notion: Input string
Meaning: The string which is parsed.

Notion: Input position
Meaning: Indicates the next input character to be read.

Notion: Match
Meaning: A grammar element can match a stretch of the input.
The match starts at the current input position.

Notion: Success/Failure
Meaning: The possible outcomes of matching a PEG element against the input.

Notion: e, e1, e2
Meaning: e, e1 and e2 each stand for an arbitrary PEG expression.

The extended PEG constructs supported by this library are listed in the following table
(| indicates the input position; italics, as in name, indicate a placeholder):

PEG element: CodePoint
Notation: #32 (decimal), #x3A0 (hex), #b111 (binary)
Meaning: Match the input against the specified Unicode character.
Example: #x25   Success: %|1   Failure: |1%

PEG element: Literal
Notation: 'literal'
Meaning: Match the input against the quoted string. Escapes take the same form as in the "C" syntax family.
Example: 'for'   Success: for|tran   Failure: |afordable

PEG element: CaseInsensitive Literal
Notation: 'literal'\i
Meaning: Same as Literal, but compares case-insensitively. \i must directly follow a Literal.
Example: 'FOR'\i   Success: FoR|TraN   Failure: |affordable

PEG element: CharacterSet
Notation: [chars]
Meaning: Same meaning as in regular expressions. Ranges as in [A-Za-z0-9], single characters and escape sequences are supported.

PEG element: Any
Notation: .
Meaning: Increments the input position, except when at the end of the input.
Example: 'this is the end' .   Success: this is the end!|   Failure: |this is the end

PEG element: BITS
Notation: BITS<bitNo>, BITS<low-high>
Meaning: Interprets bit bitNo or the bit sequence low-high of the current input byte as an integer;
this integer must match the PEG element given as an additional parameter.
Example: &BITS<7-8,#b11>   Success: |11010101   Failure: |01010101

PEG element: Sequence
Notation: e1 e2
Meaning: Match the input against e1 and then, in case of success, against e2.
Example: '#'[0-9]   Success: #5|   Failure: |#A

PEG element: Sequentially executed alternatives
Notation: e1 / e2
Meaning: Match the input against e1 and then, in case of failure, against e2.
Example: '<='/'<'   Success: <|5   Failure: |>5

PEG element: Greedy Option
Notation: e?
Meaning: Try to match the input against e; succeeds in any case.
Example: '-'?   Success: -|42   Success: |+42

PEG element: Greedy repeat, zero or more occurrences
Notation: e*
Meaning: Match the input repeatedly against e until the match fails; succeeds in any case.
Example: [0-9]*   Success: 42|b   Success: |-42

PEG element: Greedy repeat, one or more occurrences
Notation: e+
Meaning: Shorthand for e e*.
Example: [0-9]+   Success: 42|b   Failure: |-42

PEG element: Greedy repeat, between minimum and maximum occurrences
Notation: e{min}, e{min,max}, e{,max}, e{min,}
Meaning: Match the input at least min times but not more than max times against e.
Example: ('.'[0-9]*){2,3}   Success: .12.36.42|.18b   Failure: |.42b

PEG element: Peek
Notation: &e
Meaning: Match e without changing the input position.
Example: &'42'   Success: |42   Failure: |-42

PEG element: Not
Notation: !e
Meaning: Like Peek, but Success and Failure are exchanged.
Example: !'42'   Success: |-42   Failure: |42

PEG element: FATAL
Notation: FATAL<"message">
Meaning: Prints the message and the error location to the error stream, then
quits the parsing process by throwing an exception.

PEG element: WARNING
Notation: WARNING<"message">
Meaning: Prints the message and the location to the error stream; always succeeds.

PEG element: Mandatory
Notation: @e
Meaning: Same as (e / FATAL<"e expected">).

PEG element: Tree Node
Notation: ^^e
Meaning: If e is matched, a tree node is added to the parse tree.
This tree node holds the starting and ending match positions of e.

PEG element: Ast Node
Notation: ^e
Meaning: Like ^^e, but the node is replaced by its child node if there is only one child.

PEG element: Rule
Notation: N: e;
Meaning: N is the nonterminal; e is the right-hand side, which is terminated by a semicolon.

PEG element: Rule with id
Notation: [id]N: e;
Meaning: id must be a positive integer, e.g. [1] Int:[0-9]+;.
The id is assigned to the tree/AST node id.

PEG element: Tree building rule
Notation: [id]^^N: e;
Meaning: N is allocated as a tree node having the id <id>.

PEG element: Ast building rule
Notation: [id]^N: e;
Meaning: N is allocated as a tree node and is replaced by its child if
the node for N has only one child which has no siblings.

PEG element: Parametrized Rule
Notation: N<peg1,peg2,..>: e;
Meaning: N takes the PEG expressions peg1, peg2, ... as parameters. These parameters
can then be used in e.

PEG element: Into variable
Notation: e:variableName
Meaning: Sets the host language variable (a string, byte[], int, double or PegBegEnd) to the matched input stretch.
The variable must be declared either in the semantic block of the corresponding rule
or in the semantic block of the grammar (see below).

PEG element: Bits Into variable
Notation: BITS<bitNo,:variable>, BITS<low-high,:variable>
Meaning: Interprets bit bitNo or the bit sequence low-high as an integer and stores it in the host variable.

PEG element: Semantic Function
Notation: f_
Meaning: Calls the host language function f_ defined in a semantic block (see below).
A semantic function has the signature bool f_();.
A return value of true is handled as success, a return value of false as failure.

PEG element: Semantic Block (Grammar level)
Notation: BlockName{ ...host language statements... }
Meaning: The BlockName can be missing, in which case a local class named _Top is created.
Functions and data of a grammar-level semantic block
can be accessed from any rule-level semantic block.
Functions in the grammar-level semantic block can be used
as semantic functions at any place in the grammar.

PEG element: CREATE Semantic Block (Grammar level)
Notation: CREATE{ ...host language statements... }
Meaning: This kind of block is used in conjunction with customized tree nodes,
as described at the very end of this table.

PEG element: Semantic Block (Rule level)
Notation: RuleName { ...host language statements... }: e;
Meaning: Functions and data of a rule-level semantic block
are only available from within the associated rule.
Functions in the semantic block associated with a rule can be used
as semantic functions on the right-hand side of that rule.

PEG element: Using a semantic block defined elsewhere
Notation: RuleName using NameOfSemanticBlock: e;
Meaning: The using directive supports reusing the same semantic block
when several rules need the same local semantic block.

PEG element: Custom Node Creation
Notation: ^^CREATE<CreaFuncName> N: e;  or  ^CREATE<CreaFuncName> N: e;
Meaning: Allows creating a user-defined node (which must be derived from the library node PegNode).
The creation function CreaFuncName must be defined in a CREATE semantic block (see above).

PEGs behave in some respects like regular expressions:
the application of a PEG to an input string can be explained
by a pattern matching process which assigns matching parts of
the input string to rules of the grammar (much like groups in regexes)
and which backtracks in case of a mismatch. The most important differences
between a PEG and regexes are that PEGs support recursion
and that PEG patterns are greedy.
Compared to most other traditional language parsing techniques, PEG is surprisingly different.
The most striking differences are:

Parsing Expression Grammars are deterministic and never ambiguous,
thereby removing a problem of most other parsing techniques.
Ambiguity means that the same input string can be
parsed with different sets of rules of a given grammar and that there
is no policy saying which of the competing rules should be used.
This is in most cases a serious problem, since, if it goes undetected,
it results in different parse trees for the same input. The
lack of ambiguity is a big plus for PEG. But the fact that the order
of alternatives in a PEG rule matters takes getting used to. The following PEG rule, e.g.,

rel_operator: '<' / '<=' / '>' / '>=';

will never succeed in recognizing <= because the first alternative will always be chosen first.
The correct rule is:

rel_operator: '<=' / '<' / '>=' / '>';
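The effect of alternative ordering can be demonstrated with a few lines of C#; the class and the Lit helper below are illustrative, not part of the library:

```csharp
using System;

// Ordered choice: the first alternative that matches wins,
// even if a later alternative would match a longer stretch.
public class RelOp
{
    string src;
    int pos;
    public RelOp(string s) { src = s; }

    bool Lit(string t)   // match a literal at the current position
    {
        if (pos + t.Length <= src.Length && src.Substring(pos, t.Length) == t)
        { pos += t.Length; return true; }
        return false;
    }

    public int Wrong()   // rel_operator: '<' / '<=' / ... ;
    { pos = 0; return (Lit("<") || Lit("<=")) ? pos : -1; }

    public int Right()   // rel_operator: '<=' / '<' / ... ;
    { pos = 0; return (Lit("<=") || Lit("<")) ? pos : -1; }
}
```

On the input <=5, the first version stops after one character, because the '<' alternative already succeeded, while the reordered version consumes both characters of <=.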

Parsing Expression Grammars are scannerless, whereas most other parsers
divide the parsing task into a low-level lexical phase called scanning and a high-level
phase, the proper parsing. The lexical phase
just parses items like numbers, identifiers and strings and presents
the information as so-called tokens
to the proper parser. This subdivision has its merits and its weak
points. It avoids backtracking in some cases and makes it easy, e.g., to
distinguish between a keyword and an identifier. A weak point of most
scanners is the lack of context information inside the scanner, so that
a given input string always results in the same token. This is not
always desirable and causes problems, e.g. in C++ with the input string
>>, which can be a right shift operator or the closing of two
template brackets.

Parsing Expression Grammars can backtrack to an arbitrary location,
back to the beginning of the input string.
PEG does not require that a file to be parsed is read
completely into memory, but it prohibits
freeing any part of the file which has already been parsed. This
means that a file which foreseeably will
be parsed to the end should be read into memory completely before
parsing starts. Fortunately, memory is no longer a scarce resource. In
a direct evaluation scenario (semantic actions are executed as soon as
the corresponding syntax element is recognized), backtracking can also cause problems,
since already executed semantic actions
are in most cases not easily undone. Semantic actions should
therefore be placed at points where backtracking
can no longer occur or where backtracking would indicate a fatal
error. Fatal errors in PEG parsing are best handled by throwing an
exception.

For many common problems, idiomatic solutions exist within the PEG framework, as shown in the following table:

Goal: Avoid that white-space scanning clutters up the grammar.
Idiomatic solution: White-space scanning should be done immediately after reading a terminal,
but not in any other place.
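In the notation of this article, the idiom looks roughly like this (the rule names are invented for the example; S is the white-space rule, referenced only by the terminal rules Number and PLUS, so Sum stays free of white-space clutter):

```
S:       [ \t\r\n]* ;
Number:  [0-9]+ S ;
PLUS:    '+' S ;
Sum:     Number (PLUS Number)* ;
```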

Most modern programming languages are based on grammars which can almost be parsed by the predominant
parsing technique (LALR(1) parsing). The emphasis here is on almost, meaning that there are
often grammar rules which require special handling outside of the grammar framework.
The PEG framework can handle these exceptional cases far better, as will be shown for
the C++ and C# grammars.

A sequence of one or more tokens (§2.3.3) enclosed in parentheses is considered
the start of a cast-expression only if at least one of the following is true:
1) The sequence of tokens is correct grammar for a type,
and the token immediately following the closing parentheses is
the token ~, the token !, the token (, an identifier (§2.4.1),
a literal (§2.4.4), or any keyword (§2.4.3) except as and is.
2) The sequence of tokens is correct grammar for a type, but not for an expression.

A PEG grammar can only recognize an input string,
which gives you just two results: a boolean value
indicating match success or failure, and an input position
pointing to the end of the matched string part.
But in most cases, the grammar is only a means to give the input string
a structure.
This structure is then used to associate the input string with a meaning
(a semantic) and to execute statements based on this meaning.
The statements executed during parsing are called semantic actions.
The executable nature of PEG grammars makes the integration of semantic actions easy.
Assuming a sequence of grammar symbols e1 e2 and a semantic action es_ which
should be performed after recognition of e1, we just get the sequence e1 es_ e2,
where es_ is a function of the host language.

From the grammar viewpoint, es_ has to conform to the same interface as
e1 and e2 or any other PEG component, which means that es_ is a function
returning a bool value, where true means success and false failure.
The semantic function es_ can be defined either locally to the rule
which uses (calls) es_ or in the global environment of the grammar.
A bundling of semantic functions, into-variables, helper data values and helper functions
then forms a semantic block.
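In the lambda-based mapping described later in this article, such a semantic function slots into a sequence like any other parsing function. A minimal sketch (all names are illustrative, not the library's API):

```csharp
using System;

// The PEG sequence  e1 es_ e2  as a short-circuit expression:
// es_ participates like any grammar method and can veto the match.
public class SemanticSketch
{
    public int count;                       // helper data of the "semantic block"

    bool e1() { return true; }              // stand-ins for grammar rule methods
    bool e2() { return true; }
    bool es_() { ++count; return true; }    // semantic action with signature bool es_()

    public bool Rule()
    {
        Func<bool> seq = () => e1() && es_() && e2();
        return seq();
    }
}
```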

Semantic actions face one big problem in PEG grammars, namely backtracking.
In most cases, backtracking should no longer occur after a
semantic function (e.g. the computation of the result of an arithmetic subexpression)
has been performed. The simplest way to guard against backtracking
in such a case is to treat any attempt to backtrack as a fatal error.
The FATAL<msg> construct presented here aborts parsing (by raising an exception).

Embedding semantic actions into the grammar enables direct evaluation of the parsed
construct.
A typical application is the stepwise computation of an arithmetical expression
during the parse phase.
Direct evaluation is fast but very limiting since it can only use information present at
the current parse point.
In many cases embedded semantic actions are therefore used to collect
information during parsing for processing
after parsing has completed.

The collected data can have many forms, but the most important one is a tree.
Optimizing parsers and compilers delay semantic actions until the end of the parsing
phase and just create a physical parse tree during parsing
(our PEG framework supports tree generation through the prefixes ^ and ^^).
A tree walking process then checks and optimizes the tree.
Finally, the tree is interpreted at runtime, or it is
just used to generate virtual or real machine code.
The most important evaluation options are shown below.

In a PEG implementation, tree generation must cope with backtracking by deleting
tree parts which were built after
the backtrack restore point.
Furthermore, no tree nodes should be created while a Peek or Not production is active.
In this implementation, this is handled by tree-generation-aware
code in the implementations of the And, Peek, Not and ForRepeat productions.

During the application of a grammar to an input string, each grammar rule is called from some parent grammar rule
and matches a subpart of the input string which is matched by the parent rule. This results in a parse tree.
The grammar rule Expr would associate the arithmetical expression 2.5 * (3 + 5/7)
with the following parse tree:

The above parse tree is not a physical tree but an implicit tree which only exists during the parse process.
The natural implementation of a PEG parser associates each grammar rule with a method (function).
The right-hand side of the grammar rule corresponds to the function body, and each
nonterminal on the right-hand side of the rule is mapped to a function call.
When a rule function is called, it tries to match the input string at the current input
position against the right-hand side of the rule. If it succeeds, it advances the input
position accordingly and returns true; otherwise, the input position is unchanged and the result is false.
The above parse tree can therefore be regarded as a stack trace.
The location marked with [*] in the above parse tree corresponds to the
function stack Value<=Product<=Sum<=Expr, with the function
Value at the top of the stack and the function Expr at the bottom.

The parsing process as described above just matches an input string, or it
fails to match. But it is not difficult
to add semantic actions during this parse process by inserting helper
functions at appropriate places.
The PEG parser for arithmetical expressions could, e.g., compute the result of the
expression during parsing.
Such direct evaluation does not significantly slow down the parsing process.
Using into-variables and semantic blocks as listed above, one gets
the following enhanced PEG grammar for arithmetical expressions, which directly
evaluates the result of the expression
and prints it out to the console.

In many cases, on-the-fly evaluation during parsing is not sufficient, and one needs a
physical parse tree or an abstract syntax tree (abbreviated AST).
An AST is a parse tree shrunk to the essential nodes, thereby saving space and
providing a view better suited for evaluation.
Such physical trees typically need at least 10 times the memory space of the input
string and reduce the parsing speed by a factor of 3 to 10.

The following PEG grammar uses the symbol ^ to indicate an abstract syntax node
and the symbol ^^ to indicate a parse tree node.
The grammar presented below is furthermore enhanced with the error handling item FATAL<errMsg>.
FATAL leaves the parsing process immediately with the result fail, but with the input position
set to the place where the fatal error occurred.

In this chapter, I first show how to implement all the PEG constructs one by one.
This will be expressed in pseudo code. Then I will try to find the best interface for
these basic PEG functions in C#1.0 and C#3.0.

The natural representation of a PEG is a top down recursive parser with
backtracking.
PEG rules are implemented as functions/methods which call each other
when needed and return true in case of a match and false in case of a mismatch.
Backtracking is implemented by saving the input position before
calling a parsing function and restoring the input position to the saved one
in case the parsing function returns false.

Backtracking can be limited to the PEG sequence construct
and the e{min,max} repetitions if, in all other cases, the input position is only moved forward
after a successful match.
In the following pseudo code, we use strings and integer variables,
short-circuit conditional expressions
(using && for AND and || for OR) and exceptions.
s stands for the input string and i refers to the current input position.
bTreeBuild is an instance variable which inhibits tree build operations when set to false.
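The core constructs, written in these conventions, look roughly as follows (a reconstruction in the spirit of the text, not the original listing; tree building is omitted for brevity):

```
Char(c):         if i < s.Length && s[i] == c then ++i, return true
                 else return false
Sequence e1 e2:  save = i
                 if e1() && e2() then return true
                 i = save, return false
Choice e1 / e2:  save = i
                 if e1() then return true
                 i = save
                 if e2() then return true
                 i = save, return false
Option e?:       save = i
                 if !e() then i = save
                 return true
Repeat e*:       loop: save = i
                       if !e() then i = save, return true
Repeat e+:       return e() && e*
Peek &e:         save = i, ok = e(), i = save, return ok
Not !e:          save = i, ok = e(), i = save, return !ok
FATAL<msg>:      report msg and current position, throw exception
```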

In C#1.0, we can map the PEG operators CodePoint, Literal, CharacterSet, Any, FATAL
and WARNING to helper functions in a base class. But the other PEG constructs,
like Sequence, Repeat, Peek, Not, Into and tree building, cannot easily be outsourced
to a library module.
The Grammar for integer sums
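Assuming the grammar has the usual form Sum: Number ('+' Number)*; Number: [0-9]+; (a reconstruction, since the original listing is not shown here), a C#1.0-style implementation along the lines discussed could look like this sketch:

```csharp
using System;

// Hypothetical C# 1.0-style parser for
//   Sum:    Number ('+' Number)* ;
//   Number: [0-9]+ ;
// Note the explicit loop and position bookkeeping in Sum.
public class InSum_C1
{
    string src;
    int pos;
    public InSum_C1(string s) { src = s; }
    public int Pos { get { return pos; } }

    public bool Sum()
    {
        int save = pos;
        if (!Number()) { pos = save; return false; }
        for (;;)                          // ('+' Number)*
        {
            int saveRep = pos;
            if (pos >= src.Length || src[pos] != '+') break;
            ++pos;
            if (!Number()) { pos = saveRep; break; }   // backtrack the failed step
        }
        return true;
    }

    bool Number()                         // [0-9]+
    {
        int start = pos;
        while (pos < src.Length && src[pos] >= '0' && src[pos] <= '9') ++pos;
        return pos > start;
    }
}
```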

To execute the grammar, we just call the method Sum on an object of the above class.
But we cannot be happy and satisfied with this solution.
Compared with the original grammar rule, the method Sum in
the above class InSum_C1 is large and, in its use of loops and helper variables,
quite confusing. But it is perhaps the best of what is possible in C#1.0.
Many traditional parser generators produce even worse code.

PEG operators like Sequence, Repeat, Into, Tree Build, Peek and Not
can be regarded as operators or
functions which take a function as a parameter.
This maps in C# to a method with a delegate parameter.
The PEG Sequence operator can, e.g., be implemented as a function
with the following interface:

public bool And(Matcher pegSequence);

where Matcher is the following delegate:

public delegate bool Matcher();

In older C# versions, passing a function as a parameter required several code lines, but with C#3.0 this changed.
C#3.0 supports lambdas, which are anonymous functions with very low syntactical overhead.
Lambdas enable a functional implementation of PEG in C#.
The PEG sequence e1 e2 can now be mapped to the C# term And(()=>e1() && e2()).
()=>e1() && e2() looks like a normal expression,
but is in effect a full-fledged function
with zero parameters (hence ()=>) and the function body {return e1() && e2();}.
With this facility, the Grammar for integer sums
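A sketch of such a lambda-based parser class follows, with the combinators And, OptRepeat, PlusRepeat and In written out inline instead of being inherited from PegCharParser; the grammar form Sum: Number ('+' Number)*; Number: [0-9]+; is a reconstruction:

```csharp
using System;

public delegate bool Matcher();

// Hypothetical C# 3.0-style parser for
//   Sum:    Number ('+' Number)* ;
//   Number: [0-9]+ ;
// And/OptRepeat/PlusRepeat/In mimic the base-class methods named in the text.
public class InSum_C3
{
    string src;
    int pos;
    public InSum_C3(string s) { src = s; }
    public int Pos { get { return pos; } }

    bool And(Matcher m)                           // sequence: backtrack on failure
    { int save = pos; if (m()) return true; pos = save; return false; }

    bool OptRepeat(Matcher m)                     // e* : repeat until failure
    { int save = pos; while (m()) save = pos; pos = save; return true; }

    bool PlusRepeat(Matcher m)                    // e+  ==  e e*
    { return m() && OptRepeat(m); }

    bool In(char lo, char hi)                     // [lo-hi]
    { if (pos < src.Length && src[pos] >= lo && src[pos] <= hi) { ++pos; return true; } return false; }

    bool Char(char c)
    { if (pos < src.Length && src[pos] == c) { ++pos; return true; } return false; }

    public bool Sum()                             // Sum: Number ('+' Number)* ;
    { return And(() => Number() && OptRepeat(() => Char('+') && Number())); }

    bool Number()                                 // Number: [0-9]+ ;
    { return PlusRepeat(() => In('0', '9')); }
}
```

Each rule body is now a single expression that reads almost exactly like the grammar rule it implements.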

Compared to the C#1.0 implementation this parser class is a huge improvement.
We have eliminated all loops and helper variables. The correctness (accordance with the grammar rule)
is also much easier to check. The methods And, PlusRepeat, OptRepeat, In and OneOfChars
are all implemented in both the PegCharParser and PegByteParser base classes.

The following table shows most of the PEG methods available in the base library delivered with this article.

JSON (JavaScript Object Notation) [5][6] is an exchange format suited for serializing/deserializing program data.
Compared to XML it is featherweight and therefore a good testing candidate for parsing techniques.
The JSON Checker presented here gives an error message and error location in case the file does not conform
to the JSON grammar. The following PEG grammar is the basis of json_check.
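The grammar listing itself is not reproduced in this text; a simplified reconstruction in the notation of this article (the rule names are mine, and the number rule is abbreviated) gives an idea of its shape:

```
json_text: S value S (!. / FATAL<"extra input after JSON value">) ;
value:     object / array / string / number /
           'true' / 'false' / 'null' ;
object:    '{' S (member (S ',' S member)*)? S @'}' ;
member:    string S @':' S value ;
array:     '[' S (value (S ',' S value)*)? S @']' ;
string:    '"' ('\\' . / !'"' .)* @'"' ;
number:    '-'? [0-9]+ ('.' [0-9]+)? ([eE] [+\-]? [0-9]+)? ;
S:         [ \t\r\n]* ;
```

Note how the Mandatory operator @ and FATAL turn missing closing brackets and trailing garbage into error messages instead of silent match failures.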

With a few changes of the JSON checker grammar we get a grammar which
generates a physical tree for a JSON file. In order to have unique nodes for
the JSON values true, false, null
we add corresponding rules. Furthermore, we add a rule which matches the
content of a string (the string without the enclosing double quotes). This gives
us the following grammar:

BER (Basic Encoding Rules) is the most commonly used format
for encoding ASN.1 data. Like XML, ASN.1 serves the purpose of representing
hierarchical data, but unlike XML, ASN.1 is traditionally encoded in compact binary formats,
and BER is one of these formats (albeit the least compact one). The Internet
standards SNMP and LDAP are examples of ASN.1 protocols using BER as encoding.
The following PEG grammar for reading a BER file into a tree representation
uses semantic blocks to store information necessary for further parsing.
This kind of dynamic parsing, which uses data read during the parsing process to
decode data further downstream, is typical for the parsing of binary formats.
The grammar rules for BER [4], as shown below, express the following facts:

BER nodes consist of the triple Tag Length Value (abbreviated as TLV)
where Value is either a primitive value or a list of TLV nodes.

The Tag identifies the element (like the start tag in XML).

The Tag contains a flag whether the element is primitive or constructed.
Constructed means that there are children.

The Length is either the length of the Value in bytes or the special pattern 0x80
(only allowed for elements with children), in which case the sequence of children
ends with two zero bytes (0x0000).

The Value is either a primitive value or, if the constructed flag is set,
a sequence of Tag Length Value triples. The sequence of TLV triples ends
when the length given in the Length part of the TLV triple is used up or,
in the case where the length is given as 0x80, when
the end marker 0x0000 has been reached.
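These facts translate almost directly into a recursive reader. The following sketch is independent of the library and simplified (single-byte tags, definite lengths below 128, plus the indefinite form 0x80); the class and node names are illustrative:

```csharp
using System;
using System.Collections.Generic;

public class TlvNode
{
    public int Tag;
    public byte[] Value;                         // primitive content, null if constructed
    public List<TlvNode> Children = new List<TlvNode>();
}

public class BerReader
{
    // Reads one Tag-Length-Value triple starting at pos.
    public static TlvNode ReadTlv(byte[] data, ref int pos)
    {
        var node = new TlvNode();
        node.Tag = data[pos++];
        bool constructed = (node.Tag & 0x20) != 0;   // constructed flag in the tag
        int length = data[pos++];
        if (length == 0x80)                          // indefinite: children end with 00 00
        {
            while (!(data[pos] == 0 && data[pos + 1] == 0))
                node.Children.Add(ReadTlv(data, ref pos));
            pos += 2;
        }
        else if (constructed)                        // definite length, with children
        {
            int end = pos + length;
            while (pos < end)
                node.Children.Add(ReadTlv(data, ref pos));
        }
        else                                         // primitive value
        {
            node.Value = new byte[length];
            Array.Copy(data, pos, node.Value, 0, length);
            pos += length;
        }
        return node;
    }
}
```

A real BER reader must additionally handle multi-byte tags and long-form lengths, which the text does not go into here.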

This calculator supports the basic arithmetic operations + - * /,
built-in functions taking one argument, like sin and cos, and assignments to variables.
The calculator expects line-separated expressions and assignments. It works
as a two-step interpreter which first builds a tree and then evaluates the tree.
The PEG grammar for this calculator can be translated to a PEG parser by the
parser generator coming with the PEG Grammar Explorer. The evaluator must be
written by hand. It works by walking the tree and evaluating the results as it visits
the nodes.

The library classes PegCharParser and PegByteParser are designed for manual
construction of PEG parsers. But it
is highly recommended in any case to first write the grammar on paper before implementing it.
I wrote a little parser generator (using PegCharParser) which translates such a 'paper' PEG grammar
to a C# program. The current version of the PEG parser generator
generates only C# parsers. It uses optimizations for huge character sets and for big sets of literal alternatives.
Future versions will generate source code for C/C++ and other languages
and will furthermore support debugging, tracing and direct execution of the grammar without the need to translate it into
a host language. But even the current version of the PEG parser generator is quite
helpful.

All the samples presented in the chapter
Expression Grammar Examples
were generated with it. The PEG Parser Generator is an example of a PEG parser which generates
a syntax tree. It takes a PEG grammar as input, validates the generated syntax tree
and then writes a set of C# code files, which implement the parser described by the PEG grammar.

The PEG Parser Generator coming with this article expects a set of grammar rules written as described in the
chapter Parsing Expression Grammars Basics.
These rules must be preceded by a header and terminated by a trailer as described in the following PEG Grammar:

The header of the grammar contains HTML/XML-style attributes which are used to determine
the name of the generated C# file and the input file properties. The following attributes
are used by the C# code generator:

Attribute Key: Name
Optionality: Mandatory
Attribute Value: Name for the generated C# grammar file and namespace.

Attribute Key: encoding_class
Optionality: Optional
Attribute Value: Encoding of the input file. Must be one of binary, unicode, utf8 or ascii. Default is ascii.

Attribute Key: encoding_detection
Optionality: Optional
Attribute Value: Must only be present if encoding_class is set to unicode. In this case, one of the values FirstCharIsAscii or BOM is expected.

All further attributes are treated as comments.
The attribute reference in the following sample header

Semantic blocks are translated to local classes. The code inside
semantic blocks must be C# source text as expected in a class body, except that access keywords
can be left out. The parser generator prepends an internal access keyword when necessary. Top-level semantic blocks are handled differently from local semantic blocks.

A top-level semantic block is created in the grammar's constructor, whereas a local semantic
block is created each time the associated rule method is called. There is no need to define
a constructor in a local semantic block, since the parser generator creates a constructor
with one parameter, a reference to the grammar class.
The following sample shows a grammar excerpt with a top-level and a local semantic block and
its translation to C# code.

Quite often, several grammar rules must use the same local semantic block. To avoid code duplication,
the parser generator supports the using SemanticBlockName clause.
The semantic block named SemanticBlockName should be defined before the
first grammar rule, at the same place where the top-level semantic blocks are defined. But because
such a block is referenced in the using clause of a rule, it is treated as a local semantic block.

Local semantic blocks also support destructors. A destructor is translated to an implementation of the IDisposable interface, and the destructor code is placed into the corresponding Dispose() function.
The grammar rule function generated by the parser generator is then enclosed in a using block.
This allows cleanup code to execute at the end of the rule, even in the presence of exceptions.
The following sample is taken from the Python 2.5.2 sample parser.

The Line_join_sem semantic block turns Python's implicit line joining on and off (Python is
line oriented, except that line breaks are allowed inside parenthesized constructs such as
(...) {...} [...]). The Line_join_sem semantic block and rule [8] of the
above grammar excerpt are translated to
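The resulting shape can be sketched as follows (a hedged illustration with invented identifiers; the real generated code for Line_join_sem will differ in detail):

```csharp
using System;

class PythonGrammarSketch
{
    internal bool implicitLineJoining;

    // Semantic block with a destructor: translated to a class
    // implementing IDisposable, with the destructor code placed
    // into Dispose().
    internal class LineJoinSem : IDisposable
    {
        readonly PythonGrammarSketch parent_;
        internal LineJoinSem(PythonGrammarSketch parent)
        {
            parent_ = parent;
            parent_.implicitLineJoining = true;   // turn implicit line joining on
        }
        public void Dispose()
        {
            parent_.implicitLineJoining = false;  // turned off again, even on exceptions
        }
    }

    // Generated rule method: the rule body is enclosed in a using block,
    // so Dispose() runs when the rule exits, normally or abnormally.
    public bool Parenthesized()
    {
        using (LineJoinSem sem = new LineJoinSem(this))
        {
            // ... match '(' expression ')' here ...
            return true;
        }
    }
}
```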

Parsing Expression Grammars narrow the semantic gap between
formal grammar and implementation of the grammar in a functional or imperative programming language.
PEGs are therefore particularly well suited for manually written parsers as well as for attempts
to integrate a grammar very closely into a programming language. As stated in [1], the elements
which form the PEG framework are not new, but are well known and commonly used techniques when implementing
parsers manually. What makes the PEG framework unique is the selection and combination of the basic elements,
namely

A PEG grammar can incur a serious performance penalty when backtracking occurs frequently. This is the reason
that some PEG tools (so called packrat parsers) memoize already read input and the associated rule results. It can
be proven that appropriate memoization guarantees linear parse time even in the presence of backtracking and
unlimited lookahead. On the other hand, memoization (saving information about already taken paths) has its own
overhead and impairs performance in the average case. A far better approach is to rewrite the grammar
in a way which reduces backtracking. How to do this is shown in the next chapter.
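A minimal sketch of the memoization idea (this is not part of the PegCharParser library; all identifiers are invented): the outcome of each (rule, position) pair is cached, so a rule body is evaluated at most once per input position:

```csharp
using System;
using System.Collections.Generic;

class PackratSketch
{
    class MemoEntry { internal bool Ok; internal int Next; }

    // One cache entry per (rule name, input position) pair.
    readonly Dictionary<string, MemoEntry> cache_ = new Dictionary<string, MemoEntry>();

    // A rule body reports success and advances pos on a match.
    internal delegate bool Rule(ref int pos);

    internal bool Memoized(string ruleName, ref int pos, Rule body)
    {
        string key = ruleName + ":" + pos;
        MemoEntry e;
        if (!cache_.TryGetValue(key, out e))
        {
            e = new MemoEntry();
            int p = pos;
            e.Ok = body(ref p);   // evaluate the rule body exactly once
            e.Next = p;
            cache_[key] = e;      // remember outcome and end position
        }
        if (e.Ok) pos = e.Next;   // on a cache hit just replay the result
        return e.Ok;
    }
}
```

This is what buys packrat parsers their linear time bound; the cache itself is the overhead mentioned above.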

The ideas underlying PEG grammars are not entirely new, and many
of them are regularly used to manually construct parsers.
PEG deviates from most earlier parsing techniques only in its support
and encouragement of backtracking and unlimited lookahead.
The simplest implementation of unlimited lookahead and backtracking
requires that the input file be read completely into
memory before parsing starts. This is not a problem nowadays, but it was
not acceptable earlier, when memory was a scarce resource.

A set of grammar rules can recognize a given language.
But the same language can be described by many different grammars even within the same formalism (e.g. PEG grammars).
Grammar modifications can be used to meet the following goals:

Remarks:
[1] More informative tree nodes can be obtained by syntactical grouping of grammar
elements so that postprocessing is easier. In the above example, access
to the content of the string is improved by grouping consecutive non-escape characters
into one syntactical unit.
[2] The source position reported by an error message is important.
In the example of a C comment which is not closed before the end of the input,
the error message should point to where the comment opens.
[3] Reducing calling depth means inlining of function calls, since
each rule corresponds to one function call in our PEG implementation.
Such a transformation should only be carried out for hot spots, otherwise
the expressiveness of the grammar gets lost. Furthermore, an aggressively
inlining compiler may do this inlining for you.
Reducing calling depth may be questionable, but left factorization
certainly is not. It not only improves performance but also eliminates potentially
disruptive backtracking. When semantic actions are embedded into a PEG parser,
backtracking should in most cases be avoided entirely, because undoing semantic actions
may be tedious.

Most parsing strategies currently in use are based on the notion of
a context free grammar. (For roughly the next fifty lines, the following
explanations closely follow the material of the Wikipedia article on
context free grammars [3].)
A context free grammar consists
of a set of rules similar to the set of rules of a PEG parser.
But context free grammars are interpreted quite differently from
PEG grammars. The main difference is that context free
grammars are nondeterministic, meaning that

Alternatives in context free grammars can be chosen arbitrarily

Nonterminals can be substituted in an arbitrary order
(Substitution means replacing a Nonterminal on the right hand side of a rule by the
definition of the Nonterminal).
By starting with the start rule and choosing
alternatives and substituting nonterminals in all possible orders we can generate all the
strings which are described by the grammar
(also called the language described by the grammar).

With the context free grammar

S : 'a' S 'b' | 'ab';

we can, for example, generate the following strings of the language

ab, aabb, aaabbb,aaaabbbb,...

With PEG we cannot generate a language, we can only recognize an input string.
The same grammar interpreted as PEG grammar

S: 'a' S 'b' / 'ab';

would recognize any of the following input strings

ab
aabb
aaabbb
aaaabbbb
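This PEG rule maps directly to a recursive function with backtracking. A hand-written sketch in plain C# (deliberately not using the article's PegCharParser API; names are invented):

```csharp
using System;

static class AnBn
{
    // PEG rule  S: 'a' S 'b' / 'ab';  as a recursive function.
    // Alternatives are tried in order; on failure the input
    // position is restored (backtracking).
    static bool S(string s, ref int pos)
    {
        int save = pos;
        if (Ch(s, ref pos, 'a') && S(s, ref pos) && Ch(s, ref pos, 'b'))
            return true;              // first alternative: 'a' S 'b'
        pos = save;                   // backtrack
        if (Ch(s, ref pos, 'a') && Ch(s, ref pos, 'b'))
            return true;              // second alternative: "ab"
        pos = save;
        return false;
    }

    // Match a single character and advance.
    static bool Ch(string s, ref int pos, char c)
    {
        if (pos < s.Length && s[pos] == c) { pos++; return true; }
        return false;
    }

    internal static bool Recognize(string s)
    {
        int pos = 0;
        return S(s, ref pos) && pos == s.Length;  // whole input must match
    }
}
```

Note how closely the function mirrors the rule: ordered choice becomes short-circuit evaluation plus a position reset, and the rule reference becomes a recursive call.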

It turns out that the nondeterministic nature of context free grammars, while
indispensable for generating a language, can be a problem when recognizing an input string.
If an input string can be parsed in two different ways, we have the problem of ambiguity,
which parsers must avoid.
A further consequence of nondeterminism is that a context free input string recognizer
(a parser) must choose a strategy for substituting nonterminals on the right hand side of a rule.
To recognize the input string

This is called a rightmost derivation.
A leftmost derivation parsing strategy is called LL, whereas a rightmost
derivation parsing strategy is called LR
(the first L in LL and LR stands for "parse the input string from the
Left"; who would try it from the right?).
Most parsers in use are either LL or LR parsers. Furthermore, grammars
used for LL parsers and LR parsers must obey different rules.
A grammar for an LL parser must never use left recursive rules, whereas
a grammar for an LR parser prefers immediate left recursive rules over
right recursive ones.
The C# grammar e.g. is written for an LR parser. The rule for a list of
local variables is therefore:

One of the following chapters shows how to translate a context free rule into a PEG rule.

The following table compares the prevailing parser types.

Parser Type    Sub Type         Scanner  Lookahead  Generality  Implementation        Examples
Context Free   LR-Parser        yes      -          -           table driven          -
Context Free   SLR-Parser       yes      1          medium      table driven          handcomputed table
Context Free   LALR(1)-Parser   yes      1          high        table driven          YACC, Bison
Context Free   LL-Parser        yes      -          -           code or table driven  -
Context Free   LL(1)-Parser     yes      1          low         code or table driven  predictive parsing
Context Free   LL(k)-Parser     yes      k          high        code or table driven  ANTLR, Coco-R
Context Free   LL(*)-Parser     yes      unlimited  high+       code or table driven  boost::spirit
PEG-Parser     PEG-Parser       no       unlimited  very high   code preferred        Rats, Packrat, Pappy

The reason that the above table rates the generality and power of PEG as very high
is the PEG operators & (peek) and ! (not). It is not difficult to implement these operations,
but heavy use of them can impair parser performance, and earlier generations of parser writers
carefully avoided such features because of the implied costs.

When it comes to runtime performance, the differences between the above parsing strategies are
not so clear-cut. LALR(1) parsers can be very fast. The same is true for LL(1) parsers (predictive parsers).
With LL(*) and PEG parsers, runtime performance depends on the amount of lookahead actually used
by the grammar. Special versions of PEG parsers (packrat parsers) can guarantee linear runtime behaviour
(meaning that doubling the length of the input string just doubles the parsing time).

An important difference between LR-Parsers and LL- or PEG-Parsers is the fact that LR-Parsers are
always table driven. A manually written parser is therefore in most cases either an LL-Parser or a PEG-Parser.
Table driven parsing puts parsing into a black box which allows only limited user interaction.
This is not a problem for a one time, clearly defined parsing task, but it is not ideal if one
frequently corrects, improves, and extends the grammar, because for a table driven parser any
grammar change means a complete table and code regeneration.

Most specifications for popular programming languages come with a grammar suited for an LR parser.
LL and PEG parsers can not directly use such grammars because of left recursive rules.
Left recursive rules are forbidden in LL and PEG parsers because they result in infinite recursion.
Another problem with LR grammars is that they often use alternatives with the same beginning.
This is legal in PEG but results in unwanted backtracking.
The following table shows the necessary grammar transformations when going from an LR grammar to
a PEG grammar.

Immediate Left Recursion =>
Factor out non recursive alternatives

// s1, s2 are terms which are not left recursive and not empty
LR rule:   A: A t1 | A t2 | s1 | s2;
~PEG rule: A: (s1 / s2) (t1 / t2)*;

Alternatives with same beginning =>
Merge alternatives using Left Factorization

LR rule:   A: s1 t1 | s1 t2;
~PEG rule: A: s1 (t1 / t2);
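The left recursion transformation can be checked mechanically on a tiny instance. Sketch (invented rule names, plain C#): the left recursive LR rule A: A 'x' | 'y'; becomes the PEG rule A: 'y' 'x'*; that is, the non recursive alternative followed by a repetition, which translates to a simple loop instead of infinite recursion:

```csharp
using System;

static class LeftRec
{
    // PEG rule  A: 'y' 'x'* ;  -- the left recursion of the
    // LR rule  A: A 'x' | 'y';  turned into a loop.
    internal static bool Recognize(string s)
    {
        int pos = 0;
        if (!Ch(s, ref pos, 'y')) return false;  // non recursive alternative first
        while (Ch(s, ref pos, 'x')) { }          // then zero or more 'x'
        return pos == s.Length;                  // whole input must match
    }

    static bool Ch(string s, ref int pos, char c)
    {
        if (pos < s.Length && s[pos] == c) { pos++; return true; }
        return false;
    }
}
```

Both rules describe the strings y, yx, yxx, ...; only the PEG form can be executed top-down without recursing forever.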

The following sample shows the transformation of part of the "C" grammar, as presented
in Kernighan and Ritchie's book on "C", from an LR grammar to a PEG grammar (the symbol S denotes scanning of white space).
