The topic of parsing comes up regularly. Most texts on parsing are
either very theoretical or bound to a particular tool. I'll try to introduce
parsing techniques that are well suited to use with Perl, and give just a
bit of theoretical background.

Intended audience: Perl hackers who can already work with regular
expressions, but don't have any formal Computer Science education, and
don't really know how
to parse things that are too complex for regular expressions.

Theoretical background: Parsing

Parsing is the process of matching a text against a formal
specification.

In computer science the most important question is whether the text can be
derived from the specification.

In practice it is equally important to extract information according to
the formal specification, like capturing part of a regex, or building a parse
tree from a grammar.

Theoretical background: Regular Expressions

The name "Regular Expression" comes from computer science (I'll call it
CS-Regex for disambiguation), and in CS it has a different meaning than in
Perl.

CS-Regexes consist of concatenation, alternation, repetition (with *) and
grouping. Perl regexes add many features that don't fit into the computer
scientist's view of regexes, like lookaround, capturing groups and
backreferences, zero-width assertions (like \b) and so on.

So CS-Regexes are not as powerful as Perl Regexes, but they have a big
advantage: it's easy to build a machine that performs a CS-Regex match.

Regular expressions are well suited to match strings with short, fixed
patterns, but they can't be used for constructs that use
nesting, like parentheses in arithmetic expressions, nested XML tags and the
like. (You can do this with (??{ .. }) constructs in 5.8 and with
recursive regexes in 5.10, but both are hard to write and maintain, and
generally a pain.)

CS-Regexes can be matched with a simple state machine called a DFA
(deterministic finite automaton), which consists of a set of states; for
each character from the input, the automaton decides which state comes
next. Matching a string against a regex therefore takes time linear in the
length of the string.
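To make this concrete, here is a minimal hand-rolled DFA in Perl for the CS-Regex a(b|c)*d. The state table and names are my own sketch, not part of any library; each input character costs exactly one hash lookup, which is where the linear running time comes from.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# A hand-rolled DFA for the CS-Regex a(b|c)*d.
# States: 0 = start, 1 = seen 'a', 2 = accepting (seen final 'd').
my %transition = (
    0 => { a => 1 },
    1 => { b => 1, c => 1, d => 2 },
);
my %accepting = ( 2 => 1 );

sub dfa_match {
    my ($string) = @_;
    my $state = 0;
    for my $char (split //, $string) {
        $state = $transition{$state}{$char};
        return 0 unless defined $state;    # no transition: reject
    }
    return $accepting{$state} ? 1 : 0;
}

print dfa_match("abbcd") ? "match\n" : "no match\n";   # match
print dfa_match("abda")  ? "match\n" : "no match\n";   # no match
```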

Context Free Languages

If you want to parse something like arithmetic expressions or a programming
language, you need one more feature in your description language:
recursion.
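For example, a short grammar for arithmetic expressions (in informal BNF; the rule names are my own) could use recursion like this:

```
expression -> term | term op expression
op         -> '+' | '*'
term       -> number | '(' expression ')'
```

Matching 2*3+4 against it, expression derives term op expression, which groups as 2 * (3+4).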

You can see that our short grammar matches the given expression, but not
quite the way we'd like. In normal notation '*' has tighter
precedence than '+'; if we evaluated our parse tree, it would first
evaluate 3+4, and then multiply the result by 2.

That's not very surprising because our grammar didn't contain any
precedence information. We can fix this:
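One way to encode the precedence is to give each level its own rule (again a sketch in informal BNF):

```
expression -> term   | term '+' expression
term       -> factor | factor '*' term
factor     -> number | '(' expression ')'
```

Now 2*3+4 can only derive as (2*3)+4, because the '*' is forced down into term, below the '+'.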

(Some lines contain a few derivations in parallel, just to save space. Each
production that doesn't result in a terminal symbol produces a pair of
parentheses to emphasize the resulting tree structure.)

The introduction of separate precedence levels for addition and
multiplication leads to a correctly nested parse tree.

Lexing

Most compilers don't parse their input in one pass, but first break it up
into smaller pieces called "tokens". The process is called "lexing" or
"scanning".

There are two important reasons for that:

Efficiency. Parsing CFLs takes O(n³) time, while scanning is done in linear
time. By parsing tokens instead of characters, the n
decreases.

Handling whitespace and comments: the lexer can just throw away
whitespace and comments, so we don't have to clutter up the grammar
with rules for handling whitespace.

Lexing is normally done with a DFA, but since we're focusing on Perl we can
just as well use regular expressions:
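A small lexer for arithmetic expressions could look like this. This is a sketch of the technique described below; the token names, the discard flag and the main loop are my own choices:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Token definitions: name, regex, and an optional "discard" flag.
my @token_def = (
    [ Whitespace => qr/\s+/, 1 ],       # matched but thrown away
    [ Number     => qr/\d+/    ],
    [ Op         => qr/[+*\/()-]/ ],
);

sub lex {
    my ($input) = @_;
    my @tokens;
    pos($input) = 0;
    while (pos($input) < length $input) {
        my $matched = 0;
        for my $t (@token_def) {
            my ($name, $regex, $discard) = @$t;
            # \G anchors at the end of the previous match;
            # /c keeps pos() intact when the match fails.
            if ($input =~ /\G($regex)/gc) {
                push @tokens, [ $name, $1 ] unless $discard;
                $matched = 1;
                last;
            }
        }
        die "Lexing error at position ", pos($input), "\n" unless $matched;
    }
    return @tokens;
}

my @tokens = lex("2 * 3 + 4");
print "$_->[0]: $_->[1]\n" for @tokens;
```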

The token definition consists of a name for the token, a regular expression,
and optionally a flag that indicates that the token shouldn't appear in the
output.

The program loops until the end of the string is reached, trying to match
all regexes in turn. As soon as a match is found, the token is pushed onto the
@tokens array. (In a real compiler it would also store the current
position, to allow better error messages on wrong input.)

The /c modifier on the regex ensures that a failed match doesn't
reset pos($input). The \G assertion anchors the regex to the end
of the previous match.
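A recursive descent parser for the precedence grammar (expression, term, factor) over such a token list might look like this. The sub names follow the grammar rules; the tree representation and helper subs are my own sketch:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# One sub per grammar rule; each consumes tokens from @tokens
# and returns a parse tree built from nested array refs.
my @tokens;

sub lookahead { @tokens ? $tokens[0][1] : undef }

sub expect {
    my ($value) = @_;
    my $t = shift @tokens;
    die "Expected '$value'\n" unless defined $t && $t->[1] eq $value;
}

sub expression {
    my $tree = term();
    while (defined lookahead() && lookahead() =~ /^[+-]$/) {
        my $op = (shift @tokens)->[1];
        $tree = [ $op, $tree, term() ];      # left-associative nesting
    }
    return $tree;
}

sub term {
    my $tree = factor();
    while (defined lookahead() && lookahead() =~ m{^[*/]$}) {
        my $op = (shift @tokens)->[1];
        $tree = [ $op, $tree, factor() ];
    }
    return $tree;
}

sub factor {
    if (@tokens && $tokens[0][0] eq 'Number') {
        return (shift @tokens)->[1];         # a number is a leaf
    }
    expect('(');                             # otherwise: '(' expression ')'
    my $tree = expression();
    expect(')');
    return $tree;
}

# a pre-lexed "2 * 3 + 4":
@tokens = ( [ Number => 2 ], [ Op => '*' ], [ Number => 3 ],
            [ Op => '+' ],   [ Number => 4 ] );
my $tree = expression();
# $tree is now [ '+', [ '*', 2, 3 ], 4 ]
```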

You can see that branches (like in factor) and repetition (in expression
and term) use a lookahead of one token to decide which branch to take.
Fortunately we never have to backtrack: we're always in the right branch.

A bit more theory

This parser is called a "recursive descent" parser, because it uses
recursion to descend into the rules that finally match terminal symbols.

It is a predictive parser, i.e. it guesses what the next token is, and it's a
top-down parser.

Recursive descent parsers can't parse all context free languages, only a
subset called LL(k), where k is the number of lookahead tokens (1 in our
case).

Instead of predicting the next token, we could read all the tokens and push
them onto a stack. When the top items on the stack correspond to the right-hand
side of a grammar rule, we can remove them and push the rule (annotated
with the matched tokens, of course) onto the stack.

That technique is bottom-up parsing (or LR parsing), and the languages that
can be parsed this way are called "deterministic context free languages",
DCFL for short. It is more flexible, but also more complicated.

Parser generators like yacc or bison usually produce such LR parsers.

More on Recursive Descent Parsers

You will have noticed that the subs expression and term have
the same structure, and only differ in the operator and the next function to
be called.

One way around that duplicate code is a function factory (you'll notice
that I've started reading "Higher Order Perl" ;-), very good stuff):
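Such a factory might look like this. It is a sketch in the spirit of Higher Order Perl; make_binop_rule and the token helpers are invented names, and the toy token list contains numbers only so the example stays short:

```perl
#!/usr/bin/perl
use strict;
use warnings;

my @tokens = (2, '*', 3, '+', 4);    # a pre-lexed "2 * 3 + 4"

sub lookahead  { $tokens[0] }
sub next_token { shift @tokens }

# The factory: builds one parsing sub per precedence level.
# Each generated sub parses the next-tighter level, then loops
# on its own operator, building a left-associative tree.
sub make_binop_rule {
    my ($op_regex, $next) = @_;
    return sub {
        my $tree = $next->();
        while (defined lookahead() && lookahead() =~ $op_regex) {
            my $op = next_token();
            $tree = [ $op, $tree, $next->() ];
        }
        return $tree;
    };
}

sub factor { next_token() }    # numbers only, to keep the sketch short

my $term       = make_binop_rule(qr{^[*/]$}, \&factor);
my $expression = make_binop_rule(qr/^[+-]$/, $term);

my $tree = $expression->();    # [ '+', [ '*', 2, 3 ], 4 ]
```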

Since you need such a rule for every precedence level, that can save
you a lot of typing.

But if you write more parsers, you'll notice that a lot of work will still
be duplicated.

A good alternative is a module like Parse::RecDescent which does
all the work that you normally duplicate (though I admit that I've never used
it myself).

Perl 6 Rules, the replacement for regular expressions, are another
alternative. And don't start screaming "but it's not there yet" - it is. You
can use the Parrot Grammar Engine (PGE) to write Perl 6-like rules.

A Perl 6 Rule comes in three flavours: "regex", "rule" and "token".
A regex is just what you think it is: a regex, but not very regular. A "token"
is a regex that doesn't backtrack, and a "rule" is a regex that doesn't
backtrack and that treats every whitespace in the rule body as optional
whitespace in the input.

So this is what our grammar would look like in Perl 6, with the old one as
a comparison:
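Something along these lines (an untested sketch; PGE-era syntax may differ in details from current Raku):

```
# the old grammar, one Perl 5 regex per rule:
#   expression: term   (?: [+-] term   )*
#   term:       factor (?: [*/] factor )*
#   factor:     \d+ | \( expression \)

# Perl 6 rules:
rule expression { <term>   [ <[+-]> <term>   ]* }
rule term       { <factor> [ <[*/]> <factor> ]* }
rule factor     { \d+ | \( <expression> \) }
```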

Note that [...] are non-capturing groups, while char classes are
<[...]> now. <name> is a call to another rule, and its result
is stored in a named capture with the name of the rule (which can be changed
with <capturename=rulename>).

In case of a syntax error this regex will just not match, which is usually
not the desired result. You can add a bit of error handling code:
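One possibility is to escape into a code block when the expected token is missing (a sketch in modern Raku-style syntax; the message text is made up):

```
rule factor {
    | \d+
    | \( <expression>
         [ \) || { die "Missing closing parenthesis" } ]
}
```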

Now when a closing parenthesis is missing you'll get a nice error
message.

Perl 6 will have an even better mechanism for parsing arithmetic
expressions: you can specify what an operator and what an atom is, and
specify the precedence and associativity of each operator. Then Perl uses a
bottom-up parser to do what you want:

Summary

You've seen that parsing isn't magic, and while it takes a little while to
get used to it, it's not all that hard if you stick to simple grammars.

In practice a recursive descent parser will get you very far, and it's
not hard to write one. You can either do it manually, or use a module,
or Perl 6's fabulous rules.

There's much more to say about parsing, for example about error handling
and recovery, associativity and many other topics, but it's a start ;-)

(And as usual I couldn't resist introducing some Perl 6 stuff. You can see
that from Perl 5 to Perl 6 we jump from "regular" to context free languages,
which is quite a step in the Chompsky hierarchy.)

This is a little misleading. LALR parsers take linear time (where the size of its lookup tables is taken to be a constant, since it is independent of the input string to be parsed), while recursive descent can in principle take exponential time.

I'm guessing you are thinking of the CKY algorithm, which does take cubic time. This algorithm's importance is strictly theoretical, since it can be used to efficiently parse completely arbitrary grammars. In practice, we exploit the extra structure possessed by most practical grammars to obtain faster parsing than CKY provides.

So I don't think lexing is a matter of running-time efficiency.
Its benefit is that if you blindly parse a character stream without lexing, you would have a separate leaf in the parse tree for every character (including whitespace!).
Tokens like /\d+/, if expressed directly in the context-free grammar and parsed one character at a time, would then lead to very unwieldy parse trees like this:
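For example, with a grammar fragment like num -> digit num | digit, the token 1234 parsed character by character might come out as (sketch):

```
num
├─ digit '1'
└─ num
   ├─ digit '2'
   └─ num
      ├─ digit '3'
      └─ num
         └─ digit '4'
```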

Lexing "collects" all these nasty bottom sections of the parse tree, which would otherwise be everywhere, into chunks that reflect how the parse tree will be used. Of course this could also be cleaned up with special cases in the parser, but lexing separates out this step in an elegant way.

PS: "Chompsky" (with a p) is a name commonly given to pet dogs by linguists... either as an insult or homage to Chomsky, depending on which side of the great linguistic divide you fall ;)

He wouldn't, because you didn't specify a grammar. You implemented one in a Turing machine (aka perl).

The implicit, underlying grammar for parsing is not in RE. It's in CFL, even in DCFL. In this case it's LL(1) (you need one character lookahead to distinguish * and ** tokens).

Ironically you usually think of LL(1) in terms of top down parsers, that is "all grammars that can be matched by recursive descending parsers with one token lookahead" (not an exact definition, of course), but you use a bottom up technique.

The technique is none of the standard techniques I know, which all read the input string (or stream) from left to right, while you perform multiple passes (with while( m[[()\s]] )).

Update:

The computation of the result doesn't work in CFL, of course. If you're really a tough guy you can evaluate such things in context sensitive grammars (but it's much work), and for that you need, in general, a linearly bounded Turing machine (LBM).

In the normal computation model you can't modify the input tape, and the size of a supplemental, rw-tape is what the "linear" refers to. And since the algorithm above uses a complete copy of the input string, it also uses (at least) an LBM ;-)

You may or may not find that “recursive descent” parsing is right for you. Recursive-descent is very appropriate when there's a lot to do at each point, and the structure of the input-language is very consistent and predictable. Cases where the input is, if you will, “heavy and full of meaning.”

But there is another approach that is sometimes quite useful: it's a more grammar-driven, state-table-driven approach, the one used by bison or yacc, in which a grammar-following engine is chugging through its internal description of the language, building trees (“hashes and lists of hashes and lists”) along the way, and calling-out to your code as pure subroutines along the way.

In this scenario, that “grammar-following engine” is driving the bus for the entire time. It calls-out to you at the prescribed times, passing your routines the information they're supposed to receive, and expecting each routine to mind its own business. The complexities of the language being processed are not represented in (nor should they be represented in) the discrete subroutines that you write... unlike recursive-descent, where more-or-less the opposite is true.

Which is “better?” That question can't be answered categorically: “it depends.” It's enough to know that in Perl both approaches are available to you. Each approach is distinct, and each approach is distinct for a to-it appropriate reason. That's why you get to choose.

This is, or can be, a very deep pool ... but don't go any deeper than you actually need to go, unless you're curious and you've brought your life-jacket with you.

I applaud this piece because ... parsers are an extremely powerful technology that you need to become aware-of. I'm not quite sure why they get relegated to the cloistered halls of graduate-school. They're actually relatively straightforward these days. So... if you're wrestling your way through “regular expression hell,” remember that ... especially in Perl-land ... There's More Than One Way To Do It.™

I've read about 1/3 of this and fully intend to finish it. Overall, I've given it ++. The following is intended as a constructive comment in the "RFC" sense.

I think I self-identify with your stated audience, Perl hackers that can already work with regular expressions, but don't have any formal Computer Science education, and don't really know how to parse things that are too complex for regular expressions. I'm currently trying my hand at an applied parsing problem (Reversible parsing (with Parse::RecDescent?)). I got as far as I did by searching CPAN for "parse" and reading the doc. I'd say I got to a practical (in the parse direction, not in the reverse direction), albeit simple solution. I learned a lot more from ikegami's reply. I'm also learning from the documentation for Parse::Marpa.

The criticism is that I'm not sure I could have gotten to that level of practical results from your tutorial. I think your tutorial has aimed a little high for the stated target audience. Terms like "deterministic finite automaton", "linear time", "Context Free Languages" and the whole "A bit more theory" section slow me down a little and make it a dense read. Having gone through the work I already went through, your tutorial is helpful and instructive, especially in defining some terms that the other sources have been throwing around. This gives me good academic material to complement the quick-n-dirty applicable stuff I got from Parse::RecDescent's documentation, et al.

Just to be clear, I find this to be immensely informative, and am enjoying working my way through it. The academic terms are defined as you use them for the most part, and I'm learning a lot. I think the only edit I'd suggest (partly because the whole article seems a little above me for me to be making significant content suggestions) is to restate the intended audience. This is an intermediate-to-advanced text that fills a niche between Perl hackers that can already work with regular expressions, but don't have any formal Computer Science education, and the heavily technical papers that are intended for academia. Said Perl hackers can use this to improve themselves, but I found it a little beyond (though certainly complementary to) "practical".

#my sig used to say 'I humbly seek wisdom. '. Now it says:
use strict;
use warnings;
I humbly seek wisdom.