
The most important difference is that lexing translates your input domain: instead of a stream of characters, the parser proper operates on a stream of tokens.

A nice result of this is that

You do not have to think about whitespace anymore. In a direct (non-lexing) parser, you have to sprinkle space parsers everywhere whitespace is allowed, which is easy to forget, and it clutters your code if whitespace must separate all your tokens anyway.

You can think about your input in a piece-by-piece manner, which is easy for humans.
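To illustrate the whitespace point, here is a minimal sketch of the usual lexer-free Parsec idiom: a `lexeme` wrapper that eats trailing whitespace, so individual parsers never mention it (`lexeme`, `symbol`, and `assignment` are illustrative names, not library functions):

```haskell
import Text.Parsec
import Text.Parsec.String (Parser)

-- Wrap every token-level parser in `lexeme` so trailing whitespace
-- is consumed automatically, instead of sprinkling `spaces` calls
-- throughout the grammar.
lexeme :: Parser a -> Parser a
lexeme p = p <* spaces

symbol :: String -> Parser String
symbol = lexeme . string

-- Without `lexeme`, each of these three parsers would need an
-- explicit `spaces` after it:
assignment :: Parser (String, String)
assignment = do
  name <- lexeme (many1 letter)
  _    <- symbol "="
  val  <- lexeme (many1 digit)
  return (name, val)
```

With this in place, `parse assignment "" "x  =  42"` and `parse assignment "" "x=42"` both succeed, since whitespace handling lives in one combinator.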

However, if you do perform lexing, you get the problems that

You cannot use common parsers on String anymore - e.g. for parsing a number with a library function parseFloat :: Parsec String s Float (that operates on a String input stream), you have to do something like takeNextToken :: TokenParser String and execute the parseFloat parser on it, inspecting the parse result (usually Either ErrorMessage a). This is messy to write and limits composability.
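To make that dance concrete, here is a sketch; the Token type and parseFloatToken are hypothetical names, and float is a simplified stand-in for a library parser:

```haskell
import Text.Parsec
import Text.Parsec.String (Parser)

-- Hypothetical token type produced by some separate lexer.
data Token = TNumber String | TIdent String deriving (Show, Eq)

-- A String-based parser we would like to reuse (simplified).
float :: Parser Float
float = do
  whole <- many1 digit
  _     <- char '.'
  frac  <- many1 digit
  return (read (whole ++ "." ++ frac))

-- On a token stream we cannot apply `float` directly: we must unpack
-- the token's text, re-run the parser on it, and inspect the result
-- by hand -- exactly the clumsiness described above.
parseFloatToken :: Token -> Maybe Float
parseFloatToken (TNumber s) =
  case parse (float <* eof) "<token>" s of
    Right x -> Just x
    Left _  -> Nothing
parseFloatToken _ = Nothing
```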

You have to adjust all error messages. If your parser on tokens fails at the 20th token, where in the input string is that? You'll have to manually map error locations back to the input string, which is tedious (in Parsec this means adjusting all SourcePos values).

Error reporting is generally worse. Running string "hello" *> space *> float on wrong input like "hello4" will tell you precisely that the expected whitespace after the hello is missing, while a lexer will just claim to have found an "invalid token".
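The example above can be run directly; float here is a simplified hand-rolled stand-in, not a library combinator:

```haskell
import Text.Parsec
import Text.Parsec.String (Parser)

-- Simplified float parser, just for the demonstration.
float :: Parser Float
float = read <$> many1 digit

greeting :: Parser Float
greeting = string "hello" *> space *> float

-- On "hello4", parsing fails right after "hello" with a message that
-- names the expected space; a lexer would only report an invalid
-- token. On "hello 4" it succeeds.
```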

Many things that one would expect to be atomic units, separated out by a lexer, are actually too hard for a lexer to identify. Take string literals for example: suddenly "hello world" is no longer the two tokens hello and world (but only, of course, if the quotes are not escaped, like \"). While this is very natural for a parser, it means complicated rules and special cases for a lexer.
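As a sketch of how naturally this reads in a parser (handling only the \" and \\ escapes; stringLiteral is an illustrative name, not the Text.Parsec.Token version):

```haskell
import Text.Parsec
import Text.Parsec.String (Parser)

-- A string literal as one atomic unit, expressed directly.
stringLiteral :: Parser String
stringLiteral = char '"' *> many chr <* char '"'
  where
    chr = (char '\\' *> oneOf "\"\\")  -- escaped quote or backslash
      <|> noneOf "\"\\"                -- any other character
```

Here `parse stringLiteral "" "\"hello world\""` yields one value, "hello world", rather than two tokens split at the space.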

You cannot re-use parsers on tokens as nicely. If you define how to parse a double out of a String, export it and the rest of the world can use it; they cannot run your (specialized) tokenizer first.

You are stuck with it. When you are developing the language to parse, using a lexer might lead you into making early decisions, fixing things that you might want to change afterwards. For example, imagine you defined a language that contains some Float token. At some point, you want to introduce negative literals (-3.4 and - 3.4) - this might not be possible due to the lexer interpreting whitespace as token separator. Using a parser-only approach, you can stay more flexible, making changes to your language easier. This is not really surprising since a parser is a more complex tool that inherently encodes rules.
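The negative-literal scenario can be sketched as follows; float and negFloat are illustrative, simplified parsers, not library functions:

```haskell
import Text.Parsec
import Text.Parsec.String (Parser)

-- Simplified positive float literal.
float :: Parser Float
float = do
  whole <- many1 digit
  _     <- char '.'
  frac  <- many1 digit
  return (read (whole ++ "." ++ frac))

-- Extending the language with negative literals is a local change:
-- both "-3.4" and "- 3.4" are accepted, because whitespace handling
-- stays under the parser's control instead of the lexer's.
negFloat :: Parser Float
negFloat = do
  _ <- char '-'
  spaces                 -- optional whitespace between '-' and digits
  negate <$> float
```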

To summarize, I would recommend writing lexer-free parsers for most cases.

In the end, a lexer is just a "dumbed-down" parser - if you need a parser anyway, combine them into one.

What about performance? I guess if you're using Parsec anyhow, performance isn't paramount, but it's still a possible consideration.
– Tikhon Jelvis, Mar 5 '13 at 5:26


Another potential issue with a dedicated lexer is that you won't be able to implement extensible parsers (with different sets of tokens) any more.
– SK-logic, Mar 5 '13 at 10:28


Nice answer, but I have to take issue with the "atomic units are hard in a lexer" example. Now I'm certainly not an expert in the theory, but I believe that delimited strings can be parsed pretty easily with a regular grammar. i.e. /^"([^\\"]|\\")*"/ is a real regular expression (in the formal sense -- I think) that even deals with escaping. The point is well taken, though.
– Matt Fenwick, Mar 5 '13 at 19:20


I have to agree, performance should be mentioned here. I've converted from the all-parser style to parser+lexer (both in Parsec) then to parser+alex-generated-lexer, and every step increased performance noticeably.
– ScottWest, Mar 14 '13 at 8:40


Another nice result of using a scanner with regular expressions is that those expressions can be automatically left factored during the NFA to DFA conversion. While this can be done by hand in Parsec, in practice everyone resorts to backtracking instead, which is less efficient in both time and space.
– John F. Miller, Apr 22 '13 at 18:01