The monadic approach is more flexible, because it allows the grammar of the second part to depend on the result from the first one, but we rarely need this extra flexibility in practice.

You might think that having some extra flexibility can't hurt, but in reality it can. It prevents us from doing useful static analysis on a parser without running it. For example, let's say we want to know whether a parser can match the empty string or not, and what the possible first characters can be in a match. We want functions

empty :: Parser a -> Bool
first :: Parser a -> Set Char

With an applicative parser, we can easily answer these questions. (I'm cheating a little here. Imagine we have data constructors corresponding to (<*>) and (>>=) in our candidate parser "languages".)
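To make this concrete, here is a minimal sketch of such a parser "language": a hypothetical first-order grammar type (not any real library's representation) with constructors standing in for sequencing and choice. Over this type, the two analyses become simple recursive functions (named acceptsEmpty and firstSet here to avoid clashing with Prelude names):

```haskell
import qualified Data.Set as Set
import Data.Set (Set)

-- A toy, untyped grammar for illustration; result types are omitted
-- to keep the sketch small.
data G = Eps          -- matches the empty string (cf. pure)
       | Chr Char     -- matches exactly one given character
       | Seq G G      -- sequencing (cf. (<*>))
       | Alt G G      -- choice (cf. (<|>))

-- Can this grammar match the empty string?
acceptsEmpty :: G -> Bool
acceptsEmpty Eps       = True
acceptsEmpty (Chr _)   = False
acceptsEmpty (Seq a b) = acceptsEmpty a && acceptsEmpty b
acceptsEmpty (Alt a b) = acceptsEmpty a || acceptsEmpty b

-- Which characters can a match start with?
firstSet :: G -> Set Char
firstSet Eps       = Set.empty
firstSet (Chr c)   = Set.singleton c
firstSet (Seq a b)
  | acceptsEmpty a = firstSet a `Set.union` firstSet b
  | otherwise      = firstSet a
firstSet (Alt a b) = firstSet a `Set.union` firstSet b
```

Note that there is no constructor corresponding to (>>=): the grammar to the right of a bind would be a function of a parse result, and a static traversal has no result to apply it to.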

By allowing more, we're able to reason less. This is similar to the choice between dynamic and static type systems.

But what is the point of this? What might we use this extra static knowledge for? Well, we can for example use it to avoid backtracking in LL(1) parsing by comparing the next character to the first set of each alternative. We can also determine statically whether this would be ambiguous by checking if the first sets of two alternatives overlap.
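As a sketch of how that might look, assume a hypothetical representation that pairs each parser with its precomputed first set (all names here are made up for illustration). Deterministic, non-backtracking choice and the static ambiguity check then fall out directly:

```haskell
import qualified Data.Set as Set
import Data.Set (Set)

-- A parser tagged with its statically known first set.
-- (A hypothetical representation, not any real library's type.)
data Tagged a = Tagged
  { firsts :: Set Char
  , run    :: String -> Maybe (a, String)
  }

-- Deterministic LL(1) choice: peek at the next character and commit
-- to one alternative without backtracking.
choose :: Tagged a -> Tagged a -> Tagged a
choose p q = Tagged (firsts p `Set.union` firsts q) go
  where
    go s@(c:_)
      | c `Set.member` firsts p = run p s
      | c `Set.member` firsts q = run q s
    go _ = Nothing

-- Static ambiguity check: do the first sets of two alternatives overlap?
ambiguous :: Tagged a -> Tagged b -> Bool
ambiguous p q = not (Set.null (firsts p `Set.intersection` firsts q))

-- A primitive parser for one character.
char :: Char -> Tagged Char
char c = Tagged (Set.singleton c) go
  where
    go (x:xs) | x == c = Just (c, xs)
    go _               = Nothing
```

The key point is that both choose and ambiguous consult only the firsts fields, which are computed before any input is seen.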

Usually, however, the choice between applicative and monadic parsing has already been made by the authors of the parsing library you're using. When a library such as Parsec exposes both interfaces, the choice of which one to use is purely a stylistic one. In some cases applicative code is easier to read than monadic code and sometimes it's the other way round.

Wait! I had thought the same until today, when it occurred to me that the empty test can be applied to monadic parsers as well. The reason is that we can get the value you named ??? by applying the parser x to the empty string. More generally, you can just feed the empty string into the parser and see what happens. Likewise, the set of first characters can be obtained at least in functional form, first :: Parser a -> (Char -> Bool). Of course, converting the latter to Set Char would involve an inefficient enumeration of characters; that's where applicative functors have the edge.
– Heinrich Apfelmus, Apr 21 '12 at 8:12
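The comment's idea can be sketched with a hypothetical resumable recognizer type (nothing like Parsec's internals): whether the empty string matches is read off directly, and the first set is recovered, exactly as the comment says, only as a predicate Char -> Bool, by stepping the recognizer on a single character at runtime:

```haskell
import Data.Maybe (isJust)

-- A recognizer as a (possibly infinite) trie: does it accept at this
-- point, and how does it advance on one character?
data Rec = Rec
  { acceptsEmpty :: Bool
  , step         :: Char -> Maybe Rec
  }

-- The "feed it the empty string" test from the comment.
matchesEmpty :: Rec -> Bool
matchesEmpty = acceptsEmpty

-- The first set, available only as a predicate: we must actually run
-- one step of the recognizer per character we ask about.
firstOK :: Rec -> Char -> Bool
firstOK r c = isJust (step r c)

-- Example recognizer: exactly one literal string.
lit :: String -> Rec
lit []     = Rec True  (const Nothing)
lit (x:xs) = Rec False (\c -> if c == x then Just (lit xs) else Nothing)
```

Turning firstOK into a Set Char would indeed require enumerating every candidate character and running the parser on each, whereas the applicative analysis computes the set without running anything.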

If a parser is purely applicative, it is possible to analyse its structure and "optimise" it before running it. If a parser is monadic, it's basically a Turing-complete program, and performing almost any interesting analysis of it is equivalent to solving the halting problem (i.e., impossible).

The difference between Applicative and Monad has nothing to do with Turing-completeness. In Haskell, the relative difficulty of optimizing Monad instances is only due to the historical mistake of exposing (>>=) alone in the type class, making it impossible for instances to provide more optimized implementations of operators like ap. The Applicative class avoids this mistake, and exposes <*> (the equivalent of ap).
– Piet Delport, Aug 3 '14 at 12:27
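One way to see the distinction concretely: a purely static "parser" that records only analysis information supports Applicative but cannot support Monad, because (>>=) would need an actual parse result to choose the next step, and this type carries no result at all. This is an illustrative toy type, not any library's:

```haskell
import qualified Data.Set as Set
import Data.Set (Set)

-- A purely static "parser": only the analysis data, no parsing function.
-- The type parameter a is phantom; no value of type a is ever produced.
data Static a = Static
  { canBeEmpty :: Bool
  , firstSet   :: Set Char
  }

instance Functor Static where
  fmap _ (Static e f) = Static e f

instance Applicative Static where
  pure _ = Static True Set.empty
  Static e1 f1 <*> Static e2 f2 =
    Static (e1 && e2)
           (if e1 then f1 `Set.union` f2 else f1)

-- No Monad instance is possible: (>>=) :: Static a -> (a -> Static b)
-- -> Static b would have to apply its function to a value of type a,
-- but a Static contains none.

charS :: Char -> Static Char
charS c = Static False (Set.singleton c)
```

This is the flip side of the quoted answer's point: it is not Turing-completeness that separates the two interfaces, but whether the shape of the computation can depend on a runtime value.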

@PietDelport, I think the trouble is that the underlying representations of monadic parsers generally aren't amenable to optimized Applicative instances, while applicative parsers generally don't support >>=.
– dfeuer, Jul 15 at 22:14

The main reason I can see to prefer applicative parsers over monadic parsers is the same as the main reason to prefer applicative code over monadic code in any context: being less powerful, applicatives are simpler to use.

This is an instance of a more general engineering dictum: use the simplest tool which gets the job done. Don't use a fork lift when a dolly will do. Don't use a table saw to cut out coupons. Don't write code in IO when it could be pure. Keep it simple.

But sometimes you need the extra power of Monad. A sure sign of this is when you need to change the course of the computation based on what has been computed so far. In parsing terms, this means determining how to parse what comes next based on what has been parsed so far; in other words, you can handle context-sensitive grammars this way.
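A classic illustration, sketched here with a minimal hand-rolled parser monad (hypothetical names throughout): parse a digit n, then demand exactly n occurrences of 'a'. How many characters the second half must consume depends on the value produced by the first half, which is precisely what (>>=) provides and (<*>) cannot:

```haskell
import Data.Char (isDigit, digitToInt)

-- A minimal list-of-successes parser, just enough for the example.
newtype Parser a = Parser { runParser :: String -> [(a, String)] }

instance Functor Parser where
  fmap f p = Parser (\s -> [ (f a, r) | (a, r) <- runParser p s ])

instance Applicative Parser where
  pure a    = Parser (\s -> [(a, s)])
  pf <*> px = Parser (\s -> [ (f x, s2) | (f, s1) <- runParser pf s
                                        , (x, s2) <- runParser px s1 ])

instance Monad Parser where
  p >>= f = Parser (\s -> concat [ runParser (f a) r | (a, r) <- runParser p s ])

-- Consume one character satisfying a predicate.
satisfy :: (Char -> Bool) -> Parser Char
satisfy q = Parser go
  where
    go (c:cs) | q c = [(c, cs)]
    go _            = []

-- Context sensitivity: the digit we parse decides how many 'a's follow.
countedAs :: Parser String
countedAs = do
  n <- digitToInt <$> satisfy isDigit
  sequence (replicate n (satisfy (== 'a')))

-- Does the parser match the whole input?
parses :: Parser a -> String -> Bool
parses p s = any (null . snd) (runParser p s)
```

In an applicative parser, the grammar of the second part is fixed before any input is read, so nothing comparable to countedAs can be written.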

No, "use the simplest tool" may seem like a good rule of thumb, but it actually isn't. E.g., we use computers for writing letters, yet compared to a sheet of paper a computer is something like a table saw compared to a pair of scissors.
– Valentin Golev, Dec 6 '12 at 16:29

I mean, there are always upsides and downsides for every choice, but mere simplicity is a bad basis for a choice. Especially when you're deciding whether to use Haskell. :D
– Valentin Golev, Dec 6 '12 at 16:31


Yes, you're right. It would be better to say something like, "the right tool is the one which is maximally efficient while being minimally complex." What's missing from my description is the part about efficiency: you want a tool which is sufficiently powerful not just to do the job, but to make the job as easy as possible. But at the same time you don't want a tool which has lots of bells and whistles not applicable to the task at hand, since these most likely increase the complexity of operating it to no benefit.
– Tom Crockett, Dec 6 '12 at 20:42

ISTR that formally, because Haskell allows infinite grammars, monad does not actually increase the number of recognizable languages.
– luqui, Oct 22 '11 at 19:13


@luqui I'm curious about your comment. Here's a language: the alphabet is Haskell Strings, and the language is the set of words in which all the letters are equal. This is dead easy as a monadic parser: option [] (anyToken >>= many . exactToken) (where anyToken and exactToken aren't actually part of the Parsec library, but probably ought to be; ask me if you're unsure what they do). How would the corresponding applicative parser look?
– Daniel Wagner, Oct 22 '11 at 21:15
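For the curious, the comment's hypothetical combinators might be rendered like this, using StateT String [] from transformers as a throwaway backtracking parser. This is a sketch of what anyToken and exactToken plausibly mean, not Parsec code:

```haskell
import Control.Applicative (Alternative (..))
import Control.Monad.Trans.State (StateT (..))

-- A backtracking parser: list-of-successes over a String state.
type Parser = StateT String []

-- Consume any one token.
anyToken :: Parser Char
anyToken = StateT go
  where
    go (c:cs) = [(c, cs)]
    go []     = []

-- Consume one specific token.
exactToken :: Char -> Parser Char
exactToken c = StateT go
  where
    go (x:xs) | x == c = [(x, xs)]
    go _               = []

-- The comment's parser: words in which every letter equals the first.
-- (This version also keeps the first letter in the result.)
allEqual :: Parser String
allEqual = (anyToken >>= \c -> (c:) <$> many (exactToken c)) <|> pure []

-- Does the parser match the whole input?
accepts :: Parser a -> String -> Bool
accepts p s = any (null . snd) (runStateT p s)
```

The (>>=) is essential: which character many (exactToken c) looks for is determined by the character anyToken happened to consume.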


@stephen, can you give a reference for context-sensitive parsers? I'm curious what the exact power of monadic and applicative parsers is.
– sdcvvc, Oct 22 '11 at 21:47


@sdcvvc: this paper discusses the relative power of arrow parsers vs. monadic parsers and points out that monadic parsers enable parsing context-sensitive grammars while arrows do not. I believe applicative parsers would be strictly less powerful than arrow parsers.
– Tom Crockett, Oct 22 '11 at 23:06

Did you mean to say that Monads are a superset of Applicatives?
– Guildenstern, Nov 8 '13 at 17:10


@Guildenstern Monadic operations are a superset of Applicative operations. Put another way: types that have an instance of Monad are a subset of types that have an instance of Applicative. When speaking of "Monads" and "Applicatives", one is usually referring to the types, not the operations.
– Dan Burton, Nov 8 '13 at 17:37
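The "operations" half of this can be shown in a couple of lines: any Monad instance already determines the Applicative operations, via the generic ap from Control.Monad, spelled out here for clarity:

```haskell
-- Every Monad's (>>=) suffices to recover (<*>); this is the standard
-- definition of ap, written out.
apViaBind :: Monad m => m (a -> b) -> m a -> m b
apViaBind mf mx = mf >>= \f -> mx >>= \x -> return (f x)
```

Note that this recovered (<*>) is forced to run the two computations in sequence through (>>=), which is exactly why a Monad-only class left no room for instances to supply a cleverer, more analyzable (<*>) of their own.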