2009 December 30

Fun with the Lazy State Monad

The lazy state monad doesn’t work the way you think it works. Thinking of the lazy state monad in terms of imperative programming is a very useful first approximation, at least in terms of what is computed, but in terms of how things are computed this intuition is beyond useless. I hope to disabuse you of the latter part of this intuition in this post, by demonstrating two of the more interesting things that can be done with the lazy state monad.

Albert Y.C. Lai recently shared a program that demonstrates the lazy state monad in a rather interesting way:

pro :: State [Bool] ()
pro = do
    pro
    s <- get
    put (True : s)

Instead of tail recursion, this employs head recursion. The question is, what does pro return when you run it, and why? I found it easy to guess the correct answer, but my reasoning was completely wrong. Of course, if viewed through a wholly imperative mindset, this leads to non-termination, but the lazy state monad extracts usable information from this definition.
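To see this concretely, note that each level of the head recursion prepends one True to the final state of the recursive call beneath it, and laziness delivers those cons cells on demand. A minimal demonstration, assuming the lazy State monad from Control.Monad.State.Lazy in the mtl package:

```haskell
import Control.Monad.State.Lazy

pro :: State [Bool] ()
pro = do
    pro
    s <- get
    put (True : s)

-- The initial state is never inspected, and the final state is an
-- infinite list of Trues, produced one cons cell at a time:
main :: IO ()
main = print (take 3 (execState pro undefined))
-- prints [True,True,True]
```

Running pro against a bottom initial state still produces a perfectly usable result, which is exactly the behavior the imperative intuition fails to predict.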

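For reference, the circular program in question: Jones and Gibbons label a tree breadth-first by threading a list of per-level counters through the traversal, tying the input stream to the output stream. A simplified sketch of that idea (my own rendering, with a minimal tree type; not their code verbatim):

```haskell
data Tree a = Leaf | Branch (Tree a) a (Tree a)
  deriving (Show, Eq)

-- Breadth-first numbering as a circular program: the stream holds the
-- next free label for each depth, and the stream fed in is knotted to
-- the stream that comes back out.
bfnum :: Tree a -> Tree Int
bfnum t = t'
  where
    (ns', t') = renumber (1 : ns', t)

    renumber (ns,     Leaf        ) = (ns, Leaf)
    renumber (n : ns, Branch l _ r) = ((n + 1) : ns2, Branch l' n r')
      where
        (ns1, l') = renumber (ns,  l)
        (ns2, r') = renumber (ns1, r)

-- ghci> bfnum (Branch (Branch Leaf 'b' Leaf) 'a' (Branch Leaf 'c' Leaf))
-- Branch (Branch Leaf 2 Leaf) 1 (Branch Leaf 3 Leaf)
```

Each Branch takes its label from the head of the stream at its depth, hands the tail to its children, and emits the bumped counter back out at its own depth.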
I finally disentangled the corecursive bits of this example today. The circular programming occurs in the list argument, not the tree. Note that the flow of the list ns from one call of renumber to the next is exactly the flow of state through the state monad. From this observation, I wrote the following whimsical code:
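In sketch form (a reconstruction of the idea, assuming the lazy State monad from the mtl package, with my own choice of tree type): the state is the stream of next free labels, one entry per level of the tree.

```haskell
import Control.Monad.State.Lazy

data Tree a = Leaf | Branch (Tree a) a (Tree a)
  deriving (Show, Eq)

-- The head of the state is the next free label at the current depth;
-- the tail belongs to the levels below.
fresh :: (Int -> State [Int] b) -> State [Int] b
fresh f = do
    ~(n : ns) <- get     -- lazy match: the stream is defined circularly
    put ns               -- the body runs one level further down
    x   <- f n
    ns' <- get
    put ((n + 1) : ns')  -- restore this level, with its label bumped
    return x

renumber :: Tree a -> State [Int] (Tree Int)
renumber Leaf           = return Leaf
renumber (Branch l _ r) = fresh $ \n -> do
    l' <- renumber l
    r' <- renumber r
    return (Branch l' n r')

-- Tie the knot: the initial state is 1 followed by the final state.
bfnum :: Tree a -> Tree Int
bfnum t = t'
  where (t', ns) = runState (renumber t) (1 : ns)
```

The strict state monad would diverge here; only the lazy one lets the final state be consumed while it is still being produced.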

And finally, we have arrived at a way to disentangle Jones and Gibbons’ algorithm. This easily generalizes from breadth-first numbering to breadth-first labeling, and like the original, it is capable of renumbering the Stern-Brocot tree. The key insights here are the use of the lazy state monad, and getting the type of fresh correct. Everything else is relatively straightforward, once this groundwork is laid.

It’s interesting to perform a post-mortem analysis of why coming up with this proved to be so difficult for me. I’ll admit that I spent a few weeks trying to decipher the operational characteristics of Jones and Gibbons’ labeling algorithm, and while I think I have a reasonable grasp on it, I’m still not completely comfortable with it. However, this monadic abstraction seems perfectly obvious in hindsight.

This contrasts starkly with my own understanding of Lloyd Allison’s queue: there I was completely comfortable with the operational aspects of the algorithm, but the monadic abstraction was quite difficult to come to grips with. So my difficulty with Jones and Gibbons’ algorithm stemmed from an over-emphasis on its operational aspects, and from too much focus on the continuation monad as part of the solution. Basically, I was hopeful that the methodologies in my Monad Reader article would be directly applicable, so much so that I had a case of tunnel vision.

While it is not obvious how the continuation monad might be applicable to this problem, continuations still played a role in the solution: look at the type of fresh; it is the same (a -> r) -> r pattern that the continuation monad uses. It seems to me that there is something deeper here, although I don’t know what that would be.
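To spell the observation out (a toy definition, included only to exhibit the shape):

```haskell
-- The continuation monad's carrier is a function of type (a -> r) -> r:
newtype Cont r a = Cont { runCont :: (a -> r) -> r }

-- fresh's type,
--   fresh :: (Int -> State [Int] b) -> State [Int] b
-- is exactly this shape, with a = Int and r = State [Int] b.
```

In other words, fresh behaves like a suspended value of type Int that decides for itself how to wrap the rest of the computation.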

These examples might make a stronger case for the lazy state monad than the example in my previous post. While foldrByLevel is relatively easy to adapt to the continuation passing state monad, I don’t know how to do the same with either of these two examples.

That is very elegant; my appreciation of applicative functors could be better. :-)

I would contrast this with C++ and Scheme, where the order in which function arguments are evaluated is unspecified. I wrote a parser in C++ years ago using essentially an applicative-functor idiom, only to have to rewrite it in something closer to the monadic style for exactly this reason.

At first I found this rather frustrating, although I grew to see it as something of a good thing: it forces you to make dependencies on evaluation order explicit, and it allows the compiler to optimize more aggressively.

On the other hand, the (big) downside is that code that happens to work on one compiler can break on another. Of course, these specific issues don’t apply to Haskell, because effects are tightly controlled, but they do inform my stylistic tastes.

With the monadic style it’s easy to specify a right-to-left renumbering: all you have to do is swap the order of two lines. However, I’m not sure the elegance of the applicative style holds up quite as nicely in the face of this change.

But of course, monadic versus applicative style is not an all-or-nothing proposition, and I do think that your decomposition of fresh into fresh and down is probably a good move.

The problem is that once you apply a continuation transformer to the monad stack, you turn the lazy state monad into a strict one, and this algorithm gets stuck in an infinite, non-productive loop. It does not matter whether it’s a codensity transformer or a regular continuation transformer; the higher-ranked types do not change the termination properties of the program.

One can verify the claim above by installing Edward Kmett’s category-extras package and applying his Codensity transformer to the original code. Also, I think I see what you were trying to do with freshCod; however, your code produces a depth-first renumbering, not the breadth-first renumbering we want.

It’s an interesting idea, though: is it possible to somehow “layer” one monad on another without resorting to transformers? Basically, instead of combining the effects of two monads, can you use one monad to structure the computation inside another monadic language while keeping the two more-or-less separate?