Archive for February, 2009

I think someone asked me to post a link to the slides I used
for my talk last Friday.
They’re at http://www.cs.nott.ac.uk/~pgh/cont-eating-talk.pdf
(There are some backup slides at the end I didn’t use.)

Eating: beyond streams

Abstract
‘Eating’ is about writing programs that implement functions on final
coalgebras, typically stream types. The crucial thing is to do it in
a composable way, as with Unix-style piping.

I’ll quickly canter through stream eating, pointing out the use of
nested fixed points, and of universal properties to define things.
The main content of the talk is about eating ‘infinite’ objects that
inhabit the final coalgebras of container functors.

For a long time I was stuck on how to do this in a composable way;
a few days ago I cracked it, and would like to show you how.
You can regard it as a study in dependently typed programming,
in which quite a few interesting things emerge.

****
Finally, many thanks for a really fun three and a half years in
Nottingham. And for the Talisker and baccy!
xxx,
Hank

On my way to check the plumbing, I dropped in to see how people were doing and told a bit of a story about proof search in implicational logic. You may have encountered the problem before in the context of Roy Dyckhoff’s contraction-free sequent calculi, which Lennart Augustsson turned into Djinn. Here’s a Haskell rendering of the basic idea. An implicational formula is an atom or an implication.

data Fmla
  = Atom Atom        -- an atomic proposition
  | Fmla :-> Fmla    -- an implication
infixr 5 :->         -- implication nests to the right
type Atom = String

Let’s write a checker to see if a Fmla follows from a bunch of hypotheses. Step one: introduce the hypotheses, reducing the problem to testing whether an atom holds.

That’s to say, to use an implicational hypothesis, show the premise holds and use the conclusion. But from which context should we show the premise? If you look carefully, you’ll see that atom passes in the whole of its context, bar the hypothesis in the stoup. That stops us getting in a loop when we try to deduce A from (A → A). We haven’t thrown away any theorems, because any proof which uses the same hypothesis twice on a path in the tree can be shortened to cut out the loop.
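A sketch of such a checker, reconstructing the idea in Haskell (the names and details here are my own guesses, not necessarily the original code):

```haskell
-- My own sketch of the checker described above.
-- (Fmla is repeated here so the snippet compiles on its own.)
data Fmla
  = Atom Atom
  | Fmla :-> Fmla
type Atom = String
infixr 5 :->

-- Step one: introduce hypotheses until the goal is an atom.
follows :: [Fmla] -> Fmla -> Bool
follows ctx (p :-> q) = follows (p : ctx) q
follows ctx (Atom a)  = atom ctx a

-- To prove an atom, pick a hypothesis for the stoup and chase its
-- conclusion; each premise must follow from the context *without*
-- the stoup hypothesis, which is what blocks the A -> A loop.
atom :: [Fmla] -> Atom -> Bool
atom ctx a = any try (picks ctx) where
  try (h, rest) = use h where
    use (Atom b)  = a == b
    use (p :-> q) = follows rest p && use q

-- Every way of choosing one element and keeping the rest.
picks :: [x] -> [(x, [x])]
picks []       = []
picks (x : xs) = (x, xs) : [ (y, x : ys) | (y, ys) <- picks xs ]
```

For instance, `follows [] (Atom "A" :-> Atom "A")` comes out True, while `follows [Atom "A" :-> Atom "A"] (Atom "A")` comes out False: the stoup discipline forbids the loop.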

But is that enough to ensure termination? Sure is. How might we see that? The problem is quite nasty, because if we want to show

(A → B → C) → D, A → C ⊢ D

backchaining actually increases the length of the context: we end up showing

A → C, A, B ⊢ C

but even so, you might get the feeling that the problem is getting smaller somehow. And you’d be right: we got rid of a higher-order hypothesis but acquired more at lower order. We can make that clear with a wee bit of indexing.

The index is a loose bound on the order (arrow-left-nestedness) of the formula. Never use tight bounds when loose bounds will do—it’s too much hassle keeping tight bounds accurate. Now we can collect formulae in buckets according to their order. Let’s use this handy structure

which I’ve defined recursively today for ease of iso. To see why these are sort-of triangles, instantiate T to vectors (vectors themselves can be seen as degenerate triangles). Also, T = Fin·suc goes with a bang! But I digress…
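Transposed into Haskell GADTs, here is my guess at the shape of such a structure (Tri, Vec, and all the names are mine; the original was presumably dependently typed): a triangle over t at size n packs one layer t i for each i below n.

```haskell
{-# LANGUAGE GADTs, DataKinds, KindSignatures #-}
import Data.Kind (Type)

-- Natural numbers, promoted to the type level by DataKinds.
data Nat = Z | S Nat

-- A triangle over t: one layer t i for each i below n, biggest first.
data Tri (t :: Nat -> Type) (n :: Nat) where
  TNil  :: Tri t 'Z
  TCons :: t n -> Tri t n -> Tri t ('S n)

-- Length-indexed vectors: instantiating t with Vec a draws a literal
-- triangle of rows of lengths n-1, ..., 1, 0.
data Vec (a :: Type) (n :: Nat) where
  VNil  :: Vec a 'Z
  VCons :: a -> Vec a n -> Vec a ('S n)

-- A size-2 triangle: a 1-row atop a 0-row.
tri2 :: Tri (Vec Char) ('S ('S 'Z))
tri2 = TCons (VCons 'x' VNil) (TCons VNil TNil)

-- How many layers a triangle has.
tsize :: Tri t n -> Int
tsize TNil           = 0
tsize (TCons _ rest) = 1 + tsize rest
```

With t = Vec a you literally get rows of decreasing length; with t = Fin·suc, a triangle of size n has 1 × 2 × ⋯ × n inhabitants, hence the bang.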

So our checker repeatedly makes higher-order buckets smaller, at the cost of making lower-order buckets larger, but that’s a perfectly respectable way to be monotonically decreasing.
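To make that concrete, here is my own sketch (not the post’s code): compute each hypothesis’s order and list the bucket sizes, highest order first, so that the lexicographic ordering on lists witnesses the decrease.

```haskell
-- My own sketch: order = arrow-left-nestedness; measure k lists the
-- bucket sizes from order k down to 0, so backchaining strictly
-- decreases it in Haskell's lexicographic Ord on lists.
-- (Fmla is repeated here so the snippet compiles on its own.)
data Fmla = Atom Atom | Fmla :-> Fmla
type Atom = String
infixr 5 :->

order :: Fmla -> Int
order (Atom _)  = 0
order (p :-> q) = max (1 + order p) (order q)

measure :: Int -> [Fmla] -> [Int]
measure k hs = [ length [ h | h <- hs, order h == i ] | i <- [k, k - 1 .. 0] ]
```

In the example above, backchaining trades the order-2 hypothesis (A → B → C) → D for the order-0 hypotheses A and B: measure 2 drops from [1,1,0] to [0,1,2], a lexicographic decrease.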

Agda’s termination checker isn’t so good at spotting that this is what’s going on, but before we had Agda’s bag on the side, there was a language called Epigram which resorted to Type Theory as its language for explaining termination. Gasp!

It’s gone 5pm on Friday and I’m still in school. (Thorsten and Graham are away today: they’d never allow this!) I’m just going to link to this inscrutable lump of code and finish this post later.