The StreamMemo library for Coq illustrates how to memoize a function f : nat -> A over the natural numbers. In particular, when f (S n) = g (f n), imemo_make shares the computation between recursive calls.
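To fix intuitions, here is a rendering of that idea in Haskell (a sketch of my own, not StreamMemo's actual code; `memoNat` is my name): when f (S n) = g (f n), the memo table is just the iterated orbit of g, so each entry is computed once and shared by every later entry.

```haskell
-- Memo table for f : Nat -> a satisfying f 0 = f0 and f (S n) = g (f n).
-- 'table' is the lazy stream [f0, g f0, g (g f0), ...]; indexing into it
-- forces each thunk at most once, so later lookups reuse earlier work.
memoNat :: a -> (a -> a) -> Int -> a
memoNat f0 g = (table !!)
  where
    table = iterate g f0
```

Binding the partial application `memoNat f0 g` to a name is what makes the table persist across calls.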

Suppose we have a function f : binTree -> A that is structurally recursive, meaning that there is a function g : A -> A -> A such that f (Branch x y) = g (f x) (f y). How do we build a similar memo table for f in Coq such that the recursive computations are shared?

In Haskell, it is not too hard to build such a memo table (see MemoTrie, for example) and tie the knot. Clearly such memo tables are productive. How can we arrange things so that a dependently typed language accepts that such knot-tying is productive?
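For concreteness, here is the Haskell construction I have in mind, specialized to binary trees (a self-contained sketch in the MemoTrie style; the names are mine). The trie type is a nested datatype, and `trie` is the corecursive definition whose productivity a dependently typed checker would need to be convinced of.

```haskell
data BinTree = Leaf | Branch BinTree BinTree

-- Since BinTree ~ 1 + BinTree * BinTree, a trie over BinTree stores the
-- value at Leaf together with a nested trie-of-tries for Branch subtrees.
data Trie a = Trie a (Trie (Trie a))

-- Lazily tabulate a function as an (infinitely deep) trie.
-- Polymorphically recursive, hence the required type signature.
trie :: (BinTree -> a) -> Trie a
trie f = Trie (f Leaf) (trie (\x -> trie (\y -> f (Branch x y))))

-- Look a tree up in the trie, forcing only the path it visits.
untrie :: Trie a -> BinTree -> a
untrie (Trie l _) Leaf         = l
untrie (Trie _ b) (Branch x y) = untrie (untrie b x) y

-- Tie the knot: f's recursive calls go through the memo table itself,
-- so the value at any subtree is computed at most once.
memoFix :: a -> (a -> a -> a) -> BinTree -> a
memoFix base g = mf
  where
    mf  = untrie tbl
    tbl = trie f
    f Leaf         = base
    f (Branch x y) = g (mf x) (mf y)
```

For example, `memoFix 0 (\x y -> x + y + 1)` counts internal nodes while sharing the results for repeated subtrees.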

Although I've specified the problem in Coq, I would be happy with an answer in Agda or any other dependently typed language as well.

Thanks for this. I have two worries about this code. Firstly, the go value is a function of a Size parameter, and in general there is no sharing between independent function calls at the same value; this can probably be fixed by adding a let statement in the definition of h (Branch l r). Secondly, the stratified definition of BT means that two otherwise identically shaped trees will have different values when they occur at different levels, and these distinct values won't be shared in the MemoTrie.
– Russell O'Connor Apr 23 '18 at 23:51

The issue I linked to seems to conclude that the sizes are not a problem at runtime when compiled with the GHC backend, though I haven't verified this myself.
– Saizan Apr 24 '18 at 8:15

I see. I'm looking for a memoization solution that can be used within the proof assistant, so that it can be used as part of a proof by reflection. Your solution is probably suitable for compilation, assuming the Size types end up erased.
– Russell O'Connor Apr 24 '18 at 21:22

My solution operates similarly to Saizan's, stratifying binary trees by a size metric (in my case, the number of internal nodes) and producing a lazy stream of containers holding all results for each size. Sharing happens because of a let statement in the stream generator that binds the initial part of the stream for reuse in later parts of the stream.
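The stratification idea can be sketched in Haskell as follows (an illustrative sketch of my own, repeating the datatype so the snippet is self-contained; the Coq development does not literally use association lists or `!!`). Element n of the stream tabulates every tree with exactly n internal nodes, and the self-reference to the stream inside the generator is what provides the sharing.

```haskell
data BinTree = Leaf | Branch BinTree BinTree

-- tables !! n lists (tree, f tree) for every tree with n internal nodes.
-- A tree with n internal nodes splits as Branch l r with
-- size l + size r = n - 1, so level n only consults earlier levels;
-- the shared binding of 'tables' means those levels are computed once.
memoBySize :: a -> (a -> a -> a) -> [[(BinTree, a)]]
memoBySize base g = tables
  where
    tables  = [(Leaf, base)] : [ level n | n <- [1 ..] ]
    level n = [ (Branch l r, g vl vr)
              | i <- [0 .. n - 1]
              , (l, vl) <- tables !! i
              , (r, vr) <- tables !! (n - 1 - i) ]
```

For instance, with `g = \x y -> x + y + 1` every entry at level n carries the value n, and level n has Catalan(n) entries.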

Examples show that, under vm_compute, evaluating a perfect binary tree with 8 levels after having evaluated one with 9 levels is much faster than evaluating the 8-level tree alone, confirming that results are shared across calls.

However, I'm hesitant to accept this answer because the overhead of this particular solution is so bad that it performs much worse than my memoization without structural recursion on my examples of practical inputs. Naturally, I want a solution that performs better on reasonable inputs.