Finger Tree - The ultimate data structure?

In this post I will be talking about one of the most useful data structures I have come across in quite some time - Finger Trees.

Why do I ask if it's the ultimate data structure? Well, read this excerpt from the original paper on Finger Trees:

a functional representation of persistent sequences supporting access to the ends in amortized constant time,
and concatenation and splitting in time logarithmic in the size of the smaller piece. Further, by defining
the split operation in a general form, we obtain a general purpose data structure that can serve as a sequence,
priority queue, search tree, priority search queue and more.

Need I say more? A data structure so multifaceted that it can simultaneously function as multiple other structures, answering various types of search queries in logarithmic time.
To be honest, I see finger trees more as a substrate on which to build other data structures. They can be used to implement nearly all the abstract data structures mentioned in Okasaki's classic book.
Most of the update, add, or cons operations have logarithmic time complexities, and that works very well for most day-to-day applications. For instance, persistent hash maps in Clojure have O(log n) lookup cost; in fact the logarithm is base 32, so that nearly translates to O(1).

So now we are going to try and implement this amazing data structure in Haskell. Finger trees are not exclusive to lazy languages, but the construction leans heavily on one of Haskell's most fundamental typeclasses, Monoid, and it makes essential use of laziness. It is still suitable for strict languages that provide a lazy evaluation primitive. So without any further delay, let us dive into Finger Trees.

The definition is very similar to a normal recursive tree definition in Haskell:

```haskell
data Tree v a = Leaf v a | Branch v (Tree v a) (Tree v a)
```

This is a binary tree, but unlike a normal BST we store the values only at the leaves. We will then optimize the structure so that access to the leaves happens in logarithmic time.

Now the point of interest for us here is the type parameter v. The metadata in v tells us how to traverse the tree. We want this to be as generic as possible; only then can we use the same underlying structure to answer multiple types of queries. We are going to take two examples to compare and contrast, and then unify the two kinds of metadata:

A list (linked list to be precise) with logarithmic random access.

A priority queue with logarithmic peek.

We know a normal `xs !! n` operation on a list in any programming language runs in O(n). However, we will use a simple trick: annotate each subtree with its size, i.e. the number of leaves it contains, which is the sum of the sizes of its left and right children. Hence the metadata of a tree will look like this:

      4
     / \
    2   2
   / \ / \
  1  1 1  1

And of course the leaves will contain the actual values. So whenever we are asked to access the nth element, we do this:
if n < size(left child)
    recurse down left child with index n
else
    recurse down right child with index n - size(left child)

Before starting with the actual code, let's think about the priority queue too. We will model the priority queue as a tournament tree. Here, let's represent priorities using integral numbers: the lower the number, the higher the priority. So if we model the same using our tree, the type parameter v will now represent the minimum priority of a node's two children. Hence:

      3
     / \
    5   3
   / \ / \
  8  5 7  3

That's a simple tournament tree whose winning strategy is the function min, which leads to this super simple recursion:

if priority(node) = priority(left child)
    recurse down left child
else
    recurse down right child
Break the recursion at a leaf node, of course!!

That's laughably simple, right? The trick is unifying the above two metadata functions for the priority queue and the random access list. And the grand unifier of it all is the elegant Monoid typeclass.

A succinct definition of a Monoid is a binary associative operation with an identity. Let's look at the typeclass definition of Monoid:

```haskell
class Monoid a where
  mempty  :: a
  mappend :: a -> a -> a
```

While the mempty operation is simply the identity of the monoidal structure, mappend is an incredibly useful abstraction. The first thing that comes to mind when we think of "appending" is ++, because we are so used to operating on lists. But in a general sense, mappend is a way to summarize a data structure. Come to think of it, most search problems deal with searching for a specific element within a complex structure. If that structure happens to be an instance of Monoid, you can use the very same mappend operation, written <>, to summarize it and answer queries on it. So each annotation in the trees above is just the monoidal summary of its subtree.
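The two annotations from our examples fit this mold directly: sizes combine by addition, priorities by min. Here is a minimal sketch of the two instances, assuming simple Int newtypes for Size and Priority (written for a modern GHC, where Monoid has a Semigroup superclass):

```haskell
-- Sketch of the two annotation monoids, assuming simple Int
-- newtypes (modern GHC: Semigroup is a superclass of Monoid).
newtype Size     = Size Int     deriving (Eq, Show)
newtype Priority = Priority Int deriving (Eq, Ord, Show)

-- Sizes summarize by addition; the empty tree has size 0.
instance Semigroup Size where
  Size m <> Size n = Size (m + n)
instance Monoid Size where
  mempty = Size 0

-- Priorities summarize by taking the minimum; the identity is
-- the least urgent priority, i.e. the largest number.
instance Semigroup Priority where
  Priority m <> Priority n = Priority (min m n)
instance Monoid Priority where
  mempty = Priority maxBound
```

With these, the root of the size-annotated tree above is Size 1 <> Size 1 <> Size 1 <> Size 1 = Size 4, and the root of the tournament tree is Priority 3.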

Take a while to look above and understand the mempty element, and check how both instances satisfy the left identity, right identity, and associativity laws of Monoids.
Now this gives me the amazing power to define my branch function as a smart constructor like this:
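A sketch of what that smart constructor can look like, assuming a small `tag` helper (my name for it) that reads the cached annotation at the root of a subtree:

```haskell
-- Repeating the Tree definition from above so this sketch
-- stands on its own.
data Tree v a = Leaf v a | Branch v (Tree v a) (Tree v a)

-- Read the annotation cached at the root of a subtree.
tag :: Tree v a -> v
tag (Leaf v _)     = v
tag (Branch v _ _) = v

-- Smart constructor: the parent's annotation is always derived
-- from its children with (<>), so it can never go out of sync.
branch :: Monoid v => Tree v a -> Tree v a -> Tree v a
branch l r = Branch (tag l <> tag r) l r
```

Because the annotation is computed by (<>), this one constructor works unchanged for the size tree, the tournament tree, and any other Monoid we dream up later.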

And for the leaves it is dead simple: construct them using the ordinary constructors, since the Size and Priority types are already instances of Monoid. For example:

```haskell
Leaf (Size 1) 5      -- in case of a list
Leaf (Priority 2) 4  -- in case of a priority queue
```

And that's it! You have your finger tree defined. As you encounter more operations and extend it to answer more types of queries, instantiate the Monoid typeclass and figure out ways of combining the elements. Have a look at the random access function and the priority queue lookup function together; they are just super simple Haskell implementations of the pseudocode defined above:
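Here is one self-contained sketch of the two lookups, repeating the earlier definitions so it compiles on its own (again targeting a modern GHC; `index` and `winner` are my names for the random access and priority lookup functions):

```haskell
data Tree v a = Leaf v a | Branch v (Tree v a) (Tree v a)

tag :: Tree v a -> v
tag (Leaf v _)     = v
tag (Branch v _ _) = v

branch :: Monoid v => Tree v a -> Tree v a -> Tree v a
branch l r = Branch (tag l <> tag r) l r

newtype Size     = Size Int     deriving (Eq, Show)
newtype Priority = Priority Int deriving (Eq, Ord, Show)

instance Semigroup Size where
  Size m <> Size n = Size (m + n)
instance Monoid Size where
  mempty = Size 0

instance Semigroup Priority where
  Priority m <> Priority n = Priority (min m n)
instance Monoid Priority where
  mempty = Priority maxBound

-- O(log n) random access, mirroring the size pseudocode:
-- go left if n falls inside the left subtree, otherwise go
-- right with the index shifted by the left subtree's size.
index :: Tree Size a -> Int -> a
index (Leaf _ a) _ = a
index (Branch _ l r) n
  | n < sizeOf l = index l n
  | otherwise    = index r (n - sizeOf l)
  where sizeOf t = let Size s = tag t in s

-- O(log n) lookup of the minimum-priority element, mirroring
-- the tournament-tree pseudocode: follow the child whose
-- annotation equals the parent's.
winner :: Tree Priority a -> a
winner (Leaf _ a) = a
winner t@(Branch _ l r)
  | tag l == tag t = winner l
  | otherwise      = winner r
```

Notice that the two functions have the same shape: both just compare annotations to decide which child to descend into, and the Monoid instance does the rest.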

I have also implemented the search functionality in my GitHub repo, which unifies the above two functions into an even higher abstraction, but the examples above should drive the idea home.

The above implementations are from Heinrich Apfelmus' amazing blog on Finger Trees. However, the code in his blog does not type check; I believe he mainly wanted to expose the ideas. I made sure the code mentioned here does type check with GHC 7.10, and it took some effort to fix the types. Adding a new typeclass did modify the type signatures a bit, and those signatures will tell you a lot about the relation between these structures. You can go through the code here. I urge interested readers to study the original paper by Ralf Hinze and Ross Paterson; it happens to be one of the more accessible papers on functional programming and Haskell.