A symbol is uniquely identified by its name, the module it originates from, and the kind of entity it refers to. Some symbols carry additional information; constructors, for example, have an extra field for the type they belong to.
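To make that concrete, here is an abridged sketch of such a symbol type. The constructor and field names follow my reading of the haskell-names Symbol type; the real type has more constructors (methods, record selectors, classes, and so on).

```haskell
import Language.Haskell.Exts.Annotated (ModuleName, Name)

-- Abridged sketch in the style of haskell-names; the real Symbol
-- type has more constructors.
data Symbol
  = Value       -- an ordinary value or function
      { symbolModule :: ModuleName ()
      , symbolName   :: Name ()
      }
  | Constructor -- a data constructor
      { symbolModule :: ModuleName ()
      , symbolName   :: Name ()
      , typeName     :: Name ()  -- the type this constructor belongs to
      }
```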

Let’s look at the new example from the haskell-names GitHub page. The findHeads function finds all occurrences of the head symbol in a given AST of a Haskell module. It needs access to stored name information and therefore runs in ModuleT.
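A rough sketch of such a traversal follows. Everything here (annotateModule, Scoped, GlobalSymbol, ModuleT from haskell-packages) is my reading of the haskell-names API at the time, not the verbatim example from the project page:

```haskell
import Language.Haskell.Exts.Annotated       -- haskell-src-exts
import Language.Haskell.Names                -- haskell-names (API assumed)
import Distribution.HaskellSuite.Modules     -- ModuleT, from haskell-packages
import Data.Generics.Uniplate.Data (universeBi)

-- Annotate the module with name information, then collect the source
-- spans of all global references whose symbol is named "head".
findHeads :: Module SrcSpan -> ModuleT [Symbol] IO [SrcSpan]
findHeads ast = do
  annotated <- annotateModule Haskell2010 [] ast
  return
    [ location
    | Scoped (GlobalSymbol symbol _) location <- universeBi annotated
    , symbolName symbol == Ident () "head"
    ]
```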

Thursday, October 9, 2014

Since by now we have several ongoing image processing library projects in Haskell, I thought it was time to spice things up a little. I therefore announce the Haskell image processing benchmark game. Right now the five contestants are friday, unm-hip, yarr, repa and opencv, but the contest is open and participation is just one pull request away! The disciplines are reading an image, binary thresholding and mean filtering. Of course, more disciplines will be added.
Performance is not the only goal one should strive for. Ease of use and number of predefined algorithms are two other important dimensions.
Let’s compare the different libraries with code examples for specific use cases.

The image type

Currently all benchmarks run on grey images so the first thing we need is a type for those.

For friday and unm-hip the preferred type for grey images was easy to find: there is only one. For yarr and repa it was much less obvious. In opencv there is a single image type for any kind of image, regardless of the pixel type. From experience I can tell that this is not a good thing.
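Roughly, the grey image types come out like this. The friday, unm-hip and repa types are taken from their documentation as I understand it; the yarr type is my best guess at a sensible foreign-backed representation:

```haskell
import Data.Word (Word8)
import qualified Vision.Image as Friday         -- friday
import qualified Data.Image.Boxed as UnmHip     -- unm-hip
import qualified Data.Array.Repa as Repa        -- repa
import Data.Yarr (UArray, F, L, Dim2)           -- yarr

type FridayGrey = Friday.Grey                        -- friday's only grey type
type UnmHipGrey = UnmHip.GrayImage                   -- unm-hip's only grey type
type RepaGrey   = Repa.Array Repa.U Repa.DIM2 Word8  -- unboxed 2D array
type YarrGrey   = UArray F L Dim2 Word8              -- foreign-backed 2D array
```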

Reading an image

Reading in an image with unm-hip is straightforward, except that it cannot read PNG and only reads PGM. For yarr, repa and friday I had to look up a little more than I would have liked. And since opencv is a foreign library, we have to manually make every image we create garbage collected by attaching a finalizer.
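As an illustration, reading a grey image could look like this for unm-hip and for repa via the repa-devil bindings; both signatures are my understanding of those APIs, not the exact benchmark code:

```haskell
import qualified Data.Image.Boxed as U                         -- unm-hip
import Data.Array.Repa.IO.DevIL (Image (..), readImage, runIL) -- repa-devil

-- unm-hip reads PGM files directly into its boxed grey image type.
readGreyUnmHip :: FilePath -> IO U.GrayImage
readGreyUnmHip = U.readImage

-- repa-devil decodes through the DevIL C library; for a grey input
-- the result is the Grey constructor wrapping a 2D Word8 array.
readGreyRepa :: FilePath -> IO Image
readGreyRepa path = runIL (readImage path)
```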

Forcing an image

For benchmarking we also need a way to make sure the resulting images are fully evaluated. The friday and opencv image types already guarantee that. For the others we define an extra force function:
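The post listed force functions for unm-hip, yarr and repa at this point. As a stand-in, here is a minimal sketch of what the repa version amounts to (the unm-hip version, discussed next, was a hack):

```haskell
import Data.Array.Repa
import Data.Word (Word8)

-- Force a delayed repa image into an unboxed, fully evaluated array.
forceRepa :: Array D DIM2 Word8 -> Array U DIM2 Word8
forceRepa = computeS
```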

Wait, what kind of hack is that for forcing an unm-hip image? I couldn’t find a better way, because the constructor of the image type is hidden. Of course this means that the unm-hip benchmark is pointless, but I decided to include it anyway.

Binary thresholding

Now let’s look at the actual image processing algorithms. The threshold function should accept a grey value image and yield an image where every pixel that has a value above 127 in the original image has value 255 and all others have value 0.

Binary thresholding is a simple map that is straightforward to implement in yarr and repa. The other three, friday, unm-hip and opencv, provide their own specialized functions for binary thresholding. unm-hip’s toBinaryImage chooses reasonable default values for you. friday provides its own mini language for creating filtering operations, which I had to learn first. opencv takes as its fifth parameter an integer that selects the thresholding algorithm and, depending on that, gives different meanings to the other parameters. I personally find that very confusing.
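In repa, for instance, the whole operation is one map over the pixels; a minimal sketch, assuming Word8 grey pixels:

```haskell
import Data.Array.Repa as R
import Data.Word (Word8)

-- Every pixel above 127 becomes 255, everything else becomes 0.
threshold :: Array U DIM2 Word8 -> Array U DIM2 Word8
threshold img = computeS (R.map (\p -> if p > 127 then 255 else 0) img)
```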

Mean filtering

friday provides its own set of functions for filtering. Using unm-hip feels like functional image processing should feel. Implementing a five-by-five mean filter in repa and yarr required quite a bit of reading of documentation and examples. Again, the opencv function cvSmooth accepts as its third parameter an integer that selects the smoothing algorithm and, depending on that, changes the meaning of the other parameters or ignores them completely. This is why I think we need one or more Haskell image processing libraries, so let’s work together to make that happen.
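For repa, one reasonably idiomatic route is its stencil support; a sketch, assuming Word8 grey pixels and summing in Int to avoid overflow:

```haskell
{-# LANGUAGE QuasiQuotes #-}
import Data.Array.Repa as R
import Data.Array.Repa.Stencil (Boundary (BoundClamp), Stencil)
import Data.Array.Repa.Stencil.Dim2 (mapStencil2, stencil2)
import Data.Word (Word8)

-- 5x5 mean filter: sum each neighbourhood with a box stencil,
-- divide by 25 and narrow the result back to Word8.
mean :: Array U DIM2 Word8 -> Array U DIM2 Word8
mean img = computeS (R.map narrow sums)
  where
    sums :: Array U DIM2 Int
    sums = computeS (mapStencil2 BoundClamp box (R.map fromIntegral img))
    narrow s = fromIntegral (s `div` 25) :: Word8
    box :: Stencil DIM2 Int
    box = [stencil2| 1 1 1 1 1
                     1 1 1 1 1
                     1 1 1 1 1
                     1 1 1 1 1
                     1 1 1 1 1 |]
```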

Oh, right, there are benchmark results as well! Disclaimer: While I have tried my best, I might have used the libraries in the wrong way. Please do correct my mistakes.

EDIT: After using deepseq to force unm-hip images, rewriting the mean function in friday in pointful style and adjusting the yarr and repa functions, these are the new results.

Sunday, October 5, 2014

So I have translated the non-recursive insertion and selection sorts to Morte. The idea was to check whether they both reduce to the same highly optimized code. It turns out that they do not. I will reproduce the code here anyway, because I might have made a mistake, or it might be useful to someone else.

Sunday, September 28, 2014

So I told my (non-Haskeller) friends about Morte and how it promises that programs with identical semantics have identical performance. They immediately replied: “How is that possible, if we have different sorting algorithms with the same semantics but different asymptotic performance?” I replied: “Either it is impossible to write a sorting algorithm in Morte, or they all have the same performance.” Looking back, I now understand that identical semantics here means semantics that are provably identical by equational reasoning, and that there might be other notions of identical semantics under which programs with identical semantics can have different performance.
Anyway, I challenged myself to write a sorting algorithm in Morte to see what’s going on. As a first step I had to find a non-recursive Haskell sorting algorithm. This is what the rest of this post is about. I then plan to translate it to Morte.

Let’s first introduce the standard F-algebra and F-coalgebra machinery. Why? Because Gabriel suggests implementing recursion with an algebra and corecursion with a coalgebra, and a little generalization doesn’t hurt.
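Reconstructed, the textbook definitions referred to here look roughly like this:

```haskell
-- Textbook fixed point of a functor, with fold (catamorphism)
-- and unfold (anamorphism). Note that all three are recursive.
newtype Fix f = Fix { unFix :: f (Fix f) }

type Algebra   f a = f a -> a
type Coalgebra f a = a -> f a

fold :: Functor f => Algebra f a -> Fix f -> a
fold alg = alg . fmap (fold alg) . unFix

unfold :: Functor f => Coalgebra f a -> a -> Fix f
unfold coalg = Fix . fmap (unfold coalg) . coalg
```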

But these standard definitions for Fix, fold and unfold are recursive. This will not do.

The non-recursive version encodes the least fixed point Mu as the ability to fold and the greatest fixed point Nu as the ability to unfold. Mu uses universal quantification and Nu uses existential quantification. wrap folds the inner data and then applies the given algebra one more time. unwrap applies the coalgebra once and then proceeds to unfold with the new seed.
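Reconstructed from that description, the non-recursive encodings might look like this (a sketch, not necessarily the post’s exact code):

```haskell
{-# LANGUAGE RankNTypes, ExistentialQuantification #-}

-- Least fixed point: a value is the ability to fold itself.
newtype Mu f = Mu (forall a. (f a -> a) -> a)

fold :: (f a -> a) -> Mu f -> a
fold alg (Mu g) = g alg

-- wrap folds the inner data, then applies the algebra one more time.
wrap :: Functor f => f (Mu f) -> Mu f
wrap layer = Mu (\alg -> alg (fmap (fold alg) layer))

-- Greatest fixed point: a seed together with the ability to unfold it.
data Nu f = forall a. Nu (a -> f a) a

unfold :: (a -> f a) -> a -> Nu f
unfold = Nu

-- unwrap applies the coalgebra once, then carries on with the new seeds.
unwrap :: Functor f => Nu f -> f (Nu f)
unwrap (Nu coalg seed) = fmap (Nu coalg) (coalg seed)
```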