Getting started with repa

Edit 5/17/2011: Check this out! Don Stewart has started a tutorial on using Repa on the Haskell.org wiki, and it has far more detail than I had in this post from February. I see people land here occasionally from Google, so if you want up-to-date details (especially since my post was based on an earlier version of Repa), I recommend you read the tutorial instead. It’s pretty good!

I’ve come to realize over the last few years that, regardless of what language I tend to land in, one of the first things I encounter when trying to write code to implement models relevant to my research is a need for arrays. In some cases, I’m able to get rid of arrays in favor of an abstraction that is better suited to the language I’m working with (e.g., moving to purely functional heaps from array-based ones in C). In other cases though, the array concept is fundamental to the problem at hand. The obvious example that I hit frequently is image processing – images are more often than not represented as arrays of pixel intensity values due to the representation most commonly used by camera manufacturers. In other cases, the pattern arises because data is discretized and an algorithm is defined over the elements of the discretized data based on some spatial neighborhood.

In the last few months, ever since the paper “Regular, shape-polymorphic, parallel arrays in Haskell” came out (along with the corresponding repa package available on Hackage), I’ve been interested in taking advantage of this new way of programming arrays in a functional setting. The first thing that caught my eye wasn’t the parallel part, but the abstractions that the repa library provides for common operations on arrays such as slices and whole-array operations. Even in a single core setting, this is attractive versus explicit indexing. I have always found that code based on whole-array notation is easier to read and write, which is one of my attractions to languages like Matlab.

For example, consider the following scenario: one can represent a 2D set of vectors as an MxNx2 array, where the two MxN slabs correspond to the two components of the vectors. How would one compute a 2D array representing the magnitude of these vectors? First, let’s consider how one would write this in Fortran 90 (the Matlab implementation is very similar) for a 5×5 array.
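The original code block has not survived here, but based on the walkthrough below, a minimal Fortran 90 sketch of what it showed would look something like this (the variable names are my reconstruction):

```fortran
program vecmag
  implicit none
  real :: x(5,5,2), mag(5,5)

  ! fill the 5x5 field of 2D vectors with some test data
  call random_number(x)

  ! whole-array slice notation: no explicit loops needed
  mag = sqrt(x(:,:,1)*x(:,:,1) + x(:,:,2)*x(:,:,2))

  print *, mag
end program vecmag
```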

The key is really the last line – the rest isn’t important. In that line of code, we use slice notation (the colon character, which indicates “everything along a dimension”) to express a computation over an entire array. The array x is 5×5×2, so x(:,:,1) is just a 5×5×1, or 5×5, slice. The same goes for the slice above it, x(:,:,2). The arithmetic operators * and + are applied to every element of their operands (in Matlab, we would use the .* operator for elementwise multiplication to distinguish it from matrix multiplication). Operators that work over whole arrays assume that the operands have the same shape; otherwise there would be elements for which one operand is undefined.

In my initial experiments implementing this kind of code in Haskell, I worked with the vector package simply for performance reasons: vectors correspond pretty well to flat, 1D C arrays, so they are pretty fast. Unfortunately, they lack convenient abstractions for dealing with multidimensional arrays, whole-array operations, and operations over structured sub-arrays like slices. More often than not, somewhere in my code based on Data.Vector, there existed a function that looked like:
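That function is lost from the post, but a minimal sketch of the kind of index-flattening helper I mean (the name `flatIndex` is my own) is:

```haskell
-- Map a logical 2D index (row, col) onto the flat 1D vector that
-- actually stores the data, given the number of columns per row.
flatIndex :: Int -> Int -> Int -> Int
flatIndex ncols row col = row * ncols + col

main :: IO ()
main = print (flatIndex 5 2 3)  -- element (2,3) of a 5-column array
```

Every read or write of a logical element (r, c) then goes through this helper before indexing into the underlying Data.Vector.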

This kind of function was used to map logical 2D indices onto the 1D vector that held the data. Algorithms over the whole array would degrade into essentially one-level loops over the entire index space flattened to one dimension. This is undesirable because, in effect, we remove all ability of the compiler to do anything interesting based on the higher-dimensional structure of the algorithm. Furthermore, we explicitly lower the level of abstraction of the arrays – we move from the 2D or 3D array that the algorithm is specified in terms of to a 1D representation that is explicitly managed by the programmer. This isn’t a very desirable approach since, in reality, it’s nothing more than a direct mapping of the loops we’d write in C into Haskell.

This is where array languages (e.g., Matlab and Fortran) have an advantage – they don’t require the programmer to lower their algorithm to the equivalent 1D mapping over memory locations. Programmers in those languages can keep their problem expression at a higher level of abstraction that is closer to the original mathematical formulation.

Fortunately, we can get these useful abstractions in Haskell via repa too now, and along with them we also get good performance and parallel execution. Representing this particular problem using repa is actually quite clean. Consider the following code:
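The repa snippet itself is gone from the post; against the repa 1.1 API used here, the magnitude computation described below probably looked roughly like this (a sketch – the names, and the use of the elementwise *^ operator rather than an explicit zipWith for the products, are my reconstruction):

```haskell
{-# LANGUAGE TypeOperators #-}
import Data.Array.Repa as R

-- u is an MxNx2 array of vector components; the result is the MxN
-- array of vector magnitudes.
magnitude :: Array DIM3 Double -> Array DIM2 Double
magnitude u = R.map sqrt (R.zipWith (+) (xs *^ xs) (ys *^ ys))
  where
    slabX = (Z :. All :. All :. (0 :: Int))
    slabY = (Z :. All :. All :. (1 :: Int))
    xs    = slice u slabX
    ys    = slice u slabY
```

Note that repa arrays are evaluated lazily; in repa 1.x you apply force (or a reduction like sumAll) to actually compute the result, which is where the parallel evaluation happens.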

This implements the same algorithm that I stated above in Fortran (or Matlab) using the slice and shape representation capability in repa. Instead of saying “u(:,:,0)”, we define a shape “slabX = (Z:.All:.All:.(0::Int))”, and then use the slice operator invoked with the array to slice and the shape representing the index set where the slice is to be applied. In array languages like Fortran or Matlab, the colon on a dimension corresponds to “every index on that dimension”. In repa, “All” represents the same concept. Of course, the correspondence isn’t completely transparent. Haskell currently lacks the syntactic sugar to allow one to use a compact array notation like Matlab’s — if one wants to extract a slice, it must be explicitly requested using the slice function, which takes the array to slice and the index set shape in which to apply the slice. From a syntactic perspective, the repa library is a little wordy, but from a programmatic perspective, we have a nice notation that corresponds to something well suited to data parallel programming.

Interestingly, the repa examples and repa paper don’t go into the usage of slice notation very deeply, which can lead to issues for newcomers to Haskell (or even us intermediate level folks). Consider this single line of code:

let slabX = (Z:.All:.All:.(0::Int))

This is necessary since, without the explicit type annotation on the literal, the compiler defaults to something other than Int, leading to a set of somewhat mysterious error messages about ambiguous types and missing class instances; removing the annotation is an easy way to see them for yourself.

This isn’t really a fault of either the compiler or the repa library developers – it is simply a side effect of the repa library pushing on a dimension of the Haskell type system that is not richly used by other libraries. As a result, error cases can lead to somewhat ambiguous messages that can make debugging awkward. Note that I wrote this post and tested my code using GHC 6.12.3 and repa 1.1.0.0. It is entirely possible that these errors will change in future versions of the compiler or the repa library.

In a followup post to this one, I will share some code I wrote for a real physics model that makes heavy use of whole array operations based on repa. I wanted to split out this basic discussion of slices and subtleties of their syntax into a standalone post before I put up something about the larger program I’m wrapping up.

3 thoughts on “Getting started with repa”

Does it make sense to use Repa on GHC 6.12? On one hand, its Hackage page suggests that GHC > 6.13.20100309 should be used to get decent performance, but that’s not a stable release to rely upon. On the other hand, Repa doesn’t build with GHC 7.0.1. So I am still in doubt about what kind of environment it is supposed to work in (apart from experimental 6.13.* builds).

As I understand it, the main difference you would see between 6.12.x and 6.13.x is performance. I have been working with 6.12 simply because it is the most recent Haskell platform release, and at this time performance isn’t a big concern since I expect the new Haskell platform any day now and a likely update to repa to make it work on the new GHC. I think your concern will be addressed pretty soon when the next HP hits the net and things all get updated to work with it.