A fellow Haskeller at SC11 - greetings! I've been using Haskell in the HPC world for a while, although not for writing HPC applications directly. I've mostly been interested in Haskell as a language for building DSLs that generate code in lower-level languages like C or Fortran. I've been following the parallel Haskell work, but haven't done anything serious with it yet - mostly in lurker mode on the mailing list at the moment. Functional programming in the HPC realm has been around almost as long as HPC itself -- stretching back to the old dataflow languages, things like Multilisp and *Lisp, and the Sisal project.
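To give a flavor of that embedded-DSL approach: you define a small expression type in Haskell and pretty-print it to C. This is just a minimal sketch; the names (`Expr`, `toC`) are illustrative, not from any particular library.

```haskell
-- A tiny arithmetic expression language embedded in Haskell.
data Expr
  = Var String      -- a named C variable
  | Lit Double      -- a numeric literal
  | Add Expr Expr
  | Mul Expr Expr

-- Emit the expression as (fully parenthesized) C source text.
toC :: Expr -> String
toC (Var x)   = x
toC (Lit d)   = show d
toC (Add a b) = "(" ++ toC a ++ " + " ++ toC b ++ ")"
toC (Mul a b) = "(" ++ toC a ++ " * " ++ toC b ++ ")"

main :: IO ()
main = putStrLn (toC (Mul (Add (Var "x") (Lit 2.0)) (Var "y")))
-- prints: ((x + 2.0) * y)
```

Real code generators in this style also handle statements, loops, and types, but the core idea is the same: Haskell's data types and pattern matching make the "compiler" part almost trivial.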

Shoot me a message if you want to meet at the conference and chat about Haskell and HPC!

I'm in a similar boat. I'm using Haskell for what can be loosely labeled as HPC: small in the HPC world, big in the normal world (meaning small clusters). However, I'm only using Haskell for coordination; later on I plan to use DSLs to generate C or CUDA code.

In particular, we're working with a new startup focused on HPC in Haskell -- both low-level SIMD vector stuff and distributed/cluster parallelism. I hope to blog about both in the not too distant future...

I'm surprised we haven't seen more on this. Haskell is extremely well suited to implementations on unconventional hardware, since the compiler has deep insight into the semantics of the program, and since the program is largely prevented from making assumptions about the underlying hardware.

This gets back to the issue of how smart the compiler can be, and how well it can understand the semantics of the application program.

As a simple example, consider how easy it is to make the map operation parallel: the compiler and library know that the calling program cannot depend on the order in which the element mappings are computed.
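That independence is exactly what `parMap` from the `parallel` package exploits (a sketch, assuming GHC with the `parallel` library installed; the `expensive` function is just a stand-in workload):

```haskell
import Control.Parallel.Strategies (parMap, rdeepseq)

-- A stand-in for some per-element computation.
expensive :: Int -> Int
expensive n = sum [1 .. n]

main :: IO ()
main =
  -- parMap sparks each element's evaluation; because map's results
  -- cannot depend on evaluation order, the answer is identical to
  -- the sequential map, only (potentially) computed in parallel.
  print (parMap rdeepseq expensive [1000, 2000, 3000])
-- prints: [500500,2001000,4501500]
```

Compiled with `-threaded` and run with `+RTS -N`, the elements can be evaluated on separate cores with no change to the program's meaning.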