Tools

ABSTRACT. If you want to program a parallel computer, a purely functional language like Haskell is a promising starting point. Since the language is pure, it is by-default safe for parallel evaluation, whereas imperative languages are by-default unsafe. But that doesn’t make it easy! Indeed it has proved quite difficult to get robust, scalable performance increases through parallel functional programming, especially as the number of processors increases. A particularly promising and well-studied approach to employing large numbers of processors is data parallelism. Blelloch’s pioneering work on NESL showed that it was possible to combine a rather flexible programming model (nested data parallelism) with a fast, scalable execution model (flat data parallelism). In this paper we describe Data Parallel Haskell, which embodies nested data parallelism in a modern, general-purpose language, implemented in a state-of-the-art compiler, GHC. We focus particularly on the vectorisation transformation, which transforms nested to flat data parallelism.

...ating the description with a non-trivial example, the Barnes-Hut algorithm [BH86]. GHC supports other forms of concurrency besides data parallelism, but we focus here exclusively on the latter. Singh [SJ08] gives a tutorial covering a broader scope, including semi-implicit parallelism (par), explicit threads, transactional memory, as well as Data Parallel Haskell. DPH is simply Haskell with the followin...
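The nested-data-parallel model described above can be illustrated with NESL's standard example, sparse matrix-vector multiplication, where rows have irregular lengths. The sketch below uses ordinary Haskell lists in place of DPH's parallel arrays; it shows only the nested structure that vectorisation flattens, not the parallel execution itself.

```haskell
-- A sketch (not DPH itself) of the classic NESL example of nested
-- data parallelism: sparse matrix-vector multiplication. Each row is
-- a list of (column index, value) pairs; rows have different lengths,
-- so the parallelism is irregular ("nested"). DPH expresses this over
-- parallel arrays; ordinary lists stand in for them here.
type SparseRow    = [(Int, Double)]
type SparseMatrix = [SparseRow]

smvm :: SparseMatrix -> [Double] -> [Double]
smvm m v = [ sum [ x * (v !! i) | (i, x) <- row ] | row <- m ]
```

The inner sum over each row and the outer map over rows are both data-parallel; vectorisation turns this nested shape into flat operations over segmented arrays.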

In this paper we analyze the semantics of a higher-order functional language with concurrent threads, monadic IO and synchronizing variables as in Concurrent Haskell. To assure declarativeness of concurrent programming we extend the language by implicit, monadic, and concurrent futures. As semantic model we introduce and analyze the process calculus CHF, which represents a typed core language of Concurrent Haskell extended by concurrent futures. Evaluation in CHF is defined by a small-step reduction relation. Using contextual equivalence based on may- and should-convergence as program equivalence, we show that various transformations preserve program equivalence. We establish a context lemma easing those correctness proofs. An important result is that call-by-need and call-by-name evaluation are equivalent in CHF, since they induce the same program equivalence. Finally we show that the monad laws hold in CHF under mild restrictions on Haskell’s seq-operator, which for instance justifies the use of the do-notation.

Abstract. The calculus CHF models Concurrent Haskell extended by concurrent, implicit futures. It is a process calculus with concurrent threads, monadic concurrent evaluation, and includes a pure functional lambda-calculus which comprises data constructors, case-expressions, letrec-expressions, and Haskell’s seq. Futures can be implemented in Concurrent Haskell using the primitive unsafeInterleaveIO, which is available in most implementations of Haskell. Our main result is conservativity of CHF, that is, all equivalences of pure functional expressions are also valid in CHF. This implies that compiler optimizations and transformations from pure Haskell remain valid in Concurrent Haskell even if it is extended by futures. We also show that this is no longer valid if Concurrent Haskell is extended by the arbitrary use of unsafeInterleaveIO.
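The encoding of futures via unsafeInterleaveIO that this abstract mentions can be sketched in a few lines of Concurrent Haskell. This is an illustration of the idea, not the CHF calculus itself: a future spawns a thread for an IO action and returns its result lazily, so forcing the result blocks until the thread has written it.

```haskell
import Control.Concurrent      (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, readMVar)
import System.IO.Unsafe        (unsafeInterleaveIO)

-- A hedged sketch of an implicit future: run the action in its own
-- thread, and return a lazily-read result. The caller only blocks
-- when (and if) it actually demands the value.
future :: IO a -> IO a
future act = do
  box <- newEmptyMVar
  _   <- forkIO (act >>= putMVar box)
  unsafeInterleaveIO (readMVar box)
```

The conservativity result above is what licenses this style: restricted to this pattern, pure-Haskell equivalences survive, whereas arbitrary uses of unsafeInterleaveIO can break them.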

Abstract. We show how Sestoft’s abstract machine for lazy evaluation of purely functional programs can be extended to evaluate expressions of the calculus CHF – a process calculus that models Concurrent Haskell extended by imperative and implicit futures. The abstract machine is modularly constructed by first adding monadic IO-actions to the machine and then in a second step we add concurrency. Our main result is that the abstract machine coincides with the original operational semantics of CHF, w.r.t. may- and should-convergence.

Over the last five years, graphics cards have become a tempting target for scientific computing, thanks to unrivaled peak performance, often producing a runtime speed-up of x10 to x25 over comparable CPU solutions. However, this increase can be difficult to achieve, and doing so often requires a fundamental rethink. This is especially problematic in scientific computing, where experts do not want to learn yet another architecture. In this paper we develop a method for automatically parallelising recursive functions of the sort found in scientific papers. Using a static analysis of the function dependencies we identify sets — partitions — of independent elements, which we use to synthesise an efficient GPU implementation using polyhedral code generation techniques. We then augment our language with DSL extensions to support a wider variety of applications, and demonstrate the effectiveness of this with three case studies, showing significant performance improvement over equivalent CPU methods, and similar efficiency to hand-tuned GPU implementations.

...ode generation [1] for loop schedules. There is a wealth of literature on parallel implementations of fully fledged functional programming languages, and a recent tutorial in the context of Haskell is [8]. We have consciously restricted our input language to enable more aggressive optimisations. Elliott [5] shows how functional programs can be used to generate efficient code for graphics operations on...

Transactional Memory Introspection (TMI) is a novel reference monitor architecture that provides complete mediation, freedom from time-of-check-to-time-of-use bugs, and improved failure handling for authorization. TMI builds on and integrates with implementations of the Software Transactional Memory (STM) architecture [Harris and Fraser 2003]. In this paper we present a formal definition of TMI and a concrete implementation over the Haskell STM. We find that this specification and reference implementation establish clear semantics for the TMI architecture. In particular, they help identify and resolve ambiguities that apply to implementations such as the one in our prior work [Birgisson et al. 2008].

...pawns a new thread to execute the action, immediately returning a newly allocated thread identifier. For further discussion of concurrency we refer to [Peyton Jones 2001] or tutorials such as [Peyton Jones and Singh 2008]. The Haskell STM is based on a monadic type similar to the one for I/O actions, namely STM a. A value of this type represents an STM action, which when executed may perform smaller STM actions and r...
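The STM a type described in this excerpt composes small transactional actions into larger ones that run atomically. A minimal illustration, assuming the standard stm package that ships with GHC:

```haskell
import Control.Concurrent.STM
  (STM, TVar, atomically, newTVarIO, readTVar, readTVarIO, writeTVar)

-- A transfer between two TVar accounts is itself an STM action,
-- built from smaller readTVar/writeTVar actions; atomically runs
-- the whole thing as one indivisible transaction.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to n = do
  a <- readTVar from
  writeTVar from (a - n)
  b <- readTVar to
  writeTVar to (b + n)
```

Because transfer is an ordinary STM value, it can be combined with further reads and writes before being committed, which is the composability that TMI's reference-monitor checks exploit.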

...nterface (Section 15) • High-speed concurrent servers (Section 14) • Distributed programming (Section 17) One useful aspect of this tutorial as compared to previous tutorials covering similar ground ([12; 13]) is that I have been able to take into account recent changes to the APIs. In particular, the Eval monad has replaced par and pseq (thankfully), and in asynchronous exceptions mask has replaced the o...
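The API change noted in this excerpt, the Eval monad superseding raw par and pseq, can be sketched as follows (assuming the parallel package's Control.Parallel.Strategies):

```haskell
import Control.Parallel.Strategies (Eval, rpar, rseq, runEval)

-- rpar sparks its argument for possible parallel evaluation; rseq
-- evaluates its argument to weak head normal form before continuing.
-- runEval extracts the (deterministic) result of the Eval computation.
parPair :: a -> b -> Eval (a, b)
parPair a b = do
  a' <- rpar a
  b' <- rseq b
  return (a', b')
```

The result is the same whether or not the spark is picked up by another capability, which is the determinism guarantee that made Eval a welcome replacement for bare par/pseq.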

Abstract: We report on a case study of implementing parallel variants of the Davis-Putnam-Logemann-Loveland algorithm for solving the SAT problem of propositional formulas in the functional programming language Haskell. We explore several state-of-the-art programming techniques for parallel and concurrent programming in Haskell and provide the corresponding implementations. Based on our experimental results, we compare several approaches and implementations.
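The DPLL core that such case studies parallelise can be sketched compactly. This is a hedged, simplified version (unit propagation and branching heuristics omitted, representation and names are illustrative): the two recursive branches are independent, which is what makes them candidates for parallel evaluation, e.g. as rpar sparks.

```haskell
type Lit    = Int
type Clause = [Lit]
type CNF    = [Clause]

-- Assign literal l true: drop satisfied clauses, remove the negated
-- literal from the rest.
assign :: Lit -> CNF -> CNF
assign l = map (filter (/= negate l)) . filter (l `notElem`)

-- Plain DPLL skeleton: pick a literal and try both truth values.
-- The two recursive calls share no state, so a parallel variant can
-- evaluate them on separate sparks or threads.
dpll :: CNF -> Bool
dpll []  = True                  -- no clauses left: satisfiable
dpll cnf
  | [] `elem` cnf = False        -- empty clause: conflict
  | otherwise     = dpll (assign l cnf) || dpll (assign (negate l) cnf)
  where l = head (head cnf)
```

In a parallel variant, the disjunction of the two branches is the natural spark point, though as the case study discusses, naive speculation wastes work once one branch finds a model.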