Tools

"... The process of writing large parallel programs is complicated by the need to specify both the parallel behaviour of the program and the algorithm that is to be used to compute its result. This paper introduces evaluation strategies, lazy higher-order functions that control the parallel evaluation of ..."

The process of writing large parallel programs is complicated by the need to specify both the parallel behaviour of the program and the algorithm that is to be used to compute its result. This paper introduces evaluation strategies, lazy higher-order functions that control the parallel evaluation of non-strict functional languages. Using evaluation strategies, it is possible to achieve a clean separation between algorithmic and behavioural code. The result is enhanced clarity and shorter parallel programs. Evaluation strategies are a very general concept: this paper shows how they can be used to model a wide range of commonly used programming paradigms, including divide-and-conquer, pipeline parallelism, producer/consumer parallelism, and data-oriented parallelism. Because they are based on unrestricted higher-order functions, they can also capture irregular parallel structures. Evaluation strategies are not just of theoretical interest: they have evolved out of our experience in parallelising several large-scale applications, where they have proved invaluable in helping to manage the complexities of parallel behaviour. These applications are described in detail here. The largest application we have studied to date, Lolita, is a 60,000 line natural language parser. Initial results show that for these applications we can achieve acceptable parallel performance, while incurring minimal overhead for using evaluation strategies.
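The separation of algorithm from behaviour that the abstract describes can be illustrated with a minimal sketch using GHC's Control.Parallel.Strategies library (the modern descendant of the strategies introduced in this paper; the function `squares` is a hypothetical example, not from the paper):

```haskell
import Control.Parallel.Strategies (parList, rseq, using)

-- Algorithmic code: an ordinary map, with no mention of parallelism.
squares :: [Int] -> [Int]
squares xs = map (^ 2) xs

-- Behavioural code: the same algorithm, with a strategy attached
-- after the fact. `parList rseq` sparks each list element for
-- parallel evaluation to weak head normal form.
squaresPar :: [Int] -> [Int]
squaresPar xs = map (^ 2) xs `using` parList rseq
```

The key point is that `squaresPar` differs from `squares` only in the `using` clause: the strategy can be changed, or removed, without touching the algorithm.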

"... This paper presents a practical evaluation and comparison of three state-of-the-art parallel functional languages. The evaluation is based on implementations of three typical symbolic computation programs, with performance measured on a Beowulf-class parallel architecture. We assess ..."

This paper presents a practical evaluation and comparison of three state-of-the-art parallel functional languages. The evaluation is based on implementations of three typical symbolic computation programs, with performance measured on a Beowulf-class parallel architecture. We assess

"... The performance of data parallel programs often hinges on two key coordination aspects: the computational costs of the parallel tasks relative to their management overhead | task granularity ; and the communication costs induced by the distance between tasks and their data | data locality . In da ..."

The performance of data parallel programs often hinges on two key coordination aspects: the computational costs of the parallel tasks relative to their management overhead (task granularity); and the communication costs induced by the distance between tasks and their data (data locality). In data parallel programs both granularity and locality can be improved by clustering, i.e. arranging for parallel tasks to operate on related sub-collections of data.
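One common way to realise clustering in parallel Haskell, sketched here as an assumed illustration rather than the paper's own technique, is to chunk the data so that each spark covers many elements, improving granularity:

```haskell
import Control.Parallel.Strategies (parListChunk, rdeepseq, using)

-- Cluster the work into chunks of 100 rows: each parallel task then
-- carries enough computation to outweigh its management overhead,
-- and operates on a contiguous sub-collection of the data.
rowSums :: [[Double]] -> [Double]
rowSums xss = map sum xss `using` parListChunk 100 rdeepseq
```

The chunk size (100 here) is a tuning parameter: too small and task-management overhead dominates; too large and there are too few tasks to keep the processors busy.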

"... . Parallel programs must describe both computation and coordination, i.e. what to compute and how to organize the computation. In functional languages equational reasoning is often used to reason about computation. In contrast, there have been many different coordination constructs for functional la ..."

Parallel programs must describe both computation and coordination, i.e. what to compute and how to organize the computation. In functional languages equational reasoning is often used to reason about computation. In contrast, there have been many different coordination constructs for functional languages, and far less work on reasoning about coordination. We present an initial semantics for GpH, a small extension of the Haskell language, that allows us to reason about coordination. In particular we can reason about work, average parallelism and runtime. The semantics captures the notions of limited (physical) resources, the preservation of sharing, and speculative evaluation. We show a consistency result with Launchbury's well-known lazy semantics.

1 Introduction One of the advantages of declarative languages is that it is relatively easy to reason about the values computed by programs, this being attributable to their preservation of referential transparency. Indeed, within the fun...

"... We present a framework for designing parallel programming languages whose semantics is functional and where communications are explicit. To this end, we specialize Brookes and Geva's generalized concrete data structures with a notion of explicit data layout and obtain a CCC of distributed struc ..."

We present a framework for designing parallel programming languages whose semantics is functional and where communications are explicit. To this end, we specialize Brookes and Geva's generalized concrete data structures with a notion of explicit data layout and obtain a CCC of distributed structures called arrays. We find that arrays' symmetric replicated structures, suggested by the data-parallel SPMD paradigm, are incompatible with sum types. We then outline a functional language with explicitly-distributed (monomorphic) concrete types, including higher-order, sum and recursive ones. In this language, programs can be as large as the network and can observe communication events in other programs. Such flexibility is missing from current data-parallel languages and amounts to a fusion with their so-called annotations, directives or meta-languages.

1 Explicit communications and functional programming Faced with the mismatch between parallel programming languages and the requirements o...

"... This paper extends the existing concept of evaluation strategies, higher-order functions that can be used to separate behaviour from algorithm in parallel functional programs, to support the SPMD model of parallelism. Our model allows SPMD sub-programs to be composed with other parallel paradigms, o ..."

This paper extends the existing concept of evaluation strategies, higher-order functions that can be used to separate behaviour from algorithm in parallel functional programs, to support the SPMD model of parallelism. Our model allows SPMD sub-programs to be composed with, or arbitrarily nested within, other parallel paradigms. This represents a major step towards supporting the Group SPMD model, a generalisation of SPMD, in Glasgow Parallel Haskell. It also represents the first attempt of which we are aware to model SPMD programming in a system supporting implicit communication. In order to ensure that the necessary barrier communications are modelled correctly at the high level we use the implementation concept of synchronisation through logically shared graph reduction. We therefore ensure, in common with other work on evaluation strategies, that no new constructs need be added to the host functional language other than basic ones to introduce parallelism an...
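The composability of strategies that the abstract relies on can be shown in a small sketch (an assumed illustration using the standard Control.Parallel.Strategies library, not the paper's SPMD combinators): because a strategy is an ordinary higher-order function, one paradigm can be nested inside another simply by passing one strategy to another.

```haskell
import Control.Parallel.Strategies (Strategy, parList, evalList, rseq, using)

-- Outer paradigm: data parallelism over the groups (parList).
-- Inner paradigm: sequential evaluation within each group (evalList rseq).
-- Swapping the inner strategy for a parallel one nests the paradigms.
groupedWork :: [[Int]] -> [[Int]]
groupedWork xss = map (map (+ 1)) xss `using` parList (evalList rseq)
```

This nesting-by-composition is what lets SPMD sub-programs sit inside, or alongside, other parallel paradigms without new language constructs.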

"... High-level control of parallel process behaviour simplifies the development of parallel software substantially by freeing the programmer from low-level process management and coordination details. The latter are handled by a sophisticated runtime system which controls program execution. In this pape ..."

High-level control of parallel process behaviour simplifies the development of parallel software substantially by freeing the programmer from low-level process management and coordination details. The latter are handled by a sophisticated runtime system which controls program execution. In this paper we look behind the scenes and show how the enormous gap between high-level parallel language constructs and their low-level implementation has been bridged in the implementation of the parallel functional language Eden. The main idea has been to specify the process control in a functional language and to restrict the extensions of the low-level runtime system to a few selected primitive operations.

"... MPP Haskell is a parallel implementation of the lazy purely-functional language Haskell for the Thinking Machines Inc. CM-5 large-scale distributed-memory multiprocessor. MPP Haskell is a derivative of GUM, a message-based parallel implementation of Haskell. GUM was carefully designed to minimise pe ..."

MPP Haskell is a parallel implementation of the lazy purely-functional language Haskell for the Thinking Machines Inc. CM-5 large-scale distributed-memory multiprocessor. MPP Haskell is a derivative of GUM, a message-based parallel implementation of Haskell. GUM was carefully designed to minimise performance loss from low-bandwidth and high-latency communications. As an apparent consequence, MPP Haskell, with the benefit of high-bandwidth and low-latency communications, achieves remarkably scalable performance; it appears to be the highest performance higher-order lazy functional language implementation extant. Despite existing inefficiencies in lazy language implementation, considerable performance is recouped by automatic dynamic load balancing.

1 Background Functional programming languages have long been touted as ideal for parallel implementation. Despite this fact, and the existence of numerous functional language implementations, very few parallel implementations have become gen...