But why would we be interested in modifying GHC's concurrency environment? There are several good reasons to believe that no single concurrent programming model or scheduling policy will suit every application. With the emergence of many-core processors, NUMA effects are becoming more prominent, and applications might benefit from NUMA-aware scheduling and load balancing policies. Moreover, an application might have better knowledge of its own scheduling requirements -- a thread involved in user interaction should be given higher priority than threads performing background processing. We might want to experiment with various work-stealing or work-sharing policies. More ambitiously, we might choose to build X10-style async-finish or Cilk-style spawn-sync task parallel abstractions. Ideally, we would like to allow the programmer to write an application that seamlessly combines all of these different programming abstractions, with pluggable scheduling and load balancing policies.
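To make "pluggable scheduling policy" concrete, here is a minimal sketch in Haskell. It is purely illustrative and not part of the proposed RTS interface: the `Policy` record, `fifoPolicy`, and `lifoPolicy` names are hypothetical, and tasks are reduced to plain identifiers in a run queue rather than real threads. The point is only that a scheduling policy can be captured as an ordinary value and swapped without touching the rest of the program.

```haskell
-- Hypothetical sketch: a scheduling policy as a pair of enqueue/dequeue
-- operations over a run queue. In the real system the queue would hold
-- suspended threads; here tasks are just labels.
data Policy a = Policy
  { enqueue :: a -> [a] -> [a]
  , dequeue :: [a] -> Maybe (a, [a])
  }

-- Both policies dequeue from the front; they differ in where they enqueue.
fifoPolicy :: Policy a
fifoPolicy = Policy (\t q -> q ++ [t]) uncons'

lifoPolicy :: Policy a
lifoPolicy = Policy (:) uncons'

uncons' :: [a] -> Maybe (a, [a])
uncons' []       = Nothing
uncons' (x : xs) = Just (x, xs)

-- Drain a run queue under a given policy, recording the execution order.
runAll :: Policy a -> [a] -> [a]
runAll p q = case dequeue p q of
  Nothing        -> []
  Just (t, rest) -> t : runAll p rest

main :: IO ()
main = do
  let submit p = foldl (flip (enqueue p)) []
  print (runAll fifoPolicy (submit fifoPolicy [1, 2, 3 :: Int]))  -- [1,2,3]
  print (runAll lifoPolicy (submit lifoPolicy [1, 2, 3 :: Int]))  -- [3,2,1]
```

A user-level scheduler in the proposed design would play the same role: the choice of queueing discipline (or a work-stealing variant) lives in library code, not in the RTS.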

While we want to provide flexibility to the Haskell programmer, this should not come at the cost of added complexity or decreased performance. This principle is reflected in the synchronization abstractions exposed to the programmer ([#PTM PTM]) and in our decision to keep certain pieces of the concurrency puzzle in the RTS ([#SafeForeignFunctionInterface Safe FFI], [#Thunksandblackholes Blackholes]). The figure below captures the key design principles of the proposed system.