Tutorial

Very short Tutorial

The FastFlow programming model

The FastFlow programming model is a structured parallel programming model. The framework provides several predefined, general-purpose, customizable and composable parallel patterns (or algorithmic skeletons). Any application whose parallel structure can be modelled using the provided parallel patterns, used alone or in composition, can be implemented using FastFlow.

The basic patterns

Pipeline

Pipelining is one of the simplest parallel patterns, where data flows through a series of stages (or nodes) and each stage processes the input data in some way, producing as output a modified version of it or new data. We will call these data flows streams of data, or simply streams. A pipeline's stages can operate sequentially or in parallel, and may or may not have an internal state.

Farm

The task-farm pattern is a stream-parallel paradigm based on the replication of a purely functional computation (let's call the function F). Its parallel semantics ensures that tasks are processed such that the latency of a single task is close to the time needed to compute the function F sequentially, while the throughput (under certain conditions) is close to n times that of the sequential computation, where n is the number of parallel agents used to execute the farm (called Workers). The concurrent scheme of a farm is composed of three distinct parts: the Emitter (E), the pool of Workers (Ws) and the Collector (C). The Emitter receives the farm's input tasks and distributes them to the Workers using a given scheduling strategy (round-robin, auto-scheduling, user-defined). The Collector gathers the results from the Workers and sends them to the farm's output stream.

ParallelFor/Map

A sequential iterative kernel with independent iterations is also known as a parallel loop. Parallel loops can clearly be parallelized by using the map or farm patterns, but this typically requires a substantial refactoring of the original loop code, with the risk of introducing bugs and of not preserving sequential equivalence. The FastFlow framework provides a set of data-parallel patterns, implemented on top of the basic FastFlow skeletons, that ease the implementation of parallel loops: ParallelFor, ParallelForReduce and ParallelForPipeReduce.

Data-Dependency Task Executor (aka MDF)

The data-flow programming model is a general approach to parallelization based upon the data dependencies among a program's operations. The computation is expressed by a data-flow graph, i.e. a DAG whose nodes are instructions and whose arcs are pure data dependencies. If, instead of simple instructions, portions of code (sets of instructions or functions) are used as the graph's nodes, the model is called the macro data-flow model (MDF). It is worth noting that the data-flow programming model is able to work both on a stream of values and on a single value.

As an example, consider Strassen's algorithm, described by the following sequence of instructions operating on (sub-)matrices: