Concurrent Processes

With an application composed of many concurrent processes, we lose
the convenience offered by the determinism of sequential programs.
For processes sharing the same zone of memory, the result of the
following program cannot be deduced from reading it.

main program:

  let x = ref 1 ;;

process P:

  x := !x + 1 ;;

process Q:

  x := !x * 2 ;;

At the end of the execution of P and Q, the reference x
can point to 2, 3 or 4, depending on the order of execution of each
process.
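Anticipating the Thread module presented later in this chapter, this nondeterminism can be sketched with two threads sharing a reference (run is a hypothetical name); depending on the scheduling, the result is 2, 3 or 4:

```ocaml
(* Sketch: the shared reference x is read and written by two threads.
   Depending on the interleaving, the final value is 2, 3 or 4. *)
let run () =
  let x = ref 1 in
  let p = Thread.create (fun () -> x := !x + 1) () in
  let q = Thread.create (fun () -> x := !x * 2) () in
  Thread.join p;
  Thread.join q;
  !x

let () = Printf.printf "x = %d\n" (run ())
```

Running it repeatedly may print different values, which is precisely the loss of determinism discussed above.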

This indeterminism also applies to termination. When the memory
state depends on the execution of each parallel process, an
application can fail to terminate on one run and terminate on
another. To regain some control over the execution, the processes
must be synchronized.

For processes using distinct memory areas but communicating with
each other, their interaction depends on the type of
communication. For the following example we introduce two
communication primitives: send, which sends a value to a
designated process, and receive, which receives a value from a
process. Let P and Q be two communicating processes:

process P:

  let x = ref 1 ;;
  send(Q, !x);
  x := !x * 2;
  send(Q, !x);
  x := !x + receive(Q);

process Q:

  let y = ref 1 ;;
  y := !y + 3;
  y := !y + receive(P);
  send(P, !y);
  y := !y + receive(P);

In the case of a transient communication, process Q can miss
the messages of P. We fall back into the non-determinism of the
preceding model.

For an asynchronous communication, the medium of the communication
channel stores the different values that have been transmitted, and
only reception is blocking. Process P can thus already be waiting
for a message from Q even though Q has not yet read the two
messages from P; since sending never blocks, this did not prevent P
from transmitting them.
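The send and receive primitives above are pseudocode; as a sketch, an asynchronous channel with a non-blocking send and a blocking receive can be built from the Queue, Mutex and Condition modules of the thread library (channel, to_q, to_p and the proc_p/proc_q functions are assumed names). Because the channels are FIFO and only reception blocks, the exchange between P and Q is deterministic here: both x and y end at 7.

```ocaml
(* An asynchronous channel: a FIFO queue protected by a mutex, with a
   condition variable to make reception blocking. *)
type 'a channel = {
  q : 'a Queue.t;
  m : Mutex.t;
  c : Condition.t;
}

let new_channel () =
  { q = Queue.create (); m = Mutex.create (); c = Condition.create () }

(* send never blocks: it only enqueues the value. *)
let send ch v =
  Mutex.lock ch.m;
  Queue.add v ch.q;
  Condition.signal ch.c;
  Mutex.unlock ch.m

(* receive blocks until a value is available. *)
let receive ch =
  Mutex.lock ch.m;
  while Queue.is_empty ch.q do Condition.wait ch.c ch.m done;
  let v = Queue.take ch.q in
  Mutex.unlock ch.m;
  v

let to_q = new_channel ()   (* messages from P to Q *)
let to_p = new_channel ()   (* messages from Q to P *)

let x = ref 1
let y = ref 1

let proc_p () =
  send to_q !x;
  x := !x * 2;
  send to_q !x;
  x := !x + receive to_p

let proc_q () =
  y := !y + 3;
  y := !y + receive to_q;
  send to_p !y;
  y := !y + receive to_q

let () =
  let p = Thread.create proc_p ()
  and q = Thread.create proc_q () in
  Thread.join p;
  Thread.join q;
  Printf.printf "x = %d, y = %d\n" !x !y
```

Q's first receive necessarily obtains P's first message (1) and its second obtains 2, so whatever the scheduling, both references end at 7.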

We can classify concurrent applications into five categories
according to the program units that compose them:

- unrelated;
- related, but without synchronization;
- related, with mutual exclusion;
- related, with mutual exclusion and communication;
- related, without mutual exclusion, and with synchronous communication.

The difficulty of implementation comes principally from these last
categories. We will now see how to resolve these difficulties by
using the Objective CAML libraries.

Compilation with Threads

The Objective CAML thread library is divided into five modules, the
first four of which each define an abstract type:

- Thread: creation and management of threads (type Thread.t);
- Mutex: mutual-exclusion locks (type Mutex.t);
- Condition: condition variables (type Condition.t);
- Event: synchronous communication events (types 'a channel and 'a event);
- ThreadUnix: thread-compatible versions of the Unix system calls.

The Threads library is not usable with the native-code compiler
unless the platform implements threads conforming to the POSIX
1003.1 standard. Executables are then built by additionally linking
the libraries unix.a and pthread.a.
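As a sketch, assuming a standard installation and a source file named prog.ml (an assumed name), the usual commands pass the -thread option and link the unix and threads libraries:

```shell
# Bytecode: -thread selects the thread-safe runtime;
# the unix and threads libraries must be linked in.
ocamlc -thread unix.cma threads.cma prog.ml -o prog

# Native code, on platforms providing POSIX threads:
ocamlopt -thread unix.cmxa threads.cmxa prog.ml -o prog
```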

Module Thread

The Objective CAML Thread module contains the primitives for the
creation and management of threads. We will not present it
exhaustively; the file I/O operations, for instance, were described
in the preceding chapter.

A thread is created through a call to:

# Thread.create;;
- : ('a -> 'b) -> 'a -> Thread.t = <fun>

The first argument, of type 'a -> 'b, corresponds to the
function executed by the created process; the second argument, of type
'a, is the argument required by the executed function; the
result of the call is the descriptor associated with the process. The
process thus created is automatically destroyed when the associated
function terminates.

Knowing its descriptor, we can wait for the termination of a
process by using the function join. Here is a usage example:
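A minimal sketch of create/join (run_counter is a hypothetical name): the main thread spawns a worker and blocks in Thread.join until the worker's function has returned.

```ocaml
(* Sketch: spawn a thread that increments a counter k times, then
   wait for its termination with Thread.join before reading it. *)
let run_counter k =
  let n = ref 0 in
  let t = Thread.create (fun k -> for _ = 1 to k do incr n done) k in
  Thread.join t;   (* blocks until the thread's function has returned *)
  !n

let () = Printf.printf "count = %d\n" (run_counter 1000)
```

Because join is called before n is read, the final read does not race with the worker's writes.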

Let us consider the previous example again and add timing. We
create a first thread t1 whose associated function f_proc2 in turn
creates a thread t2 executing f_proc1; f_proc2 then delays for d
seconds and terminates t2. On termination of t1, we print the
contents of n.
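This timed example can be reconstructed from the description as a sketch only: the counter n, the flag running and the delay d are assumed names, and a shared flag stands in for the termination of t2, since killing a system thread directly is not generally available.

```ocaml
let n = ref 0
let running = ref true   (* flag used to stop t2 (assumption) *)
let d = 0.25             (* delay in seconds (assumption) *)

(* f_proc1: increments n until asked to stop; relies on the runtime
   preempting threads so that the flag change is eventually seen. *)
let f_proc1 () = while !running do incr n done

(* f_proc2: creates t2, sleeps d seconds, then stops t2. *)
let f_proc2 () =
  let t2 = Thread.create f_proc1 () in
  Thread.delay d;
  running := false;
  Thread.join t2

let () =
  let t1 = Thread.create f_proc2 () in
  Thread.join t1;                     (* wait for t1's termination *)
  Printf.printf "n = %d\n" !n
```

The value printed for n varies from run to run: it depends on how often t2 was scheduled during the d seconds of delay.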