Concurrency: Managing Parallel Processes

This feature is not supported on the Wolfram Cloud.

Processes and Processors

A process is simply a Wolfram Language expression being evaluated. A processor is a parallel kernel that performs such evaluations.

The command ParallelEvaluate discussed in the tutorial "Parallel Evaluation" will send an evaluation to an explicitly given processor, requiring you to keep track of available processors and processes yourself. The scheduling functions discussed in this tutorial perform these functions for you. You can create any number of processes, many more than the number of available processors. If more processes are created than there are processors, the remaining processes will be queued and serviced when a processor becomes available.

Starting and Waiting for Processes

The two basic commands are ParallelSubmit[expr] to line up an expression for evaluation on any available processor, and WaitAll[pid] to wait until a given process has finished.

Each process in the queue is identified by its unique process ID, or pid.

WaitNext[{pid1,pid2,…}]

waits for one of the given processes to finish. It returns {res,id,ids}, where id is the pid of the finished process, res is its result, and ids is the list of remaining pids.

Queuing processes.

WaitNext is nondeterministic. It returns the result of an arbitrary process that has finished. If no process has finished, it waits until a result is available. The third element of the result of WaitNext is suitable as an argument of another call to WaitNext.
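A typical pattern is to submit a batch of evaluations and then collect the results in completion order with repeated calls to WaitNext. The following sketch assumes that parallel kernels have been launched (for example with LaunchKernels[]); the variable names are illustrative.

```wolfram
(* submit five independent evaluations; ids holds their pids *)
ids = Table[ParallelSubmit[{i}, Fibonacci[10^i]], {i, 1, 5}];

(* repeatedly wait for whichever process finishes next *)
While[ids =!= {},
  {res, id, ids} = WaitNext[ids];
  Print[res]  (* results arrive in completion order, not submission order *)
]
```

The form ParallelSubmit[{i}, expr] is used so that the current value of the iterator i is captured in each submitted expression.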

The functions ParallelSubmit and WaitAll implement concurrency. You can start arbitrarily many processes, and they will all be evaluated eventually on any available remote processors. When you need a particular result, you can wait for any particular pid, or you can wait for all results using repeated calls to WaitNext.

Basic Usage

ParallelSubmit[expr] queues the evaluation of expr for processing on a remote kernel. Note that ParallelSubmit holds its arguments, to prevent evaluation of the expression before queuing. The value returned by ParallelSubmit is the process ID (pid) of the queued process.
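As a minimal illustration (again assuming parallel kernels are available):

```wolfram
pid = ParallelSubmit[1 + 1]  (* returns immediately with a pid for the queued process *)
WaitAll[pid]                 (* blocks until the result, 2, is available *)
```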

A Note on the Use of Variables

If an expression e in ParallelSubmit[e] involves variables with assigned values, care must be taken to ensure that the remote kernels have the same variable values defined. Unless you use DistributeDefinitions or shared variables, locally defined variables will not be available to remote kernels. See Values of Variables in the tutorial "Parallel Evaluation" for more information.
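For instance, a function defined only on the master kernel must be distributed before remote kernels can evaluate it. In this sketch, the function f is a hypothetical local definition:

```wolfram
f[x_] := x^2 + 1          (* defined only on the master kernel *)
DistributeDefinitions[f]  (* copy the definition to all parallel kernels *)

WaitAll[ParallelSubmit[f[10]]]  (* the remote kernel can now compute f[10] *)
```

Without the call to DistributeDefinitions, the submitted expression f[10] would come back unevaluated.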

Lower-Level Functions

$Queue

the list of evaluations submitted with ParallelSubmit but not yet assigned to an available kernel

$QueueLength

gives the length of the input queue

ResetQueues[]

waits for all running processes and abandons any queued processes

QueueRun[]

collects finished evaluations from all kernels and assigns new ones from the queue

Evaluation queue control.
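A brief sketch of how these queue-control functions might be used together; the Pause-based workload is purely illustrative, and the queue length observed depends on the number of available kernels:

```wolfram
(* submit more processes than there are kernels *)
ids = Table[ParallelSubmit[{i}, (Pause[1]; i)], {i, 16}];

$QueueLength   (* number of evaluations still waiting for a free kernel *)

ResetQueues[]  (* wait for the running processes and abandon the queued ones *)
```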

QueueRun returns True if at least one evaluation was submitted to a kernel or one result was received from a kernel, and False otherwise. Normally, you should not have to run QueueRun yourself. It is called at appropriate places inside WaitAll and other functions. You need it only if you implement your own main loop for a concurrent program.

Working with Process IDs

WaitAll[{pid1,pid2,…}] is merely a simple form of a general mechanism to parallelize computations. WaitAll can take any expression containing pids in its arguments and will wait for all associated processes to finish. The pids will then be replaced by the results of their processes.

The pids generated by an instance of ParallelSubmit should be left intact and should neither be destroyed nor duplicated before WaitAll performs its task. The reason is that each of them represents an ongoing parallel computation whose result should be collected exactly once.

Examples of expressions that leave pids intact follow.

Pids in a list are safe, because the list operation does not do anything to its arguments; it merely keeps them together. Nested lists are also safe for the same reason.

Pids are symbolic objects that are not affected by Plus. They may be reordered, which is irrelevant. Most arithmetic operations are safe.

Mapping a function involving ParallelSubmit onto a list is safe because the result will contain the list of the pids.
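These safe patterns can be illustrated directly; both forms leave the pids intact until WaitAll collects them:

```wolfram
(* pids kept together in a list *)
WaitAll[{ParallelSubmit[2^10], ParallelSubmit[3^10]}]  (* {1024, 59049} *)

(* pids as symbolic operands of Plus *)
WaitAll[ParallelSubmit[2^10] + ParallelSubmit[3^10]]   (* 60073 *)
```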

Now you can start the computation, requiring that it print each result as it goes along. To stop the computation, abort it by choosing Kernel ▶ Abort Evaluation or with the corresponding keyboard shortcut. The explicit call of QueueRun is necessary in such examples where you program your own scheduling details.

Automatic Process Generation

A general way to parallelize many kinds of computation is to replace a function g occurring in a functional operation by Composition[ParallelSubmit,g]. This new operation will cause all instances of calls of the function g to be queued for parallel evaluation. The result of this composition is a process ID (pid) that will appear inside the structure constructed by the outer computation where g occurred. To put back the results of the computation of g, wrap the whole expression in WaitAll. This will replace any pid inside its expression by the result returned by the corresponding process.
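For instance, replacing the function inside Map by its composition with ParallelSubmit queues one evaluation per list element; WaitAll then substitutes the results back:

```wolfram
(* each application of Prime is queued rather than evaluated locally;
   Map produces a list of pids, and WaitAll collects the results *)
WaitAll[Map[Composition[ParallelSubmit, Prime], {10, 20, 30}]]
(* {29, 71, 113} *)
```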

Here are a few examples of such functional compositions.

Parallel Mapping

A parallel version of Map is easy to develop. The sequential Map wraps a function around all elements in a list.
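Following this idea, a parallel map can be sketched in one line; the name parallelMap is illustrative, not a built-in function:

```wolfram
parallelMap[f_, l_List] := WaitAll[Map[Composition[ParallelSubmit, f], l]]

parallelMap[PrimeQ, {2^31 - 1, 2^32 - 1}]  (* {True, False} *)
```

Each element is submitted as its own process, so elements that happen to be expensive do not hold up the cheap ones.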

Comparison with Parallelize

Parallel mapping, tables, and inner products were already introduced in "Parallel Evaluation". Those functions divide the task into batches of several subproblems each, under the control of the Method option. The functions in this section generate one evaluation for each subproblem. This division is equivalent to the setting Method->"FinestGrained".

If all subproblems take the same amount of time, functions such as ParallelMap and ParallelTable are faster. However, if the computation times of the subproblems differ and are not easy to estimate in advance, it can be better to use WaitAll[…ParallelSubmit[…]…] as described in this section, or to use the equivalent Method option setting. If the number of processes generated is larger than the number of remote kernels, this method performs automatic load balancing: jobs are assigned to a kernel as soon as the previous job is done, so all kernels are kept busy all the time.
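As an example of such an uneven workload, factoring numbers of the form 2^n-1 takes widely varying time per item, so one process per subproblem keeps all kernels busy; the second form assumes the Method option value described above:

```wolfram
(* subproblems of very different cost: one process per item balances the load *)
WaitAll[Table[ParallelSubmit[{n}, FactorInteger[2^n - 1]], {n, 60, 70}]]

(* the equivalent built-in form *)
ParallelMap[FactorInteger[2^# - 1] &, Range[60, 70], Method -> "FinestGrained"]
```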