We have rewritten our parallel computing tools to use 0MQ and Tornado. The redesign
has resulted in dramatically improved performance, as well as (we think) an improved
interface for executing code remotely. This document is intended to help users of
IPython.kernel transition their code to the new version.

The process model for the new parallel code is very similar to that of IPython.kernel. There
are still a Controller, Engines, and Clients. However, the Controller is now split into multiple
processes, and can even be split across multiple machines. There does remain a single
ipcontroller script for starting all of the controller processes.

Creating a client with default settings has not changed much, though the extended options have.
One significant change is that there are no longer multiple Client classes to represent the
various execution models. There is just one low-level Client object for connecting to the
cluster, and View objects are created from that Client that provide the different interfaces for
execution.

To create a new client, and set up the default direct and load-balanced objects:
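A minimal session for this, assuming a running cluster and the IPython.parallel package layout, might look like:

```
In [1]: from IPython.parallel import Client

In [2]: rc = Client()

In [3]: dview = rc[:]                     # default DirectView of all engines

In [4]: lbview = rc.load_balanced_view()  # default LoadBalancedView
```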

The main change to the API is the addition of apply() to the View objects. This method
is called as view.apply(f, *args, **kwargs), and calls f(*args, **kwargs) remotely on one
or more engines, returning the result. This means that the natural unit of remote execution
is no longer a string of Python code, but rather a Python function.
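As a purely local sketch of that calling convention (this hypothetical stand-in runs f in-process; a real View serializes f and its arguments, ships them to an engine over 0MQ, and returns the result or an AsyncResult):

```python
# Hypothetical local stand-in for View.apply -- NOT the real IPython API.
# It only illustrates the calling convention: apply(f, *args, **kwargs)
# evaluates f(*args, **kwargs) and hands back the result.
class LocalView:
    def apply(self, f, *args, **kwargs):
        # A real engine would receive serialized f and args remotely,
        # run the call, and ship the return value back to the client.
        return f(*args, **kwargs)

view = LocalView()
result = view.apply(pow, 2, 10)
print(result)  # 1024
```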

Two additions come with apply(): non-copying sends (the track flag) and remote
References, described below.

The flags for execution have also changed. Previously, there was only block, denoting whether
to wait for results. This flag remains, but the addition of fully non-copying sends of
arrays and buffers introduces a second flag, track, which instructs PyZMQ to produce a
MessageTracker that will let you know when it is safe to edit arrays in-place again.
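At the PyZMQ level, this is roughly what happens (a sketch; the socket and buf names are assumed here, while the copy/track keywords and the MessageTracker's done/wait are PyZMQ's API):

```
In [5]: tracker = socket.send(buf, copy=False, track=True)  # non-copying send

In [6]: tracker.done    # False until 0MQ is finished with buf

In [7]: tracker.wait()  # block until it is safe to edit buf in-place again
```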

The multiplexing interface previously provided by the MultiEngineClient is now provided by the
DirectView. Once you have a Client connected, you can create a DirectView with index access
to the client (view = client[1:5]). The core methods for
communicating with engines remain: execute, run, push, pull, scatter, gather. These
methods all behave in much the same way as they did on a MultiEngineClient.
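For example, a sketch assuming a connected Client rc:

```
In [5]: dview = rc[:]          # DirectView of all engines

In [6]: dview.block = True     # wait for results, as with block before

In [7]: dview.push({'a': 5})   # same semantics as MultiEngineClient.push

In [8]: dview.pull('a')        # one value per engine
```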

The other major difference is the use of apply(). When remote work consists simply of
functions, the natural return values are actual Python objects. It is no longer the
recommended pattern to use stdout as your results, because stdout streams are decoupled
from the results and handled asynchronously in the new system.

Load-Balancing has changed more than Multiplexing. This is because there is no longer a notion
of a StringTask or a MapTask; there are simply Python functions to call. Tasks are now
simpler, because they are no longer composites of push/execute/pull/clear calls; each is
a single function that takes arguments and returns objects.

The load-balanced interface is provided by the LoadBalancedView class, created by the client:

In [10]: lbview = rc.load_balanced_view()

# load-balancing can also be restricted to a subset of engines:
In [10]: lbview = rc.load_balanced_view([1,2,3])

A simple task would consist of sending some data, calling a function on that data plus some
data already resident on the engine, and then pulling back some results. This can
all be done with a single function.

Let’s say you want to compute the dot product of two matrices, one of which resides on the
engine, and the other on the client. You might construct a task that looks like this:
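A sketch, assuming numpy imported as np and a connected Client rc; the view is restricted to engine 0 so that the task runs where A lives:

```
In [1]: from IPython.parallel import Reference

In [2]: rc[0]['A'] = np.random.random((1024, 1024))  # A resides on engine 0

In [3]: lbview = rc.load_balanced_view([0])          # restrict to that engine

In [4]: B = np.random.random((1024, 1024))           # B resides on the client

In [5]: C = lbview.apply_sync(np.dot, Reference('A'), B)  # blocking apply
```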

Note the use of Reference. This is a convenient representation of an object that exists
in the engine’s namespace, so you can pass remote objects as arguments to your task functions.

Also note that in the kernel model, after the task is run, ‘A’, ‘B’, and ‘C’ are all defined on
the engine. To deal with this, Tasks also have a clear_after flag to prevent
pollution of the namespace and bloating of engine memory. This is not necessary with the new
code, because only those objects explicitly pushed (or set via globals()) will remain on
the engine beyond the duration of the task.

See also

Dependencies also work very differently than in IPython.kernel. See our doc on Dependencies for details.

With the departure from Twisted, we no longer have the Deferred class for representing
unfinished results. For this, we have an AsyncResult object, based on the object of the same
name in the built-in multiprocessing.pool module. Our version provides a superset of that
interface.

However, unlike in IPython.kernel, we do not have PendingDeferred, PendingResult, or TaskResult
objects. There is simply the one object, the AsyncResult; every asynchronous (block=False)
call returns one.

The basic methods of an AsyncResult are:

AsyncResult.wait([timeout])  # wait for the result to arrive
AsyncResult.get([timeout])   # wait for the result to arrive, and then return it
AsyncResult.metadata         # dict of extra information about execution
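Because the interface is a superset of multiprocessing's, the same wait/get pattern can be tried locally with the standard library (a stand-in for a real cluster; metadata is IPython's extension and has no stdlib counterpart):

```python
from multiprocessing.pool import ThreadPool

# multiprocessing's AsyncResult supports the same wait/get pattern;
# IPython's version adds extras such as the metadata dict.
with ThreadPool(2) as pool:
    ar = pool.apply_async(pow, (2, 10))  # an AsyncResult, like block=False
    ar.wait(timeout=5)                   # block until the result arrives
    print(ar.ready())                    # True
    print(ar.get(timeout=5))             # 1024
```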