
The direct, or multiengine, interface represents one possible way of working with a set of
IPython engines. The basic idea behind the multiengine interface is that the
capabilities of each engine are directly and explicitly exposed to the user.
Thus, in the multiengine interface, each engine is given an id that is used to
identify the engine and give it work to do. This interface is very intuitive,
is designed with interactive usage in mind, and is the best place for new
users of IPython to begin.

This form assumes that the default connection information (stored in
ipcontroller-client.json found in IPYTHONDIR/profile_default/security) is
accurate. If the controller was started on a remote machine, you must copy that connection
file to the client machine, or enter its contents as arguments to the Client constructor:

# If you have copied the json connector file from the controller:
In [2]: rc = Client('/path/to/ipcontroller-client.json')

# or to connect with a specific profile you have set up:
In [3]: rc = Client(profile='mpi')

To make sure there are engines connected to the controller, users can get a list
of engine ids:

In [3]: rc.ids
Out[3]: [0, 1, 2, 3]

Here we see that there are four engines ready to do work for us.

For direct execution, we will make use of a DirectView object, which can be
constructed via list-access to the client:

In many cases, you simply want to apply a Python function to a sequence of
objects, but in parallel. The client interface provides a simple way
of accomplishing this: using the DirectView’s map() method.

Python’s builtin map() function allows a function to be applied to a
sequence element-by-element. This type of code is typically trivial to
parallelize. In fact, since IPython’s interface is all about functions anyway,
you can just use the builtin map() with a RemoteFunction, or a
DirectView’s map() method:

Calling a @parallel function does not correspond to map. It is used for splitting
element-wise operations that operate on a sequence or array. For map behavior,
parallel functions do have a map method.

call                  pfunc(seq)                    pfunc.map(seq)
# of tasks            # of engines (1 per engine)   # of engines (1 per engine)
# of remote calls     # of engines (1 per engine)   len(seq)
argument to remote    seq[i:j] (sub-sequence)       seq[i] (single element)

A quick example to illustrate the difference in arguments for the two modes:
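The difference can be sketched in plain Python, with no engines involved (partition, call_mode, and map_mode are hypothetical names used only for illustration); the point is what each remote call receives as its argument:

```python
def partition(seq, n):
    # Split seq into n contiguous sub-sequences, as @parallel does.
    k, r = divmod(len(seq), n)
    parts, start = [], 0
    for i in range(n):
        end = start + k + (1 if i < r else 0)
        parts.append(seq[start:end])
        start = end
    return parts

def call_mode(f, seq, n_engines=4):
    # pfunc(seq): one remote call per engine, each argument is a
    # sub-sequence seq[i:j]
    return [f(part) for part in partition(seq, n_engines)]

def map_mode(f, seq):
    # pfunc.map(seq): len(seq) remote calls, each argument is a
    # single element seq[i]
    return [f(x) for x in seq]

# Using the identity function shows exactly what each call receives:
received_call = call_mode(lambda x: x, list(range(6)), n_engines=4)
received_map = map_mode(lambda x: x, list(range(6)))
```

Here received_call is a list of four sub-sequences (one per simulated engine), while received_map contains one result per element of the sequence.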

The most basic type of operation that can be performed on the engines is to
execute Python code or call Python functions. Executing Python code can be
done in blocking or non-blocking mode (non-blocking is default) using the
View.execute() method, and calling functions can be done via the
View.apply() method.

The main method for doing remote execution (in fact, all methods that
communicate with the engines are built on top of it), is View.apply().

We strive to provide the cleanest interface we can, so apply has the following
signature:

view.apply(f, *args, **kwargs)
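The signature mirrors an ordinary function call: positional and keyword arguments are forwarded unchanged to f on the engines. A local sketch of that forwarding (local_apply and linspace_sum are hypothetical names; the real apply also serializes f and its arguments and ships them to the targets):

```python
def local_apply(f, *args, **kwargs):
    # View.apply ultimately evaluates f(*args, **kwargs) on each target
    # engine; stripped of the transport, that is just a plain call.
    return f(*args, **kwargs)

def linspace_sum(start, stop, n=5):
    # Sum of n evenly spaced points between start and stop inclusive.
    step = (stop - start) / (n - 1)
    return sum(start + i * step for i in range(n))

# Positional and keyword arguments pass through exactly as written:
result = local_apply(linspace_sum, 0.0, 1.0, n=5)
```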

There are various ways to call functions with IPython, and these flags are set as
attributes of the View. The DirectView has just two of these flags:

dv.block :bool

whether to wait for the result, or return an AsyncResult object
immediately

dv.track :bool

whether to instruct pyzmq to track when zeromq is done sending the message.
This is primarily useful for non-copying sends of numpy arrays that you plan to
edit in-place. You need to know when it becomes safe to edit the buffer
without corrupting the message.

dv.targets :int, list of ints

which targets this view is associated with.

Creating a view is simple: index-access on a client creates a DirectView.

In blocking mode, the DirectView object (called dview in
these examples) submits the command to the controller, which places the
command in the engines’ queues for execution. The apply() call then
blocks until the engines are done executing the command:

In non-blocking mode, apply() submits the command to be executed and
then returns an AsyncResult object immediately. The
AsyncResult object gives you a way of getting a result at a later
time through its get() method.
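An AsyncResult behaves much like a standard future: the submitting call returns immediately, and you block only when you ask for the value. A local analogy using the standard library (concurrent.futures here stands in for the engine round-trip; it is not part of IPython.parallel):

```python
from concurrent.futures import ThreadPoolExecutor

def slow_square(x):
    # Import inside the function, following the model used with engines.
    import time
    time.sleep(0.01)
    return x * x

with ThreadPoolExecutor(max_workers=2) as pool:
    # submit() returns immediately, like apply_async()
    future = pool.submit(slow_square, 7)
    # result() blocks until the work is done, like AsyncResult.get()
    value = future.result()
```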

Note the import inside the function. This is a common model, to ensure
that the appropriate modules are imported where the task is run. You can
also manually import modules into the engine(s) namespace(s) via
view.execute('import numpy').

Often, it is desirable to wait until a set of AsyncResult objects
are done. For this, there is the wait() method. This method takes a
tuple of AsyncResult objects (or msg_ids or indices to the client’s History),
and blocks until all of the associated results are ready:

In [72]: dview.block = False

# A trivial list of AsyncResult objects
In [73]: pr_list = [dview.apply_async(wait, 3) for i in range(10)]

# Wait until all of them are done
In [74]: dview.wait(pr_list)

# Then, their results are ready using get() or the `.r` attribute
In [75]: pr_list[0].get()
Out[75]: [2.9982571601867676, 2.9982588291168213, 2.9987530708312988, 2.9990990161895752]

Most DirectView methods (excluding apply()) accept block and
targets as keyword arguments. As we have seen above, these keyword arguments control the
blocking mode and which engines the command is applied to. The View class also has
block and targets attributes that control the default behavior when the keyword
arguments are not provided. Thus the following logic is used for block and targets:

If no keyword argument is provided, the instance attributes are used.

Keyword arguments, if provided, override the instance attributes for
the duration of a single call.
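That resolution rule can be sketched as a small helper (ViewSketch and resolve_flag are hypothetical, not part of the API):

```python
_UNSET = object()  # sentinel distinguishing "not passed" from None/False

class ViewSketch:
    """Minimal stand-in for a View carrying default block/targets attributes."""
    def __init__(self, block=False, targets='all'):
        self.block = block
        self.targets = targets

    def resolve_flag(self, name, value=_UNSET):
        # A keyword argument, if provided, overrides the instance
        # attribute for the duration of a single call; otherwise the
        # attribute supplies the default.
        return getattr(self, name) if value is _UNSET else value

dv = ViewSketch(block=False, targets=[0, 1, 2, 3])
```

Note that passing a keyword never mutates the attribute: the override lasts only for the call that supplied it.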

The following examples demonstrate how to use the instance attributes:

In addition to calling functions and executing code on engines, you can
transfer Python objects to and from your IPython session and the engines. In
IPython, these operations are called push() (sending an object to the
engines) and pull() (getting an object from the engines).

Since a Python namespace is just a dict, DirectView objects provide
dictionary-style access by key and methods such as get() and
update() for convenience. This makes the remote namespaces of the engines
appear as a local dictionary. Underneath, these methods call apply():

Sometimes it is useful to partition a sequence and push the partitions to
different engines. In MPI language, this is known as scatter/gather and we
follow that terminology. However, it is important to remember that in
IPython’s Client class, scatter() is from the
interactive IPython session to the engines and gather() is from the
engines back to the interactive IPython session. For scatter/gather operations
between engines, MPI, pyzmq, or some other direct interconnect should be used.
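The partition-and-reassemble at the heart of scatter/gather can be sketched with plain dictionaries standing in for engine namespaces (scatter_local and gather_local are hypothetical names for illustration only):

```python
def scatter_local(key, seq, namespaces):
    # Partition seq into roughly equal contiguous pieces, assigning one
    # piece per namespace (each namespace is just a dict, like an
    # engine's globals).
    n = len(namespaces)
    k, r = divmod(len(seq), n)
    start = 0
    for i, ns in enumerate(namespaces):
        end = start + k + (1 if i < r else 0)
        ns[key] = seq[start:end]
        start = end

def gather_local(key, namespaces):
    # Concatenate the per-engine pieces back together, in engine order.
    out = []
    for ns in namespaces:
        out.extend(ns[key])
    return out

# Four "engines", each modeled as an empty namespace dict:
namespaces = [{} for _ in range(4)]
scatter_local('a', list(range(10)), namespaces)
```

After the scatter, each namespace holds its own slice under the name 'a', and gathering reproduces the original sequence in order.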

Any imports made inside the block will also be performed on the view’s engines.
sync_imports also takes a local boolean flag that defaults to True, which specifies
whether the local imports should also be performed. However, support for local=False
has not been implemented, so only packages that can be imported locally will work
this way.

You can also specify imports via the @require decorator. This is a decorator
designed for use in Dependencies, but can be used to handle remote imports as well.
Modules or module names passed to @require will be imported before the decorated
function is called. If they cannot be imported, the decorated function will never
execute and will fail with an UnmetDependencyError. Failures of single Engines will
be collected and raise a CompositeError, as demonstrated in the next section.

In [69]: from IPython.parallel import require

In [70]: @require('re')
   ....: def findall(pat, x):
   ....:     # re is guaranteed to be available
   ....:     return re.findall(pat, x)

# you can also pass modules themselves, that you already have locally:
In [71]: @require(time)
   ....: def wait(t):
   ....:     time.sleep(t)
   ....:     return t

Note

sync_imports() does not allow import foo as bar syntax,
because the assignment represented by the as bar part is not
available to the import hook.

In the multiengine interface, parallel commands can raise Python exceptions,
just like serial commands. But it is a little subtle, because a single
parallel command can actually raise multiple exceptions (one for each engine
the command was run on). To express this idea, we have a
CompositeError exception class that will be raised in most cases. The
CompositeError class is a special type of exception that wraps one or
more other types of exceptions. Here is how it works:
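The wrapping behavior can be sketched with a minimal composite exception (CompositeSketch and run_on_engines are hypothetical; the real CompositeError also carries engine ids and remote tracebacks):

```python
class CompositeSketch(Exception):
    """Wraps one exception per failing engine."""
    def __init__(self, errors):
        # errors is a list of (engine_id, exception) pairs
        self.errors = errors
        msg = ', '.join('engine %d: %r' % (eid, e) for eid, e in errors)
        super().__init__(msg)

def run_on_engines(f, engine_ids):
    # Run f on every engine; collect per-engine failures instead of
    # stopping at the first one, then raise them together.
    results, errors = {}, []
    for eid in engine_ids:
        try:
            results[eid] = f(eid)
        except Exception as e:
            errors.append((eid, e))
    if errors:
        raise CompositeSketch(errors)
    return results

def risky(eid):
    if eid % 2:
        raise ZeroDivisionError('fail on engine %d' % eid)
    return eid * 10

try:
    run_on_engines(risky, [0, 1, 2, 3])
except CompositeSketch as err:
    caught = err
    # Any one of the original exceptions is still reachable:
    first_engine, first_exc = caught.errors[0]
```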

Notice how the error message printed when CompositeError is raised has
information about the individual exceptions that were raised on each engine.
If you want, you can even raise one of these original exceptions: