para||el: a (simple) parallel computing framework for Torch

This package provides a simple mechanism to dispatch and run Torch/Lua code
as independent processes and communicate via ZeroMQ sockets. Processes
can be forked locally or on remote machines.

License

Copyright (c) 2011 Clement Farabet, Marco Scoffier

Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:

The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

In the spirit of really abstracting where the jobs are executed, calibrate() can
be called to estimate the compute power of each machine, so that you can distribute
your load accordingly.

parallel.addremote(...)
parallel.calibrate()
forked = parallel.sfork(parallel.remotes.cores) -- fork as many processes as cores available
for _,forked in ipairs(forked) do
   print('id: ' .. forked.id .. ', speed = ' .. forked.speed)
end
-- the speed of each process is a number in ]0..1]. A coef of 1 means that it is
-- the fastest process available, and 0.5 for example would mean that the process
-- is 2x slower

Once processes have been forked, they all exist in a table: parallel.children, and
all methods (exec,send,receive,join) work either on individual processes, or on
groups of processes.

The first thing to do is to load these new processes with code. The code given
can either be a function with no arguments (it won't have access to its enclosing
environment when executing in the new process), or a string. In both cases the
code gets serialized into a string, and reloaded on the process side using
loadstring().

parallel implements a simple yield/join mechanism to allow a parent to sync
and affect the behavior of its children.

-- child code:
code = function()
   while true do
      print('something')
      parallel.yield()
   end
end
c = parallel.fork()
c:exec(code)

-- parent code:
for i = 1,10 do
   c:join()
end
-- each time join() is called, it waits for the child to yield, and vice-versa.
-- in this example, 'something' only gets printed when the parent joins its child

Slightly more complex things can be implemented with yield/join: join() can take
a string as an argument, which is returned by the corresponding yield(). This
is useful to control branching in your children.
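A sketch of this pattern, based on the description above (the 'stop' command
string is illustrative, not part of the API):

-- child code: yield() returns whatever string the parent passed to join()
code = function()
   while true do
      local cmd = parallel.yield()
      if cmd == 'stop' then break end
      print('still running')
   end
end
c = parallel.fork()
c:exec(code)

-- parent code: let the child loop a few times, then tell it to exit
for i = 1,5 do
   c:join()       -- no argument: the child keeps looping
end
c:join('stop')    -- the child's yield() returns 'stop', and it breaks out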

Sometimes you might want to wait for a process to actually terminate (die), so that
you can start new ones. The proper way to do this is to use the sync() function,
which waits for the PID of that process to fully disappear from the OS. It also
clears the child from the parallel.children list, and decrements parallel.nchildren.
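For instance, based on the behavior described above, a short-lived child can be
reaped like this (a sketch, assuming exec() returns once the code is shipped):

c = parallel.fork()
c:exec(function() print('short-lived job') end)
c:sync()  -- blocks until the child's PID is gone from the OS; the child is
          -- removed from parallel.children and parallel.nchildren is decremented
-- at this point it is safe to fork a replacement process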

When creating a child (parallel.fork), a connection is established
to transfer data between the two processes. Two functions send() and receive()
can be used to efficiently transfer data between these processes. Any Lua type,
and all Torch7 types (tensor, storage, ...) can be transferred this way. The transmission
is efficient for numeric data, as serialization merely involves a binary copy and
some extra headers for book-keeping (see serialization in Torch7's manual).
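A minimal round-trip sketch, assuming the child can reach its parent through a
parallel.parent handle with the same send()/receive() methods:

-- child code: receive a tensor from the parent, transform it, send it back
code = function()
   local t = parallel.parent:receive()   -- blocks until the parent sends
   t:mul(2)
   parallel.parent:send(t)
end
c = parallel.fork()
c:exec(code)

-- parent code: ship a large tensor to the child and collect the result
c:send(torch.Tensor(1000):fill(1))
local result = c:receive()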

A convenient print function that prepends the process ID issuing the print:

> parallel.print('something')
<parallel#014> something

Last, but not least: always run your parent code in a protected call, to catch
potential errors, Ctrl+C, and the like, and terminate nicely. By terminating
nicely, I mean: killing all remote processes that you forked... If you don't
do so, you leave your remote machines (and potentially your own) with hanging
processes that are just waiting to receive data, and will not hesitate to get
back in business the next time you run your parent code :-)
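One way to structure this, using Lua's standard pcall() (the parallel.close()
cleanup call is an assumption about the API; substitute whatever teardown
routine kills your forked children):

-- wrap all parent logic in one function
parent = function()
   -- fork, exec, send/receive, join ... all the real work goes here
end

-- run it protected: errors and Ctrl+C land in err instead of aborting
local ok, err = pcall(parent)
if not ok then print('parent failed: ' .. tostring(err)) end

-- always reached: tear down every forked process, local or remote
parallel.close()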