but of course that doesn't work (it hangs, unable to pickle something).

The example here is trivial, but by the time you add multiple "process" functions, each of which depends on multiple additional inputs... well, it all becomes a bit reminiscent of something written in BASIC 30 years ago. Using classes to at least aggregate the state with the appropriate functions seems an obvious solution, but it doesn't seem to be that easy in practice.

Is there some recommended pattern or style for using multiprocessing.Pool which will avoid the proliferation of global state to support each function I want to parallel-map over?

How do experienced "multiprocessing pros" deal with this?

Update: note that I'm actually interested in processing much bigger arrays, so variations on the above which pickle src on each call/iteration aren't nearly as good as ones which fork it into the pool's worker processes.
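For concreteness, here is a minimal sketch of the fork-friendly shape the update is asking for: the big data is handed to each worker once (via a pool initializer, or inherited on fork when the pool is created after the data exists), so only small indices and results cross the process boundary per call. All names here (src, init_worker, process, run) are illustrative, not from the original code, and a plain list stands in for the large array.

```python
from multiprocessing import Pool

src = None  # worker-global state, populated once per worker process

def init_worker(data):
    # Runs once in each worker: stash the big array in module globals
    # so later process() calls can read it "for free".
    global src
    src = data

def process(i):
    # Only the small index i and the result are pickled per task;
    # src itself never crosses the process boundary again.
    return src[i] * 2

def run(n=100):
    data = list(range(n))  # stand-in for a much larger array
    with Pool(initializer=init_worker, initargs=(data,)) as pool:
        return pool.map(process, range(n))
```

Under a fork start method the initializer's argument is inherited cheaply; under spawn it is pickled once per worker rather than once per call, which is still a large improvement for big arrays.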

I'm not an experienced multiprocessing pro or anything, but let me ask: why can't you simply do pool.map(process, product([src], range(100))) and change the process function to accept both variables as args? Is this highly inefficient too?
– luke14free Apr 14 '12 at 9:36
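The comment's suggestion can be sketched as follows (process, src, and run are illustrative names). Each task tuple produced by product carries its own reference to src, so src is serialized with every task sent to the pool:

```python
from itertools import product
from multiprocessing import Pool

def process(args):
    # pool.map passes each (src, i) tuple as a single argument;
    # src is pickled along with every task.
    src, i = args
    return src[i] * 2

def run(n=10):
    src = list(range(n))
    with Pool() as pool:
        return pool.map(process, product([src], range(n)))
```

This works, but repeatedly pickling src is exactly the cost the question's update wants to avoid for large arrays.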

@luke14free: Yes, that'd pickle the src array over for every call, and I'm actually interested in much bigger data/arrays than those in the sample code above, so it's not ideal. With a process pool, whatever state is set up at the point the pool is created is forked into the worker processes and available for them to read "for free". The idea would help avoid putting more minor "control variable" state (e.g. flags) into globals though, thanks.
– timday Apr 14 '12 at 9:48

Yup, works very nicely, thanks; bye-bye globals. Normally I'd wait longer before accepting a solution to see if anything else turns up, but this is perfect. I'd tried classes for this problem before and had no success; it seems that being callable makes all the difference.
– timday Apr 14 '12 at 10:31

Wouldn't it pickle the callable, leaving you back at square one?
– v1v3kn Jul 15 at 23:48