Not taking the time to analyze this thing too closely ... what you need to do is to fork off a controlled number of worker threads, each of which (by means of a properly locked, shared variable) fetches “the next chunk” of data and sends it. All of the threads do this until they discover that there are no more chunks to send, at which point they all terminate. When all of the children have terminated, the parent process ends.

The two control parameters for this process will be ... how many children do you want to fork, and how many rows of data do you want each of them to send at one time?

Notice that the children, once spawned, are persistent until the entire job is done. At the top of the loop, they must lock the shared variable, test its value, and then either “unlock and exit” if the job is done, or “increment the value and unlock” if it is not. They retrieve, package, and send their chunk, then loop back to do it again. (There are several ways to do it, many packages to choose from, etc. Only the concept is what I’m driving at here...) Taken together, it is fairly unpredictable which one of the threads will send the “next” chunk, but, working together as a team for however long it takes, they will get the job done.
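The loop above can be sketched in a few lines. This is a minimal illustration only, assuming the "chunks" are row ranges of a 1000-row job, sent 100 rows at a time by 4 workers; the names, the sizes, and the `sent` list (a stand-in for actually packaging and sending the data) are all made up for the example, not from the original post.

```python
import threading

TOTAL_ROWS = 1000   # how much work there is (assumed)
CHUNK_SIZE = 100    # rows each worker claims per trip (assumed)
NUM_WORKERS = 4     # how many children to spawn (assumed)

next_row = 0                  # the shared "next chunk" variable
lock = threading.Lock()
sent = []                     # stand-in for sending the chunk somewhere

def worker():
    global next_row
    while True:
        # lock, test, and either "unlock and exit" or "increment and unlock"
        with lock:
            if next_row >= TOTAL_ROWS:
                return        # no more chunks; `with` releases the lock on exit
            start = next_row
            next_row += CHUNK_SIZE
        # retrieve, package, and send rows [start, start + CHUNK_SIZE)
        sent.append((start, min(start + CHUNK_SIZE, TOTAL_ROWS)))

threads = [threading.Thread(target=worker) for _ in range(NUM_WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Which worker grabs which chunk is unpredictable, but every chunk is claimed exactly once, because the test-and-increment happens entirely inside the lock.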

What do you recommend for locked, shared variables? That's a part of IPC I haven't gotten into yet. Another idea I had was to get the number of database records that need to be processed (one way or another; I'm not concerned about efficiency until I get something working), divide it by the number of workers, compute the offsets, and THEN fire up the children.
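That up-front split is easy to sketch too. A hypothetical helper (the name `partition` and its shape are mine, just for illustration) that turns a row count into per-worker (offset, count) slices, spreading any remainder across the first few workers:

```python
def partition(total_rows, num_workers):
    """Split total_rows into num_workers contiguous (offset, count) slices."""
    base, extra = divmod(total_rows, num_workers)
    slices, offset = [], 0
    for i in range(num_workers):
        count = base + (1 if i < extra else 0)  # first `extra` workers get one more
        slices.append((offset, count))
        offset += count
    return slices
```

For example, `partition(10, 3)` yields `[(0, 4), (4, 3), (7, 3)]`. The trade-off versus the shared-counter scheme is that the slices are fixed before the children start, so if one worker runs slow, the others finish their slices and sit idle instead of helping.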

I think your method is a bit more solid though, so I'll look into that.

Thanks!

Three thousand years of beautiful tradition, from Moses to Sandy Koufax, you're god damn right I'm living in the fucking past