and then, after seeing that work, edit the submit script to have as many jobs as there are input files:

condor_submit scripts/spinn3r-transform.submit
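As a rough sketch of what that submit file might look like (the executable, log, and input paths here are assumptions, not the actual contents of scripts/spinn3r-transform.submit), condor's "queue ... matching files" syntax creates one job per matching input file:

```
# sketch of a submit file; paths and names are hypothetical
universe   = vanilla
executable = scripts/spinn3r-transform.sh
arguments  = $(input_file)
output     = logs/$(Cluster).$(Process).out
error      = logs/$(Cluster).$(Process).err
log        = logs/spinn3r-transform.log

# one job per input file matching the pattern
queue input_file matching files input/*.gz
```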

There is one key problem with this, which we discussed on the phone: when a job dies, it starts over on the input list. Let's discuss using the zookeeper "task_queue" stage.

running on task_queue: zookeeper --------------------------------

To use the zookeeper task queue, you must install zookeeper on a computer that your cluster can access. Here is an example zookeeper config:

# The number of milliseconds of each tick
tickTime=10000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
dataDir=/var/zookeeper
# the port at which the clients will connect
clientPort=2181
server.0=localhost:2888:3888
maxClientCnxns=2000

Note the large maxClientCnxns for running with many nodes in condor, and also note the 10-second tickTime, which is needed to avoid frequent session timeouts from condor slots that are working hard.
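To make the failure semantics concrete, here is a sketch of what the worker side of a zookeeper-backed queue can look like, using the kazoo Python client's LockingQueue recipe (the znode path /task_queue, the localhost address, and the processing function are all placeholders, and this assumes a live zookeeper at the clientPort configured above). Each fetched item is held under a lock, so a job that dies mid-item only loses that one item back to the queue instead of restarting the whole input list:

```python
from kazoo.client import KazooClient
from kazoo.recipe.queue import LockingQueue

def process_input_file(name):
    # placeholder for the real per-file transform step
    print("processing", name)

# connect to the zookeeper server configured above (clientPort=2181)
client = KazooClient(hosts="localhost:2181")
client.start()

# LockingQueue locks each fetched item; if this worker dies before
# consume(), the lock is released and another worker can take the item
queue = LockingQueue(client, "/task_queue")

while True:
    item = queue.get(timeout=30)   # None after 30s with nothing to fetch
    if item is None:
        break
    process_input_file(item.decode("utf-8"))
    queue.consume()                # acknowledge: remove the item for good

client.stop()
```

A driver script would load the queue once up front, e.g. with queue.put(b"input/part-00001.gz") for each input file, and the condor jobs then drain it concurrently.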