I'm trying to use MPI on a cluster running OpenMPI 1.2.4, with
processes started through PBSPro_11.0.2.110766. I've been running into
a couple of performance and deadlock problems and would like to check
whether I'm making a mistake.

I managed to boil one of the deadlocks down to the attached example. I
run it on 8 cores. It usually deadlocks with all processes except one
showing

start barrier

as last output.

The one remaining process shows:

start getting local

My question at this point is simply whether this is expected behaviour
from OpenMPI.