All our nodes are connected to one LAN, but half of them also
have an interface on a second, private LAN. If the first
Open MPI process of a job starts on one of the dual-homed nodes
and a second process of the same job starts on a single-homed
node, the job hangs in MPI_Bcast. It works if the first process
is on a single-homed node, and for all other combinations.
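For reference, the kind of program that hangs is as simple as this
(a minimal sketch using the standard MPI C API; the hang appears
in the MPI_Bcast call, not in init or rank queries):

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)
        value = 42;  /* root supplies the value to broadcast */

    /* This is where the job hangs when rank 0 lands on a
     * dual-homed node and another rank on a single-homed one. */
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

    printf("rank %d got %d\n", rank, value);
    MPI_Finalize();
    return 0;
}
```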

It also works if I disable the private interface. Otherwise there
are no network problems: I can ping any host from any other, and
Open MPI programs that don't call MPI_Bcast run fine.

My guess is that this has something to do with Open MPI passing IP
addresses between processes. Is there a setting I can use to
override which interfaces it tries?
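From what I've read, Open MPI's TCP interface selection can be
restricted with MCA parameters, so I was considering something like
the following. Here "eth0" is just a placeholder for whatever the
shared LAN interface is called on our nodes, and "bcast_test" is a
stand-in for the actual program:

```shell
# Restrict both MPI point-to-point traffic (btl_tcp) and the
# runtime's out-of-band channel (oob_tcp) to the shared interface.
mpirun --mca btl_tcp_if_include eth0 \
       --mca oob_tcp_if_include eth0 \
       -np 2 --hostfile hosts ./bcast_test
```

Alternatively, btl_tcp_if_exclude could name the private interface
instead, leaving everything else usable. Would either of these be
the right way to do it?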