> That really did fix it, George:
>
> # mpirun --prefix $MPIHOME -hostfile ~/testdir/hosts --mca btl
> tcp,self --mca btl_tcp_if_exclude ib0,ib1 ~/testdir/hello
> Hello from Alex' MPI test program
> Process 0 on dr11.lsf.platform.com out of 2
> Hello from Alex' MPI test program
> Process 1 on compute-0-0.local out of 2
>
> It never occurred to me that the headnode would try to communicate
> with the slave using infiniband interfaces... Orthogonally, what are

The problem here is that your IB IP addresses are
"public" (meaning that they fall outside the IETF-defined private
ranges from RFC 1918), so Open MPI assumes they can be used to
communicate with your back-end nodes over the IPoIB network. See this
FAQ entry for details:
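
As a quick way to check whether a given interface address falls inside the RFC 1918 private ranges (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16), you can use Python's standard `ipaddress` module; the addresses below are hypothetical placeholders, not your actual IPoIB addresses:

```python
import ipaddress

# Substitute the addresses assigned to your ib0/ib1 interfaces.
# These sample values are made up for illustration.
addrs = ["10.0.0.5", "192.168.1.10", "8.8.8.8"]

for a in addrs:
    ip = ipaddress.ip_address(a)
    # is_private covers the RFC 1918 ranges (among other
    # IANA special-purpose blocks).
    print(a, "private" if ip.is_private else "public")
```

If your IPoIB addresses come back "public" here, that matches the behavior described above: Open MPI will treat those interfaces as routable and try them, which is why excluding them via `--mca btl_tcp_if_exclude` (or renumbering them into a private range) fixes the hang.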