It turns out that --host and --hostfile act as a filter on which
nodes to run on when you are running under SGE. So, listing them
several times does not affect where the processes land. However, this
still does not explain why you are seeing what you are seeing. One
thing you can try is to add this to the mpirun command:

-mca ras_gridengine_verbose 100

This will provide some additional information as to what Open MPI is
seeing as nodes and slots from SGE. (Is there any chance that node0002
actually has 8 slots?)
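
For example, assuming placeholder hostnames and an a.out binary (not
taken from your setup), the full command might look like:

  mpirun -np 4 -mca ras_gridengine_verbose 100 --host node0001,node0002 ./a.out

The extra output shows the nodes and slot counts that Open MPI
received from SGE, which should tell us whether node0002 is reporting
more slots than you expect.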

I just retried this on my cluster of two-CPU SPARC Solaris nodes. When
I run with np=2, both MPI processes land on a single node, because that
node has two slots. When I go up to np=4, the extra processes move on
to the other node. So --host acts as a filter on where they can run.
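
As a sketch, those two runs looked roughly like this (the hostnames
and the a.out binary are placeholders):

  mpirun -np 2 --host hostA,hostB ./a.out   (both ranks land on hostA)
  mpirun -np 4 --host hostA,hostB ./a.out   (two ranks on each node)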

As for "IB bonding", I do not know exactly what that means.
Open MPI does stripe messages over multiple IB interfaces, so I think
the answer is yes.
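
If you want to be sure the IB transport is being used, you can select
the BTLs explicitly. A minimal sketch (sm and self handle on-node and
loopback traffic; a.out is a placeholder):

  mpirun -np 4 --mca btl openib,self,sm ./a.out

With more than one active IB port per node, Open MPI stripes large
messages across them automatically, so no separate bonding setup is
needed on the Open MPI side.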

Rolf

PS: Here is what my np=4 job script looked like. (I just changed it to
np=2 for the other run.)
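
Roughly, it followed the usual tightly integrated SGE pattern; in this
sketch the parallel environment name "orte", the job name, and the
a.out binary are placeholders:

  #!/bin/sh
  #$ -S /bin/sh
  #$ -N mpitest
  #$ -pe orte 4
  #$ -cwd
  mpirun -np 4 ./a.out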