> Before MPI (unfortunately) came to dominate message-passing, PVM was
> the standard library used. PVM is designed for heterogeneous
> systems. For example I have a code that uses MPI internally on both a
> Cray T3E and also a Fujitsu Vector Processor but which uses PVM to
> communicate between the two big machines.
Despite the implication above that MPI is inferior to PVM in its
support of heterogeneous systems, the MPI standard _was_ designed
for heterogeneous systems. A conforming MPI program provides enough
information on both the send and the receive to allow the MPI
implementation to translate data between machine formats (without
requiring a function call per data element to achieve it, as PVM
used to do!).
The issue which is likely preventing you from exploiting this is that
of _starting_ MPI processes on these two different machines and
exploiting the vendor optimised MPI on both of them. Since Cray has no
incentive to make their MPI handle a Fujitsu VPP, and Fujitsu has no
incentive to make their MPI handle a Cray T3E, interoperability of
_vendor optimised_ MPIs is limited. (Though, of course, your Quadrics'
MPI will work in an optimised fashion with the Fujitsu VPP and T3E, I
expect :-)
However, if you're prepared to use a portable MPI such as MPICH, then
you can easily handle heterogeneous machines inside a single
program. (See the Globus/MPI work, for instance). I have also seen
work which used the MPI profiling interface to wrap a vendor MPI so
that it would inter-operate with a portable MPI.
So, in summary:
1) The MPI specification fully supports heterogeneity.
2) There are MPI implementations which support heterogeneity.
3) You're living in another universe if you think that vendors will
spend any time making their MPI implementations inter-operate
off-box with their competitors, rather than tweaking their on-box
performance in the hope of wiping out said competitors!
-- Jim
James Cownie <jcownie at etnus.com>
Etnus, LLC. +44 117 9071438
http://www.etnus.com