MPI (Message Passing Interface) is the preferred method for parallel programming at MCSR. MPI is a vendor-independent API for breaking a program into components that run concurrently, each on its own processor (with its own memory), while coordinating their progress by passing messages among the processes. These messages are delivered by the library routines of the installed MPI implementation. There are several such MPI implementations on the market--some free and some commercial. MCSR employs the free MPICH implementation of MPI, developed by Argonne National Laboratory and Mississippi State University, in all of our parallel computing environments: redwood and mimosa. On mimosa, PGI MPI is available as well. See the May 2005 Parallel-O-Gram article "PGI Versions on Mimosa" for useful tips on navigating the various PGI versions installed on mimosa.
On redwood, SGI's implementation of MPI (MPT) is also available.

If you have a Fortran or C research application that could take advantage of the parallel processing capabilities of one parallel computer (such as sweetgum), and you would like it to port easily to another parallel environment (such as mimosa), MPI is the way to go. First, get your application working serially in C or Fortran 77. Then add the MPI library calls, restructuring the source code as necessary, and recompile the program on the target platform using one of the MPICH or PGI compilers. If you later need to move it, say from mimosa to sweetgum, just recompile the same code on the next MPI-enabled platform, and it should be ready to run. Submit the job to PBS, requesting the desired number of processors and using the language-appropriate mpirun to run the executable. If you have problems porting your code to an MCSR MPI platform, please contact the MCSR consulting staff.

(*) Those using the PGI compilers will need to include the following link options: -lmpich (C) and -lfmpich (Fortran). You will also need to ensure that the environment variable PGI is set to "/usr/local/apps/pgi-7.2/". (This should be set by default by the system login scripts, unless you override it in your own login scripts, such as your .bashrc file.)
Failure to have this variable set correctly may result in error messages from PBS saying that mpi.args cannot be found (if compiling from PBS).
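A quick way to catch the mpi.args problem before a long PBS wait is to check the variable up front. The sketch below assumes bash and uses the path quoted above; check_pgi is a hypothetical helper name, not an MCSR-provided command.

```shell
#!/bin/bash
# Sanity check for the PGI environment variable before compiling under PBS.
# The expected path is the one given in the article.
check_pgi() {
    if [ "${PGI:-}" = "/usr/local/apps/pgi-7.2/" ]; then
        echo "PGI environment OK"
    else
        echo "warning: PGI is '${PGI:-unset}'; PBS compiles may fail to find mpi.args" >&2
        return 1
    fi
}

# Example: with the variable set as the login scripts should leave it,
# the check passes.
PGI=/usr/local/apps/pgi-7.2/ check_pgi
```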

Step 1: Choose which platform (mimosa, sequoia, or redwood) you want to use.
Step 2: Choose which language you will use (C, C++, Fortran 77, Fortran 90, or Fortran 95) and the corresponding compiler.
Step 3: Compile your program using the appropriate MPI-capable compiler
and the appropriate MPI link syntax for that compiler, such as
-lmpi (for the SGI MPT C library), -lmpi -lmpi++ (for the SGI MPT C++ library), or -Mmpi=mpich (for PGI).
Debug any compiler errors and repeat until you have an executable a.out file.
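As a concrete illustration of Step 3, the commands below sketch what those link options look like on the command line. The source file names are hypothetical, the compiler driver names (cc, CC, pgcc, pgf77) follow common SGI and PGI conventions, and the commands only run on the MCSR machines themselves.

```shell
# On redwood (SGI MPT):
cc  -o myprog myprog.c  -lmpi            # C (hypothetical source file)
CC  -o myprog myprog.C  -lmpi -lmpi++    # C++

# On mimosa (PGI):
pgcc  -o myprog myprog.c -Mmpi=mpich     # C
pgf77 -o myprog myprog.f -lfmpich -lmpich  # Fortran 77, using the link
                                           # options noted in (*) above
```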
Step 4: Write a PBS batch script to run your program on multiple processors by
invoking the mpirun program, where the "-np" option specifies the number
of processors to use. Use the nodes= PBS resource option to tell PBS how
many nodes to allocate. (Make sure the number you use is the same as the
-np argument to mpirun.)
mpirun -np 4 a.out
Step 5: Submit your script to PBS using qsub:
qsub yourfile.pbs
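Putting Steps 4 and 5 together, a minimal batch script might look like the sketch below. The job name, walltime, and script file name are placeholders; only the nodes=/-np pairing and the mpirun line come from the steps above.

```shell
#!/bin/bash
# yourfile.pbs -- minimal PBS script sketch for a 4-processor MPI run.
#PBS -N mpi_example          # job name (placeholder)
#PBS -l nodes=4              # must match the -np argument below
#PBS -l walltime=01:00:00    # placeholder walltime

cd "$PBS_O_WORKDIR"          # run from the directory qsub was invoked in

mpirun -np 4 a.out
```

Submitted with qsub yourfile.pbs, PBS allocates four nodes and mpirun starts a copy of a.out on each.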

To schedule an MPI workshop for a small group, either on the UM campus, or on
your remote Mississippi campus, please e-mail us.
Meanwhile, you might try the
Introduction to MPI online tutorial at WebCT-HPC, sponsored by the National
Computational Science Alliance Partners for Advanced Computational Services.