Specifically, you can launch Open MPI's mpirun in an interactive
SLURM allocation (via the salloc command), submit a script to SLURM
(via the sbatch command), or "directly" launch MPI executables via
srun.

Open MPI automatically obtains both the list of hosts and how many
processes to start on each host from SLURM directly. Hence, it is
unnecessary to specify the --hostfile, --host, or -np options to
mpirun. Open MPI will also use SLURM-native mechanisms to launch
and kill processes (rsh and/or ssh are not required).

For example:

# Allocate a SLURM job with 4 nodes and start a shell within it
shell$ salloc -N 4 sh
# Now run an Open MPI job on all the nodes allocated by SLURM
# (Note that you need to specify -np for the 1.0 and 1.1 series;
# the -np value is inferred directly from SLURM starting with the
# v1.2 series)
shell$ mpirun my_mpi_application

This will run 4 MPI processes on the nodes that were allocated by
SLURM. Equivalently, you can do this:
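# One equivalent form: give the command directly to salloc, which runs
# it inside the allocation and releases the nodes when it finishes
shell$ salloc -N 4 mpirun my_mpi_application

Or, to run the same thing as a non-interactive batch job, put the
mpirun invocation in a script and submit it with sbatch. A minimal
sketch (my_script.sh is just a placeholder name):

shell$ cat my_script.sh
#!/bin/sh
mpirun my_mpi_application
shell$ sbatch -N 4 my_script.sh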

2. Can I launch Open MPI applications directly with srun (i.e., without mpirun)?

Yes, if you have configured OMPI --with-pmi=foo, where foo is the path
to the directory where pmi.h/pmi2.h is located. Slurm (> 2.6, > 14.03)
installs PMI-2 support by default.

Older versions of Slurm install PMI-1 by default; if you want PMI-2
with those versions, you must install that support manually. When the
--with-pmi option is given, OMPI will automatically determine if PMI-2
support was built and use it in place of PMI-1.
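
For illustration, a sketch of the build and launch steps (the
/opt/slurm path and my_mpi_application are placeholders; point
--with-pmi at the directory containing your Slurm's pmi.h/pmi2.h as
described above):

shell$ ./configure --with-pmi=/opt/slurm
shell$ make all install
# With PMI support compiled in, srun can start the MPI processes directly:
shell$ srun -n 4 my_mpi_application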

3. I use SLURM on a cluster with the OpenFabrics network stack. Do I need to do anything special?