GROMACS

Building and Running GROMACS on Vesta/Mira

The GROMACS molecular dynamics package has a large number of executables. Some of them, such as g_luck, are utilities that do not need to be built for the back end.

Begin by building the serial version of GROMACS (i.e., the version that runs within one processor, with or without multiple threads) for the front end, and then build the parallel version (i.e., with MPI) for the back end. This way, a full set of executables is available for the front end and another full set for the back end. The steps below demonstrate how to build with the IBM [mpi]xl<c | cxx | f77>_r compilers and how to build with double precision enabled.
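For reference, a minimal sketch of pointing an autoconf-style build at these compilers (the wrapper names follow the [mpi]xl<c | cxx | f77>_r pattern above; verify the exact names on your system):

<your_prompt>export CC=mpixlc_r
<your_prompt>export CXX=mpixlcxx_r
<your_prompt>export F77=mpixlf77_r

For the serial front-end build, use the corresponding non-MPI variants.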

Step Zero:

=============

Download gromacs-4.5.5.tar.gz, untar/unzip it, and in the gromacs-4.5.5 directory create the following directories: BGP/fen, BGP/ben, and BGP/scaling.
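For example:

<your_prompt>tar -xzf gromacs-4.5.5.tar.gz
<your_prompt>cd gromacs-4.5.5
<your_prompt>mkdir -p BGP/fen BGP/ben BGP/scaling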

For BG/Q, download gromacs-4.6.1.tar.gz, untar/unzip it, and verify that CMake version 2.8.x or later is available in /soft/buildtools/cmake/ .
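GROMACS 4.6.x builds with CMake. A sketch of configuring an MPI build for the BG/Q back end, assuming the BlueGeneQ toolchain file shipped with GROMACS 4.6 (check the toolchain file name against your source tree; the install prefix here matches the run command used later, and an FFT library may also need to be supplied via CMAKE_PREFIX_PATH):

<your_prompt>mkdir build && cd build
<your_prompt>cmake .. -DCMAKE_TOOLCHAIN_FILE=BlueGeneQ-static-XL-C -DGMX_MPI=ON -DCMAKE_INSTALL_PREFIX=/your/path/to/gromacs-4.6.1/exe
<your_prompt>make
<your_prompt>make install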

Step One (Blue Gene/P only): Building the serial version of Gromacs for the front end nodes (fen)
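A minimal sketch of this step, assuming the standard GROMACS 4.5.5 autoconf build run out-of-tree from BGP/fen (the prefix and suffix flags produce the executable names described below; the install prefix is an example):

<your_prompt>cd BGP/fen
<your_prompt>../../configure --prefix=$PWD --program-prefix=BGP_fen_ --program-suffix=_serial_d --enable-double --without-x
<your_prompt>make
<your_prompt>make install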

Once you’ve finished building and installing, the various executables will be available in the installation’s bin directory. They may take some time to become visible on the filesystem.

NOTE: These executables may look unfamiliar because the BGP_fen_ program prefix has been added to indicate that the executables are for the BG/P fen. The _serial_d program suffix denotes that the executables were built without MPI and with double precision enabled.

To confirm that the executables have been built correctly, select one, such as BGP_fen_g_luck_serial_d, and type the following at the prompt:
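<your_prompt>./BGP_fen_g_luck_serial_d

If the build is sound, it should print a short quotation and exit cleanly.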

Change to the BGP/scaling/d.dppc directory and edit the grompp.mdp file, increasing nsteps from 50000 to 150000 time steps.
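After the edit, the relevant line in grompp.mdp should read (the rest of the file is unchanged):

nsteps = 150000 ; run for 150000 time steps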

Next, run BGP_fen_grompp_serial_d from your prompt to generate the topol.tpr file:

<your_prompt>./BGP_fen_grompp_serial_d -v

Now you are set to run the BGP_mdrun_mpi_d executable on 128, 256, 512, and 1024 cores (not nodes) in vn mode on the BG/P.
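In vn mode, the BG/P runs four MPI ranks per node, so the core count is four times the node count. The 128-core command is given below; the larger runs follow the same pattern, changing only the -n and --proccount values (a sketch):

<your_prompt>qsub -t 60 -n 64 --proccount 256 --mode vn -A <your_project> --env GMX_MAXBACKUP=-1 /your/path/to/gromacs-4.5.5/BGP/ben/bin/BGP_mdrun_mpi_d
<your_prompt>qsub -t 60 -n 128 --proccount 512 --mode vn -A <your_project> --env GMX_MAXBACKUP=-1 /your/path/to/gromacs-4.5.5/BGP/ben/bin/BGP_mdrun_mpi_d
<your_prompt>qsub -t 60 -n 256 --proccount 1024 --mode vn -A <your_project> --env GMX_MAXBACKUP=-1 /your/path/to/gromacs-4.5.5/BGP/ben/bin/BGP_mdrun_mpi_d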

While in the BGP/scaling/d.dppc directory, issue the following command from your prompt:

<your_prompt>qsub -t 60 -n 32 --proccount 128 --mode vn -A <your_project> --env GMX_MAXBACKUP=-1 /your/path/to/gromacs-4.5.5/BGP/ben/bin/BGP_mdrun_mpi_d

On BG/Q, the command line is:

<your_prompt>qsub -t 60 -n 8 --mode c16 -A <your_project> --env OMP_NUM_THREADS=4 /your/path/to/gromacs-4.6.1/exe/bin/mdrun

Note that the A2 core of the BG/Q has four hardware threads, so a pure MPI run does not deliver acceptable performance; setting OMP_NUM_THREADS=4 means that four OpenMP threads are used per MPI rank.

Once the job runs, take a look at the md.log file. On 128 cores, after 150,000 time steps, you should see something like the following in the last 100 (or so) lines of your md.log file: