Darshan

Darshan is a lightweight "scalable HPC I/O characterization tool". It profiles the I/O of MPI-based programs by emitting log files to a consistent location for systems administrators, and it provides scripts for creating summary PDFs that characterize that I/O.

Availability and Restrictions

Versions

The following versions of Darshan are available on OSC clusters:

Version     Owens   Pitzer
3.1.2       X
3.1.4       X
3.1.5-pre1  X
3.1.5               X
3.1.6       X*      X*

* Current default version

You can use module spider darshan to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.
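For example (the version string below is taken from the table above):

module spider darshan          # list the Darshan modules available on this cluster
module spider darshan/3.1.6    # show how to load a specific version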

Access

Darshan is available to all OSC users without restriction.

Publisher/Vendor/Repository and License Type

MCSD, Argonne National Laboratory, Open source

Usage

Usage on Owens

Setup on Owens

To configure the Owens cluster for Darshan, use the following command:

module load darshan

Darshan is only supported for particular compiler and MPI implementations; use module spider darshan to see which combinations are available on Owens.

Batch Usage on Owens

If you have an MPI-based program, the syntax is as simple as:

# basic call to darshan
mpiexec.darshan [args] ./my_mpi_program
# to show evidence that Darshan is working and to see internal timing
mpiexec.darshan.timing [args] ./my_mpi_program

An Example of Using Darshan with MPI-IO

Below is an example batch script (mpiio_with_darshan.qsub) for understanding MPI-IO; see this resource for a detailed explanation: http://beige.ucs.indiana.edu/I590/node29.html. The C program examples have each MPI task write sequentially to the same file at different offsets. A serial version (one processor writing to one file) is included and timed for comparison. Because the files generated here are large scratch files, there is no need to retain them.
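The full script is not reproduced here, but a minimal sketch of what such a script might contain on Owens follows; the PBS directives, core count, and source file name are illustrative assumptions, not the actual contents of mpiio_with_darshan.qsub:

#PBS -N mpiio_with_darshan
#PBS -l nodes=1:ppn=28         # one full Owens node; core count is an assumption
#PBS -l walltime=00:15:00
#PBS -j oe

cd $PBS_O_WORKDIR
module load darshan

# compile the MPI-IO example (source file name is an assumption)
mpicc -O2 mpiio_example.c -o mpiio_example

# run under Darshan from local scratch so the large files are discarded with the job
cd $TMPDIR
mpiexec.darshan $PBS_O_WORKDIR/mpiio_example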

In order to run it via the batch system, submit the mpiio_with_darshan.qsub file with the following command:

qsub mpiio_with_darshan.qsub
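After the job completes, the utilities that ship with Darshan can turn the resulting log into the summary PDF mentioned above; the log path and file name below are illustrative, since Darshan writes its logs to a system-wide location:

# generate a summary PDF from the job's Darshan log
darshan-job-summary.pl /path/to/darshan-logs/username_mpiio_example_id123456.darshan
# or dump the raw I/O counters as text
darshan-parser /path/to/darshan-logs/username_mpiio_example_id123456.darshan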

Usage on Pitzer

Setup on Pitzer

To configure the Pitzer cluster for Darshan, use the following command:

module load darshan

Darshan is only supported for the following compiler and MPI implementations:

intel/18.0.3 mvapich2/2.3
intel/18.0.4 mvapich2/2.3
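For example, to use Darshan with the first combination listed above:

module load intel/18.0.3 mvapich2/2.3
module load darshan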

Batch Usage on Pitzer

Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems.

If you have an MPI-based program, the syntax is as simple as:

# basic call to darshan
mpiexec.darshan [args] ./my_mpi_program
# to show evidence that Darshan is working and to see internal timing
mpiexec.darshan.timing [args] ./my_mpi_program

An Example of Using Darshan with MPI-IO

Below is an example batch script (mpiio_with_darshan.qsub). The C program examples have each MPI task write sequentially to the same file at different offsets. A serial version (one processor writing to one file) is included and timed for comparison. Because the files generated here are large scratch files, there is no need to retain them.
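As with the Owens example, submit the script to the batch system (assuming the same qsub-based workflow applies on Pitzer):

qsub mpiio_with_darshan.qsub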