The Dacapo Fortran program prepends the environment variable DACAPOPATH to the pseudopotential filename
(if the file is not found in the current working directory).
Copy all pseudopotentials to a directory and set the DACAPOPATH environment variable to this directory:
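For example, assuming the pseudopotentials are collected under $HOME/psp (an illustrative location), the variable can be set like this in sh/bash:

```shell
# Illustrative location -- use any directory you like
mkdir -p "$HOME/psp"
# copy your pseudopotential files into it, e.g.:
# cp /path/to/pseudopotentials/* "$HOME/psp/"
export DACAPOPATH="$HOME/psp/"
```

Under csh/tcsh use setenv DACAPOPATH $HOME/psp/ instead.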

Dacapo can run in parallel using an MPI library.
You need to compile a parallel executable:

gmake <arch> MP=mpi

To get Dacapo to work in parallel with ASE you need to make a script,
dacapo.run, which must be executable and in your path.
dacapo.run is an example of such a script. This example uses
LAM/MPI and the Torque/PBS batch system.

If you do not use a batch system you can replace the line:

MACHINEFILE=$PBS_NODEFILE

with an explicit file containing the names of the nodes, one per line:
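For example (the hostnames node01..node03 below are hypothetical), such a machine file can be created and used like this:

```shell
# Create an explicit machine file; the hostnames below are hypothetical
cat > machinefile.txt <<EOF
node01
node02
node03
EOF
MACHINEFILE=machinefile.txt
```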

For Dacapo to run together with ASE you need a dacapo.run script in your path that
starts the correct Dacapo executable. This OpenMPI dacapo.run script assumes you are
running OpenMPI under the Torque/PBS batch system.

You may have to edit the location and names of the serial and parallel executables in this script,
i.e. the lines:

# Name of serial and parallel DACAPO executables
DACAPOEXE="dacapo_2.7.7.run"
DACAPOEXE_PAR="dacapo_2.7.7_mpi.run"

If OpenMPI is not installed under /usr, you will also have to change that path in the script.

Dacapo can be built on a large number of systems with many different compilers.
This portability has evolved over the years; the supported systems are listed by running make in the
top-level source directory.
Below we give specific instructions for some systems which we actively use at our site.

If you would like to contribute new entries to the Makefile, correct errors, or add complete instructions
for a new platform to the present Wiki page, please send an E-mail to support@fysik.dtu.dk.

If you would like to see how we install Dacapo etc. on our Niflheim Linux cluster and how we build RPM
packages, please consult the Niflheim Wiki.

If you see a relocation error such as:

relocation error: /usr/pgi/linux86/6.2/lib/libpthread.so.0: symbol _h_errno, version GLIBC_2.0 not defined in file libc.so.6 with link time reference

then it may well be a problem with the PGI compiler installation.
You can use ldd dacapo.run to examine which shared libraries are needed, and if libpthread.so.0
in the /usr/pgi tree is referenced, the recommended solution is to remove the soft-link
/usr/pgi/linux86/6.2/lib/libpthread.so.0 (or whatever version you have).

The Intel Fortran and C/C++ compilers are installed on Intel CPUs such as Pentium-4, Xeon, Itanium etc.
The Intel MKL library contains highly optimized BLAS subroutines, among many other things.
It is strongly recommended that you use the latest version of the Intel compilers,
since compiler bugs have in the past caused a number of problems for Dacapo and other software packages.

Get the latest source tar file from OpenMPI.
You must use the Intel C++ compiler version October 5, 2006 (build 44) or later, see
an OpenMPI FAQ.
(Note added: OpenMPI version 1.1.4 has a workaround for this bug, see the
Release Notes).

Build OpenMPI with support for Torque (installation in /usr/local) and the Intel compilers using:
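A configure line along these lines could be used; the Torque location (/usr/local) and the compiler names are assumptions for this sketch, so check ./configure --help for the exact flags of your OpenMPI version:

```shell
# Sketch of an OpenMPI build with Torque (TM) support and the Intel compilers;
# the --prefix and --with-tm paths here are assumptions
./configure --prefix=/usr/local --with-tm=/usr/local \
            CC=icc CXX=icpc F77=ifort FC=ifort
make all install
```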

You must use the Intel compiler version 9.1 (or later); older versions of the compiler
contain bugs that will cause a lot of trouble.

Check the Dacapo code out from CVS:

cvs checkout dacapo/src
cd dacapo/src

Edit the Makefile section named intellinux if you want to modify the compiler flags to generate optimal code for your
particular Intel CPU, where the -x flag controls the code generation (see man ifort):

INTELLINUX_OPT = -O3 -xN

You should also select the Intel MKL library for your specific Intel CPU architecture:
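As an illustration, the corresponding Makefile lines might look like the following for a 32-bit CPU. The variable names and library names here are assumptions, so check the intellinux section of the Makefile and your MKL documentation for the exact ones:

```makefile
# Hypothetical example -- adjust paths and library names to your MKL version
INTELLINUX_LIBDIR = /opt/intel/mkl/lib/32
INTELLINUX_BLASLAPACK = -lmkl_lapack -lmkl_ia32 -lguide -lpthread
```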

The default process stack-size limit may cause Dacapo to crash unexpectedly once the available
stack space is exhausted.
Therefore you must increase the resource limits before running Dacapo:
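With sh/bash this can be done with the ulimit builtin (under csh/tcsh the equivalent command is limit stacksize unlimited):

```shell
# Remove the per-process stack-size limit in the current shell (sh/bash)
ulimit -s unlimited
# verify the new limit
ulimit -s
```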