CPMD at NSC


Level of support

Tier 2: NSC has only limited experience with this software, but we will try to help as much as possible. We have run some tests, for example using the test suite if the program comes with one, but they may be far from exhaustive. We will try to install and test new versions as soon as we can.

Please see the page describing our software support categories for more information.
You can also contact support@nsc.liu.se for further information.

The CPMD code is a parallelized plane wave / pseudopotential implementation of Density Functional Theory, particularly designed for ab-initio molecular dynamics.

The CPMD installations at NSC are generally done by Johan Raber (raber@nsc.liu.se).

Licensing

In order to use CPMD you need a CPMD license. This is (as of April 2013) free for non-profit purposes.

In order to use the CPMD binaries provided by NSC, you need to inform NSC that you have a valid CPMD license. You can do so by sending an email to support@nsc.liu.se stating that you have a valid CPMD license. We will then give you access to our CPMD binaries.

How to run

You will need to provide a valid CPMD license to get access to the CPMD binary. Getting such a license typically requires a registration at www.cpmd.org. E-mail support@nsc.liu.se to be added to the correct unix group, thereby getting access.
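
Once you have been added to the group, you can check that the change has taken effect from a login node (you may need to log in again for new group memberships to show up; the exact name of the CPMD group is site-specific):

@triolith: $ groups   # lists the unix groups your account belongs to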

Provided you have a valid CPMD input file, here’s how you can run it from an interactive session:

@triolith: $ interactive -N2 --exclusive -t 00:30:00 #This drops you in a shell on an allocated node 
@node: $ module load cpmd/3.15.3
@node: $ export CPMD_FILEPATH=${SNIC_TMP}
@node: $ cpmd.run input > output # Optionally add a path to pseudo potential libraries after the input file
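
For example, if you keep your own pseudopotential library in a directory of your choice (the path below is only an illustration), the optional second argument tells cpmd.run where to find it:

@node: $ cpmd.run input /proj/myproject/pseudopotentials > output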

Running it in batch mode is very similar; an example is available below. For optimal performance, it is advised that you run your job on the node-local scratch disks, most conveniently by setting at runtime

export CPMD_FILEPATH=${SNIC_TMP}

as indicated in the example above. You can alternatively specify the directory in your input file using the FILEPATH directive, but this is much less convenient and is discouraged unless you have an absolute need to set it in the input file (it is hard to imagine why you would).
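
If you do need to set it in the input file, the FILEPATH keyword goes in the &CPMD section with the directory on the following line, roughly as in the sketch below (keyword placement as described in the CPMD manual; the path is just an example, and note that you would have to hard-code a directory rather than use ${SNIC_TMP}):

&CPMD
  MOLECULAR DYNAMICS
  FILEPATH
   /path/to/scratch
&END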

Putting large output files under ${SNIC_TMP} in a job means you must copy them back at the end of the job; otherwise they will be removed by the cleanup script that runs between jobs. An example of how to accomplish this is shown in the example batch job script below.

Example batch script

A batch script for running CPMD at NSC may look like this:

#!/bin/bash
#SBATCH -N 2
#SBATCH -t 4:00:00
#SBATCH -J jobname
#SBATCH --exclusive
#SBATCH -A SNIC-xxx-yy

WD=$(pwd)

module load cpmd/3.15.3
export CPMD_FILEPATH=${SNIC_TMP} # Run on the node-local scratch disk as recommended above
#Trap SIGTERM and copy the larger files from $CPMD_FILEPATH if the job hits the walltime limit
trap 'cp ${CPMD_FILEPATH}/* ${WD}/; echo "SIGTERM was trapped"' SIGTERM

#export OMP_NUM_THREADS=2 # Uncomment if wanted
cpmd.run input > output

exit_status=$?

cp ${CPMD_FILEPATH}/* ${WD}/

if (( $exit_status != 0 )); then
    echo "CPMD exited with error status $exit_status"
else
    echo "CPMD exited successfully."
fi

exit $exit_status

(Note that you should at least edit the jobname and the account number before submitting.)
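
Assuming the script is saved as e.g. cpmd.sh (the file name is arbitrary), it is submitted and monitored with the usual Slurm commands:

@triolith: $ sbatch cpmd.sh
@triolith: $ squeue -u $USER   # check the status of your jobs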

The runscript (cpmd.run) can run hybrid MPI/OpenMP jobs if you set the OMP_NUM_THREADS environment variable to a value higher than one, e.g.

export OMP_NUM_THREADS=2 # Advisory: Don't go beyond two unless you really have to

The runscript automatically adjusts the parameters needed to launch the parallel job optimally by inferring them from the environment variables set in the job environment, so there should normally be no need to write your own launcher.
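
As an illustration, a hybrid variant of the batch script above would only differ in the OMP_NUM_THREADS setting; the assumption here is that cpmd.run then starts correspondingly fewer MPI ranks per node, as implied by the automatic inference described above:

module load cpmd/3.15.3
export CPMD_FILEPATH=${SNIC_TMP}
export OMP_NUM_THREADS=2  # two OpenMP threads per MPI rank (assumed to be picked up by cpmd.run)
cpmd.run input > output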

Visualizing output from CPMD

The output from CPMD can be viewed with the GUIs VMD, Avogadro or Gabedit, all available via the module system. For Avogadro and Gabedit you may need to post-process some output file(s) to produce so-called cube files; see the CPMD home page. Should you wish to produce ray-traced images and movies from the Avogadro or Gabedit interfaces, you will also need to load a povray module. You will additionally need to run with X forwarding or the VNC solution currently installed; NSC recommends the VNC solution.
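
A typical workflow, sketched here with assumed module and file names (check module avail for what is actually installed), is to convert a density or wavefunction file written by CPMD to cube format with the cpmd2cube utility distributed with CPMD, if it is included in the installation, and then open the resulting file in one of the GUIs:

@triolith: $ module load cpmd/3.15.3
@triolith: $ cpmd2cube.x DENSITY   # writes a Gaussian cube file, if the utility is part of the installation
@triolith: $ module load vmd       # assumed module name; check "module avail vmd"
@triolith: $ vmd DENSITY.cube      # adjust to the actual file name produced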

Known problems

  • Hybrid OpenMP/MPI calculations are typically only about half as fast as MPI-only calculations, but may allow you to reduce memory requirements. It is unlikely to be beneficial to go beyond four (4) OpenMP threads in this case.