Quantum Espresso installations at NSC


Directory list:

Version   Description
5.0       Quantum Espresso 5.0
5.0.1     Quantum Espresso 5.0.1
5.0.2     Quantum Espresso 5.0.2
5.0.3     Quantum Espresso 5.0.3

Level of support

Tier 2: NSC has only limited experience with this software, but we will try to help as much as possible. We have run some tests, for example if the program comes with a test suite, but they may be far from exhaustive. We will try to install and test new versions as soon as we can.

Please see the page describing our software support categories for more information.
You can also contact support@nsc.liu.se for further information.

The Quantum Espresso installations are generally maintained by Peter Larsson (pla@nsc.liu.se).

How to run

Launch the desired QE binary with “mpprun”:

mpprun /software/apps/espresso/5.0/build02/bin/pw.x -input file.in

If you load the espresso/5.0 module, an environment variable called BIN_DIR will be set for you, which can be used to shorten the command above and make it more flexible:

mpprun $BIN_DIR/pw.x -input file.in

The path to the default pseudopotential directory included with Quantum Espresso is stored in the $PSEUDO_DIR environment variable.
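
For example, a short sketch using the module (assuming espresso/5.0; file.in is your own input file):

module load espresso/5.0
ls $PSEUDO_DIR                        # list the pseudopotentials bundled with QE
mpprun $BIN_DIR/pw.x -input file.in   # same command as above, via the module variables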

Example batch script

A minimal batch script for running QE looks like this:

#!/bin/bash
#SBATCH -N 4
#SBATCH -t 4:00:00
#SBATCH -J jobname
#SBATCH --exclusive
#SBATCH -A SNIC-xxx-yyy

mpprun /software/apps/espresso/5.0.3/build01/bin/pw.x -input file.in

(Note that you should edit the jobname and the account number before submitting.)
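
If you prefer not to hard-code the installation path, the same job can be written using the module and $BIN_DIR instead; a sketch assuming espresso/5.0:

#!/bin/bash
#SBATCH -N 4
#SBATCH -t 4:00:00
#SBATCH -J jobname
#SBATCH --exclusive
#SBATCH -A SNIC-xxx-yyy

module load espresso/5.0
mpprun $BIN_DIR/pw.x -input file.in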

Parallelization settings

NSC is currently working on characterizing the parallel scalability of Quantum Espresso on Triolith. The initial findings suggest that the general guidelines for choosing parallelization flags described in the Quantum Espresso manual apply. This means: use k-point parallelization (set -npool to the number of k-points) whenever you can, and set -ntg to 1, 2 or 4 when you run on many processors (more cores than FFT grid points in the Z direction). ScaLAPACK parallelization works, but set -ndiag to a value much smaller than the total number of cores.
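
As an illustration, a run on a system with 4 k-points using many cores could be launched like this (the flag values are examples only and must be tuned to your own system and core count):

mpprun $BIN_DIR/pw.x -npool 4 -ntg 2 -ndiag 64 -input file.in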

Special instructions for phonon calculations

Due to the heavy I/O in QE phonon calculations, one needs to use the local scratch disk on each node. Below is an example of one way to do this, assuming that the scf and phonon calculations are done in the same run.

Set the output directory in the scf input file for pw.x and in the phonon input file for ph.x (i.e. in both input files):

outdir='TMP'
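
For reference, outdir goes in the &control namelist of the pw.x input and the &inputph namelist of the ph.x input; a minimal fragment with all other required settings omitted (the prefix is chosen to match the example further down):

&control
   prefix='Ti'
   outdir='TMP'
/

&inputph
   prefix='Ti'
   outdir='TMP'
/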

In the job script, before running pw.x, create the temporary folder and insert the correct scratch disk path:

mkdir $SNIC_TMP/tmp
# the substitutions below assume outdir='TMP' in your input files
sed -i.bak -e "s|TMP|${SNIC_TMP}/tmp|g" input.scf.in
sed -i.bak -e "s|TMP|${SNIC_TMP}/tmp|g" input.ph.in

If you run on a single node, everything is now set up. If you want to run a parallel job on several nodes, you need to adjust the job script in the following way before running ph.x. In the example, it is assumed that prefix='Ti' and that the scf calculation was done in the same job:

# create the directory layout that ph.x expects in the scratch area
# and copy the charge density from the scf run into it
mkdir -p $SNIC_TMP/tmp/_ph0/Ti.save
cp $SNIC_TMP/tmp/Ti.save/charge-density.dat $SNIC_TMP/tmp/_ph0/Ti.save/

# copy the prepared scratch directory to the local scratch disk of the
# remaining nodes (the first node in the list already has it)
for i in $(hostlist -e $SLURM_JOB_NODELIST | tail -n +2)
do
    scp -r $SNIC_TMP/tmp $i:$SNIC_TMP
done

If the scf and phonon runs are done separately, the output from scf must be copied such that ph.x can find it in outdir (see above).
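
For example, if the earlier scf run kept its output on project storage (the path below is a placeholder; prefix='Ti' as above), staging it onto the local scratch disk before running ph.x could look like this:

mkdir -p $SNIC_TMP/tmp
cp -r /path/to/scf_results/Ti.save $SNIC_TMP/tmp/

For a multi-node phonon run, the distribution step with scp shown above is still needed afterwards.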