Quantum Espresso installations at NSC


Directory list:

Version  Description
4.3.2    Quantum Espresso 4.3.2
5.0      Quantum Espresso 5.0
5.0.1    Quantum Espresso 5.0.1
5.0.2    Quantum Espresso 5.0.2
5.0.3    Quantum Espresso 5.0.3
5.1      Quantum Espresso 5.1
5.2.0    Quantum Espresso 5.2.0
5.2.1    Quantum Espresso 5.2.1
5.3.0    Quantum Espresso 5.3.0
5.4.0    Quantum Espresso 5.4.0
6.0      Quantum Espresso 6.0
6.1      Quantum Espresso 6.1

Level of support

Tier 2: NSC has only limited experience with this software, but we will try to help as much as possible. We have run some tests, for example using the test suite if the program comes with one, but they may be far from exhaustive. We will try to install and test new versions as soon as we can.

Please see the page describing our software support categories for more information.
You can also contact support@nsc.liu.se for further information.

The Quantum Espresso installations are generally maintained by Weine Olovsson (weiol@nsc.liu.se).

How to run

Launch the desired QE binary with “mpprun”:

mpprun /software/apps/espresso/5.0/build02/bin/pw.x -input file.in

If you load the espresso/5.0 module, an environment variable called BIN_DIR will be set for you, which can be used to shorten the command above and make it more flexible:

mpprun $BIN_DIR/pw.x -input file.in

The path to the default pseudopotential directory included with Quantum Espresso is stored in the $PSEUDO_DIR environment variable.
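
For example, to check which pseudopotential files are included (a minimal sketch, assuming the variable is set when the module is loaded):

module load espresso/5.0
ls $PSEUDO_DIR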

Example batch script

A minimal batch script for running QE looks like this:

#!/bin/bash
#SBATCH -N 4
#SBATCH -t 4:00:00
#SBATCH -J jobname
#SBATCH --exclusive
#SBATCH -A SNIC-xxx-yyy

mpprun /software/apps/espresso/5.0.3/build01/bin/pw.x -input file.in

(Note that you should edit the jobname and the account number before submitting.)
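
The script is then submitted with sbatch (here assuming it was saved under the hypothetical name run_qe.sh):

sbatch run_qe.sh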

Parallelization settings

NSC is currently working on characterizing the parallel scalability of Quantum Espresso on Triolith. The initial findings suggest that the general guidelines for choosing parallelization flags described in the Quantum Espresso manual apply. This means: use k-point parallelization (set -npool to the number of k-points) whenever you can, and use -ntg 1, 2 or 4 when you run on many processors (more cores than FFT grid points in the Z direction). ScaLAPACK parallelization works, but set -ndiag to a value much smaller than the total number of cores.
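
As an illustration, the flags are given on the pw.x command line; the values below are placeholders that need to be chosen for your particular job:

# 8 k-point pools, 2 task groups, 16 cores for ScaLAPACK diagonalization (illustrative values only)
mpprun $BIN_DIR/pw.x -npool 8 -ntg 2 -ndiag 16 -input file.in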

Special instructions for phonon calculations

Due to the heavy I/O in QE phonon calculations, you need to use the local scratch disk on each node. Below is an example of one way to do this, assuming that the scf and phonon calculations are done in the same job.

Set the output directory in the scf input file for pw.x and in the phonon input file for ph.x (i.e. in both input files):

outdir='TMP'
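
As a quick sanity check before submitting, you can verify that the placeholder is present in both input files (file names as in the example below):

# outdir should appear in the &CONTROL namelist for pw.x and in the
# &INPUTPH namelist for ph.x
grep -n "outdir" input.scf.in input.ph.in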

If you run on a single node (#SBATCH -N 1), create the temporary folder in the job script before running pw.x, and insert the correct path to the scratch disk:

mkdir $SNIC_TMP/tmp
# below assumes outdir='TMP' set in your input files, to get correct path!
sed -i.bak -e "s|TMP|${SNIC_TMP}/tmp|g" input.scf.in
sed -i.bak -e "s|TMP|${SNIC_TMP}/tmp|g" input.ph.in

Now everything is set up for running on a single node.
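
Putting the pieces together, a complete single-node job script could look like the sketch below (job name, account and QE version are placeholders to adapt):

#!/bin/bash
#SBATCH -N 1
#SBATCH -t 4:00:00
#SBATCH -J jobname
#SBATCH --exclusive
#SBATCH -A SNIC-xxx-yyy

# Create the temporary folder on the node-local scratch disk
mkdir $SNIC_TMP/tmp

# Replace the TMP placeholder in both input files with the scratch path
sed -i.bak -e "s|TMP|${SNIC_TMP}/tmp|g" input.scf.in
sed -i.bak -e "s|TMP|${SNIC_TMP}/tmp|g" input.ph.in

# Run the scf calculation, then the phonon calculation
mpprun /software/apps/espresso/6.0/build01/bin/pw.x -input input.scf.in
mpprun /software/apps/espresso/6.0/build01/bin/ph.x -input input.ph.in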

If you want to run a parallel job on several nodes, you need to adjust the job script before running ph.x. In this example it is assumed that prefix='Ti' and that the scf calculation is done in the same job, with its temporary folder in the job's working directory:

export WORKDIR=$PWD
mkdir -p $WORKDIR/tmp
# below assumes outdir='TMP' set in your input files, to get correct path!
sed -i.bak -e "s|TMP|${WORKDIR}/tmp|g" input.scf.in
sed -i.bak -e "s|TMP|${SNIC_TMP}/tmp|g" input.ph.in 

mpprun /software/apps/espresso/6.0/build01/bin/pw.x -input input.scf.in

The phonon calculations are set up right after the scf run in the job script:

# Prepare the _ph0 work directory used by ph.x, copy in the charge density
# from the scf run, and stage the whole tmp folder on this node's scratch disk
mkdir $WORKDIR/tmp/_ph0
mkdir $WORKDIR/tmp/_ph0/Ti.save
cp $WORKDIR/tmp/Ti.save/charge-density.dat $WORKDIR/tmp/_ph0/Ti.save/
cp -r $WORKDIR/tmp $SNIC_TMP/

# Copy the staged tmp folder to the local scratch disk on the remaining nodes
# (the first node already has it from the cp above)
for i in $(hostlist -e $SLURM_JOB_NODELIST | tail -n +2)
do
jobsh $i <<HERE
rsync -a $WORKDIR/tmp $SNIC_TMP
HERE
done

mpprun /software/apps/espresso/6.0/build01/bin/ph.x -input input.ph.in

This would also work with the scf and phonon runs done as separate jobs, since the scf 'TMP' folder is placed in the job's working directory (which persists between jobs) rather than on the node-local scratch disk.
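
For reference, here is a sketch that assembles the multi-node steps above into a single job script (prefix, job name, account and QE version are placeholders from the example and need to be adapted):

#!/bin/bash
#SBATCH -N 4
#SBATCH -t 4:00:00
#SBATCH -J jobname
#SBATCH --exclusive
#SBATCH -A SNIC-xxx-yyy

export WORKDIR=$PWD
mkdir -p $WORKDIR/tmp

# scf output goes to a tmp folder in the working directory,
# phonon output goes to the node-local scratch disks
sed -i.bak -e "s|TMP|${WORKDIR}/tmp|g" input.scf.in
sed -i.bak -e "s|TMP|${SNIC_TMP}/tmp|g" input.ph.in

mpprun /software/apps/espresso/6.0/build01/bin/pw.x -input input.scf.in

# Prepare the phonon work directory and stage it on every node's scratch disk
mkdir $WORKDIR/tmp/_ph0
mkdir $WORKDIR/tmp/_ph0/Ti.save
cp $WORKDIR/tmp/Ti.save/charge-density.dat $WORKDIR/tmp/_ph0/Ti.save/
cp -r $WORKDIR/tmp $SNIC_TMP/

for i in $(hostlist -e $SLURM_JOB_NODELIST | tail -n +2)
do
jobsh $i <<HERE
rsync -a $WORKDIR/tmp $SNIC_TMP
HERE
done

mpprun /software/apps/espresso/6.0/build01/bin/ph.x -input input.ph.in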