Quantum ESPRESSO Installations on Tetralith & Sigma

The Quantum ESPRESSO installations are generally maintained by Weine Olovsson (weiol@nsc.liu.se).

"Quantum ESPRESSO is an integrated suite of Open-Source computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials."

Available modules


Most recent standard installation. 6.6 includes wannier90 3.1.0, EPW 5.3.0 and Yambo 4.5.1. Standard tests go through. For EPW, some tests fail ("All done. ERROR: only 84 out of 91 tests passed."). Yambo isn't tested. want and gipaw are not installed. Compiled with OpenMP.


Recent standard installation. 6.5 includes wannier90 3.0.0, gipaw 6.5 and want. Standard tests go through. Compiled with OpenMP.


Standard installations. 6.4.1 includes the patch backports-6.4.1.diff, wannier90 3.0.0, gipaw 6.4.1, want, EPW 5.1.0, and Yambo 4.2.1 (stable) and 4.3.2 (developer version). Standard tests go through. For EPW, the tests epw_mob_ibte and epw_mob_ibte_sym fail. Yambo isn't tested. Compiled with OpenMP. Version 6.4 contains neither gipaw nor Yambo.


Standard installation. Includes wannier90 2.1.0, gipaw 6.3 and want. Use this version rather than the previous build nsc1. Compiled with OpenMP.


A debug version of the installation above. Use it if the regular module crashes and you need more details for debugging. Compiled with OpenMP.


This installation includes wannier90 2.1.0, gipaw 6.2.1, EPW 5.0.0 and Yambo 4.2.1.


This installation includes wannier90 2.1.0, gipaw 6.1 and WEST 3.1.0.

How to run

Load the desired QE module:

module add QuantumESPRESSO/6.3-nsc2-intel-2018b-eb

Launch the QE binary with "mpprun":

mpprun pw.x -input file.in

The path to the default pseudopotential directory included with Quantum ESPRESSO is stored in the $PSEUDO_DIR environment variable.
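If you prefer to set the path explicitly, the pseudo_dir variable in the &CONTROL namelist of the pw.x input serves the same purpose. A minimal sketch (the path shown is a placeholder for the value of $PSEUDO_DIR, and calculation = 'scf' is just an example):

```
&CONTROL
  calculation = 'scf'
  pseudo_dir  = '/path/to/pseudopotentials'  ! e.g. the value of $PSEUDO_DIR
/
```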

Example batch script

A minimal batch script for running QE looks like this:

#!/bin/bash
#SBATCH -n 32
#SBATCH -t 4:00:00
#SBATCH -J jobname
#SBATCH -A SNIC-xxx-yyy

module add QuantumESPRESSO/6.3-nsc2-intel-2018b-eb
mpprun pw.x -input file.in

(Note that you should edit the jobname and the account number before submitting.)

Parallelization settings

Following the Quantum ESPRESSO manual: use k-point parallelization (-npool set to the number of k-points, or a divisor of it) whenever you can, and use -ntg 1, 2 or 4 when you run on many processors (more cores than FFT grid points in the Z direction). ScaLAPACK parallelization works, but set -ndiag to a value much smaller than the total number of cores.
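Since -npool must divide the number of MPI ranks evenly, one way to pick it is the greatest common divisor of the rank count and the number of k-points. A sketch (the gcd helper and the example values are ours, not part of QE):

```shell
# Example values: take NRANKS from your job and NKPTS from the pw.x output
NRANKS=32
NKPTS=12

# Greatest common divisor via the Euclidean algorithm
gcd() {
    a=$1; b=$2
    while [ "$b" -ne 0 ]; do
        t=$((a % b)); a=$b; b=$t
    done
    echo "$a"
}

NPOOL=$(gcd "$NRANKS" "$NKPTS")
echo "mpprun pw.x -npool $NPOOL -input file.in"
```

With the example values above, this prints a command line using -npool 4.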

Special instructions for phonon calculations

Note: this section is under revision and might change! Due to heavy I/O in QE phonon calculations, you need to use the local scratch disk on each node. Below is an example of one way to do this, assuming that the scf and phonon calculations are done in the same run.

Set the output directory in the scf input file for pw.x and in the phonon input file for ph.x (i.e. in both input files):

outdir='TMP'

If you run on a single node (#SBATCH -n 32, since Tetralith and Sigma have 32 cores per node), create the temporary folder in the job script before running pw.x and insert the correct scratch disk path:

mkdir $SNIC_TMP/tmp
# below assumes outdir='TMP' set in your input files, to get correct path!
sed -i.bak -e "s|TMP|${SNIC_TMP}/tmp|g" input.scf.in
sed -i.bak -e "s|TMP|${SNIC_TMP}/tmp|g" input.ph.in

Now everything is set up for running on a single node.
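Putting the single-node pieces together, the whole job script could look like this (a sketch; the module version, job name and account are placeholders, as in the batch script example above):

```shell
#!/bin/bash
#SBATCH -n 32
#SBATCH -t 4:00:00
#SBATCH -J jobname
#SBATCH -A SNIC-xxx-yyy

module add QuantumESPRESSO/6.3-nsc2-intel-2018b-eb

# assumes outdir='TMP' in both input files
mkdir $SNIC_TMP/tmp
sed -i.bak -e "s|TMP|${SNIC_TMP}/tmp|g" input.scf.in
sed -i.bak -e "s|TMP|${SNIC_TMP}/tmp|g" input.ph.in

mpprun pw.x -input input.scf.in
mpprun ph.x -input input.ph.in
```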

If you want to run a parallel job on several nodes, you need to adjust the job script before running ph.x. In this example it is assumed that prefix='Ti' and that the scf calculation is done in the same job, with its temporary folder in the job working directory:

mkdir -p $WORKDIR/tmp
# below assumes outdir='TMP' set in your input files, to get correct path!
sed -i.bak -e "s|TMP|${WORKDIR}/tmp|g" input.scf.in
sed -i.bak -e "s|TMP|${SNIC_TMP}/tmp|g" input.ph.in 

mpprun pw.x -input input.scf.in

The phonon calculations are set up right after the scf run in the job script:

mkdir -p $WORKDIR/tmp/_ph0/Ti.save
cp $WORKDIR/tmp/Ti.save/charge-density.dat $WORKDIR/tmp/_ph0/Ti.save/
cp -r $WORKDIR/tmp $SNIC_TMP/

# copy tmp to local scratch on the remaining nodes
# (the first node already has it from the cp above)
for i in $(hostlist -e $SLURM_JOB_NODELIST | tail -n +2); do
    jobsh $i <<HERE
rsync -a $WORKDIR/tmp $SNIC_TMP
HERE
done

mpprun ph.x -input input.ph.in

This also works with the scf and phonon runs done as separate jobs, since the 'TMP' folder from the scf run is then in the job working directory.
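Note that node-local scratch is deleted when the job ends, so any phonon output written to $SNIC_TMP that you need afterwards must be copied back before the job finishes. A minimal sketch (the ph_results path is our own choice; the fallback values only make the snippet self-contained outside a job):

```shell
# Copy phonon output from node-local scratch back to the job directory.
# $WORKDIR and $SNIC_TMP are set in the job environment; the fallbacks
# below are for illustration only.
WORKDIR=${WORKDIR:-$PWD}
SNIC_TMP=${SNIC_TMP:-/tmp}
mkdir -p "$WORKDIR/ph_results"
cp -r "$SNIC_TMP/tmp/_ph0" "$WORKDIR/ph_results/" 2>/dev/null || true
```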
