Quantum ESPRESSO

The Quantum ESPRESSO installations are generally maintained by Weine Olovsson (weiol@nsc.liu.se).

“Quantum ESPRESSO is an integrated suite of Open-Source computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials.”

Installations on NSC systems

Tetralith and Sigma

Quantum ESPRESSO is available via the module system on Tetralith and Sigma. For more information about available versions, please see the Tetralith and Sigma Software list.

Most of the old installations from CentOS 7 are now provided as -compat-el7 after the upgrade to Rocky Linux 9. This means that these installations automatically run within a CentOS 7 container, with the same (or very similar) performance.

QuantumESPRESSO/7.5-hpc1-intel-2023a-eb

Most recent installation. 7.5 includes wannier90 3.1.0, QE-GIPAW and EPW 6.0. Want is installed. Most QE tests go through. Compiled with OpenMP.

QuantumESPRESSO/7.4.1-hpc1-intel-2023a-eb

Standard installation. 7.4.1 includes wannier90 3.1.0, QE-GIPAW and EPW 5.9. Want is installed. Most QE tests go through. Compiled with OpenMP.

QuantumESPRESSO/7.3-hpc1-intel-2023a-eb

Standard installation, done after update to Rocky Linux 9. 7.3 includes wannier90 3.1.0 and EPW 5.8. Want is installed. Most QE tests go through. Compiled with OpenMP.

QuantumESPRESSO/7.2-nsc1-intel-2018b-eb-compat-el7

Last standard installation on CentOS 7. 7.2 includes wannier90 3.1.0 and EPW 5.7. Want is installed. For EPW, 101 out of 102 tests go through. Compiled with OpenMP. This installation also includes Thermo_pw 1.8.1.

QuantumESPRESSO/7.1-nsc1-intel-2018b-eb-compat-el7

Standard installation. 7.1 includes wannier90 3.1.0, gipaw 7.1, EPW 5.5 and Yambo 4.5.2. Standard tests go through. Want is installed. Compiled with OpenMP. In this installation, Thermo_pw 1.7.1 is included.

QuantumESPRESSO/7.0-nsc1-intel-2018b-eb-compat-el7

Standard installation. 7.0 includes wannier90 3.1.0, gipaw 7.0, EPW 5.4.1 and Yambo 4.5.2. Standard tests go through. Want is installed. Compiled with OpenMP. In this installation, Thermo_pw 1.6.1 is included.

QuantumESPRESSO/6.8-nsc1-intel-2018b-eb-compat-el7

Standard installation. 6.8 includes wannier90 3.1.0, gipaw 6.8, EPW 5.4 and Yambo 4.5.2. Standard tests go through. Want is installed. Compiled with OpenMP. In this installation, Thermo_pw 1.6.0 is also included.

QuantumESPRESSO/6.7-nsc1-intel-2018b-eb-compat-el7

Standard installation. 6.7 includes wannier90 3.1.0 and EPW 5.3.1. Standard tests go through. Want is installed, but not gipaw. Compiled with OpenMP.

QuantumESPRESSO/6.6-nsc1-intel-2018b-eb-compat-el7

Standard installation. 6.6 includes wannier90 3.1.0, gipaw 6.6, EPW 5.3.0 and Yambo 4.5.1. Standard tests go through. For EPW, some tests fail: only 84 out of 91 tests pass. Yambo isn’t tested. want is not installed. Compiled with OpenMP.

QuantumESPRESSO/6.5-nsc1-intel-2018b-eb-compat-el7

Standard installation. 6.5 includes wannier90 3.0.0, gipaw 6.5 and want. Standard tests go through. Compiled with OpenMP.

QuantumESPRESSO/6.4.1-nsc1-intel-2018b-eb-compat-el7
QuantumESPRESSO/6.4-nsc1-intel-2018b-eb-compat-el7

Standard installations. 6.4.1 includes the patch backports-6.4.1.diff, wannier90 3.0.0, gipaw 6.4.1, want, EPW 5.1.0 and Yambo 4.2.1 (stable) and 4.3.2 (developer version). Standard tests go through. For EPW, the tests epw_mob_ibte and epw_mob_ibte_sym fail. Yambo isn’t tested. Compiled with OpenMP. Version 6.4 doesn’t contain gipaw or Yambo.

QuantumESPRESSO/6.3-nsc2-intel-2018b-eb-compat-el7

Standard installation. Includes wannier90 2.1.0, gipaw 6.3 and want. Use this version rather than the previous nsc1 build. Compiled with OpenMP.

QuantumESPRESSO/6.3-nsc2-intel-2018b-eb-DEBUG

A debug version of the installation above. Use this if the regular module crashes and you need more details for debugging. Compiled with OpenMP.

QuantumESPRESSO/6.3-nsc1-intel-2018a-eb

This installation includes wannier90 2.1.0, gipaw 6.2.1, EPW 5.0.0 and Yambo 4.2.1.

QuantumESPRESSO/6.1-nsc1-intel-2018a-eb

This installation includes wannier90 2.1.0, gipaw 6.1 and WEST 3.1.0.

How to run

Load the desired QE module, e.g.:

module add QuantumESPRESSO/7.5-hpc1-intel-2023a-eb

Launch the QE binary with "mpprun":

mpprun pw.x -input file.in

The path to the default pseudopotential directory included with Quantum ESPRESSO is stored in the $PSEUDO_DIR environment variable.
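For example, after loading a QE module you can inspect that directory to see which pseudopotential files are bundled (a sketch; it assumes the module has been loaded so that $PSEUDO_DIR is set):

```shell
# Print the default pseudopotential directory set by the module
echo $PSEUDO_DIR
# List the bundled pseudopotential files
ls $PSEUDO_DIR
```

The pseudo_dir variable in the &control namelist of your input file can then be pointed to this path, or you can copy the files you need into your own directory.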

Example batch script

A minimal batch script for running QE looks like this:

#!/bin/bash
#SBATCH -n 32
#SBATCH -t 4:00:00
#SBATCH -J jobname
#SBATCH -A naiss-xxx-yyy

module add QuantumESPRESSO/7.5-hpc1-intel-2023a-eb
mpprun pw.x -input file.in

(Note that you should edit the job name and the project account before submitting.)

Parallelization settings

Following the Quantum ESPRESSO manual: use k-point parallelization whenever you can, setting -npools to the number of k-points (or a divisor of it), and use task groups -ntg=1/2/4 when you run on many processors (more cores than FFT grid points in the Z direction). ScaLAPACK parallelization works, but set -ndiag to a value much smaller than the total number of cores.
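As an illustration, a command line combining these flags might look as follows (a sketch, not a recommendation; the values assume a job with 4 inequivalent k-points on 64 cores and must be adapted to your system):

```shell
# 4 k-point pools, 2 task groups, and a 16-core ScaLAPACK grid for diagonalization
mpprun pw.x -npools 4 -ntg 2 -ndiag 16 -input file.in
```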

OpenMP threading

Several Quantum ESPRESSO installations are compiled with support for optional OpenMP threading. This can be useful, e.g., for increasing efficiency when reducing the number of tasks (MPI ranks) per node in order to save memory.

To use OpenMP threading, compare with the example job script above, now setting e.g. 2 OpenMP threads. Note the number of tasks (MPI ranks): here 16 tasks x 2 OpenMP threads = 32 cores per node. Also note the setting of OMP_NUM_THREADS.

#!/bin/bash
#SBATCH -n 16
#SBATCH -c 2
#SBATCH -t 4:00:00
#SBATCH -J jobname
#SBATCH -A SNIC-xxx-yyy

module add QuantumESPRESSO/6.8-nsc1-intel-2018b-eb-compat-el7
export OMP_NUM_THREADS=2
mpprun pw.x -input file.in

Efficient calculations

A brief guide to efficient runs on Tetralith/Sigma, considering regular pw.x calculations. Note that the nodes have 2 CPUs with 16 cores each; optimal settings differ for CPUs with higher core counts (compare e.g. with LUMI).

  • The most computationally efficient approach is to run on as few resources as possible, utilizing all allocated cores.

For up to 200-300 atoms, it’s typically sufficient to run an MPI job on a single node. If more memory is needed, i.e. if you encounter an out-of-memory error, use a fat memory node:

#SBATCH -C fat

A drawback might be a longer waiting time in the queue, due to fewer fat nodes being available (64 out of 1908 on Tetralith). With an even number of k-points, split the single-node job into two k-point groups for some additional speed-up:

mpprun --npools=2 pw.x -input file.in

To check the number of inequivalent k-points in the k-mesh, e.g. run the job for a minute and then:

grep "k points" outputfile
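The pw.x output reports this on a line of the form `number of k points=    10`. A small sketch for extracting just the number (the sample line below is made up for illustration):

```shell
# Extract the k-point count from a pw.x output line (sample line is an assumption)
line="     number of k points=    10"
nk=$(echo "$line" | awk -F= '{print $2+0}')
echo "$nk"   # prints 10
```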

Apart from running on more nodes (regular or fat), more memory can be made available by reducing the number of MPI ranks per node, for example running on 16 out of 32 cores while using all of the node’s RAM:

#SBATCH --ntasks-per-node=16
#SBATCH --mem=0
  • For heavier jobs (larger supercells), increase the number of regular/fat nodes, and reduce MPI ranks per node if more memory is needed.

  • Use more nodes in order to decrease the wall time (h), at some extra computational cost (core-h).

  • For small and medium-size supercells, be careful not to run on too many nodes, as this can be very inefficient.

  • When reducing MPI-ranks due to memory, OpenMP threads can give a speed-up since more cores are used.
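Combining the last two points, a sketched job script that halves the MPI ranks for memory and uses OpenMP threads on the freed cores (the 2-thread setting is an example; edit the module version, job name and account as usual):

```shell
#!/bin/bash
#SBATCH -N 1
#SBATCH --ntasks-per-node=16
#SBATCH -c 2
#SBATCH --mem=0
#SBATCH -t 4:00:00
#SBATCH -J jobname
#SBATCH -A naiss-xxx-yyy

module add QuantumESPRESSO/7.5-hpc1-intel-2023a-eb
# 16 MPI ranks x 2 OpenMP threads = 32 cores, with all node memory available
export OMP_NUM_THREADS=2
mpprun pw.x -input file.in
```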

Possible problems

Some systems might have convergence problems, e.g. showing the message “problems computing cholesky”. One thing to try is an alternative gcc build (or intel build) of Quantum ESPRESSO, which can be found by checking among the installations in:

/software/sse/manual/QuantumESPRESSO
/software/sse/manual/QuantumESPRESSO/6.7/elpa1

as well as the available DEBUG builds (e.g. see above). Another possibility is to switch from diagonalization='david' to 'cg' in &electrons, though note that this typically gives much slower convergence.
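For example, in the pw.x input file (a minimal sketch; keep your other &electrons settings as they are):

```
&electrons
   diagonalization = 'cg'
/
```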

Special instructions for phonon calculations

Note: This section is under revision and might change! Due to heavy I/O in QE phonon calculations, one needs to use the local scratch disk on each node. Below is an example of one way to do this, assuming that the scf and phonon calculations are done in the same run.

Set the output directory in the scf input file for pw.x and in the phonon input file for ph.x (i.e. in both input files):

outdir='TMP'

If you run on a single node (#SBATCH -n 32, since Tetralith/Sigma has 32 cores per node), create the temporary folder in the job script before running pw.x, inserting the correct path to the scratch disk:

mkdir $SNIC_TMP/tmp
# below assumes outdir='TMP' set in your input files, to get correct path!
sed -i.bak -e "s|TMP|${SNIC_TMP}/tmp|g" input.scf.in
sed -i.bak -e "s|TMP|${SNIC_TMP}/tmp|g" input.ph.in

Now everything is set up for running on a single node.

If you want to run a parallel job on several nodes, you need to adjust the job script before running ph.x. In this example it is assumed that prefix='Ti' and that the scf calculation is done in the same job, with its temporary folder in the job working directory:

export WORKDIR=$PWD
mkdir -p $WORKDIR/tmp
# below assumes outdir='TMP' set in your input files, to get correct path!
sed -i.bak -e "s|TMP|${WORKDIR}/tmp|g" input.scf.in
sed -i.bak -e "s|TMP|${SNIC_TMP}/tmp|g" input.ph.in

mpprun pw.x -input input.scf.in

The phonon calculations are set up right after the scf run in the job script:

mkdir $WORKDIR/tmp/_ph0
mkdir $WORKDIR/tmp/_ph0/Ti.save
cp $WORKDIR/tmp/Ti.save/charge-density.dat $WORKDIR/tmp/_ph0/Ti.save/
cp -r $WORKDIR/tmp $SNIC_TMP/

for i in $(hostlist -e $SLURM_JOB_NODELIST | tail -n +2)
do
jobsh $i <<HERE
rsync -a $WORKDIR/tmp $SNIC_TMP
HERE
done

mpprun ph.x -input input.ph.in

This would also work with the scf and phonon runs done as separate jobs, since the 'TMP' folder from the scf run is in the job working directory.

