WIEN2k installations at NSC


Directory list:

Version   Description
12.1      WIEN2k 12.1 (Release 22/7/2012)
13.1      WIEN2k 13.1 (Release 17/6/2013)

Level of support

Tier 1 NSC has at least one application expert on staff with knowledge about this software, and we will try to help you with any problems. We have also run some internal tests to make sure that the program works as intended. NSC also commits to installing updates to this program as they become available.

Please see the page describing our software support categories for more information.
You can also contact support@nsc.liu.se for further information.

The WIEN2k installations are generally maintained by Weine Olovsson (weiol@nsc.liu.se).

Example batch script

A batch script for running WIEN2k may look like the example below, here running MPI over 16 cores on one node (this can be useful if you only have 1 k-point):

#!/bin/bash
#SBATCH -A SNIC-xxx-yyy
#SBATCH -J jobname
#SBATCH -N 1
#SBATCH -t 12:00:00

export SCRATCH=$SNIC_TMP
export WIENROOT=/software/apps/wien2k/13.1/build01

# set .machines for parallel job
# lapw0 running on one node
echo -n "lapw0: " > .machines
echo -n $(hostlist -e $SLURM_JOB_NODELIST | tail -1) >> .machines
echo ":16" >> .machines

# run one mpi job on each node (splitting k-mesh over nodes)
for i in $(hostlist -e $SLURM_JOB_NODELIST)
do
  echo "1:$i:16 " >> .machines
done
echo granularity:1 >> .machines
echo extrafine:1   >> .machines

run_lapw -p

# if you need the vector files for continuing calculations, e.g.
# for DOS, spectrum etc. copy them to your current directory:
cp $SNIC_TMP/* .

(Note that you should edit the jobname and the account number before submitting.)
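To submit the job, save the script and hand it to sbatch, for example (wien2k.sh is just a placeholder filename):

sbatch wien2k.sh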

Further information

  • See the userguide for test examples and much more information.

  • If the vector files are needed for further processing, copy them from the temporary directory before the job finishes, as done at the end of the example script above.

  • For very heavy jobs with several hundred inequivalent atoms, it might be necessary to specify in the job script:

    echo "lapw2_vector_split 4" >> .machines # typical values 2, 4, 6, 8

  • In general, if you have several k-points it makes sense to use the k-point parallelization in WIEN2k. If memory use is not too large, the most efficient choice is usually 1 k-point per compute core (see the sketch after this list for a quick way to count the k-points).

  • For many small to medium size jobs it is typically OK to run on a single node.

  • Check the special instructions for -so calculations at the bottom of this page.
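To decide between MPI and k-point parallelization it helps to know how many k-points the case has. A quick way to count them from the shell (a sketch, assuming the usual case.klist layout with one k-point per line and a final END line):

# number of k-points = non-empty lines in case.klist minus the END line
nk=$(($(grep -c . case.klist) - 1))
echo "number of k-points: $nk"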

k-point parallelization and MPI

It is possible to combine k-point parallelization with MPI, as already seen above. Another example, where the list of k-points is split over two nodes and then into 8 parts on each node:

...
#SBATCH -N 2
...
for i in $(hostlist -e $SLURM_JOB_NODELIST)
do
  echo "1:$i:2 " >> .machines
  echo "1:$i:2 " >> .machines
  echo "1:$i:2 " >> .machines
  echo "1:$i:2 " >> .machines
  echo "1:$i:2 " >> .machines
  echo "1:$i:2 " >> .machines
  echo "1:$i:2 " >> .machines
  echo "1:$i:2 " >> .machines
done
echo granularity:1 >> .machines
echo extrafine:1   >> .machines
...
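With two nodes, the loop above produces eight MPI entries per node in .machines, along the lines of the following (n101 and n102 are hypothetical hostnames):

1:n101:2
1:n101:2
...            (8 identical lines for the first node)
1:n102:2
1:n102:2
...            (8 identical lines for the second node)
granularity:1
extrafine:1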

k-point parallelization (one node)

k-point parallelization with 1 k-point/core (serial run) works by default on a single node, i.e. using the 16 cores/node on Triolith. For example:

...
#SBATCH -N 1
...
for i in $(hostlist -e $SLURM_JOB_NODELIST)
do
  echo "1:$i:1 " >> .machines
  echo "1:$i:1 " >> .machines
  echo "1:$i:1 " >> .machines
  echo "1:$i:1 " >> .machines
  echo "1:$i:1 " >> .machines
  echo "1:$i:1 " >> .machines
  echo "1:$i:1 " >> .machines
  echo "1:$i:1 " >> .machines
  echo "1:$i:1 " >> .machines
  echo "1:$i:1 " >> .machines
  echo "1:$i:1 " >> .machines
  echo "1:$i:1 " >> .machines
  echo "1:$i:1 " >> .machines
  echo "1:$i:1 " >> .machines
  echo "1:$i:1 " >> .machines
  echo "1:$i:1 " >> .machines
done
echo granularity:1 >> .machines
echo extrafine:1   >> .machines
...
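The 16 repeated lines can also be generated with a loop that writes exactly the same .machines content (a sketch; 16 corresponds to the number of cores per Triolith node):

for i in $(hostlist -e $SLURM_JOB_NODELIST)
do
  for j in $(seq 1 16)
  do
    echo "1:$i:1 " >> .machines
  done
done
echo granularity:1 >> .machines
echo extrafine:1   >> .machines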

k-point parallelization (several nodes)

It is also possible to run full k-point parallelization over several nodes. In this case one needs to set:

#export SCRATCH=$SNIC_TMP
export WIENROOT=/software/apps/wien2k/13.1/build01/KPFIX

i.e. comment out the use of the local scratch disk and use the KPFIX version.
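Put together, the relevant parts of such a job script could look as follows (a sketch based on the pieces above, assuming 2 nodes and 1 k-point per core):

...
#SBATCH -N 2
...
# use the KPFIX build and no local scratch disk in this case
#export SCRATCH=$SNIC_TMP
export WIENROOT=/software/apps/wien2k/13.1/build01/KPFIX

# one serial job per core, splitting the k-mesh over both nodes
for i in $(hostlist -e $SLURM_JOB_NODELIST)
do
  for j in $(seq 1 16)
  do
    echo "1:$i:1 " >> .machines
  done
done
echo granularity:1 >> .machines
echo extrafine:1   >> .machines

run_lapw -p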

SO coupling on more than one node

Note: If running spin-orbit coupling calculations (e.g. "run_lapw -so") on several nodes, do not use the local scratch disk, i.e. comment out:

#export SCRATCH=$SNIC_TMP
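The relevant lines of a multi-node spin-orbit job script would then look something like this (a sketch; the rest of the script is assumed unchanged from the example at the top of the page):

# no local scratch disk for spin-orbit runs on several nodes
#export SCRATCH=$SNIC_TMP
export WIENROOT=/software/apps/wien2k/13.1/build01
...
run_lapw -so -p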