VASP-GPU installations at NSC


Directory list:

Version         Description
5.4.1-05Feb16   VASP-GPU 5.4.1 (2016-02-06)

Level of support

Tier 3: NSC will not be able to help you much with this program, either because we lack the in-house experience or because it is a test installation. In general, these types of installations are untested and will not be updated unless you send a request to NSC.

Please see the page describing our software support categories for more information.
You can also contact support@nsc.liu.se for further information.

The VASP-GPU installations are generally maintained by Weine Olovsson (weiol@nsc.liu.se). Observe that this version of VASP can only be run on the special nodes with GPU accelerators and is available primarily for testing and benchmarking, since the number of GPU nodes at NSC is very small at the moment.

Naming scheme for binaries

For the GPU version of VASP binaries, the following naming scheme is used at NSC:

Name         Description
vasp_gpu     compiled with -DNGZhalf, the “normal” version for bulk systems
vasp_gpu_nc  compiled without -DNGZ*, for spin-orbit/non-collinear calculations

There is no version available for gamma point only calculations.
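For example, a spin-orbit or non-collinear calculation would be launched with the vasp_gpu_nc binary (assuming the module described below has already been loaded):

mpprun vasp_gpu_nc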

How to run

Observe that the GPU version of VASP can only be run on the Triolith nodes with Nvidia Tesla accelerators. Follow the above link for more details.

To load the VASP-GPU module, first load the module for the gpu environment:

module load use.gpu

Now load the module corresponding to the version you want to use. We strongly recommend using the latest version of VASP.

module load vasp-gpu/5.4.1-05Feb16 

Then launch the desired VASP binary with “mpprun”:

mpprun vasp_gpu
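
Put together, the full sequence for an interactive test could look like the sketch below (module name as above; use vasp_gpu_nc instead for non-collinear runs). The "which" line is just a quick check that the binaries are on your PATH:

module load use.gpu
module load vasp-gpu/5.4.1-05Feb16
which vasp_gpu vasp_gpu_nc    # quick check that the binaries are found
mpprun vasp_gpu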

Example batch script

A minimal batch script for running VASP-GPU without loading modules could look like this:

#!/bin/bash
#SBATCH -A SNIC-xxx-yyy
#SBATCH -J jobname
#SBATCH --ntasks-per-node 4
#SBATCH --exclusive
#SBATCH --reservation=gpu --gres=gpu:1
#SBATCH -t 4:00:00

# Make CUDA 7.5 available (paths set manually instead of loading a module)
export PATH=/usr/local/cuda-7.5/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-7.5/lib64:$LD_LIBRARY_PATH

# Launch the CUDA MPS daemon on each node. The directory setup, the exports
# and the daemon start must run in the same remote shell, so they are grouped
# into a single pdsh command.
pdsh -w $SLURM_NODELIST 'mkdir -p /tmp/nvidia-mps /tmp/nvidia-log; export CUDA_MPS_PIPE_DIRECTORY=/tmp/nvidia-mps CUDA_MPS_LOG_DIRECTORY=/tmp/nvidia-log; nvidia-cuda-mps-control -d'

# Launch VASP (choose vasp_gpu or vasp_gpu_nc)
mpprun /software/apps/vasp-gpu/5.4.1-05Feb16/build01/vasp_[gpu/gpu_nc]

(Note that you should edit the jobname and the account number before submitting.) The different accelerator queues are chosen with --gres, see more details here.
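
For comparison, a hypothetical variant of the same script that loads the vasp-gpu module instead of using full paths could look like the sketch below. It is assumed here that the module sets up the CUDA paths; if it does not, keep the two export lines from the script above:

#!/bin/bash
#SBATCH -A SNIC-xxx-yyy
#SBATCH -J jobname
#SBATCH --ntasks-per-node 4
#SBATCH --exclusive
#SBATCH --reservation=gpu --gres=gpu:1
#SBATCH -t 4:00:00

module load use.gpu
module load vasp-gpu/5.4.1-05Feb16

# Launch the CUDA MPS daemon on each node (same as in the script above)
pdsh -w $SLURM_NODELIST 'mkdir -p /tmp/nvidia-mps /tmp/nvidia-log; export CUDA_MPS_PIPE_DIRECTORY=/tmp/nvidia-mps CUDA_MPS_LOG_DIRECTORY=/tmp/nvidia-log; nvidia-cuda-mps-control -d'

# Launch VASP (use vasp_gpu_nc for non-collinear calculations)
mpprun vasp_gpu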

How to set up VASP-GPU calculations

Note that there are several differences in comparison with submitting regular VASP calculations. Check the official VASP wiki page. Peter Larsson also describes how to run VASP-GPU on his blog.

Note that one must set:

LREAL = Auto | .TRUE.

Some important points (see the example INCAR settings after this list):

  • NCORE isn't supported (only NCORE=1)
  • it is important to maximize NSIM for efficiency; a reasonable choice is NSIM = NBANDS / (2*cores), e.g. 16–32 on Triolith
  • ALGO = Normal | Fast | VeryFast are explicitly ported for GPU
  • hybrid functionals (LHFCALC = .TRUE.) are explicitly ported for GPU
  • 4–8 MPI ranks were found to work well for hybrid calculations on Triolith
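
Putting the points above together, an illustrative INCAR fragment for a GPU run could look like the sketch below. The NSIM value is only an example in the 16–32 range mentioned above and must be adapted to NBANDS and the number of cores you use:

LREAL = Auto      # required for the GPU version (or LREAL = .TRUE.)
ALGO  = Normal    # Normal | Fast | VeryFast are ported for GPU
NCORE = 1         # NCORE > 1 is not supported
NSIM  = 32        # example value; aim for roughly NBANDS / (2*cores)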