How to Compile VASP 5.3.3 on Ubuntu 13.10

Here is a more generic recipe for installing and compiling VASP using only open-source tools (i.e. without Intel’s Fortran compiler and MKL). This could be useful if you want to run smaller calculations on a laptop or an office machine. Below follows how I did it on Ubuntu 13.10 with GCC/Gfortran, OpenMPI, OpenBLAS, FFTW and Netlib SCALAPACK. Please note that compiling VASP with gfortran is neither recommended nor supported by the VASP developers. From what I can tell, it appears to work, but I have only done limited testing.


First of all, you need the VASP source code, which you can download from the VASP home page.

Then we need to install some Ubuntu packages. Install either through the Synaptic program or apt-get in the terminal.

  • build-essential
  • gfortran
  • openmpi1.6-bin
  • libopenmpi1.6-dev
  • libfftw3-double3
  • libfftw3-single3
  • libfftw3-dev
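On a fresh system, everything above can be pulled in with a single command (package names as of Ubuntu 13.10; other releases or distributions may name them differently):

```shell
# One-shot install of the build prerequisites. The FFTW3 library is split
# into single- and double-precision runtime packages on Ubuntu.
sudo apt-get install build-essential gfortran \
    openmpi1.6-bin libopenmpi1.6-dev \
    libfftw3-double3 libfftw3-single3 libfftw3-dev
```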

This is starting from a completely new Ubuntu installation. If you have done any programming on your machine before, some of these packages could already be installed. For other Linux distributions, you will need to find out the names of the corresponding packages. They should be similar, except for “build-essential”, which is specific to Debian.

I did not have much success using Ubuntu’s BLAS/LAPACK/ATLAS, so we will need to download the latest OpenBLAS and compile it ourselves from source. The same applies to SCALAPACK, which we have to tie together with our OpenBLAS and the system OpenMPI installation.


Compiling OpenBLAS

Download the latest OpenBLAS tarball from the project’s GitHub page.

After decompressing it, you will have a directory called “xianyi-OpenBLAS-…”. Go inside and check the TargetList.txt file. You will have to decide which processor architecture target is appropriate for your processor. For a new Intel processor, “SANDYBRIDGE” should be best, and for a new AMD processor, “BULLDOZER”. Here, I choose the safe and conservative option “CORE2”, which should work on any recent processor. Then we compile with make.

make FC=gfortran CC=gcc USE_THREAD=0 TARGET=CORE2

This should produce a library called libopenblas_core2-r0.2.8.a (or similar). Make note of the directory in which you compiled OpenBLAS; you will need it later. Mine was “/home/pla/build/xianyi-OpenBLAS-9c51cdf”.
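Since this path is needed in several of the later makefiles, I find it convenient to keep it in a shell variable while working. The hash suffix below is from my checkout and will differ for yours:

```shell
# Directory where OpenBLAS was compiled; the hash suffix is specific to
# my download and will be different for yours.
OPENBLAS_DIR=$HOME/build/xianyi-OpenBLAS-9c51cdf
echo "$OPENBLAS_DIR"
```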


Compiling SCALAPACK

Download the latest SCALAPACK tarball from Netlib. To compile it, we need to set up a file containing some configuration parameters. Start by copying the example file SLmake.inc.example to SLmake.inc. You need to update the BLASLIB and LAPACKLIB variables and insert a direct reference to your OpenBLAS compilation.

CDEFS         = -DAdd_
FC            = mpif90
CC            = mpicc 
NOOPT         = -O0
FCFLAGS       = -O3
CCFLAGS       = -O3
FCLOADER      = $(FC)
CCLOADER      = $(CC)
ARCH          = ar
ARCHFLAGS     = cr
RANLIB        = ranlib
SCALAPACKLIB  = libscalapack.a
BLASLIB       = -L/home/pla/build/xianyi-OpenBLAS-9c51cdf -lopenblas
LAPACKLIB     = $(BLASLIB)

This should be enough to get SCALAPACK to compile by typing “make”. In the end, you should get a libscalapack.a file.

Compiling VASP

Proceed to compile VASP with gfortran according to the previous guide. You need to apply the source code patches described there, otherwise it is straightforward. If you have never compiled VASP before, looking through one of the more detailed system specific guides in the VASP compile section might help.

The makefiles and the source code patch I used are available for download: vasp-ubuntu.tar.gz.

Some highlights (update the paths if necessary):

FFLAGS = -ffree-form -ffree-line-length-0  -fno-second-underscore -I/usr/include

We need to include -I/usr/include to pick up the FFTW header file.
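The FFTW library itself must also appear in the FFT section of the makefile. Mine follows the stock FFTW variant from the VASP makefile templates; note that the multiarch library path is an assumption for a default Ubuntu 13.10 install, so adjust it if your libfftw3.a lives elsewhere:

```
FFT3D   = fftmpiw.o fftmpi_map.o fftw3d.o fft3dlib.o \
          /usr/lib/x86_64-linux-gnu/libfftw3.a
```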

BLAS= ../../xianyi-OpenBLAS-9c51cdf/libopenblas_core2-r0.2.8.a

And refer to the BLAS/LAPACK library from our OpenBLAS installation.

CPP    = $(CPP_) -DMPI  -DHOST=\"LinuxGfort\" \
     -DCACHE_SIZE=4000 -Davoidalloc -DNGZhalf \
     -DMPI_BLOCK=262144 -Duse_collective -DscaLAPACK -DMINLOOP=1  

And set the precompiler flags. In the MPI section of the makefile, there should be a reference to our compiled SCALAPACK:
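Mine points straight at the library we just built (the scalapack directory name here is from my setup and depends on the version you downloaded; in the standard makefiles, $(SCA) is then picked up by the LIB line so it ends up on the link command):

```
SCA     = ../../scalapack-2.0.2/libscalapack.a
```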


Running VASP

The binaries you compile are MPI-enabled, so they should be launched with mpirun. For example:

mpirun -np 4 ~/build/vasp-5.3.3/vasp.5.3/vasp

You will probably find that the --bind-to-core option improves performance.

mpirun -np 4 --bind-to-core ~/build/vasp-5.3.3/vasp.5.3/vasp

If you have a dual-socket workstation, similar to a compute cluster node, I recommend trying:

mpirun -np 16 --bind-to-core --npersocket 8 ~/build/vasp-5.3.3/vasp.5.3/vasp
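To verify that the binding actually took effect, OpenMPI’s --report-bindings flag prints each rank’s pinning at startup (paths as in the examples above):

```shell
# Same dual-socket launch as above, with per-rank binding reported on stderr
mpirun -np 16 --bind-to-core --npersocket 8 --report-bindings \
    ~/build/vasp-5.3.3/vasp.5.3/vasp
```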