Shared Memory Communication vs Infiniband

Are bigger servers better for high-performance computing? It is often assumed that communication between processors within a compute node must be faster than communicating over Infiniband in a cluster. Consequently, I come across scientists asking for big shared memory servers, believing that their simulations would run much faster there. In reality, that is not always the case. In the previous post, I wrote about how the VASP application is bottlenecked by memory bandwidth. In such cases, compute work and communication compete with each other for precious resources, with severe performance degradation as a result.

Consider this experiment. Let us first run a VASP compute job with 512 bands on a single compute node using 1 to 16 cores. This will give us a scaling graph showing what kind of improvement you can get by using more cores in a server. Now, for comparison, we run the same job but with only one core per compute node. This means that the 16-core job uses 16 compute nodes and only communicates over Infiniband. Which scenario will be faster?

VASP SMP vs Infiniband scaling

It turns out that communicating only over Infiniband is superior to using shared memory. With 16 cores, the calculation runs twice as fast. The reason is simply that we throw more hardware at the problem: each processor can now use all of its memory bandwidth for computation, while exchanging data over the Infiniband network instead.

The graph above shows that there is no inherent benefit to running an MPI-parallelized application such as VASP on a big shared memory server rather than on smaller servers in a cluster connected by Infiniband. The only advantage you get is the increased total amount of memory available per server.

As a user, you can apply this insight to speed up your calculations, for example by using twice as many nodes but only half the number of cores on each node. On Triolith, you can do it like this:

#SBATCH -N 16
#SBATCH --ntasks-per-node 8

mpprun /software/apps/vasp/5.3.3-18Dec12/default/vasp

This will run your application with 8 cores per compute node, for a total of 128 cores. The improvement in speed compared to using 8 full nodes can be as much as 50%, and you will also have twice as much memory available per processor. The drawback is, of course, that you spend twice the core hours on your calculation. But if it is important to get the results quickly, it might be worth it.

Hardware Recommendations for VASP

I was recently asked what kind of hardware you should buy for running VASP calculations. I recommend looking at the configuration of the Triolith cluster at NSC. It was designed with VASP as a big part of the workload, and we did extensive benchmarking to optimize price/performance. An alternative is to look through the most recent entries on the Top500 supercomputing list, to get a feel for what the supercomputing centers are buying at the moment. Top500 also has a statistics section where you can follow hardware trends over time. It is obvious, for example, that big shared memory systems have fallen out of favor.

What I recommend is dual-socket servers with low clock frequency Intel Xeon E5 processors, connected by Infiniband networking. Why?

  • Type of CPU: At the moment, Intel’s server processors are significantly ahead of AMD’s in terms of performance and energy efficiency. VASP is also very reliant on Intel’s Fortran compiler. This makes Intel processors an optimal choice. It is possible to run VASP on POWER and SPARC processors, but these server platforms are generally not cost efficient for high-performance computing.

  • CPU model: VASP is very dependent on high memory bandwidth. It is one of the most memory intensive applications in high-performance computing. This means that you do not need processors with high clock frequency, because they will spend most of their time waiting for data to arrive from memory anyhow. What you need is the cheapest processor model that still comes with maximum memory bandwidth. In the current Xeon E5-2600 series, that is likely to be the 2650 and 2660 models. The quad-core 2643 model could also be interesting, because you typically gain only 25% when going from 8 to 16 cores per node.

  • Memory: It is crucial to have 1600 MHz DDR3 memory, which is currently the fastest available (1866 MHz is coming soon). All four memory channels should be occupied. Try to get dual-rank memory modules if you have only 1 DIMM per memory channel – it will improve performance by ca 5%. Typically, 32 GB of memory is enough (2 GB/core), the reason being that you can easily get the equivalent of 4 GB per core by running with half the number of MPI ranks per node without losing too much performance. But if the majority of jobs are of hybrid type or GW, I would go for 64 GB per server instead.

  • Network: A fast network, like Infiniband, is necessary to run VASP in parallel on more than a few nodes. It is difficult to do comparative studies of different network setups due to the amount of hardware required, but in general VASP scales almost perfectly up to 16 compute nodes using both Mellanox and Qlogic (now Intel) Infiniband, so there is no given winner. FDR Infiniband does not significantly improve performance over QDR for VASP in the few tests I was able to do (+16% on a 16 node job spanning two switches), so I would look at it mostly from a price/performance perspective.

Aflow and Aconvasp

Occasionally, I get enquiries about various kinds of scripts for pre- and post-processing of VASP calculations. A comprehensive set of scripts covering the most common tasks is available within the Aflow high-throughput framework for VASP, developed by the Curtarolo group at Duke University.

To use Aflow on Triolith, just load the “vasptools/0.2” module; it will put the aconvasp binary and other scripts such as “vasp2cif” and “vaspcheck” into your PATH.

module load vasptools/0.2

Here are some examples of what you can do with Aflow:

  • aconvasp --cart: convert POSCAR from direct to Cartesian coordinates
  • aconvasp --data: show basic structure data such as volume, alpha, beta, gamma, etc.
  • aconvasp --volume 170.0: change the volume of the cell to 170 Å³.
  • aconvasp --clat 5.0 5.0 7.0 90.0 90.0 120.0: convert (a,b,c,alpha,beta,gamma) to Cartesian basis vectors which can be copy-pasted into POSCAR.
  • aconvasp --chgdiff CHGCAR.1 CHGCAR.2: subtract charge densities in CHGCAR files. (But it seems to be broken when I test it.)
  • aconvasp --supercell 2 2 2: make supercell out of an existing POSCAR.
  • aconvasp --swap 0 1: swap coordinates of atomic species 0 and 1 in the POSCAR file.
  • aconvasp --spacegroup: spacegroup and symmetry detection.
  • aconvasp --cif: generate a CIF file from POSCAR.
  • aconvasp --xyz: generate an xyz file from POSCAR.

You can find more information in the full documentation. If you use aconvasp or aflow, don’t forget to cite their paper:

S. Curtarolo, W. Setyawan, G. L. W. Hart, M. Jahnatek, R. V. Chepulskii, R. H. Taylor, S. Wang, J. Xue, K. Yang, O. Levy, M. Mehl, H. T. Stokes, D. O. Demchenko, and D. Morgan, AFLOW: an automatic framework for high-throughput materials discovery, Comp. Mat. Sci. 58, 218-226 (2012).

Auto-parallelization Applied to the Elk LAPW Code

This week we are running a course in parallel programming with OpenMP at NSC. Joachim Hein from LUNARC is teaching and a few of us from NSC are helping out with the programming labs.

It is often said that parallel programming can be incredibly hard, and that there is currently no reliable way to automatically parallelize an existing serial program. This statement is still true in general, but sometimes, parallel programming can also be embarrassingly easy. Why? Because while automatic parallelization is not perfect, it can still give you some improvement. There are also many subroutines in BLAS, LAPACK and FFTW that are already parallelized, and since many programs rely on these libraries, they can see speed-up on multicore processors by just linking the right library version and setting OMP_NUM_THREADS=X in the shell.
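For example, if a program is linked against a threaded BLAS/LAPACK such as MKL, running it on several cores can be as simple as this (a minimal sketch; the program name is hypothetical):

export OMP_NUM_THREADS=16
./myprogram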

Let us consider the Elk FP-LAPW code. It is written in Fortran 90 and has already been parallelized using both OpenMP and MPI. But what could we have done in the hypothetical case of starting out with the serial version of Elk? How good is automatic parallelization? It will surely not get us all the way, but every percent counts, because you essentially get it for free. It is merely a question of finding the relevant compiler flags.

To establish a baseline, I compiled Elk without any special compiler flags or machine-optimized numerical libraries. This may seem naive and unrealistic, but in reality, it is not uncommon to come across scientific software built without any compiler optimization flags or optimized linear algebra libraries such as GotoBLAS, MKL, or ATLAS. (In my experience, it is not so much a result of ignorance, but rather of technical problems with compilation and/or lack of time for tinkering.)

The test case I am using is the YBCO example distributed with Elk (13 atoms) with the rgkmax parameter increased to 7.0 to get longer runtime.

The first step in our hypothetical example is to simply add “-O3” optimization. This gives us a 9% speed boost. The next crucial step is to replace the bundled BLAS, LAPACK and FFT libraries with Intel’s MKL libraries, which improves the speed by 27%. And finally, we activate the compiler’s automatic threaded parallelization, which gives us +80%. The results can then be compared with the production version of Elk for Triolith, which uses aggressive compiler optimizations, MKL libraries, and manual OpenMP parallelization.
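In this hypothetical scenario, the changes would go into Elk's make.inc file. A sketch of what it could look like with the Intel compiler (the variable names follow the Elk make.inc of that generation, and the exact MKL link line depends on your MKL version, so treat this as an illustration rather than a recipe):

F90 = ifort
F90_OPTS = -O3 -parallel      # -O3 optimization plus the compiler's auto-parallelization
LIB_LPK = -mkl=parallel       # threaded MKL instead of the bundled BLAS/LAPACK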

Elk OpenMP scaling

We can see that automatic parallelization gives a modest speed-up of 1.7x using 16 cores on a Triolith compute node. Still, this is not too bad compared with the OpenMP parallelized version which gets 5.0x over the serial version in total, but only 2x of that is actually due to the OpenMP constructs in the code. So essentially, we get half of the parallelization done automatically for free, without having to change any Fortran code.

Another way of looking at this graph is that it can really pay off to spend some time looking into the best way to compile and link a program. The optimized auto-parallel version is 2.5x faster than the naive version built just from the Elk source with integrated numerical libraries.

Most of the tricks I used to compile Elk in this example are listed in the Triolith User Guide. If you encounter problems compiling your own program on Triolith, or need help with choosing the best libraries, please don’t hesitate to contact support@nsc.liu.se.

Atomic Simulation Environment on Triolith

Yesterday, I installed a new version of the Atomic Simulation Environment on Triolith. ASE allows you to script your ab initio calculations using Python. Here comes a tutorial on how you can run VASP calculations with ASE on Triolith. It is a little bit more elaborate than the official documentation of the VASP module, which can be found on the ASE web site.

First, we need to load the Python and ASE modules. I recommend the python/2.7.4-snic-1 module.

module load python/2.7.4-snic-1 ase/3.7.1

Next, we need to tell ASE’s VASP module where to find the VASP binaries and the POTCAR files.

export VASP_COMMAND="mpprun /software/apps/vasp/5.3.3-18Dec12/default/vasp"
export VASP_PP_PATH=/software/apps/vasp/POTCARs

Note that ASE requires that there are directories called potpaw, potpaw_GGA and potpaw_PBE below the VASP_PP_PATH, so you may have to create symlinks with these names if you don’t have them in your own database.
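For example, if your POTCAR directories have different names, you could set up the expected layout with symlinks (the source directory names below are hypothetical):

cd $VASP_PP_PATH              # or wherever you keep your own POTCAR database
ln -s potpaw.52 potpaw        # hypothetical names of your existing directories
ln -s potpaw_GGA.52 potpaw_GGA
ln -s potpaw_PBE.52 potpaw_PBE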

Now, we need to create the actual ASE Python script that sets up the calculation. The idea in this little example is to calculate the equilibrium volume and bulk modulus of hcp Mg.

mg-hcp.py:
import numpy
from ase.structure import bulk
from ase.calculators.vasp import Vasp
from ase.units import GPa
from ase.utils.eos import EquationOfState

jobs = []

# Generate cell configurations
for volume in numpy.linspace(0.90,1.10,10):
  mg = bulk('Mg','hcp',a=3.1,covera=1.62)
  mg.set_cell(mg.get_cell()*volume,scale_atoms=True)

  calc = Vasp(prec = 'Accurate',
            xc = 'PBE',
            lreal = False,
            encut = 150.0,
            istart =0,
            icharg= 2,
            nelm = 20,
            kpts = [6,6,6],
            ismear = 1,
            sigma = 0.2,
            algo = "fast")

  mg.set_calculator(calc)

  jobs.append(mg)

# Run jobs
volumes = [job.get_volume() for job in jobs]
energies = [job.get_potential_energy() for job in jobs]

print "Calculation output"
print "------------------"
print "Volumes:", volumes
print "Energies:", energies
print

# Fit EOS
eos = EquationOfState(volumes,energies)

v0,e0,B = eos.fit()

print "Equation of state parameters"
print "----------------------------"
print "E0: %2.6f eV" % (e0)
print "V0: %2.1f A^3" % (v0)
print "B: %2.1f GPa" % (B/GPa)

The script works like this: first, we create cells for 10 different volumes and at the same time attach VASP settings to them, then we call get_potential_energy() for each of these cells and collect the energies and volumes, and finally, we use ASE’s built-in equation of state subroutine to calculate the equilibrium volume and bulk modulus.

To run this script on a Triolith compute node, we need to create a job script. In the job script, we just launch the Python script, which will start ASE and then launch VASP for us.

#!/bin/bash
#SBATCH -J mg 
#SBATCH -N 1
#SBATCH --exclusive
#SBATCH -t 0:10:00

python mg-hcp.py

Submit with sbatch as usual.

sbatch mg.sh

When the job has completed (it should not take more than a few minutes), look in the slurm.out file for the output from ASE.

[pla@triolith1 asetest]$ cat slurm-693921.out 
Calculation output
------------------
Volumes: [30.469003876435877, 32.782153276649034, 35.209509626875203, 37.753824901813466, 40.417851076162975, 43.204340124622782, 46.116044021892051, 49.155714742669836, 52.326104261655324, 55.629964553547552]
Energies: [-1.86460718, -2.2747034, -2.5757448, -2.78654685, -2.92182549, -2.99416931, -3.01374966, -2.99022215, -2.93151648, -2.84418079]

Equation of state parameters
----------------------------
E0: -3.013844 eV
V0: 45.9 A^3
B: 36.5 GPa

The experimental bulk modulus of Mg is ca 45 GPa, so the result seems reasonable.

Compiling VASP With Gfortran

For some time, VASP has been centered on x86 processors and Intel’s Fortran compiler. Inside the VASP source distribution, you can find makefiles for other compilers, but they seldom work nowadays, and in many cases you need to modify the source code to get it to compile with other compilers.

In particular, recent versions of Gfortran cannot compile VASP out of the box. If you try, the compiler stops with errors about a circular dependency of a module (i.e. the module includes itself) and some output formatting errors.

From what I can understand, these problems are actually related to violations of the Fortran language standard, which are allowed by the Intel compiler. There are no compiler flags for gfortran that let you “relax” the standard like this to let it compile VASP, so you need to modify the source to make it compliant.

When I tested with gcc 4.7.2 and gcc 4.8.0, four files needed to be modified: us.F, vdwforcefield.F, finite_diff.F, and spinsym.F. I have prepared the patches as a “patch file” which you can download. To apply the patches to the source code, locate your VASP 5.3.3 source code directory and do

cd vasp.5.3
patch -p0 < vasp533gcc.patch

In the makefile, you need to set the following compiler flags for gfortran.

FC = mpif90                  # or similar, depending on the MPI library
FFLAGS = -ffree-form -ffree-line-length-0 -fno-second-underscore
OFLAG = -O3 -march=corei7-avx -mtune=corei7-avx

Global -O3 optimization seems to work for me on Triolith (Xeon E5 processors), but I haven’t tested all functionality of the gfortran version yet. As with the Intel compiler, you may have to decrease the optimization or disable aggressive inlining in certain files.

In the preprocessor section, put something like this. Note that you should not use the -DPGF90 flag when compiling with gfortran.

CPP     = $(CPP_) -DHOST=\"NSC-GFORTRAN-B01\" -DMPI \
          -DMPI_BLOCK=262144 \
          -Duse_collective -DCACHE_SIZE=12000 -Davoidalloc -DNGZhalf

These tricks did it for me, and I now have a reference version of VASP compiled with Gfortran on Triolith. The speed seems to be about the same as when compiled with Intel Fortran, since VASP relies heavily on FFT and BLAS calls, and I still link with MKL and Intel’s MPI.

Later, I will try to write a longer guide on how to compile VASP with a fully free software stack, and compare performance and stability.

Scaling of Small Jobs on the Abisko Interlagos Cluster

I promised some multi-node scaling tests of the Li2FeSiO4 128-atom job in the previous post. Here they come! The choice of NPAR is of particular interest. Do the old rules of NPAR=compute nodes or NPAR=sqrt(number of MPI ranks) still apply here?

To recap: when running on one node, I found that NPAR=3 with 24 cores per compute node and a special MPI process binding scheme (round-robin over NUMA zones) gave the best performance. To check if it still applies across nodes, I ran a full characterization again, but this time with 2 compute nodes. In total, this was 225 calculations!

Speed of VASP on Abisko as a function of NPAR and binding

Inspecting the data points shows us that the same approach comes out winning again. Using 24 cores per compute node is still much more effective (+30%) than using all the cores, and NPAR=6 is the best choice. Specifying process binding is essential, but the choice of a particular scheme does not matter as much as in the single-node case, presumably because some of the load imbalance now happens between nodes, which we cannot address this way.

From this I conclude that a reasonable scheme for choosing NPAR indeed seems to be:

NPAR = 3 * compute nodes

Or, if we have a recent version of VASP:

NCORE = 8

The “RR-NUMA” process binding has to be specified explicitly when you start VASP on Abisko:

srun --cpu_bind=map_cpu=0,6,12,18,24,30,36,42,2,8,14,20,26,32,38,44,4,10,16,22,28,34,40,46 /path/to/vasp

When using these settings, the parallel scaling for 1-8 compute nodes looks decent up to 4 nodes:

Inter-node scaling on Abisko

Remember that each node has 48 cores, of which we are using 24, so 4 nodes = 96 MPI ranks. We get a top speed of about 30 Jobs/h. But what does this mean? It seems appropriate to elaborate on the choice of units here, as I have gotten questions about why I measure the speed like this instead of using wall time as a proxy for speed. The reason is that you can interpret the “Speed” value on the y-axis as the number of geometry optimization steps you could run in one hour of wall time on the cluster, which is directly relevant when doing production calculations. For example, 30 Jobs/h corresponds to a wall time of 3600/30 = 120 seconds per step.

For reference, we can compare the speeds above with Triolith. On Triolith, the same job (but with 512 bands instead of 480) tops out at about 38 Jobs/h with 16 compute nodes and 256 ranks. So the parallel scaling looks a bit weak compared to Triolith, but the absolute time to solution is still good.

VASP on the Abisko Cluster (With AMD Interlagos)

When Triolith had a service stop recently, I took the opportunity to explore the Abisko cluster at HPC2N in Umeå. Abisko has 4-socket nodes with 12-core AMD Opteron “Interlagos” processors and lots of memory. Each compute node on Abisko features an impressive 500 gigaflop/s of theoretical compute capability (compared to e.g. Triolith’s Xeon E5 nodes with 280 gigaflop/s). The question is how much of this performance we can get in practice when running VASP, and how to set important parallelization parameters such as NPAR and NSIM.

Summary of findings

  • One Abisko node is about 20% faster than one Triolith node, but you have to use 50% more cores and twice the memory bandwidth to get the job done.
  • You should run with 24 cores per node. More specifically, one core per Interlagos “module”.
  • It is imperative that you specify MPI process binding explicitly either with mpirun or srun to get good speed.
  • Surprisingly, process binding in a round-robin scheme over NUMA zones is preferable to straight sequential binding of 1 rank per module. (Please see below for the binding masks I used)
  • NPAR: MPI ranks should be in groups of 8; this means NPAR=3*nodes.
  • NSIM: 8, which brings you +10% performance vs the default choice.

Background on AMD Interlagos

To understand the results, we first need to have some background knowledge about the Interlagos processors and how they differ from earlier models.

The first aspect is the number of cores vs the number of floating-point units (FPU:s). The processors in Abisko are marketed by AMD as having 12 cores, but in reality there are only 6 FPU:s, each of which is shared between 2 cores (a pair called a “module”). So I consider them more like 6-core processors capable of running in two modes: either a “fat” mode with 1 thread doing 8 flops/cycle or a “thin” mode with 2 threads doing 4 flops/cycle each. Which one is better will depend on the mix of integer and floating point instructions. In a code like VASP, which is heavily dependent on floating point calculations and memory bandwidth, I would expect running with 6 threads to be better, because there is always some overhead involved in using more threads.

The second aspect to be aware of is the memory topology. Each node on Abisko has 48 cores, but they are separated into 8 groups, each of which have their own local memory. Each core can still access memory from everywhere, but it is much slower to read and write memory from a distant group. These groups are usually called NUMA zones (or nodes, or islands). We would expect that we need to group the MPI processes by tweaking the NPAR parameter to reflect the NUMA zone configuration. Specifically, this means 8 groups of 6 MPI ranks per compute node on Abisko, but more about that later.
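If you want to check the NUMA layout of a compute node yourself, the numactl utility (assuming it is installed on the node) prints the zones and which cores and how much memory belong to each:

numactl --hardware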

Test setup

Here, we will be looking at the Li2FeSiO4 supercell test case with 128 atoms. I am running a standard spin-polarized DFT calculation (no hybrid), which I run to self-consistency with ALGO=fast. I adjusted the number of bands to 480 to better match the number of cores per node.

Naive single node test

A first (naive) test is to characterize the parallel scaling in a single compute node, without doing anything special such as process binding. This produced an intra-node scaling that looks like this after having tuned the NPAR values:

Intra-node scaling on Abisko

Basically, this is what you get when you ask the queue system for 12, 16, 24, 36, or 48 cores on 1 node with exclusive access (no other jobs on the same node), and just launch VASP with srun $VASP in the job script. We see that we get nice intra-node scaling. In fact, it is much better than expected, but we will see in the next section that this is an illusion.

The optimal choice of NPAR turned out to be:

12 cores NPAR=1
16 cores NPAR=1
24 cores NPAR=3
36 cores NPAR=3
48 cores NPAR=6

This was also surprising, since I had expected NPAR=8 to be optimal. With that setting, there would be MPI process groups of 6 ranks, which exactly fit in a NUMA zone. Unexpectedly, NPAR=6 seems optimal when using all 48 cores, and either NPAR=1 or NPAR=3 for the other cases. This does not fit the original hypothesis, but a weakness in our analysis is that we don’t actually know where the processes end up in this scenario, since there is no binding. The only way you can get a symmetric communication pattern with NPAR=6 is to place ranks in a round-robin scheme around each NUMA zone or socket. Perhaps this is what the Linux kernel is doing? An alternative hypothesis is that the unconventional choice of NPAR creates a load imbalance that may actually be beneficial, because it allows for better utilization of the second core in each module. To explore this, I decided to test different binding scenarios.

The importance of process binding

To bind MPI processes to a physical core and prevent the operating system from moving them around inside the compute node, you need to give extra flags to either srun or your MPI launching command such as mpirun. On Abisko, we use srun, where binding is controlled through SLURM by setting e.g. in the job script:

srun --cpu_bind=rank ...

This binds the MPI rank #1 to core #1, and so on in a sequential manner. It is also possible to explicitly specify where each rank should go. The following example binds 24 ranks to alternating cores, so that there is one rank running per Interlagos module:

srun --cpu_bind=map_cpu=0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46 ...

In this scheme, neighboring ranks are close to each other: i.e. rank #1 and #2 are in the same NUMA zone. The aim is to maximize the NUMA locality.

The third type of binding I tried was to distribute the ranks in a round-robin scheme in steps of 6 cores. The aim is to minimize NUMA locality, since neighboring ranks are far apart from each other, i.e. rank #1 and #2 are in different NUMA zones.

srun --cpu_bind=map_cpu=0,6,12,18,24,30,36,42,2,8,14,20,26,32,38,44,4,10,16,22,28,34,40,46 ...

Below are the results when comparing the speed of running with 48 cores and 24 cores with different kinds of process binding. The 48 core runs are with NPAR=6 and the 24 cores with NPAR=3.

Speed vs process binding on Abisko

It turns out that you can get all of the performance, and even more, by running with 24 cores in the “fat” mode. The trick is, however, that we need to enable the process binding ourselves. It does not happen by default when you run with half the number of cores per node (the “None” section in the graph).

We can further observe that straight sequential process binding actually worsens performance in the 48-core scenario. Only in the round-robin NUMA scheme (“RR-NUMA”) can we reproduce the performance of the unbound case. This leads me to believe that running with no binding gets you into a similar situation with broken NUMA locality, which explains why NPAR=3/6 is optimal, and not NPAR=4.

The most surprising finding, however, is that the top speed was achieved not with the “alternate” binding scheme, which emphasizes NUMA memory locality, but rather with the round-robin scheme, which breaks the memory locality of NPAR groups. The difference in speed is small (about 3%), but statistically significant. There are few scenarios where this kind of interleaving over NUMA zones is beneficial, so I suspect that it is not actually a NUMA issue, but rather related to memory caches. The L3 cache is shared between all cores in a NUMA zone, so perhaps it is being thrashed when all the ranks in an NPAR group access it? It would be interesting to try to measure this effect with hardware counters…

NSIM

Finally, I also made some tests with varying NSIM:

Speed vs NSIM on Abisko

NSIM=4 is the default setting in VASP. It usually gives good performance in many different scenarios. NSIM=4 works on Abisko too, but I gained about 7% by using NSIM=8 or 12. An odd finding was that NSIM=16 completely crippled the performance, doubling the wall time compared to NSIM=4. I have no good explanation, but it obviously seems that one should be careful with too high NSIM values on Abisko.

Conclusion and overall speed

In terms of absolute speed, we can compare with Triolith, where one node with 16 cores can run this example in 380 s (9.5 jobs/h) with 512 bands, using the optimal settings of NPAR=2 and NSIM=1. So the overall conclusion is that one Abisko node is roughly 20% faster than one Triolith node. You can easily become disappointed by this when comparing the performance per core, which is 2.5x higher on Triolith, but I think it is not a fair comparison. In reality, the performance difference per FPU is more like 1.3x, and if you compensate for the fact that the Triolith processors in reality run at a much higher frequency than the listed 2.2 GHz, the true difference in efficiency per core-GHz is closer to 1.2x.

Hopefully, I can make some multi-node tests later and determine whether running with 24 cores per node and round-robin binding is the best thing there as well.

Making Supercells for VASP

I often get questions on how to make supercells for VASP calculations. The problem is typically that you have a structure in a POSCAR file and want to expand it to a bigger supercell to study e.g. defects. There are many programs available that can perform this task, like PHON and Phonopy, and if you google, you can find many scripts, usually called “vasputil”, that do this specifically. Many people in your research group have probably written their own program as well.

What I really recommend, however, is to not reinvent the wheel and instead use already available libraries to analyze and work with your ab initio calculations. One such library is the Atomic Simulation Environment (ASE) for Python, which supports many programs, including VASP. With ASE, you can do really cool stuff like making small Python programs which read your VASP input/output and then work on them programmatically. In fact, with ASE, it is almost trivial to make supercells. You can do it with 3 lines of Python code:

import ase.io.vasp
cell = ase.io.vasp.read_vasp("POSCAR")
ase.io.vasp.write_vasp("POSCAR.4x4x4",cell*(4,4,4), label='444supercell',direct=True,sort=True)

The code above reads a POSCAR file in the current working directory, transforms it into a 4x4x4 supercell and writes the result to disk as “POSCAR.4x4x4”. Note that to run this on Triolith, you need to have the ASE module loaded:

module load ase/3.6.0

I thought this trick might be useful, so I made a script that can perform this procedure available in the vasptools module on Triolith. The script is called supersize and takes a POSCAR file as the first argument, and the supercell sizing as the second:

$ supersize POSCAR 4x4x4

The output is a new POSCAR file called “POSCAR.4x4x4” containing the supercell repeated 4 times in a,b,c directions.
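If you prefer to roll your own, a minimal sketch of such a wrapper could look like the following (this is a hypothetical stand-in, not the actual supersize source; it assumes the size argument is given in the form 4x4x4):

import sys
import ase.io.vasp

# Usage: python supersize.py POSCAR 4x4x4   (hypothetical script name)
poscar, size = sys.argv[1], sys.argv[2]
nx, ny, nz = [int(n) for n in size.split("x")]

# Read the structure, repeat it, and write e.g. POSCAR.4x4x4
cell = ase.io.vasp.read_vasp(poscar)
ase.io.vasp.write_vasp(poscar + "." + size, cell*(nx,ny,nz),
                       label=size + " supercell", direct=True, sort=True)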

Quantum Espresso vs VASP (Round 2)

In the second round, I wanted to do a medium-sized, more complicated system. The original plan was to run the Li2FeSiO4 supercell with spin polarization, which I have run extensively in VASP before. It is a nontrivial example from my previous research, and it can be tricky to get fast convergence to the right ground state. Unfortunately, I failed at getting the Li2FeSiO4 system to run. PWscf kept crashing, despite much tinkering, and all I got was the following error message:

Error in routine rdiaghg (1539): S matrix not positive definite

The QE documentation does mention these kinds of errors, saying that they are related to negative charge densities in the cores, which is basically due to either an unreasonable crystal structure or a poor choice of pseudopotentials. Standard tricks like increasing the ecutrho parameter or changing the diagonalization algorithm did not help either, so my guess is that something was wrong with the available PAW datasets. The all-electron Li PAW, for example, comes with a suggested plane-wave cutoff of 1100 eV, compared with 272 eV in VASP. I am not sure it is numerically sane to mix it with the other datasets, which have much lower cutoffs. I have seen this give rise to numerical instabilities in VASP, for example.

New test case: Fe-N-doped graphene

Instead, I constructed a 128-atom supercell of graphene. I inserted an Fe cation site coordinated by four pyridinic sites, to make it a little bit more exciting and also have a reason to do spin polarization.

N-Fe-doped graphene

The PAW datasets were:

Fe.pbe-spn-kjpaw_psl.0.2.1.UPF (16 valence)
C.pbe-n-kjpaw_psl.0.1.UPF (4 valence)
N.pbe-n-kjpaw_psl.0.1.UPF (5 valence)

In total, this system has 127 atoms and 524 electrons, so 512 bands per spin channel is a nice and even number here. There is a similar issue with plane-wave cutoff as in the previous example. I set it to 500 eV to compare with the VASP calculation. It could be argued that it is artificially low, and that a real production calculation with QE using the PAW potentials would need to have a bigger cutoff.

A 500 eV basis requires an FFT grid of 144x144x72 points, which in the case of VASP means that an optimal plane-wise decomposition of the FFTs can be achieved for 1, 2, 4, 8, and 12 compute nodes by using NPAR=1, 2, 4, 8, and 12, respectively. If I understand the PWscf documentation correctly, 72 FFT planes in the z-direction means that we should be able to scale up to 2x72 MPI ranks, since we have spin polarization (2 “effective” k-points), and that we are also likely to be helped by FFT task group parallelization using the -ntg command line flag.

I ran the jobs on 1, 2, 4, 8, 12, and 16 Triolith compute nodes, using all available cores (16 cores per node). For VASP, the optimal setting is band parallelization using NPAR=compute nodes and LPLANE=.TRUE. All PWscf calculations were run with -npool 2, which activates k-point parallelization, together with several combinations of -ntg and -ndiag, which control FFT task parallelization and SCALAPACK linear algebra parallelization. There is experimental support for band parallelization in QE (the -nband flag), but it either crashed the program or ran horribly slowly, so the results below use the standard parallelization options.
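For the VASP runs, those settings correspond to INCAR lines like the following (shown for the 8-node case, where NPAR equals the number of compute nodes):

NPAR = 8          ! one band group per compute node
LPLANE = .TRUE.   ! plane-wise data distribution of the FFTs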

Results

Fe-N-doped graphene QE vs VASP speed

VASP scales acceptably up to 12 nodes / 192 cores, whereas QE only has decent scaling from 1 to 2 nodes. I believe the reason is that VASP has band parallelization, but QE does not. To test this theory, I ran the VASP jobs with as low NPAR as possible, which is shown as the blue dotted line. This meant NPAR=1 (no band parallelization) for 1-8 nodes, and NPAR=2 for 12-16 nodes. The parallel scaling is much worse then, and essentially flat from 8 nodes and upwards, which is similar to the QE results.

In terms of absolute performance, VASP and QE are tied again when running on 16 and 32 cores, with PWscf actually being about 10% faster on 32 cores. But when comparing the top speed, VASP achieves at least 25 Jobs/h with 16 nodes vs. 10 Jobs/h with PWscf on 8 nodes. So we are looking at half the time to solution with VASP.

Another purpose of this study was to characterize the parallelization settings for QE when running on Triolith. The best parallelization settings for this system turned out to be:

Nodes  -ntg  -ndiag
1      1     16
2      1     16
4      2     16
8      2     16
12     4     16
16     4     16
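As a concrete example, a 4-node PWscf job on Triolith with the settings above could be launched like this (a sketch; the input and output file names are hypothetical):

#SBATCH -N 4
#SBATCH --ntasks-per-node 16

mpprun pw.x -npool 2 -ntg 2 -ndiag 16 -input graphene.scf.in > graphene.scf.out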

FFT task groups (-ntg) seem to be necessary at higher core counts, just as suggested in the QE manual. The rule of thumb in the manual is to enable -ntg when the number of cores exceeds the number of FFT mesh points in the z direction, which seems accurate in this case.

I found the performance curve for SCALAPACK parallelization very flat for ndiag=16/25/36, so I was unable to resolve any difference with just 3-5 samples per point, but it seems like the performance flattens out above 16 cores for this system. Diagonalizing a 512x512 matrix is not that big of a task in the context of SCALAPACK, so this is not surprising.

Mixing and SCF stability turned out to be an influential factor in the comparison between VASP and PWscf. The default mixing scheme in VASP is very good and can converge the graphene system studied here in ca 32 SCF iterations using default settings, but getting down to that level with PWscf required tuning the mixing beta (0.2) and changing the mixing mode from plain to local-TF.
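In PWscf, those mixing settings go into the &ELECTRONS namelist of the input file, something like this (a sketch showing only the two parameters discussed; everything else is left at its default):

&ELECTRONS
    mixing_mode = 'local-TF'
    mixing_beta = 0.2
/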

Conclusion

When it comes to bigger, more realistic calculations, PWscf is not as straightforward to work with as VASP. This is due to a combination of the robustness and availability of PAW datasets and the amount of parameter tuning necessary to get decent performance. The speed is on par with VASP for 1-2 compute nodes, but VASP has much faster and more predictable parallel scaling beyond that. It was a surprise to me not to find working band parallelization in PWscf.