OpenMX installations at NSC


Directory list:

Version    Description
3.7.6      OpenMX 3.7.6

Level of support

Tier 2: NSC has only limited experience with this software, but we will try to help as much as possible. We have run some tests (for example, using the program's test suite if one is provided), but they may be far from exhaustive. We will try to install and test new versions as soon as we can.

Please see the page describing our software support categories for more information.
You can also contact support@nsc.liu.se for further information.

The OpenMX installations are generally maintained by Peter Larsson (pla@nsc.liu.se).

How to run

Typically, you want to run OpenMX in hybrid MPI/OpenMP mode. This requires some care when writing the batch script. For example, to run on 256 cores using 128 MPI ranks and 2 OpenMP threads per rank, we request 16 nodes (16x16=256 cores) and ask for two cores to be assigned to each task (=rank):

#!/bin/bash
#SBATCH -J jobname
#SBATCH -A SNIC-xxx-yyy
#SBATCH -N 16
#SBATCH --cpus-per-task=2 

export OMP_NUM_THREADS=2
mpprun /software/apps/openmx/3.7.6/default/openmx input.dat -nt 2
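The rank/thread arithmetic above can be sanity-checked with a short shell snippet. The 16-cores-per-node figure is taken from the example above; adjust it for your node type:

```shell
# Sanity check of the hybrid layout arithmetic from the batch script above.
# Assumes 16 cores per node, as in the example.
NODES=16
CORES_PER_NODE=16
THREADS=2                                   # OpenMP threads per MPI rank
RANKS=$(( NODES * CORES_PER_NODE / THREADS ))
echo "${RANKS} MPI ranks x ${THREADS} threads = $(( RANKS * THREADS )) cores"
# prints: 128 MPI ranks x 2 threads = 256 cores
```

Make sure OMP_NUM_THREADS and the value passed to OpenMX with -nt agree with this arithmetic, otherwise cores will be oversubscribed or left idle.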

Pseudopotentials

The pseudopotential folder, called DFT_DATA13, is installed here:

/software/apps/openmx/DFT_DATA13

You need to specify the path in the input file, like this:

Data.Path /software/apps/openmx/DFT_DATA13
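For context, here is a sketch of where Data.Path sits in a minimal input file. The species line is illustrative only: the basis set specification and pseudopotential name must match the files actually present in the pseudopotential library you point Data.Path at, so check them against your copy of the library before running.

```text
System.Name                  example
Data.Path                    /software/apps/openmx/DFT_DATA13

Species.Number               1
<Definition.of.Atomic.Species
  H   H5.0-s2p1   H_PBE13
Definition.of.Atomic.Species>
```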

The DFT_DATA13 folder may not be regularly updated or checked for correctness, so I advise users to download and maintain their own pseudopotential library.

Performance considerations

Parallel benchmarking is ongoing; so far, I can only say that hybrid MPI/OpenMP mode seems faster than pure MPI on the same number of cores. I would expect, though, that you need to run at least 2 MPI ranks per node (one per NUMA zone) to get good performance.
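A layout with one MPI rank per NUMA zone could be requested with a batch script along the following lines. This is a sketch, not a benchmarked recipe: it assumes 16-core nodes with two NUMA zones, so 2 ranks per node with 8 threads each; adjust the numbers to your node type.

```shell
#!/bin/bash
# Sketch: one MPI rank per NUMA zone on assumed 16-core, two-zone nodes.
#SBATCH -J jobname
#SBATCH -A SNIC-xxx-yyy
#SBATCH -N 16
#SBATCH --ntasks-per-node=2
#SBATCH --cpus-per-task=8

export OMP_NUM_THREADS=8
mpprun /software/apps/openmx/3.7.6/default/openmx input.dat -nt 8
```

With 16 nodes this again uses 256 cores, but as 32 MPI ranks with 8 OpenMP threads each; compare its timing against the 128-rank layout above for your own system before settling on one.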