Gromacs installations at NSC

Directory list:

2016.4 GROMACS 2016.4
2016.5 GROMACS 2016.5 (plumed)
4.5.5 GROMACS 4.5.5
4.6.1 GROMACS 4.6.1
4.6.3 GROMACS 4.6.3
4.6.7 GROMACS 4.6.7
5.0.4 GROMACS 5.0.4
5.1.2 GROMACS 5.1.2
5.1.4 GROMACS 5.1.4

Level of support

Tier 3: NSC will not be able to help you much with this program, either because we lack in-house experience or because it is a test installation. In general, these installations are untested and will not be updated unless you send a request to NSC.

Please see the page describing our software support categories for more information.
You can also contact NSC for further information.

GROMACS is a software package used to perform molecular dynamics, i.e. to simulate the Newtonian equations of motion for systems with hundreds to millions of particles.

Modern installations on Triolith (5.x and up) use single-precision builds by default and only carry a suffix denoting precision for double-precision builds. All modern GROMACS versions on Triolith are built MPI-parallel; version 5.1.2 and later have PLUMED metadynamics support built in.
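For the PLUMED-enabled builds, a metadynamics run is steered by passing a PLUMED input file to mdrun with the -plumed flag. A minimal sketch, assuming a PLUMED-patched build; the file names plumed.dat and topol.tpr are placeholders:

```shell
# Sketch: metadynamics with a PLUMED-patched GROMACS build.
# "plumed.dat" and "topol.tpr" are placeholder file names.
module load gromacs/5.1.2
mpprun $(type -p gmx_mpi) mdrun -s topol.tpr -plumed plumed.dat
```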

Older Gromacs installations may carry these suffixes:

_s = Single precision version
_d = Double precision version
_mpi_s = MPI parallel version, single precision
_mpi_d = MPI parallel version, double precision
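For those older versions, pick the binary whose suffix matches the precision and parallelization you need. A sketch, assuming a 4.x module; the binary names follow the suffix table above but the exact module version and input file are illustrative:

```shell
# Illustrative: choosing between suffixed binaries in an older (4.x)
# installation. "topol.tpr" is a placeholder input file.
module load gromacs/4.6.7
mdrun_d -s topol.tpr               # serial run, double precision
mpprun mdrun_mpi_s -s topol.tpr    # MPI-parallel run, single precision
```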

How to run

Load the gromacs module corresponding to the version you want to use, e.g.

module load gromacs/5.1.2

Then launch the desired binary. For MPI binaries, use “mpprun”, e.g.

mpprun /path/to/gmx_mpi mdrun ...
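To find the full path of the binary provided by the loaded module, you can use the shell built-in "type -p" (the batch script below uses this idiom as $(type -p gmx_mpi)). A short sketch:

```shell
# After loading the module, gmx_mpi is on $PATH; "type -p" resolves
# it to a full path suitable for passing to mpprun.
module load gromacs/5.1.2
type -p gmx_mpi                      # prints the full path to gmx_mpi
mpprun $(type -p gmx_mpi) mdrun ...
```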

The following example batch script uses the GROMACS ADH (cubic) benchmark and as such leaves out all steps before running the GROMACS preprocessor. The script assumes that the benchmark archive has been unpacked and that the job is submitted from the unpacked directory.

#!/bin/bash
#SBATCH -t 08:00:00
#SBATCH -n 64
#SBATCH -A <Your_SNIC_project_ID>
#SBATCH --exclusive

module load gromacs/5.1.4

# Only use one process for the GROMACS pre-processing step, and keep a
# log of it to help with debugging if needed. This step is best
# performed separately, so that a valid .tpr file is available before
# the trajectory run; it is shown here for completeness.

mpprun -n 1 $(type -p gmx_mpi) grompp \
    -f pme_verlet.mdp \
    -c conf.gro \
    -p topol.top \
    -o mdrun.tpr > my_adh_cubic_run.grompp.log 2>&1

# Now use all processors allocated for the trajectory generation.

mpprun $(type -p gmx_mpi) mdrun \
    -s mdrun.tpr \
    -deffnm my_adh_cubic_run \
    -pin on

Please note that you will need to edit the #SBATCH section (project ID, wall time, and core count) to match your specific needs before submitting.
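Assuming the script above is saved under a name of your choosing (adh_cubic.sh here is a placeholder), it is submitted and monitored with the usual Slurm commands:

```shell
sbatch adh_cubic.sh    # submit the job; prints the assigned job ID
squeue -u $USER        # check the job's state in the queue
```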