Build environment

NSC provides some utilities that make it easier to compile, link and run programs: the compiler wrappers, the parallel job launcher mpprun, and the inspection tool dumptag. The purpose of the compiler wrappers is to record information in the compiled binary about which libraries were used to compile the program. That information is later used by mpprun to automatically find the correct shared libraries when you launch the program. The goal is to reduce the need to load specific compiler, MPI and Intel MKL modules in the job script.

If you have used other supercomputing facilities, you might be used to writing job scripts in the following manner:

#SBATCH ...
#SBATCH ...
#
module load vasp/5.3.5 intel/12.1.5 openmpi/1.5.4 mkl/11.1
mpirun vasp

In contrast, on NSC's systems you only need to write:

#SBATCH ...
#SBATCH ...
#
module load vasp/5.3.5-01Apr14
mpprun vasp

This works because mpprun inspects the binary and determines which shared libraries and which mpirun program to use when launching it. We do this to simplify the launching process, i.e. you don't need to remember exactly which version of an MPI library was used to compile the program. It also helps ensure that the results from a program are reproducible over time.

Compiler wrapper

The compiler wrappers are currently enabled for the Intel and GNU compiler suites. A wrapper provides a few extra compiler flags in addition to the native compiler options; these extra flags control linking of programs that use MPI, MKL or both. The main feature of the compiler wrapper is that it embeds the paths to the libraries in the binary using RPATH, so at runtime there is no need to set LD_LIBRARY_PATH or load modules.
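
You can check what has been embedded in a program built with the wrapper by listing its dynamic section with the standard readelf tool. A minimal sketch, where the binary name is an example and the actual library paths depend on which modules were loaded at compile time:

$ readelf -d ./mympiapp | grep -i rpath
 0x000000000000000f (RPATH)             Library rpath: [/software/...]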

On Triolith, the path to every library that you include from the /software/ directory is embedded, but not libraries from your own home directory. The underlying reason is that we assume that libraries you have installed yourself are used for active software development, and you might want to keep full control over these. You can change this behavior by setting the NSC_LD_FLAG environment variable (fully described here).

Please note that on SMHI's clusters and the older SNIC clusters Matter and Kappa, complete library embedding is not active for everything in /software; it only works for the MPI and MKL libraries.

The compiler wrapper is enabled by default when you load a compiler module. The name of the compiler wrapper is the same as the name of the original compiler, so the wrapper for Intel Fortran is called ifort, the wrapper for the GNU C compiler is called gcc, and so on. In most cases, the compiler wrapper will be transparent to users.
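
In other words, you compile exactly as you would with the native compiler, and the wrapper is invoked automatically. A minimal sketch (the module version and file names are examples only); linking against MPI or MKL uses the wrapper's extra flags, as described in the section on compilers:

module load intel/12.1.4
ifort -O2 -o myprog myprog.f90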

For more details on the recommended ways to compile programs on NSC resources, please see the section on compilers.

mpprun

NSC provides an MPI job launching tool called mpprun. We strongly recommend that you use mpprun instead of mpirun or similar commands to start an MPI job. The mpprun command is provided by the corresponding module, but it is loaded by default when you log in to a cluster, so it should always be available.

As mentioned before, the main benefit of mpprun is that it can load the required modules automatically. If an MPI binary is built according to NSC recommendations, mpprun can detect the correct MPI library and the corresponding command to start an MPI job, e.g. mpirun or mpiexec.hydra, and then calls that native MPI launcher from within itself. If the OpenMP environment variable controlling the number of threads is unset when launching an MPI application with mpprun, mpprun will by default set OMP_NUM_THREADS=1. mpprun also writes some useful information about the job to the system log file. For a detailed description, see here. For a full list of mpprun options, use mpprun --help.
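
If you run a hybrid MPI/OpenMP application, you should therefore set the thread count yourself before calling mpprun. A minimal sketch, where the binary name and thread count are examples only:

export OMP_NUM_THREADS=4   # each MPI rank runs 4 OpenMP threads (example value)
mpprun ./myhybridapp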

Here is an example job script using mpprun. It runs an MPI application on two nodes on Triolith (32 cores):

#!/bin/bash
#
#SBATCH -J myjobname
#SBATCH -t 00:30:00
#SBATCH --mem=6000
#SBATCH -N 2
#SBATCH --exclusive
#
mpprun ./mympiapp
# Script ends here
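
You submit the script to the queue with sbatch in the usual way (the file name is an example):

sbatch mpijob.sh
squeue -u $USER    # check the status of your job in the queue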

Mpprun also works in interactive sessions. Below, we ask for an interactive session on two nodes and test an MPI program:

[kronberg@triolith1 mpi] interactive -N2 --exclusive -t 00:10:00 --reservation=devel
Waiting for JOBID 77079 to start
...
[kronberg@n1137 mpi]$ mpprun mpitest_c
mpprun INFO: Starting impi run on 2 nodes (32 ranks)...
Hello, world, I am 16 of 32
[...]
Hello, world, I am 31 of 32
Hello, world, I am 6 of 32
[kronberg@n1137 mpi]$ mpprun mpitest_c_openmpi 
[kronberg@n1137 mpi]$
[kronberg@n1137 mpi]$ exit
[screen is terminating]
Connection to n1137 closed.
[kronberg@triolith1 mpi]$

dumptag

The dumptag utility can be used to inspect the build information encoded inside a binary, e.g. the name and version of the MPI library, which MKL version was used, the build date, etc. The dumptag utility comes with mpprun, so you don't need to load a module to have access to it.

Sample output for an MPI program that was compiled with the Intel 12.1.4 compilers and Intel MPI 4.0.3.008, but without any MKL library:

[kronberg@triolith1 mpi]$ dumptag mpitest_c
-- NSC-tag ----------------------------------------------------------
File name:              /home/kronberg/mpi/mpitest_c
Properly tagged:        yes
Tag version:            4
Build date:             121024
Build time:             131142
Built with MPI:         impi 4_0_3_008
Built with MKL:         no (or build in an unsupported way)
Linked with:            intel 12_1_4
---------------------------------------------------------------------

If you had compiled with e.g. OpenMPI instead, that would have been visible in the "Built with MPI" field:

[kronberg@triolith1 mpi]$ dumptag mpitest_c_openmpi
-- NSC-tag ----------------------------------------------------------
File name:              /home/kronberg/mpi/mpitest_c_openmpi
Properly tagged:        yes
Tag version:            4
Build date:             121024
Build time:             131446
Built with MPI:         openmpi 1_6_2_build1                  
Built with MKL:         no (or build in an unsupported way)
Linked with:            intel 12_1_4
---------------------------------------------------------------------
[kronberg@triolith1 mpi]$