The Message Passing Interface (MPI) is the typical way to parallelize applications on clusters, so that they can run on many compute nodes simultaneously. An overview of MPI is available on Wikipedia. The MPI libraries we have on the clusters are mostly tested with C/C++ and Fortran, but bindings for many other programming languages can usually be found on the internet. For example, for Python you can use the mpi4py package.
NSC mainly provides Intel MPI and OpenMPI. Other MPI libraries may also be available on specific systems, or as part of a commercial software package, but they are not actively maintained for general production use. Here is a list of MPI libraries installed on Triolith:
impi/[version] for each version. Intel has official documentation online, and the Triolith-specific details of our installations are covered in the Triolith software guide.
openmpi/[version]. The installations that lack modules are generally experimental; detailed descriptions can be found at OpenMPI. The OpenMPI documentation might also be useful.
One way to see which versions of Intel MPI and OpenMPI are installed on a cluster is to run "module avail" (look for "impi" and "openmpi"), e.g.:
$ module avail|grep ^impi
impi/4.0.3.008 2013/03/13 14:47:43
impi/4.1.0.024 2013/03/13 14:49:53
impi/4.1.0.027 2013/03/13 14:51:46
impi/4.1.0.030 2013/03/13 14:53:57
impi/4.1.1.036 2013/10/18 15:40:01
impi/4.1.3.048 2014/02/19 12:18:19
impi/recommendation default 2013/11/14 9:17:58
If your application experiences unexpected crashes with Intel MPI, try OpenMPI, or explore the I_MPI_COMPATIBILITY environment variable, which disables optimizations that depend on MPI-2.2 compliant behavior.
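For example, to select the backward-compatible runtime behavior (the value 4 requests Intel MPI 4.x era behavior; consult the Intel MPI reference manual for the values supported by your version):

mpprun --pass="-genv I_MPI_COMPATIBILITY 4" ./mpihello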
We recommend that first-time users start from NSC's default build environment, by loading the module:
module load build-environment/nsc-recommended
which currently loads an Intel MPI version along with the default compiler and math library. Experienced users who want to use another MPI library can simply load that specific MPI installation, which will replace the one specified above. For example, to switch from the default Intel MPI to OpenMPI 1.6.4:
$ module load openmpi/1.6.4-build1
Unloading conflicting module 'impi/4.0.3.008' before proceeding
All versions of the Intel MPI library are compatible with both Intel and GNU compilers. NSC's MPI wrapper automatically detects which compiler is being used and links the right MPI library at compile time.
In contrast, individual OpenMPI or MVAPICH2 installations are usually compatible with only a specific compiler, so there can be several installations of the same MPI version built for different compilers. Some of the available combinations are documented on NSC's software page. In some cases, the intended usage of an MPI installation can also be deduced from its directory path: the path to an MPI library typically includes a 'COMPILER_VERSION' component, where the convention is the compiler's name plus version, e.g., i1312 for Intel compiler version 13.1.2 and g472 for GNU compiler version 4.7.2.
MPI-parallelised code is usually compiled by calling special compiler wrapper commands provided by the MPI library. The table below lists the wrappers that are commonly found (the exact set depends on the MPI library and version):
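Language    Intel MPI (Intel compilers)    Intel MPI (GNU compilers)    OpenMPI
C           mpiicc                         mpicc                        mpicc
C++         mpiicpc                        mpicxx                       mpicxx
Fortran     mpiifort                       mpif90                       mpif90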
On Bi you can compile MPI-based codes either by a call to the MPI compilation wrapper (e.g., mpiicc) or by an NSC-specific way of adding the MPI flag to a non-MPI compilation command (e.g., icc -Nmpi). That is,
mpiicc -o mpihello -O2 mpihello.c
icc -Nmpi -o mpihello -O2 mpihello.c
should be identical. Note that NSC's compiler wrapper is called indirectly in a call to the MPI compilation wrapper, since the MPI wrapper (e.g., mpiicc) calls the C compiler, which itself calls the linker wrapper.
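For reference, a minimal mpihello.c that the commands above could compile might look like this (a generic MPI hello-world sketch, not an NSC-provided file):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;

    /* Initialize the MPI runtime before any other MPI call */
    MPI_Init(&argc, &argv);

    /* Each process learns its rank and the total number of ranks */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}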
On the other clusters at NSC, we do not recommend using mpicc et al. directly. Instead, please use NSC's special MPI compilation flag:
icc -Nmpi -o mpihello -O2 mpihello.c
This compiles and links your program with the currently loaded MPI module.
In general, you launch your parallel executable by calling the mpirun command or equivalent. On all NSC systems, we provide a utility called mpprun that simplifies MPI launching. We highly recommend launching MPI applications through mpprun, since it detects which MPI library your binary was built with, chooses the matching launcher, and picks up the number of ranks and the node list from the queue system.
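For example, inside a job script or interactive allocation, a typical launch might look like this (assuming the binary was built with a module-loaded MPI; mpprun normally infers the number of ranks from the job environment, so no -np flag is needed):

$ mpprun ./mpihello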
If your application is built with 8-byte default integers, Intel MPI provides an ILP64 interface: use -ilp64 as a compilation flag or as a global option at runtime. It should accompany the 8-byte integer compilation flag from the compiler (e.g., -i8 for Intel Fortran).
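For example, a sketch assuming Intel Fortran and the Intel MPI wrapper (check the Intel MPI documentation for your version):

$ mpiifort -i8 -ilp64 -O2 -o prog prog.f90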
With Intel MPI, you can verify process placement by setting the I_MPI_DEBUG environment variable. Debug level 4 or higher will provide enough information. To print out this information, launch the executable as follows:
mpprun --pass="-genv I_MPI_DEBUG 5" $(EXE)
With Intel MPI, process binding is controlled with the -binding flag. It can also be handled manually by changing the I_MPI_PIN_PROCESSOR_LIST variable. Details are found on the Intel MPI website.
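For example, to pin the ranks to the first eight cores of each node (an illustrative value; adjust the list to your node layout):

mpprun --pass="-genv I_MPI_PIN_PROCESSOR_LIST 0-7" $(EXE)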
For OpenMPI, the build and runtime configuration can be inspected with the ompi_info command, including the information on processor affinity support.
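For example:

$ ompi_info | grep -i affinity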
For OpenMPI, bind processes with --bind-to-core --bycore if you are using all cores on a node. That is enabled by launching the application as follows:
mpprun --pass="--bind-to-core --bycore" $(EXE)