Running an MPI application using Singularity

The Singularity website provides some notes on running MPI applications with Open MPI. However, there is very little information available on using Intel MPI (the MPI implementation that NSC recommends for most applications). Fortunately, Intel MPI also works in Singularity.

Compared to "simple" standalone containers, making MPI work takes a little more effort. You will need to install interconnect (Infinniband, Omni-Path) libraries in the container that somewhat matches what the actual compute node is using. You then need to launch once instance of Singularity per MPI rank (e.g "mpiexec.hydra -bootstrap slurm singularity myimage.sif ~/mympiapp").

If you are interested in using an MPI application with Singularity, feel free to contact NSC Support and ask for help.

Requirements to run an application in a container using Intel MPI:

- The container must include interconnect (InfiniBand, Omni-Path) libraries that reasonably match the ones used on the compute nodes.
- The application must be launched with one Singularity instance per MPI rank, using the host's MPI launcher (e.g. "mpiexec.hydra -bootstrap slurm").
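
To illustrate the first requirement, here is a minimal sketch of a Singularity definition file that installs user-space interconnect libraries. The base image and package names are example assumptions; pick libraries and versions that match your cluster's fabric.

    Bootstrap: docker
    From: ubuntu:20.04

    %post
        apt-get update
        # User-space InfiniBand (libibverbs, librdmacm) and Omni-Path
        # (libpsm2) libraries; versions should roughly match the hosts
        apt-get install -y libibverbs1 librdmacm1 libpsm2-2

    %runscript
        exec "$@"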

Performance

In theory, the overhead of using Singularity should be almost zero.

We have tested a very communication-intensive MPI application and observed no measurable performance impact from using Singularity.

If you see worse performance than you expected when using Singularity, please let us know.

