OpenFOAM Installations on Tetralith & Sigma

OpenFOAM is free, open-source software for computational fluid dynamics (CFD).

Official homepage of the OpenFOAM trademark holder: www.openfoam.com (versions named YYMM, e.g. 1806).
Homepage of the OpenFOAM Foundation: www.openfoam.org (versions numbered N.n, e.g. 5.0).


OpenFOAM Learning Resources

Resource                  Description
Håkan Nilsson, Chalmers   CFD with OpenSource Software
Håkan Nilsson, Chalmers   Tips and Tricks to install OpenFOAM



Installed OpenFOAM Versions


OpenFOAM.org
Version NSC Module
2.3.1 OpenFOAM/2.3.1-nsc1-intel-2018a-eb
3.0.1 OpenFOAM/3.0.1-nsc1-intel-2018a-eb
4.1 OpenFOAM/4.1-nsc1-intel-2018a-eb
4.1 OpenFOAM/4.1-nsc1-intel-2018b-eb
5.0 OpenFOAM/5.0-nsc1-intel-2018a-eb
7 OpenFOAM/7-nsc1-intel-2018b-eb
7 OpenFOAM/7-nsc1-gcc-2018a-eb-opt
7 OpenFOAM/7-nsc1-gcc-2018a-eb-debug


OpenFOAM.org Installation Paths


OpenFOAM.org
Version NSC Installation Path
2.3.1 /software/sse/manual/OpenFOAM.org/2.3.1/
3.0.1 /software/sse/easybuild/prefix/software/OpenFOAM/3.0.1-intel-2018a-nsc1
4.1 /software/sse/easybuild/prefix/software/OpenFOAM/4.1-intel-2018a-nsc1
4.1 /software/sse/manual/OpenFOAM.org/4.1/intel-2018b/nsc1
5.0 /software/sse/easybuild/prefix/software/OpenFOAM/5.0-20180108-intel-2018a-nsc1
7 /software/sse/manual/OpenFOAM.org/7/intel-2018b/nsc1
7 /software/sse/manual/OpenFOAM.org/7/gcc-2018a/nsc1/Opt
7 /software/sse/manual/OpenFOAM.org/7/gcc-2018a/nsc1/Debug


OpenFOAM.com
Version NSC Module
1706 OpenFOAM/1706-nsc1-intel-2018a-eb
1806 OpenFOAM/1806-nsc2-intel-2018a-eb
1906 OpenFOAM/1906-nsc1-intel-2018b-eb
1912 OpenFOAM/1912-nsc1-intel-2018b-eb


OpenFOAM.com Installation Paths


OpenFOAM.com
Version NSC Installation Path
1706 /software/sse/manual/OpenFOAM.com/1706/nsc1/OpenFOAM-v1706
1806 /software/sse/manual/OpenFOAM.com/1806/nsc2/OpenFOAM-v1806
1906 /software/sse/manual/OpenFOAM.com/1906/intel-2018b/nsc1/OpenFOAM-v1906
1912 /software/sse/manual/OpenFOAM.com/1912/intel-2018b/nsc1/OpenFOAM-v1912


Most versions are compiled with the Intel 2018 compiler and use Intel MPI. Some versions are also available with gcc and OpenMPI: OpenFOAM/7-nsc1-gcc-2018a-eb-opt and OpenFOAM/7-nsc1-gcc-2018a-eb-debug. The -opt module is the optimized build and should be used for production runs; the -debug module is provided for debugging purposes.

For version 4.1, there are two modules: 4.1-nsc1-intel-2018a-eb and 4.1-nsc1-intel-2018b-eb, where intel-2018b refers to a slightly newer compiler version. The module 4.1-nsc1-intel-2018a-eb has a few problems, which are fixed in 4.1-nsc1-intel-2018b-eb. See also "Known Problems" below.

For version 1806, there is also a module 1806-nsc1-intel-2018a-eb. Parts of this installation appear to be broken; we recommend using 1806-nsc2-intel-2018a-eb instead.
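
To check which OpenFOAM modules are currently installed, you can query the module system. A minimal sketch (the exact output depends on the versions installed at the time):

module avail OpenFOAM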

How to run

Load the OpenFOAM module corresponding to the version you want to use, e.g.

module load OpenFOAM/5.0-nsc1-intel-2018a-eb

You have to source the OpenFOAM bashrc file to set the OpenFOAM environment variables:

source $FOAM_BASHRC

Then launch MPI binaries using "mpprun", e.g.

mpprun interFoam

Example of a Job Script

Example for the test case $FOAM_TUTORIALS/multiphase/interFoam/laminar/damBreak/:

#!/bin/bash
#
#SBATCH -n 4
#SBATCH -t 00:20:00
#SBATCH -J jobname
#SBATCH -A SNIC-xxx-yyy

module load OpenFOAM/5.0-nsc1-intel-2018a-eb
source $FOAM_BASHRC

blockMesh -case damBreak
setFields -case damBreak
decomposePar -case damBreak
mpprun interFoam -parallel -case damBreak &> result.out

How to Compile Extra Libraries for OpenFOAM

OpenFOAM allows you to add extra, user-defined libraries. In order to compile such a library, you have to use the same compiler that was used for OpenFOAM itself. The proper build environment is indicated by the name of the OpenFOAM module; for example, the module 4.1-nsc1-intel-2018a-eb corresponds to the build environment buildenv-intel/2018a-eb. In this case, first load the build environment: module load buildenv-intel/2018a-eb. Next, load the OpenFOAM version you intend to use, e.g. module load OpenFOAM/4.1-nsc1-intel-2018a-eb. Finally, source the OpenFOAM bashrc file, which sets the OpenFOAM environment variables: source $FOAM_BASHRC. These are just the very basic steps; additional variables may need to be set, depending on the library that you want to compile. A sketch of these steps is shown below.
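
A minimal sketch for the example above (the library path and name are placeholders; OpenFOAM user libraries are normally built with wmake):

module load buildenv-intel/2018a-eb
module load OpenFOAM/4.1-nsc1-intel-2018a-eb
source $FOAM_BASHRC

# change to the source directory of your user-defined library (placeholder path)
cd <path_to_your_library_source>

# compile the library with the OpenFOAM build system (wmake); with the usual
# Make/files setup, the resulting shared library is placed in $FOAM_USER_LIBBIN
wmake libso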

Known Problems

The module 4.1-nsc1-intel-2018a-eb appears to have a problem when using the OpenFOAM class chemkinReader. Please use the module 4.1-nsc1-intel-2018b-eb instead; accordingly, you have to use the build environment buildenv-intel/2018b-eb to compile your own subroutines.

New OpenFOAM functionality to reduce the number of output files

OpenFOAM creates a large number of output files when running in parallel. To mitigate this problem, both OpenFOAM distributions provide functionality intended to reduce the number of files:

OpenFOAM.org: https://openfoam.org/news/parallel-io/

OpenFOAM.com: https://www.openfoam.com/releases/openfoam-v1712/parallel.php
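
As an illustration, recent versions (roughly OpenFOAM.com v1712 and later, OpenFOAM.org 6 and later) provide a collated file handler that writes one file per field instead of one file per processor. A minimal sketch, assuming your OpenFOAM version supports this option (please check the links above for the details of your version):

# decompose and run with collated parallel output
decomposePar -fileHandler collated -case damBreak
mpprun interFoam -parallel -fileHandler collated -case damBreak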

Visualization of OpenFOAM Results with Paraview

There are several ways to read your OpenFOAM output data with Paraview:

  1. paraFoam
    The OpenFOAM user guide mentions the script paraFoam for reading OpenFOAM data with Paraview. This script is not available on Tetralith or Sigma, which is due to how OpenFOAM and Paraview are installed there. Instead, you can use a simple alternative, described next.

  2. paraFoam alternative
    paraFoam creates a *.foam file in your directory, which indicates that the data is saved in the OpenFOAM format. You can create such a file yourself with the command touch <filename>.foam, for example touch test.foam. This creates an empty file named test.foam, which is sufficient for Paraview to detect the OpenFOAM data format. In Paraview, you can then open the file test.foam.

    Important: The *.foam file must be placed in the correct directory, namely the main directory of your test case, which contains the subdirectories with the output data. At this directory level you will typically also find directories such as constant or system. In other words, the *.foam file should not be placed in the same directory as the individual output files (e.g. the data for U, alpha.water, p), but one level above; otherwise it will not work. See the sketch after this list.

  3. foamToVTK
    Another alternative is to convert the OpenFOAM data into the VTK data format, which can be read by Paraview. This can be done via the command foamToVTK. It creates a directory VTK, in whose subdirectories you will find the converted output as *.vtk files. The disadvantage of this approach is that the data is then stored in two different formats, so the amount of data is roughly doubled.
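
As a sketch of the directory layout for option 2 (the file name test.foam and the time directories are just examples):

cd <your_case_directory>   # contains constant/, system/ and the time directories with the output data
touch test.foam            # empty marker file, one level above the individual output files
# now open test.foam in Paraview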

How to run OpenFOAM within Singularity containers

It is generally possible to run OpenFOAM within a Singularity container. This can be convenient if a Singularity or docker image already exists that contains the required OpenFOAM version. For more information about Singularity, we refer to the following NSC webpage: https://www.nsc.liu.se/support/singularity/. Note that we only support Singularity, not docker; however, a Singularity image can be built from a docker image.

Where to run and where to store Singularity images at NSC

  1. On Tetralith/Sigma, you can only execute Singularity commands on compute nodes. Singularity is not available on login nodes.
  2. Singularity images have to be stored in your project directory, under /proj. Singularity images cannot be executed in your home directory.

How to obtain an OpenFOAM Singularity image

Several OpenFOAM versions can be found on docker hub (https://hub.docker.com/) by searching for "openfoam". As an example, we use the version openfoam/openfoam7-paraview56, which should be listed among the search results.

For the OpenFOAM version of your choice, you will find the following information on the right-hand side of its docker hub page:

Docker Pull Command
    docker pull openfoam/openfoam7-paraview56

Do not execute this command! We only need to know the source of this image, which is "openfoam/openfoam7-paraview56" in this case.

Next, we want to build a Singularity image on Tetralith/Sigma from the image that is available on docker hub:

singularity build <image_name.sif> docker://<source>
  • <image_name.sif> is the name of the Singularity image that you are going to create. Singularity images always have the extension ".sif"; you can choose any descriptive name.
  • <source> is the image source that we just found in the "Docker Pull Command" section, e.g. "openfoam/openfoam7-paraview56".

Example:

mkdir <your_proj_directory>/OPENFOAM_7_SINGULARITY
cd <your_proj_directory>/OPENFOAM_7_SINGULARITY
singularity build openfoam7.sif docker://openfoam/openfoam7-paraview56

To interactively access the Singularity container, you have to create an interactive shell within the container:

singularity shell <container image>

Example: singularity shell openfoam7.sif

You can see that you are inside the container because the command-line prompt changes to "Singularity <container image>:". To exit the container, simply type "exit".

How to find the bashrc file, MPI and GCC version in the Singularity container

To run OpenFOAM within the Singularity container, we have to know the following details about how OpenFOAM is installed within the container:

  1. The location of the bashrc file, which contains the OpenFOAM environment variables. Typically, OpenFOAM is installed in the directory "/opt" within the container; the exact path depends on the individual Singularity image.
  2. If you want to run OpenFOAM in parallel, you have to know which MPI version is used within the Singularity container. The MPI version can be identified using "mpirun -version", which is available after sourcing the bashrc file:

     source <bashrc_file>
     mpirun -version
    
     Example, openfoam/openfoam7-paraview56:
     source /opt/openfoam7/etc/bashrc
     mpirun -version
     Output: mpirun (Open MPI) 2.1.1

    More details of the specific Open MPI installation can be obtained via the command: "ompi_info -a"

  3. The version of the compiler, which is typically gcc for OpenFOAM. The gcc version can be identified via the command gcc -v. Knowing the gcc version helps you pick the matching Open MPI installation on Tetralith/Sigma (see below).

As an example, these are the details for the following two OpenFOAM images:

docker source                    bashrc path                                MPI version       GCC
openfoam/openfoam7-paraview56    /opt/openfoam7/etc/bashrc                  Open MPI 2.1.1    7.3
openfoamplus/of_v1812_centos73   /opt/OpenFOAM/OpenFOAM-v1812/etc/bashrc    Open MPI 1.10.4   4.8

At this stage, we have found the exact location of the bashrc file, the MPI version and the GCC version within the container. We exit the Singularity container by typing "exit".

Serial Execution of an OpenFOAM Command Within the Singularity Container

Any command within the Singularity container can be executed via the Singularity sub-command "exec":

singularity exec <container image > <command>

In order to execute OpenFOAM commands, we first have to source the bashrc file within the container. Since we start the OpenFOAM command from outside the container, the bashrc file must be sourced and the OpenFOAM command called within the same "exec" command line; otherwise, the environment variables from the bashrc file are not set properly. This is done as follows:

singularity exec <container image> bash -c "source <bashrc> && <OpenFOAM command>"

Example, openfoam/openfoam7-paraview56:
singularity exec openfoam7.sif bash -c "source /opt/openfoam7/etc/bashrc && interFoam -case damBreak"

Here, <container image> is the full path to the Singularity container image, <bashrc> is the full path to the bashrc file within the container, and <OpenFOAM command> is the OpenFOAM command that you want to execute within the container.

Parallel Execution of an OpenFOAM Command Within the Singularity Container

The standard way to execute MPI applications with Singularity containers is to run the native mpirun command from the host (Tetralith/Sigma), which will start Singularity containers and ultimately MPI ranks within the containers.

General way to execute a parallel OpenFOAM application:

mpirun singularity exec <container image> bash -c "source <bashrc> && <OpenFOAM command>"

IMPORTANT: MPI Versions on Tetralith/Sigma and in the Singularity Container have to match exactly

In our experience, it is mandatory that the MPI version of the host (Tetralith/Sigma) and the MPI version within the container are EXACTLY the same. This is why we first identified the MPI version within the container, using mpirun -version. In the case of the docker image "openfoam/openfoam7-paraview56", the MPI version in the container is Open MPI 2.1.1. That means that on Tetralith/Sigma we have to use an "mpirun" which belongs to Open MPI 2.1.1. Any other version, for example Open MPI 2.1.2, will not work; the application will complain about the version mismatch and eventually crash. Even when using identical MPI versions, the start-up of parallel tasks may still fail: the MPI inside the container must contain all MPI components that are needed on the host side. Depending on how MPI was configured/compiled within the Singularity container, important components may be missing (e.g. components from the Modular Component Architecture, MCA). Whether an individual Singularity image harmonizes with the environment on Tetralith/Sigma therefore has to be tested.
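
To check the match for a given image, you can compare the two versions directly. For example, for openfoam/openfoam7-paraview56 (paths as used elsewhere in this guide):

# MPI version of the chosen mpirun on Tetralith/Sigma
/software/sse/manual/openmpi/2.1.1/g73/nsc1/bin/mpirun -version

# MPI version inside the container
singularity exec openfoam7.sif bash -c "source /opt/openfoam7/etc/bashrc && mpirun -version"

# both should report: mpirun (Open MPI) 2.1.1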

Open MPI versions on Tetralith/Sigma

You find different Open MPI versions on Tetralith/Sigma in the following directory:

/software/sse/manual/openmpi/

There, identify the mpirun command, which resides in the corresponding "bin" sub-directory. You will find sub-directories for different gcc compilers, e.g. g48 = GCC 4.8, g73 = GCC 7.3. If possible, it is recommended to use the installation whose compiler version matches the compiler version within the Singularity image.
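
To see which Open MPI installations are available, you can simply list the directory tree (a sketch; the exact contents depend on what is currently installed):

ls /software/sse/manual/openmpi/          # available Open MPI versions
ls /software/sse/manual/openmpi/2.1.1/    # gcc variants of a version, e.g. g73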

For example, the mpirun commands for the following docker images are as follows:

docker source mpirun on Tetralith/Sigma
openfoam/openfoam7-paraview56 /software/sse/manual/openmpi/2.1.1/g73/nsc1/bin/mpirun
openfoamplus/of_v1812_centos73 /software/sse/manual/openmpi/1.10.4/g48/nsc1/bin/mpirun

You have to use such a specific version of "mpirun" on Tetralith/Sigma to start your Singularity container in parallel. Nevertheless, there may still be conflicts if the MPI within the container and the MPI on Tetralith/Sigma are compiled differently.

Example, openfoam/openfoam7-paraview56, Open MPI 2.1.1:

/software/sse/manual/openmpi/2.1.1/g73/nsc1/bin/mpirun singularity exec openfoam7.sif bash -c "source /opt/openfoam7/etc/bashrc && interFoam -parallel -case damBreak"

OpenFOAM Example: DamBreak

DamBreak Example, Part 1:
  1. Prepare the Singularity image
  2. Find the bashrc file in the Singularity container
  3. Find the MPI version in the Singularity container
  4. Copy the test example from the Singularity container to Tetralith/Sigma
interactive -n1

mkdir <your_proj_directory>/OPENFOAM_7_SINGULARITY
cd <your_proj_directory>/OPENFOAM_7_SINGULARITY
singularity build openfoam7.sif docker://openfoam/openfoam7-paraview56

singularity shell openfoam7.sif
source /opt/openfoam7/etc/bashrc
mpirun -version # Output: mpirun (Open MPI) 2.1.1

# copy damBreak tutorial example to Tetralith/Sigma
cp -R $FOAM_TUTORIALS/multiphase/interFoam/laminar/damBreak/ <your_proj_directory>/OPENFOAM_7_SINGULARITY

exit #exit from Singularity shell
exit #exit from interactive session

where <your_proj_directory> is your project directory under /proj. For simplicity, we copy the damBreak example data into the same directory as the Singularity image.

DamBreak Example, Part 2: How to run OpenFOAM in the Singularity Container, using 4 cores.
# In case of an interactive session:
interactive -n4

# In case of a slurm batch script:
#!/bin/bash
#
#SBATCH -n 4
#SBATCH -t 00:20:00
#SBATCH -J jobname
#SBATCH -A SNIC-xxx-yyy
# --- END OF SLURM BATCH SCRIPT HEADER


cd <your_proj_directory>/OPENFOAM_7_SINGULARITY/damBreak

# full path to Singularity image
FOAM_SINGULARITY_IMAGE=<your_proj_directory>/OPENFOAM_7_SINGULARITY/openfoam7.sif

# Location of the bashrc file within the Singularity container
FOAM_BASHRC=/opt/openfoam7/etc/bashrc

# mpirun command on Tetralith/Sigma, according to correct MPI version
FOAM_MPI_RUN=/software/sse/manual/openmpi/2.1.1/g73/nsc1/bin/mpirun


# execute blockMesh (serial)
singularity exec $FOAM_SINGULARITY_IMAGE bash -c "source $FOAM_BASHRC && blockMesh -case damBreak"

# execute setFields (serial)
singularity exec $FOAM_SINGULARITY_IMAGE bash -c "source $FOAM_BASHRC && setFields -case damBreak"

# execute decomposePar (serial)
singularity exec $FOAM_SINGULARITY_IMAGE bash -c "source $FOAM_BASHRC && decomposePar -case damBreak"

# execute interFoam in parallel
$FOAM_MPI_RUN singularity exec $FOAM_SINGULARITY_IMAGE bash -c "source $FOAM_BASHRC && interFoam -parallel -case damBreak &> result.out"

How to deal with problematic MPI versions using directory binding

Even if the Open MPI version on Tetralith/Sigma and the Open MPI version within the Singularity container have the same version number, there may still be problems. Depending on how MPI was configured/compiled within the Singularity container, important components may be missing (e.g. components from the Modular Component Architecture, MCA). As mentioned earlier, the Singularity image docker://openfoamplus/of_v1812_centos73 works without problems, but the image docker://openfoamplus/of_v1806_centos73 does not, although both versions use Open MPI 1.10.4. In version 1806, some MCA components are missing compared to version 1812.

In this case, we can get version 1806 to work by binding the host MPI-related files into the container at launch time. That means we create a link (binding) between directories within the container and the corresponding directories on Tetralith, so that the container picks up these directories directly from Tetralith rather than from the container itself. This is done at runtime, when we execute a Singularity command. The approach is therefore flexible across platforms and does not require any changes to the Singularity image.

The concept of directory binding is also described in the following publication: http://www.hpc-europa.eu/public_documents, Document D12.3 - Using container technologies to improve portability of applications in HPC (30/04/2019).

Binding of directories between Tetralith and the Singularity image

Binding of directories is accomplished using the Singularity option -B. Examples for the Singularity commands "exec" and "shell" are as follows:

singularity exec -B "<directory on Tetralith>:<directory in container>" <container image> <command>

singularity shell -B "<directory on Tetralith>:<directory in container>" <container image>

Singularity binding option -B

The binding option has the general form:

  • -B "source:target"
  • -B "source"
  • -B "source1, source2, source3:target3"

Where "source" is the directory on Tetralith, and "target" is the corresponding directory in the Singularity image, that we want to redirect (bind) to a directory on Tetralith. If the naming of the source directory and the target directory are identical, one only needs to mention the source directory. The option -B "source" is identical to -B "source:source". Typically, this short form is used when dealing with directories such as /lib64 or /etc, which are at the same location in the directory structure on Tetralith as well as in the Singularity image. Several bindings can be specified at once, where each binding is separated by a comma.

Example: Singularity image docker://openfoamplus/of_v1806_centos73

DamBreak Example, Part 1:

  1. Prepare the Singularity image
  2. Find the bashrc file in the Singularity container
  3. Source the bashrc file
  4. Find the MPI location and version in the Singularity container (which mpirun)
  5. Find the gcc version (gcc -v)
  6. Copy the test example from the Singularity container to Tetralith/Sigma
interactive -n1

mkdir <your_proj_directory>/OPENFOAM_1806_SINGULARITY
cd <your_proj_directory>/OPENFOAM_1806_SINGULARITY
singularity build openfoam1806.sif docker://openfoamplus/of_v1806_centos73

singularity shell openfoam1806.sif
source /opt/OpenFOAM/OpenFOAM-v1806/etc/bashrc

gcc -v # Output: gcc version 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC)
which mpirun # Output: /opt/OpenFOAM/ThirdParty-v1806/platforms/linux64Gcc/openmpi-1.10.4/bin/mpirun

# copy damBreak tutorial example to Tetralith/Sigma
cp -R $FOAM_TUTORIALS/multiphase/interFoam/laminar/damBreak/ <your_proj_directory>/OPENFOAM_1806_SINGULARITY

exit #exit from Singularity shell
exit #exit from interactive session

From the interactive Singularity shell, we find the following information about the Singularity image:

  1. bashrc location: /opt/OpenFOAM/OpenFOAM-v1806/etc/bashrc
  2. Open MPI directory: /opt/OpenFOAM/ThirdParty-v1806/platforms/linux64Gcc/openmpi-1.10.4
  3. Open MPI version 1.10.4
  4. gcc version 4.8

DamBreak Example, Part 2: How to run OpenFOAM 1806 in the Singularity container, binding the openmpi directory within the container to the openmpi directory on Tetralith. The example uses 4 cores.
# In case of an interactive session:
interactive -n4

# In case of a slurm batch script:
#!/bin/bash
#
#SBATCH -n 4
#SBATCH -t 00:20:00
#SBATCH -J jobname
#SBATCH -A SNIC-xxx-yyy
# --- END OF SLURM BATCH SCRIPT HEADER


cd <your_proj_directory>/OPENFOAM_1806_SINGULARITY/damBreak

# full path to Singularity image
FOAM_SINGULARITY_IMAGE=<your_proj_directory>/OPENFOAM_1806_SINGULARITY/openfoam1806.sif

# Location of the bashrc file within the Singularity container
FOAM_BASHRC=/opt/OpenFOAM/OpenFOAM-v1806/etc/bashrc

# mpirun command on Tetralith/Sigma, according to correct MPI version
FOAM_MPI_RUN=/software/sse/manual/openmpi/1.10.4/g48/nsc1/bin/mpirun

# Set the openmpi directory on Tetralith and in the Singularity container
OMPI_DIR_TETRALITH=/software/sse/manual/openmpi/1.10.4/g48/nsc1
OMPI_DIR_CONTAINER=/opt/OpenFOAM/ThirdParty-v1806/platforms/linux64Gcc/openmpi-1.10.4

# Besides the Open MPI directory, we also have to bind /lib64 and /etc
BIND_DIRS="$OMPI_DIR_TETRALITH:$OMPI_DIR_CONTAINER,/lib64,/etc"

# execute blockMesh (serial)
singularity exec $FOAM_SINGULARITY_IMAGE bash -c "source $FOAM_BASHRC && blockMesh -case damBreak"

# execute setFields (serial)
singularity exec $FOAM_SINGULARITY_IMAGE bash -c "source $FOAM_BASHRC && setFields -case damBreak"

# execute decomposePar (serial)
singularity exec $FOAM_SINGULARITY_IMAGE bash -c "source $FOAM_BASHRC && decomposePar -case damBreak"

# execute interFoam in parallel. Use the binding option -B.
$FOAM_MPI_RUN singularity exec -B $BIND_DIRS $FOAM_SINGULARITY_IMAGE bash -c "source $FOAM_BASHRC && interFoam -parallel -case damBreak &> result.out"
