Migrating to Tetralith from Triolith

The Tetralith cluster is the replacement for Triolith. This page outlines a few key topics that may help you migrate to Tetralith if you have been using Triolith.

Setting the initial account password

When you requested a Triolith account, you got an email with a temporary password that you used to log in via SSH and set your real password. When you request a Tetralith account, you will instead get an email with a URL where you choose your password.

Storage file systems

Both the project (/proj) and home (/home) file systems mounted on Triolith will also be mounted on Tetralith. However, you will have to update any Triolith-specific content in your files.
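If you are unsure where such Triolith-specific content lives, a simple search can help. A minimal sketch, assuming your job scripts sit under your home and project directories (the search term and the project directory path are only illustrative):

    # Recursively list files that mention Triolith, case-insensitively
    grep -ril triolith ~ /proj/YOUR_PROJECT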

If you use centre-provided code

Check the Tetralith/Sigma software page. If you do not find your application there, run "module avail" on the cluster. If you still cannot find what you're looking for, please contact NSC Support. Please note that you are allowed to install software yourself (with or without NSC assistance), see our software installation policy.
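For example, checking whether an application is installed as a module could look like the sketch below (the application name and version are placeholders):

    # List all installed modules
    module avail

    # Narrow the listing to a specific application
    module avail MYAPP

    # Load the module once you have found a suitable version
    module load MYAPP/VERSION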

If you build code yourself

You will most likely have to rebuild your code! This is both to get proper performance and because not all build environments from Triolith will be available on Tetralith.
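To see which build environments exist on Tetralith, you can query the module system directly; a minimal sketch, using the buildenv-intel/2018a-eb module named further down this page as the example:

    # List the available compiler/MPI build environments
    module avail buildenv

    # Load one of them before rebuilding your code
    module load buildenv-intel/2018a-eb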

If you run MPI applications

Tetralith nodes have 32 cores while Triolith nodes had 16. A consequence of this is that you get twice as many MPI ranks on Tetralith as on Triolith if you only use the Slurm flag "-N". For example, a flag "-N 4" to Slurm will give you 128 ranks on Tetralith but only 64 ranks on Triolith. In other words, the flag "-N 4" will be equivalent to "-n 128" on Tetralith but "-n 64" on Triolith.
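A minimal Slurm job script sketch illustrating this (the job name, wall time, binary name and the use of NSC's mpprun launcher are placeholders/assumptions):

    #!/bin/bash
    #SBATCH -J myjob
    #SBATCH -t 01:00:00
    # Request 4 nodes: 4 x 32 = 128 MPI ranks on Tetralith,
    # but 4 x 16 = 64 ranks on Triolith.
    #SBATCH -N 4
    # Alternatively, request ranks explicitly and let Slurm choose the nodes:
    ##SBATCH -n 128

    # Launch the (placeholder) MPI binary
    mpprun ./my_mpi_app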

Tetralith nodes have more primary memory (RAM) than Triolith nodes, and also more memory per core: 3 GiB versus 2 GiB on "thin" nodes, and 12 GiB versus 8 GiB on "fat" nodes. As a consequence, you might be able to use a larger fraction of the cores on a Tetralith node than you could on Triolith without running into out-of-memory issues.
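As an illustration (the per-rank memory figure is made up): if each MPI rank in your application needs about 2.5 GiB, a Triolith thin node (16 cores x 2 GiB = 32 GiB) could only accommodate 12 ranks, whereas a Tetralith thin node (32 cores x 3 GiB = 96 GiB) can run a rank on every core with room to spare.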

Hardware differences

Full hardware specifications:

The main differences are:

Tetralith has 32 cores per compute node. If you have a working job configuration for Triolith, you should be able to run exactly the same job on Tetralith using half the number of compute nodes you used on Triolith -- it will typically run as fast or faster, since per-core speed improvements range from 1.0x to 3.0x.

Tetralith has Intel Xeon processors of the "Skylake" generation. Our processor model is called Xeon Gold 6130. Skylake CPUs have improved vectorization with 512-bit AVX-512 instructions and registers. In theory, up to 32 double precision floating point operations can be performed per clock cycle (up from 8 using AVX on Triolith). To benefit from this, you need to recompile your software with high optimization (such as -O2 -xCORE-AVX512) or at least link with an external library that has AVX-512 support (such as Intel's MKL libraries, found, for example, in the module buildenv-intel/2018a-eb).
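As a sketch of what such a rebuild might look like after loading the build environment shown earlier (the source file name is a placeholder, and the -mkl convenience flag is just the simplest way to link MKL with the Intel compilers; your code may need a more specific link line):

    # Recompile with high optimization and AVX-512 code generation
    ifort -O2 -xCORE-AVX512 -o mycode mycode.f90

    # Or link against MKL, whose kernels already use AVX-512 where available
    ifort -O2 -o mycode mycode.f90 -mkl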

Tetralith has 96 GB of memory in the thin compute nodes and 384 GB in the fat nodes. This is 3x the amount on Triolith. The memory speed and memory bandwidth have also improved: Tetralith has 6 channels of 2666 MHz DDR4 memory per CPU socket. In low-level memory benchmarks like STREAM, we can see up to (.. x) improvement. For certain applications, this can lead to substantial speed-ups, even without recompiling them.

Tetralith has Intel Omni-Path ("OPA") for fast networking -- earlier clusters at NSC have had InfiniBand from Mellanox and QLogic/Intel TrueScale. As a user, you will probably not notice this, but if you are using your own MPI library, you may have to supply special flags or recompile it with PSM2 support to get the best performance. The available networking bandwidth is approximately 2x that of Triolith (100 gigabit/s vs 56 gigabit/s).
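If you do build your own MPI library, the details depend on which one you use. As an example sketch for a self-built Open MPI (the configure options, installation path and run-time flags are assumptions based on Open MPI's documented PSM2 support, not NSC-verified settings):

    # Build Open MPI with PSM2 support (installation path is a placeholder)
    ./configure --with-psm2 --prefix=$HOME/openmpi-psm2
    make && make install

    # At run time, explicitly select the PSM2 transport
    mpirun --mca pml cm --mca mtl psm2 ./my_mpi_app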

