During 2018, NSC's largest HPC cluster, Triolith, is being replaced with a new system named Tetralith.

All user and project data stored on Triolith (in /proj and /home) is also available on Tetralith, so users will not need to copy or move any data.

The Tetralith installation will take place in two phases.

The first phase is now available, and its capacity already exceeds that of NSC's previous system, Triolith. It consists of 592 "thin" compute nodes with 96 GiB of RAM and 60 "fat" compute nodes with 384 GiB of RAM.

The installation of the second phase will begin in early November. NSC plans to make the second phase (1240 nodes, bringing the total system size to 1892 nodes) available to users by January 1st, i.e. in time for the next round of SNAC Large projects.

During the second phase installation there will be one or more short planned maintenance windows (days, not weeks) during which Tetralith will not be available. These are necessary to install the second phase and test the combined system. As always, we will announce planned maintenance as far ahead as possible (at least one week in advance).

Hardware and software environment

Tetralith runs a CentOS 7 version of the NSC Cluster Software Environment. This means that most things will be familiar to Triolith users.

You still use Slurm (e.g. sbatch, interactive, ...) to submit your jobs. ThinLinc is available on the login nodes. Applications are still selected using "module".
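As a sketch of what job submission looks like, the batch script below uses the Slurm and module tools mentioned above. The job name, account, and module name are placeholders, not taken from this page; mpprun is NSC's parallel job launcher.

```shell
#!/bin/bash
#SBATCH -J myjob            # job name (placeholder)
#SBATCH -A snic-xxxx-y-zz   # project/account (placeholder)
#SBATCH -N 2                # number of nodes
#SBATCH -t 00:30:00         # wall-time limit

# Select the application environment through the module system
# (the exact module name is an assumption)
module load buildenv-intel/2018a

# Launch an MPI application across the allocated nodes
mpprun ./my_app
```

The script would be submitted with `sbatch jobscript.sh`; for test and development work, the `interactive` command gives a shell on a compute node instead.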

All Tetralith compute nodes have 32 CPU cores. There will eventually be 1832 "thin" nodes with 96 GiB of primary memory (RAM) and 60 "fat" nodes with 384 GiB. Each compute node has a local SSD disk where applications can store temporary files (approximately 200 GiB per thin node, 900 GiB per fat node).
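Inside a job, the node-local SSD can be used for temporary files along these lines. The SNIC_TMP variable is an assumption (a SNIC convention for the per-job scratch directory); the sketch falls back to /tmp when it is unset.

```shell
#!/bin/sh
# Sketch: stage temporary files on the node-local SSD inside a job.
# SNIC_TMP is an assumed per-job scratch variable; /tmp is the fallback.
SCRATCH="${SNIC_TMP:-/tmp}"

# Create a private working directory on the local disk
WORKDIR=$(mktemp -d "$SCRATCH/myjob.XXXXXX")

# ... run the application with temporary output in $WORKDIR ...

# Clean up before the job ends; local scratch is not persistent
rm -rf "$WORKDIR"
```

Copy any results you want to keep back to /proj or /home before the job ends, since the local disk is wiped between jobs.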

All Tetralith nodes are interconnected with a 100 Gbps Intel Omni-Path network, which is also used to connect to the existing disk storage. The Omni-Path network is similar to the FDR Infiniband network in Triolith (e.g. it still uses a fat-tree topology).

The hardware will be delivered by ClusterVision B.V.

The servers used are Intel HNS2600BPB compute nodes, hosted in the 2U Intel H2204XXLRE chassis and equipped with two Intel Xeon Gold 6130 processors, for a total of 32 CPU cores per compute node.

GPU, Xeon Phi and large-memory nodes on Triolith

There are currently no GPU nodes available in Tetralith.

NSC plans to buy a small number of nodes with high-end GPUs for Tetralith. Just as on Triolith, these nodes will mostly be for test and development, not large-scale production jobs. The Tetralith GPU nodes will not become available until late 2018 at the earliest, so there will be a gap in GPU node availability at NSC. GPU nodes are available on other SNIC systems, e.g. Kebnekaise and Tegner.

There are currently no plans to replace the Xeon Phi and large-memory nodes that were available on Triolith (but please note that the "fat" nodes in Tetralith will have 384 GiB of RAM). Large-memory and Xeon Phi nodes are available on other SNIC systems, e.g. Kebnekaise.

Additional information

Tetralith migration guide

Tetralith getting started guide

Tetralith applications

Module system

Disk storage

Link to Tetralith press release
