Tetralith

Tetralith is NSC's largest HPC cluster. It replaced NSC's previous HPC cluster Triolith in 2018. Tetralith is funded by SNIC and is used for research by Swedish research groups. Access to Tetralith is granted by SNIC.

The Tetralith installation took place in two phases.

The first phase alone had a greater capacity than NSC's previous system Triolith. It consists of 592 "thin" compute nodes with 96 GiB RAM each and 60 "fat" compute nodes with 384 GiB RAM each.

After the first phase was installed, Triolith was retired and removed to make room for the second phase of Tetralith.

The second phase of 1240 "thin" compute nodes was installed and tested in November and December of 2018 and made available to users on January 2nd, 2019.

Hardware and software environment

Tetralith is running a CentOS 7 version of the NSC Cluster Software Environment. This means that most things are familiar to Triolith users.

You still use Slurm (e.g. sbatch, interactive, ...) to submit your jobs. ThinLinc is available on the login nodes. Applications are still selected using "module".
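
As a rough sketch of what this looks like in practice (the project ID, module name, and program below are placeholders, not actual Tetralith settings), a minimal batch job could look like this:

    #!/bin/bash
    #SBATCH -A snic2019-x-yy      # project/account to charge (placeholder)
    #SBATCH -J example_job        # job name
    #SBATCH -t 00:30:00           # requested wall time
    #SBATCH -n 32                 # 32 tasks, i.e. one full Tetralith node

    # Select the application through the module system
    module load exampleapp/1.0    # placeholder module name

    # Launch the program (srun is the generic Slurm launcher)
    srun ./example_program

The script would be submitted with "sbatch jobscript.sh" and its status checked with "squeue -u $USER".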

All Tetralith compute nodes have 32 CPU cores. There are 1832 "thin" nodes with 96 GiB of primary memory (RAM) and 60 "fat" nodes with 384 GiB. Each compute node has a local SSD disk where applications can store temporary files (approximately 200 GiB per thin node and 900 GiB per fat node).

(As of 2019-01-03, Tetralith also contains 16 nodes not funded by SNIC that will be used for other purposes later, bringing the total number of nodes in the system to 1908.)
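
To use the node-local SSD mentioned above, a job typically copies its input to the local disk, runs there, and copies the results back before it ends. The sketch below assumes the scratch directory is exposed through an environment variable; the name SNIC_TMP is an assumption and should be checked against NSC's documentation:

    # Inside a batch job: stage input onto the node-local SSD
    cp input.dat $SNIC_TMP/       # SNIC_TMP is assumed; verify the actual variable name
    cd $SNIC_TMP

    # Run against the fast local copy
    srun ./example_program input.dat

    # Copy results back before the job ends; the local disk is
    # cleaned when the job finishes
    cp results.dat $SLURM_SUBMIT_DIR/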

All Tetralith nodes are interconnected with a 100 Gbps Intel Omni-Path network, which is also used to connect to the existing disk storage. The Omni-Path network is similar to the FDR Infiniband network in Triolith (e.g. still a fat-tree topology).

The hardware was delivered by ClusterVision B.V.

The servers are Intel HNS2600BPB compute nodes, hosted in the 2U Intel H2204XXLRE chassis and equipped with two Intel Xeon Gold 6130 processors (16 cores each) for a total of 32 CPU cores per compute node.

GPU, Xeon Phi and large-memory nodes

There are currently no GPU nodes available in Tetralith.

NSC plans to buy a small number of nodes with high-end GPUs for Tetralith. Just as on Triolith, these nodes will mostly be for test and development, not large-scale production jobs. The Tetralith GPU nodes will not become available until Spring 2019 at the earliest.

GPU nodes are available on other SNIC systems, e.g. Kebnekaise and Tegner.

There are currently no plans to replace the Xeon Phi and large-memory nodes that were available on Triolith (Triolith had two nodes with 1 TB RAM). Please note that the "fat" nodes in Tetralith have 384 GiB RAM, which should be sufficient for most users' needs. Large-memory and Xeon Phi nodes are available on other SNIC systems, e.g. Kebnekaise.
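
For jobs that need more memory than the 96 GiB of a thin node, the fat nodes can be requested from Slurm. A minimal sketch, assuming the fat nodes are selected with a node feature named "fat" (the actual feature name on Tetralith may differ):

    #!/bin/bash
    #SBATCH -A snic2019-x-yy      # project/account to charge (placeholder)
    #SBATCH -N 1                  # one node
    #SBATCH -C fat                # request a 384 GiB node; the feature name "fat" is an assumption
    #SBATCH -t 02:00:00           # requested wall time

    module load exampleapp/1.0    # placeholder module name
    srun ./example_program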

Additional information

Tetralith migration guide

Tetralith getting started guide

Tetralith applications

Module system

Disk storage

Link to Tetralith press release

