NSC's director Patrick Norman in front of Triolith. Photo by Göran Billeson.

Triolith was NSC's flagship system from 2012 to 2018. It was named after two previous systems with similar architectures: Monolith and Neolith.

Triolith was equipped with a fast interconnect network (Mellanox InfiniBand FDR) for high performance when running massively parallel applications, but its cost-efficient design also made it suitable for smaller jobs.

Triolith had (when retired in 2018) a combined peak performance of 260 Teraflop/sec, 16,368 compute cores, 35 Terabytes of memory, and 56 Terabit/s aggregated network performance.

The system was based on HP's Cluster Platform 3000 with SL230s Gen8 compute nodes, and was delivered in 2012 (1200 nodes) and 2013 (expanded to 1600 nodes) by GoVirtual AB. In 2017 it was reduced in size to 1017 nodes.

System Details

Hardware HP Cluster Platform 3000 with SL230s Gen8 compute nodes and DL980 Gen7 "huge" nodes
Processors 8-core Intel Xeon E5-2660 "Sandy Bridge" at 2.2 GHz
8-core Intel Xeon E7-2830 at 2.13 GHz ("huge" nodes)
Number of compute nodes 1017
Compute node (thin) 2 sockets (16 cores) with 32 GB DDR3 1600 MHz memory (896 nodes)
Compute node (fat) 2 sockets (16 cores) with 128 GB DDR3 1600 MHz memory (48 nodes)
Compute node (gpu,phi) 2 sockets (16 cores) with 64 GB DDR3 1600 MHz memory (7 nodes, 4 with GPUs, 3 with Intel Xeon Phi)
Compute node (huge) 8 sockets (64 cores) with 1 TB memory (2 nodes)
Analysis nodes (DCS) 2 sockets (16 cores) with 256 GB DDR3 1600 MHz memory (12 nodes)
WLCG grid nodes 2 sockets (16 cores) with 64 or 32 GB DDR3 1600 MHz memory (52 nodes)
Login nodes 2 HP ProLiant DL380p Gen8 servers, accessible using SSH and ThinLinc
High speed interconnect Mellanox InfiniBand FDR (~1 µs MPI latency, ~7 GB/s MPI bandwidth)
Node scratch storage ~500 TB in total (500 GB per node)
Global file system Triolith used NSC's Centre Storage system (shared between Triolith and Gamma)
Operating system CentOS Linux 6
Batch queue system Slurm
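The interconnect figures in the table can be put into perspective with the usual latency-bandwidth ("alpha-beta") model of message transfer time. This is a sketch using the FDR figures quoted above, not measurements from the actual system:

```python
LATENCY_S = 1e-6          # ~1 us MPI latency (from the table above)
BANDWIDTH_B_PER_S = 7e9   # ~7 GB/s MPI bandwidth (from the table above)

def transfer_time(msg_bytes: int) -> float:
    """Estimated point-to-point MPI transfer time: alpha + n / beta."""
    return LATENCY_S + msg_bytes / BANDWIDTH_B_PER_S

# Small messages are latency-bound, large messages bandwidth-bound:
print(f"8 B:   {transfer_time(8) * 1e6:.2f} us")      # ~1 us, dominated by latency
print(f"1 MiB: {transfer_time(2**20) * 1e6:.2f} us")  # ~151 us, dominated by bandwidth
```

The crossover point (where latency and transfer each contribute half the time) sits at roughly latency × bandwidth = 7 KB, which is why fine-grained MPI applications benefited most from the low-latency fabric.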

Computing services provided by Triolith

  • Running traditional HPC batch jobs, submitted from the login node ("head node")
  • Interactive use of compute nodes (e.g. for testing, development, or running graphical applications)
  • High-performance visualization of data using ThinLinc (on the login nodes)
  • Analysis nodes (available to certain user groups - "DCS" projects)
  • Grid computing (for the ATLAS and ALICE experiments)
  • Nodes fitted with NVIDIA GPUs and Intel Xeon Phi accelerators for testing and development
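Batch jobs on a Slurm system such as Triolith were typically submitted from the login node with a job script. The sketch below shows the general shape of such a script; the job name, project account, and application binary are placeholders, not values from the actual system:

```shell
#!/bin/bash
#SBATCH -J mysim              # job name (placeholder)
#SBATCH -N 4                  # 4 thin nodes = 64 cores
#SBATCH --ntasks-per-node=16  # one MPI rank per core
#SBATCH -t 12:00:00           # wall-time limit
#SBATCH -A snic-xxxx-y-zz     # SNIC project account (placeholder)

mpirun ./my_mpi_app           # launch the MPI application (hypothetical binary)
```

Submitted with `sbatch jobscript.sh`, Slurm would queue the job until 4 nodes became free and then run it without further user interaction.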

Research performed with Triolith

Triolith was open to users within Swedish academia; computer time was applied for through SNIC. Most Triolith use was for simulations within materials science, fluid dynamics, and quantum chemistry.


  • Phase I (July 2012) - initial delivery of 240 nodes on June 16th. Available to users from July 6th.
  • Phase II (Dec 2012) - all 1200 nodes available to users from November 10th. Triolith was placed in position 83 on the November 2012 TOP500 list.
  • Phase III (Nov 2013) - expanded to 1600 nodes. Position 79 in the November 2013 TOP500 list.
  • Triolith was reduced by 576 nodes on April 3rd, 2017, as a result of a delay in funding a replacement system.
  • Triolith was retired September 21st, 2018 (replaced by the cluster "Tetralith")


The total budget for the Triolith project (Phases I-III) was approximately 60 MSEK for the first four years.


The average power consumption of Triolith at normal utilization (95% of nodes in use) and with an average application mix was around 266 kW (260 W per compute node, i.e. 16.25 Wh per core-hour).

The maximum power consumption (only ever achieved during the TOP500 Linpack run using 1600 nodes) was 519 kW.

At idle (which did not happen very often), a single compute node used around 80W.
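The per-core energy figure quoted above follows directly from the per-node average. A quick arithmetic check, using only the numbers from the text:

```python
NODE_AVG_W = 260.0    # average draw of a busy compute node (from the text)
NODE_IDLE_W = 80.0    # idle draw of a compute node (from the text)
CORES_PER_NODE = 16

# 260 W spread over 16 cores: one busy core for one hour costs 16.25 Wh
wh_per_core_hour = NODE_AVG_W / CORES_PER_NODE
print(f"{wh_per_core_hour} Wh per core-hour")   # 16.25 Wh per core-hour

# An idle node still drew roughly 31% of the average busy-node power
idle_fraction = NODE_IDLE_W / NODE_AVG_W
print(f"idle draw: {idle_fraction:.0%} of busy draw")
```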
