NSC's director Patrick Norman in front of Triolith. Photo by Göran Billeson.

Triolith is NSC's current flagship system. It is named after two previous systems with similar architectures: Monolith and Neolith.

Triolith is equipped with a fast interconnect (Mellanox InfiniBand FDR) for high performance when running massively parallel applications, but its cost-efficient design also makes it suitable for smaller jobs.

Triolith has a combined peak performance of 407 Teraflop/sec, 25,600 compute cores, 55 Terabytes of memory, and 87 Terabit/s aggregated network performance.

The system is based on HP's Cluster Platform 3000 with SL230s Gen8 compute nodes, and was delivered in 2012 by GoVirtual AB.

System Details

Hardware: HP Cluster Platform 3000 with SL230s Gen8 compute nodes and DL980 Gen7 "huge" nodes
Processors: 8-core Intel Xeon E5-2660 "Sandy Bridge" at 2.2 GHz; 8-core Intel Xeon E7-2830 at 2.13 GHz ("huge" nodes)
Number of compute nodes: 1594
Compute node (thin): 2 sockets (16 cores) with 32 GB DDR3 1600 MHz memory (1524 nodes)
Compute node (fat): 2 sockets (16 cores) with 128 GB DDR3 1600 MHz memory (56 nodes)
Compute node (gpu, phi): 2 sockets (16 cores) with 64 GB DDR3 1600 MHz memory (8 nodes)
Compute node (huge): 8 sockets (64 cores) with 1 TB memory (2 nodes)
Analysis nodes (DCS): 2 sockets (16 cores) with 256 GB DDR3 1600 MHz memory (12 nodes)
Login nodes: 2 HP ProLiant DL380p Gen8 servers, accessible using SSH and ThinLinc
High-speed interconnect: Mellanox InfiniBand FDR (1 µs MPI latency, ~7 GB/s MPI bandwidth)
Node scratch storage: 800 TB in total (500 GB per node)
Global file system: NSC's Centre Storage system (shared between Triolith and Gamma)
Operating system: CentOS Linux 6
Batch queue system: Slurm

For software availability, please see the NSC software page.

Computing services provided by Triolith

  • Running traditional HPC batch jobs, submitted from the login node ("head node")
  • Interactive use of compute nodes (e.g. for testing, development, or running graphical applications)
  • High-performance visualization of data using ThinLinc (on the login nodes)
  • Analysis nodes (available to certain user groups, see DCS)
  • Grid computing (for the ATLAS and ALICE experiments)
  • Nodes fitted with NVIDIA GPUs and Intel Xeon Phi accelerators for test and development (see Using the Accelerator Equipped Nodes in Triolith)
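The batch workflow in the first bullet can be sketched as a minimal Slurm job script. The job name, project account, and application name below are placeholders, not real NSC values:

```shell
#!/bin/bash
# Minimal Slurm batch job sketch for a cluster like Triolith.
# The account and application names are hypothetical examples.
#SBATCH -J my_mpi_job          # job name (placeholder)
#SBATCH -A snic-xxx-yy-zz      # hypothetical SNIC project account
#SBATCH -N 2                   # number of nodes
#SBATCH --ntasks-per-node=16   # 16 cores per Triolith compute node
#SBATCH -t 01:00:00            # wall-time limit (HH:MM:SS)

# Launch an MPI application on all allocated cores (assumes an
# MPI-aware launcher is available; on some systems this is srun).
mpirun ./my_mpi_app
```

Such a script would be submitted from the login node with `sbatch job.sh` and monitored with `squeue -u $USER`.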

Research performed with Triolith

Triolith is open to users within Swedish academia. You apply for computing time through SNIC. A full list of science projects currently using Triolith is available in the Science at NSC section. In general, most Triolith use is for simulations within materials science, fluid dynamics, and quantum chemistry.


Installation timeline

  • Phase I (July 2012) - initial delivery of 240 nodes on June 16th; available to users from July 6th.
  • Phase II (Dec 2012) - all 1200 nodes available to users from November 10th. Triolith was placed in position 83 on the November 2012 TOP500 list.
  • Phase III (Nov 2013) - expanded to 1600 nodes. Position 79 in the November 2013 TOP500 list.


Future of Triolith

  • The part of Triolith available to SNAC projects will shrink from 1536 nodes to 960 nodes on April 1st, 2017. This is a result of the delayed funding of a replacement system. By reducing the number of nodes, we can keep the rest of the system running at least until October 31st, 2017 (an extension until December 31st, 2017 has not yet been decided, but is very likely). The size of new projects will be adjusted accordingly.
  • A replacement system is being discussed, but no decision on funding has been made yet (as of 2016-12-07). If Triolith is replaced, it will likely be kept in operation until the replacement system is ready.

Who can use Triolith?

Access to Triolith is granted by the Swedish National Infrastructure for Computing (SNIC). See the SNIC application page and the NSC application page for more details.


Cost

The total budget for the Triolith project (Phases I-III) is approx. 60 MSEK for the lifetime of the system. This works out to a total cost of around 0.07 SEK per core hour. This includes hardware, power, cooling, the computer room, hardware support, software, and the staff required to run the system (but no application experts), and assumes 95% utilization (which is typical for Triolith).
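The quoted per-core-hour figure can be sanity-checked with a rough calculation. The five-year lifetime below is an assumption; the real core count grew in phases (240 → 1200 → 1600 nodes), which is why this estimate lands slightly below the quoted 0.07 SEK:

```python
# Rough sanity check of Triolith's cost per core hour.
# Assumptions (not exact NSC accounting): the full 25,600-core system
# runs for 5 years at 95% utilization.
total_budget_sek = 60e6        # approx. 60 MSEK for Phases I-III
cores = 25_600
utilization = 0.95
lifetime_hours = 5 * 365 * 24  # assumed 5-year lifetime

core_hours = cores * utilization * lifetime_hours
cost_per_core_hour = total_budget_sek / core_hours
print(f"{cost_per_core_hour:.3f} SEK/core hour")  # roughly 0.06
```

The estimate (≈0.056 SEK) is a little below 0.07 SEK because the full 1600-node system was only available from Phase III onwards, so fewer core hours were actually delivered over the lifetime.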

For comparison, an equivalent virtual machine from a prominent US cloud vendor costs (as of 2014-10-30) from 0.07 SEK/core hour (lowest possible spot price, often more expensive than this) up to 0.43 SEK/core hour (on-demand instance), and this includes just the virtual machine running Linux, no applications or user support.


Power consumption

The average power consumption of Triolith with normal utilization (95% of nodes in use) and an average application mix is around 415 kW (260 W per compute node, 16.25 Wh per core hour).

The maximum power consumption (only ever achieved during the TOP500 Linpack run) is 519 kW.

At idle (which does not happen very often), a single compute node uses around 80W.
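The per-node and per-core figures follow directly from the total draw and the node and core counts quoted in this section, as a quick check shows:

```python
# Verify the quoted per-node and per-core power figures for Triolith.
total_power_w = 415_000     # average draw at ~95% utilization, in watts
nodes = 1594                # total number of compute nodes
cores_per_node = 16         # 2 sockets x 8 cores

watts_per_node = total_power_w / nodes               # ~260 W per node
wh_per_core_hour = watts_per_node / cores_per_node   # ~16.3 Wh per core hour
print(f"{watts_per_node:.0f} W/node, {wh_per_core_hour:.2f} Wh/core hour")
```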

