The NSC Centre Storage system provides shared file storage for the Tetralith and Sigma systems.
It is intended for short- and medium-term storage during an active project using Tetralith or Sigma. It also provides longer-term storage for DCS projects.
For information on how to use the system, please see our support pages.
The storage system consists of three IBM ESS GL6 building blocks.
The system occupies two 19" racks and consists of six servers and 18 disk enclosures. In total there are 1044 spinning hard disks and six SSDs (200 GB each).
The total disk space available to store files is approximately 2800 TiB. The difference between "raw" space on the disks and the available space on the file system is mostly due to the space consumed by data protection (described below).
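As a rough sanity check, the raw capacity implied by the disk count can be compared to the usable figure. Note that the per-drive size below is an assumption for illustration only; the text does not state it:

```python
# Back-of-the-envelope comparison of raw vs. usable capacity.
# Assumption (not stated in the text): the 1044 spinning disks are 4 TB each.
DISKS = 1044
DISK_TB = 4.0                    # decimal terabytes per drive (assumed)
TB_PER_TIB = 1024**4 / 1000**4   # ~1.0995 decimal TB per binary TiB

raw_tib = DISKS * DISK_TB / TB_PER_TIB   # raw capacity in TiB
usable_tib = raw_tib * 8 / 10            # 8+2 erasure coding keeps 80%

print(f"raw ~ {raw_tib:.0f} TiB, usable ~ {usable_tib:.0f} TiB")
```

Under these assumptions the estimate lands within a few hundred TiB of the stated 2800 TiB; the remaining gap would be explained by spare capacity, metadata replication, and file system overhead.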
The storage system is connected to Tetralith and Sigma using two Intel Omni-Path 100 Gbit/s links per server. In practice, the hard disks will often be the bottleneck for I/O, and the maximum sustained aggregated transfer speed (when writing or reading from many compute nodes simultaneously) is around 60 GiB per second.
From a single thread/core on a single Tetralith compute node you can expect to read or write up to around 1 GiB per second (as long as the disk system is not overloaded by other jobs). On login and analysis nodes this figure will be higher, around 3.5 GiB/s.
The total cost (including computer room space, power, cooling, hardware, NSC staff, hardware support, ...) for the planned lifetime of the system (5 years) will be around 15 million SEK, or around 700 SEK per usable TiB per year.
The power consumption (included in the total cost above) is around 24 kW, or around 5.6 Watt per TiB of available space.
The current system was put into operation in October 2014 and was upgraded before being connected to Tetralith and Sigma in 2018.
File data is protected by an 8+2 Reed-Solomon code, i.e. every 8 data blocks require 2 additional parity blocks on disk. Metadata (file system structure, directories, contents of small files) is protected by 3-way replication.
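The space efficiency of the two protection schemes above can be sketched as follows (a minimal illustration, not NSC's actual implementation):

```python
# Fraction of raw disk space that holds user data under each scheme.

def erasure_efficiency(data_blocks: int, parity_blocks: int) -> float:
    """Usable fraction under k+m erasure coding (k data, m parity blocks)."""
    return data_blocks / (data_blocks + parity_blocks)

def replication_efficiency(copies: int) -> float:
    """Usable fraction under N-way replication (one original, N copies total)."""
    return 1 / copies

file_data = erasure_efficiency(8, 2)    # 8+2 Reed-Solomon: 80% usable
metadata  = replication_efficiency(3)   # 3-way replication: ~33% usable

print(f"file data: {file_data:.0%}, metadata: {metadata:.1%}")
```

Replication is far less space-efficient than 8+2 erasure coding, but it allows small reads and updates without reconstructing a full stripe, which is a common reason to reserve it for metadata and small files.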