What is NSC Centre Storage?

The NSC Centre Storage system provides shared file storage for the Triolith and Gamma systems.

It is intended for short- and medium-term storage during an active project using Triolith or Gamma. It also provides longer-term storage for DCS projects.

Using NSC Centre Storage

For information on how to use the system, please see our support pages.

Storage system hardware, software and performance

The storage system consists of four IBM/Lenovo "GSS26" building blocks.

The system occupies three 19" racks and consists of eight servers and 24 disk enclosures. In total there are 1392 spinning hard disks (1044 × 4 TB and 348 × 6 TB) and 20 SSDs (200 GB each).

On this hardware we currently (as of 2017-03-10) run version 3.0b of the GSS software stack, which consists of:

  • Linux on the servers (Red Hat Enterprise Linux version 7.2)
  • IBM's "GNR" software RAID layer (part of GPFS)
  • GPFS 4.2.1

The total disk space available to store files is approximately 4300 TiB. The difference between the "raw" space on the disks and the space available on the file system is mostly due to the following (a rough calculation follows the list):

  • RAID overhead¹
  • The difference between a Tebibyte/TiB and a Terabyte/TB
  • "Spare space" - unused disk space that is used to restore data redundancy when a disk fails.

The storage system is connected to Triolith using four Mellanox FDR 56 Gbit/s InfiniBand links per server. In practice, the hard disks will often be the bottleneck for I/O, and the maximum sustained aggregated transfer speed (when writing or reading from many compute nodes simultaneously) is around 60 GiB per second.
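
A quick comparison shows why the disks rather than the network limit throughput. The sketch below uses the nominal 56 Gbit/s signalling rate per link (FDR's usable data rate is somewhat lower, so this is an upper bound):

    # Upper bound on aggregate network bandwidth vs. sustained disk throughput.
    servers = 8
    links_per_server = 4
    link_gbit = 56                 # nominal FDR signalling rate

    network_gbit = servers * links_per_server * link_gbit  # 1792 Gbit/s
    network_gib = network_gbit * 1e9 / 8 / 2**30           # ~209 GiB/s

    disk_gib = 60                  # sustained aggregate figure quoted above
    print(f"network <= {network_gib:.0f} GiB/s, disks ~{disk_gib} GiB/s")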

From a single thread/core on a single Triolith compute node you can expect to read or write up to around 1 GiB per second (as long as the disk system is not overloaded by other jobs). On login and analysis nodes this figure will be higher, around 3.5 GiB/s.
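
If you want to check what your own job actually sees, a minimal single-stream write test can be sketched as below. The path is a hypothetical example (use a directory in your own project space), and short runs can be skewed by page caching and by load from other jobs:

    # Minimal single-stream write-throughput test (sketch).
    import os
    import time

    path = "/proj/myproject/iotest.bin"  # hypothetical example path
    block = b"\0" * (8 * 1024 * 1024)    # write in 8 MiB chunks
    total = 8 * 1024**3                  # 8 GiB in total

    start = time.time()
    with open(path, "wb") as f:
        for _ in range(total // len(block)):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())             # force the data out of the page cache
    elapsed = time.time() - start

    print(f"{total / 2**30 / elapsed:.2f} GiB/s")
    os.remove(path)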

Gamma is connected to the system using Ethernet, with a maximum total bandwidth of 5 GiB/s.

The total cost (including computer room space, power, cooling, hardware, NSC staff, hardware support, ...) for the planned lifetime of the system (5 years) will be around 15 million SEK, or around 700 SEK per usable TiB per year.

The power consumption (included in the total cost above) is around 24 kW, or around 5.6 Watt per TiB of available space.
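
Both per-TiB figures follow from simple division over the stated lifetime and usable capacity:

    # Cost and power per usable TiB, from the figures above.
    total_cost_sek = 15_000_000
    lifetime_years = 5
    usable_tib = 4300
    power_kw = 24

    print(f"{total_cost_sek / lifetime_years / usable_tib:.0f} SEK per TiB per year")  # ~698
    print(f"{power_kw * 1000 / usable_tib:.1f} W per TiB")                             # ~5.6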

History

The current system was put into operation in October 2014 and was expanded by 1.4 PiB in 2017. It is expected to be kept in operation at least until October 2018.


  1. File data is protected by an 8+2 Reed-Solomon code, i.e. for every 8 data blocks, 2 parity blocks are stored on disk. Metadata (file system structure, directories, contents of small files) is protected by 3-way replication.

