File systems on SMHI resources

Home directories (/home)

Each user has a home directory. If you are sm_abcde, your home directory will be /home/sm_abcde. If you need to refer to your home directory in a script, use $HOME instead of hardcoding the name.
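For example, a script can refer to files under the home directory like this (a minimal sketch; the file name is hypothetical):

    #!/bin/bash
    # Using $HOME keeps the script working for any user and
    # survives changes to the underlying /home path.
    cp "$HOME/settings/config.nml" .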

Your home directory is backed up to tape once per day. Use it for reasonable amounts of important data that cannot be recreated easily. If you need something restored from a backup, please contact NSC.

User quotas are enforced on the home file system. The default quota is 100 GiB, but it can be increased if you need more (contact NSC).

You can check the /home/diskinfo file (regenerated every half-hour) for current usage and quota information.
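To see only your own entry, you can filter the file on your user name (a sketch; the exact layout of diskinfo may differ):

    # Show the usage/quota line for the current user, if present.
    grep "$USER" /home/diskinfo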

Note: the home file system has low performance compared to the alternatives below. Do not use your home directory for job run/work directories or similar; use Accumulus storage for that, or the node-local scratch storage for data that only needs to live on one node.

Bi and Krypton share home directories. If multiple SMHI FoU resources are available at the same time in the future, you can expect them to share home directories as well.

Accumulus (/nobackup) on FoU resources

The bulk of the storage available on SMHI FoU resources belongs to the Accumulus storage system, with several generations of servers running the Lustre distributed file system software.

Depending on the group you belong to, you may have access to subdirectories on different Accumulus file systems. If you are user sm_abcde, use a command like ls -ld /nobackup/*/sm_abcde to show the directories available to you.
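The same check works for any user if you use the standard $USER variable (a sketch):

    # List your directories on all Accumulus file systems you can access.
    ls -ld /nobackup/*/$USER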

As indicated by the "nobackup" part of the name, there are no backups of these file systems.

There are no user quotas on the file systems, but group quotas are used on some of them. Have a look at the /nobackup/*/diskinfo files to see current information.
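For example, to print each diskinfo file with a header showing which file system it belongs to (a sketch; the diskinfo format may vary between file systems):

    for f in /nobackup/*/diskinfo; do
        echo "== $f =="
        cat "$f"
    done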

The Accumulus file systems are available on Bi (all file systems), Krypton (all file systems), and Triolith (selected file systems; contact NSC if you cannot find your Accumulus data on Triolith).

They will be available on future SMHI FoU resources too.

/nobackup on Byvind

On Byvind, the /nobackup/prod* NFS file systems are available on the login node and the compute nodes. They are not part of Accumulus.

/nobackup on Frost

On Frost, the /nobackup/prod* Lustre file systems are available on the login node and the compute nodes. They are not part of Accumulus. As of 2015-03-23, there is only one file system available, mounted as /nobackup/prod1. The second file system, /nobackup/prod2, will be available at a later date.

Node-local scratch storage (/scratch/local)

Each node has a scratch file system that can be used for temporary storage on that node while your job is running. The data will be deleted when the job finishes.

On Krypton, you have approximately 420 GiB available per node (840 GiB on the fat nodes and 200 GiB on the Nehalem nodes). You need to use the subdirectory created for you by the system and pointed to by the $SNIC_TMP environment variable. (We do it like this to be compatible with the SNIC resources, and to be future-proof if we want to allow several jobs to share a node.)
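A job script might stage data through the node-local scratch directory like this (a minimal sketch; the Accumulus path, file names, and program name are hypothetical):

    #!/bin/bash
    # $SNIC_TMP points to a per-job directory on the node-local disk.
    # Everything under it is deleted when the job finishes, so copy
    # any results you want to keep back to permanent storage first.
    cp /nobackup/somefs/$USER/input.dat $SNIC_TMP/
    cd $SNIC_TMP
    ./my_model input.dat
    cp output.dat /nobackup/somefs/$USER/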

On Byvind, you have approximately 100 GiB available per node. You use /scratch/local directly.
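Since there is no per-job subdirectory on Byvind, it can be a good idea to create one yourself to avoid collisions with other jobs (a sketch assuming jobs run under SLURM, so $SLURM_JOB_ID is set):

    # Create and use a job-specific directory under /scratch/local.
    WORKDIR=/scratch/local/$USER.$SLURM_JOB_ID
    mkdir -p "$WORKDIR"
    cd "$WORKDIR"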

On Bi and Frost, approximately 420 GiB is available per node. Access it through the path provided by the $SNIC_TMP environment variable, as described above for Krypton.

Installed software (/software)

The /software file system contains software installed by NSC. Users cannot write to that file system.

Most of the software is made available through the "module" system and can be listed using module avail. Some libraries do not have modules associated with them, so it can be useful to browse the /software/apps directory directly.
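Typical module usage looks like this (the module name below is only an illustration; use module avail to see what is actually installed):

    module avail          # list all available modules
    module load netcdf    # load a module into your environment (example name)
    module list           # show the modules you currently have loaded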

You can find a list of installed software on the NSC web pages.

The /software file system is not shared between Krypton and Byvind.

Frost will also have its own software file system.