Each compute node in our current HPC clusters has a local disk. Most of that disk space is available to users for storing temporary files during a job.
Please use the local disk when possible. By doing so, you reduce the load on the shared file systems (e.g. /home, /proj, /nobackup), which keeps them as fast as possible for you and all other users.
In addition to placing temporary files on the local disk, it can sometimes be worthwhile to copy certain files to the local disk at the start of a job and, if needed, copy results back to the shared file systems before the job ends. Input data that is read many times by the same job benefits in particular: the local disk uses the Linux page cache, so any files that fit in the available RAM can be re-read very quickly.
In the "Tetra" generation of clusters (Tetralith, Sigma, Nebula, Cirrus and Stratus), the local disk is a solid-state drive (SSD). On regular ("thin") compute nodes, 210 GiB of space is available to users; on "fat" compute nodes, 874 GiB is available.
On the older Bi cluster, the node disk is a traditional "spinning" disk (HDD) with 450 GiB of available space.
The environment variable $SNIC_TMP in the job script environment points to a writable directory (/scratch/local) on the local disk that you can use.
Please note that anything stored on the local disk is deleted when your job ends. If any temporary or output files stored there need to be preserved, copy them to project storage at the end of your job script.
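The stage-in, compute, stage-out pattern described above can be sketched as follows. This is a minimal illustration, not a complete job script: the input file and the uppercase transformation are placeholders for your real data and application, and a temporary directory stands in for $SNIC_TMP when it is not set.

```shell
#!/bin/bash
# In a real job, SLURM sets SNIC_TMP; fall back to a temp dir for illustration.
WORK="${SNIC_TMP:-$(mktemp -d)}"

# Stage input data to the fast local disk.
echo "example input" > input.dat
cp input.dat "$WORK/"

# Work against the local copy (here just a trivial transformation
# standing in for your real application).
tr a-z A-Z < "$WORK/input.dat" > "$WORK/output.dat"

# Copy results back before the job ends -- the local disk is wiped afterwards.
cp "$WORK/output.dat" .
cat output.dat
```

The key point is that all reads and writes during the compute step hit the local disk, and only the final results touch the shared file systems.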
If more than one job uses a compute node, those jobs need to share the space on the local disk. There is currently nothing that prevents one job from using all available space, so if you want guaranteed access to the local disk, allocate the entire compute node (e.g. sbatch -N1 --exclusive).
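The same options can be placed in the job script header instead of on the command line (a sketch; job name and walltime are placeholders):

```shell
#!/bin/bash
#SBATCH -J my-job          # placeholder job name
#SBATCH -t 00:30:00        # placeholder walltime
#SBATCH -N 1               # one full node...
#SBATCH --exclusive        # ...not shared with other jobs
```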
Multi-node jobs can combine the local disks of all nodes in the job into one large area. See job-local storage for more information. If you are using the job-local storage feature, $SNIC_TMP will be set to /scratch/job (but you can still access /scratch/local directly if you want to use the node-local disk).
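If a job script needs to behave differently depending on which area $SNIC_TMP points to, it can branch on the path. A small sketch (the helper function name is our own, not part of the cluster environment):

```shell
#!/bin/bash
# $SNIC_TMP points to /scratch/job when job-local storage is in use,
# and to /scratch/local otherwise.
describe_tmp() {
  case "$1" in
    /scratch/job)   echo "combined job-local storage" ;;
    /scratch/local) echo "node-local disk" ;;
    *)              echo "unknown: $1" ;;
  esac
}

describe_tmp "${SNIC_TMP:-/scratch/local}"
```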
If you need help in making your jobs use the local disk, please contact NSC Support.