Each user has a home directory. If you are
sm_abcde, your home directory will be
/home/sm_abcde. If you need to refer to your home directory in a script, use
$HOME instead of hardcoding the name.
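For instance, a script might refer to a directory under the home directory like this (a minimal sketch; the "job-logs" directory name is hypothetical):

```shell
#!/bin/sh
# Use $HOME rather than hardcoding /home/sm_abcde, so the script
# works unchanged for any user account.
LOGDIR="$HOME/job-logs"     # hypothetical example directory
mkdir -p "$LOGDIR"
echo "logs will be written to $LOGDIR"
```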
Your home directory is backed up to tape once per day. Use it for reasonable amounts of important data that cannot be recreated easily. If you need something restored from a backup, please contact .

User quotas are used on the home file system. The default is 100 GiB but can be increased if you need it (contact ).
You can check the
/home/diskinfo file (generated each half-hour) for usage and quota information.
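To pull out just your own lines from that report, something like the following could be used (a sketch; the helper function name is ours, and the report format is an assumption):

```shell
# Print the lines of the half-hourly disk report that mention your
# username. Takes an optional report path; defaults to /home/diskinfo.
my_diskinfo() {
    grep "$(id -un)" "${1:-/home/diskinfo}" 2>/dev/null \
        || echo "no entry found for $(id -un)"
}
```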
Note: The home file system performance is low compared to the alternatives below. Do not use the home directory for job run/work directories or similar. Use Accumulus storage for that, or even the node-local scratch storage (for things local to a node).
Bi and Krypton shared home directories (Krypton has since been turned off). If multiple SMHI FoU resources are available at the same time in the future, you can expect them to share home directories too.
Most of the time, yes. The system uses "snapshots" (a read-only point-in-time view of the file system that can be used to restore files from).
Snapshots are taken at certain intervals and kept for a time. The exact intervals and retention times depend mainly on the amount of storage available for snapshots. At the time of writing, snapshots are taken every 15 minutes (kept 48h), every hour (kept 6 days), every day (kept 3 months) and every month (kept 2 years).
Snapshots are available only on /home.
To recover deleted files from a snapshot (or to check the contents of a file as it was at an earlier time), go to
/home/.zfs/snapshot. There you will find one directory per available snapshot. Change into the snapshot directory (e.g.
cd zfs-auto-snap_hourly-2016-11-01-0301), and you will see the files as they were at the time the snapshot was taken. To "undelete" a file, simply copy it to a location outside the
.zfs directory (e.g.
cp zfs-auto-snap_hourly-2016-11-01-0301/sm_abcde/.bashrc /home/sm_abcde/my_old_bashrc).
Files created and deleted in the time between when two snapshots were taken cannot be restored.
Files that were deleted too long ago (before the currently oldest snapshot was taken) cannot be restored from snapshots.
Note that snapshots are not guaranteed to exist and are not a backup. If the /home storage runs out of space they will be deleted and if the server suffers a catastrophic failure that will also affect the snapshots. Actual backups are taken to tape once per day.
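A small helper along these lines could list every snapshot (frequent, hourly, daily or monthly) that still contains a given file. This is a sketch: the function name is ours, and the SNAPROOT override exists only for convenience; it defaults to the /home/.zfs/snapshot layout described above.

```shell
# List all snapshots that still contain the given file (relative
# path, e.g. sm_abcde/test.c). Prints nothing if no snapshot has it.
list_snapshots_of() {
    ls -1d "${SNAPROOT:-/home/.zfs/snapshot}"/*/"$1" 2>/dev/null
}
```

For example, `list_snapshots_of sm_abcde/test.c` prints one line per snapshot that still holds the file; any of the listed paths can then be used directly with cp to restore that version.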
Oops, I have deleted the file "/home/sm_abcde/test.c" and would like to get it back.
Show which daily snapshots have the file:
[sm_abcde@bi ~]$ cd /home/.zfs/snapshot
[sm_abcde@bi snapshot]$ ls -1 zfs-auto-snap_daily-*/sm_abcde/test.c
zfs-auto-snap_daily-2016-10-29-0125/sm_abcde/test.c
zfs-auto-snap_daily-2016-10-30-0230/sm_abcde/test.c
zfs-auto-snap_daily-2016-10-31-0209/sm_abcde/test.c
zfs-auto-snap_daily-2016-11-01-0230/sm_abcde/test.c
Let's restore the latest version of the file:
[sm_abcde@bi snapshot]$ cp zfs-auto-snap_daily-2016-11-01-0230/sm_abcde/test.c /home/sm_abcde/
If the file was deleted so long ago that no daily snapshots of it remain, "daily" can be replaced with "monthly" in the commands above.
Oops, I have very recently mangled the code in "test.c" so that it no longer works, and I would like to get the working version back.
List the available recent snapshots of that file and their respective checksums:
[sm_abcde@bi ~]$ cd /home/.zfs/snapshot
[sm_abcde@bi snapshot]$ md5sum zfs-auto-snap_frequent-*/sm_abcde/test.c
f119c865306c35e64eb00f65d7279664  zfs-auto-snap_frequent-2016-11-01-1200/sm_abcde/test.c
f119c865306c35e64eb00f65d7279664  zfs-auto-snap_frequent-2016-11-01-1215/sm_abcde/test.c
f119c865306c35e64eb00f65d7279664  zfs-auto-snap_frequent-2016-11-01-1231/sm_abcde/test.c
f119c865306c35e64eb00f65d7279664  zfs-auto-snap_frequent-2016-11-01-1246/sm_abcde/test.c
0086eab58e556408fcb6858e6a0cf52a  zfs-auto-snap_frequent-2016-11-01-1300/sm_abcde/test.c
0086eab58e556408fcb6858e6a0cf52a  zfs-auto-snap_frequent-2016-11-01-1315/sm_abcde/test.c
It looks like there are only two versions of that file, and the change was introduced after the 12:46 snapshot. Let's restore that version:
[sm_abcde@bi snapshot]$ cp zfs-auto-snap_frequent-2016-11-01-1246/sm_abcde/test.c /home/sm_abcde/
The bulk of the storage available on SMHI FoU resources belongs to the Accumulus storage system, with several generations of servers running the Lustre distributed file system software.
Depending on the group you belong to, you may have access to subdirectories on different Accumulus file systems. If you are user
sm_abcde, use a command like
ls -ld /nobackup/*/sm_abcde to show the directories available to you.
As indicated by the "nobackup" part of the name, there are no backups of these file systems.
There are no user quotas on the file systems, but group quotas are used on some of them. Have a look at the
/nobackup/*/diskinfo files to see current information.
The Accumulus file systems are available on Bi (all file systems) and Triolith (selected file systems; contact NSC if you cannot find your Accumulus data on Triolith).
They will be available on future SMHI FoU resources too.
On Frost, the
/nobackup/prod* Lustre file systems are available on the login node and the compute nodes. They are not part of Accumulus. As of 2015-03-23, there is only one file system available, mounted as
/nobackup/prod1. The second file system,
/nobackup/prod2, will be available at a later date.
Each node has a scratch file system that can be used for temporary storage on that node while your job is running. The data will be deleted when the job finishes.
On Bi and Frost, you have approximately 420 GiB available per node. You need to use the subdirectory created for you by the system and pointed to by the
$SNIC_TMP environment variable.
The /software file system contains software installed by NSC. Users cannot write to that file system.
Most of the software is made available through the "module" system and can be listed using
module avail. Some libraries may not have modules associated with them, so you might find it useful to browse the
/software/apps directory for them.
You can find a list of installed software here.
The /software file system is not shared between Bi and Frost (each have their own).