The DCS service ended 2019-07-01.
The storage associated with the DCS projects will continue to be available until it is possible to apply for the new type of multi-year SNAC Large storage project (probably during 2020H1 with allocations starting from 2020-07-01).
The analysis part of the DCS service will be replaced with a similar service available to large multi-year storage projects, but not limited to the previous DCS projects.
The details of the new analysis service will be worked out during 2020H1 in preparation for the upcoming SNIC Large Storage projects starting 2020-07-01. If you have feedback regarding the new service, please let us know: support@nsc.liu.se.
Until the new service is in place, the previous DCS analysis nodes will remain available to the DCS projects that still exist, accessible in the same way and using the same policies as described below. Please note that these nodes are only available to the user groups listed below. Also, these projects can only be used to run jobs on the DCS analysis nodes, not on the other Tetralith nodes.
If you feel that your own user group has similar needs, contact support@nsc.liu.se to discuss your options.
To access the analysis nodes and project storage, follow the normal NSC process for getting a login account on Tetralith. When applying, you should apply for membership in the DCS/analysis project for your user group:
Group | Project
---|---
MISU/Bolin Centre | SNIC 2014/10-3
SMHI FoU | SNIC 2014/10-2
KTHMECH | SNIC 2014/10-4
S-CMIP | SNIC 2016/8-3
Paleoclimate / SU | SNIC 2017/8-1
If you are already a Tetralith user but not a member of one of the projects above, log in to SUPR and request membership in the appropriate project.
The groups above have been allocated space on NSC's Centre Storage system.
Group | Project directory
---|---
MISU/Bolin Centre | /proj/bolinc
SMHI FoU | /proj/fouo
KTHMECH | /proj/kthmech
S-CMIP | /proj/s-cmip
Paleoclimate / SU | /proj/paleo
Read the Centre Storage User Guide for more information.
Use the `snicquota` command to see how much space you and your project colleagues are using and how much is available.
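A minimal usage sketch: running it on a Tetralith login node without any arguments should show usage and quota for the directories you have access to (the exact output format is not reproduced here):

$ snicquota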
Four Tetralith compute nodes with 384 GB RAM each are dedicated to analysis use by the DCS projects.
Since the number of nodes is limited, please only request as many resources as you need, i.e. don't request a full node (`-N1`) if you can make do with just a few cores (e.g. `-n2`).
If you need to run more or larger analysis jobs than fit on the DCS nodes, you also need to be a member of a "normal" (non-DCS) project and use that project for those jobs (you can still access your DCS project directory from such jobs).
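For example, a batch job submitted to the regular Tetralith nodes through a normal compute project could look like the sketch below. Note that it does not use `--reservation=dcs`; the project placeholder and the resource numbers are arbitrary examples, not recommendations:

$ sbatch -A <your normal project> -n 32 -t 12:00:00 myjob.sh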
To use the analysis nodes, you need to use our batch scheduler/resource manager. There are two main reasons for this:

- Tetralith uses Slurm for scheduling (deciding who and what gets to use the system at any given time) and resource management (keeping track of nodes and allocated resources, starting and stopping jobs etc.).
- The analysis nodes are in a "reservation" on Tetralith. This means that they are separate from the other compute nodes: you must specify that you want to use them, and you must be authorized to use them.
To use the analysis nodes, you must:

- specify your DCS project using `-A <project name>`
- specify the DCS reservation using `--reservation=dcs`
Example (interactive session using the Rossby project, four cores, 16GB RAM for 2 hours):
$ interactive -A snic2014-10-1 --reservation=dcs -t 2:00:00 -n 4 --mem=16000
Example (batch job using the MISU project, two cores and 32 GB RAM for 24 hours):
$ sbatch -A snic2014-10-3 -n 2 --mem=32000 --reservation=dcs -t 24:00:00 myjob.sh
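A minimal sketch of what a batch script such as `myjob.sh` could contain is shown below; the analysis program and file names are hypothetical placeholders:

    #!/bin/bash
    # The resources (cores, memory, walltime) were requested on the sbatch
    # command line above; they could also be set here using #SBATCH directives.

    # Replace the line below with your actual analysis command
    # (program and file names here are placeholders).
    ./my_analysis_program input.nc output.nc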
If you get the error "sbatch: error: Batch job submission failed: Invalid account or account/partition combination specified", you are not a member of the project you're trying to use. Use the `projinfo` command to show which projects you are a member of.
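For example (a minimal sketch; running it without arguments on a login node should be enough to list your project memberships):

$ projinfo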
The analysis nodes are primarily intended for interactive use. It is permitted to run batch jobs on them as long as you make sure that you don't use all available resources.
Note: one user can easily block all analysis nodes for other users by submitting a large volume of jobs at once. Doing this is not acceptable! Analysis nodes must be available for interactive use, i.e. it is not acceptable for most users to have to wait for several hours to get an interactive shell. Please use common sense!
Common sense can be assisted by checking the queue status before submitting a large volume of jobs. To get an overview of running and queued jobs, use the `squeue` command with suitable options, e.g:

$ squeue --reservation=dcs -o "%.10i %.9P %.8u %.8T %.11L %.11l %.8N %.10m"