This guide is being updated

Quick guide for working on Tetralith

The aim is to have a quick reference in one place. The project/reservation details and some recommendations are specific to the workshop, while others are general and can be found in more detail elsewhere on the NSC web (see links).

Basic skills

Some experience using common Unix/Linux commands in a terminal, e.g. ls, cd, cp, mv, mkdir, rm. If you're completely unfamiliar with these, it's recommended to try them out before the workshop. Note that there are some differences between a typical desktop Linux and an HPC environment, e.g. you can't use sudo to install things. With ThinLinc (see below), the environment looks somewhat similar to a desktop Linux.

It's also useful to know about grep, less, cat and some basic text editor (nano, gedit, vi, ...), and a little about the bash shell, Python, slurm etc., but this is not necessary, since examples will be shown in the hands-on sessions.
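As a warm-up, the commands above can be tried out safely in a scratch directory. A minimal sketch (the directory and file names here are just examples):

```shell
# Create a scratch directory to practice in and enter it
mkdir practice
cd practice

# Create a small text file and print its contents
echo "hello tetralith" > notes.txt
cat notes.txt

# Copy a file, rename it, and search its contents
cp notes.txt backup.txt
mv backup.txt old_notes.txt
grep hello old_notes.txt

# Go back up and remove the scratch directory again
cd ..
rm -r practice
```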


Setting up an account

Participants in the hands-on sessions who don't already have an account on Tetralith need to set one up in the week before the workshop, since the workshop starts on a Monday. If you have trouble, don't hesitate to contact the organizer in good time. There will most probably not be enough time to set up an account from scratch at the start of the workshop.

The information on how to set up an account will be sent out via email; remember to check your spam folder if you haven't received it.

Access with ThinLinc

This assumes that you now have an account, have already made your first login to Tetralith, and have followed the required steps.

For the hands-on sessions we recommend using the virtual desktop ThinLinc, which is convenient for showing graphics, e.g. creating plots and visualization. To do this, you need to download and install a ThinLinc client on your computer. Check the instructions at this link on how to set up ThinLinc; scroll down to Installing the ThinLinc client and connecting to Tetralith.

If you log in using e.g. ssh or PuTTY you can still run the examples, but it might be more difficult to get graphics to work (it relies on the X server on your computer).

Tetralith cluster

The Tetralith academic cluster has 1832 regular compute nodes with 32 cores (16 per CPU) and 96 GiB RAM each. There are also 60 fat memory nodes with 384 GiB RAM.

For the workshop examples it should be sufficient to run on 4 cores (-n 4) on the regular nodes. There are 5 nodes (5 x 32 = 160 cores) reserved for the hands-on sessions.

Submitting and running jobs

NSC clusters use the slurm job scheduler.

To run the hands-on examples there is a compute project for the workshop, snic2020-13-76, with specially reserved nodes available for (1) Mon 19th 13:00-17:00 (bt-vaspcourse1) and (2) Tue 20th 13:00-17:00 (bt-vaspcourse2). On NSC, you can select a reservation as default by typing e.g.

usereservation bt-vaspcourse1

There is also a corresponding tool to check which reservations are available to you.
With a default reservation set, you don't need to specify --reservation=bt-vaspcourse1 for an interactive job or in a job script.
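If you prefer not to set a default reservation, the flag can instead be given explicitly. A sketch of the two alternatives, using the project and reservation names from above:

```shell
# Interactive job with the reservation given explicitly on the command line
interactive -A snic2020-13-76 --reservation=bt-vaspcourse1 -t 1:00:00 -n 4

# ...or the corresponding directive inside a job script
#SBATCH --reservation=bt-vaspcourse1
```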

Now, to start an interactive job for the hands-on session, type in a terminal on Tetralith

interactive -A snic2020-13-76 -t 1:00:00 -n 4

and when the job has started, load a suitable VASP module and start VASP

module load VASP/
mpprun vasp_std

The job in the example can be shut down with Ctrl+D; otherwise it continues until the hour is up (-t 1:00:00), even if nothing is actively running.

  • For the Day 2 session, change to --reservation=bt-vaspcourse2.
  • Note the use of the NSC script mpprun rather than mpirun. It will automatically pick up the number of cores to run on.

Alternatively, one can submit a batch job to the scheduler instead. The corresponding job script would look like this

#!/bin/bash
#SBATCH -A snic2020-13-76
#SBATCH -t 1:00:00
#SBATCH -n 4
#SBATCH -J vaspjob

module load VASP/
mpprun vasp_std

submit the job script to the queue with

sbatch JOBSCRIPT
check the status of the job in the queue, substituting USERNAME with your username

squeue -u USERNAME

this will also show the JOBID. The job can be cancelled, either while running or while waiting in the queue, with

scancel JOBID

NSC tool to find information about your compute projects on Tetralith, also see projinfo -h

projinfo
Monitoring jobs

Typically you need your JOBID, which is the number appearing in the name of the slurm output file, slurm-JOBID.out. For a running job, you can check the queue (as seen above) for the JOBID
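The slurm output file itself can be inspected while the job runs. A minimal self-contained sketch; the file is created by hand here only to make the example runnable, since in a real job slurm writes it for you (the JOBID and contents are made up):

```shell
# Stand-in for a slurm output file (in a real job, slurm creates slurm-JOBID.out itself)
echo "vasp_std started" > slurm-1234567.out
echo "reached required accuracy" >> slurm-1234567.out

# Show the last lines of the output; for a running job, tail -f follows it live
tail -n 20 slurm-1234567.out

# Clean up the stand-in file
rm slurm-1234567.out
```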

squeue -u USERNAME

To check a job after it has finished (for usage, see seff -h)

seff JOBID

NSC specific tool, also see jobload -h

jobload JOBID

NSC specific tool, also see lastjobs -h

lastjobs
NSC specific tool for logging in to your compute node, also see jobsh -h. To log in to a shared node, type

jobsh JOBID

If you run on a whole node, use jobsh NODENAME instead. You can e.g. check the activity of your run with the command top.

sacct is a common tool for checking jobs and has many different options (for the manual, see man sacct); some examples are shown below

sacct -j JOBID -X
sacct -j JOBID -X --format=JobID,Submit,Start,End,Elapsed,Nodelist,State,AllocCPUS%9,CPUTimeRaw%20
sacct --user=USERNAME -X --format=Elapsed,State,AllocCPUS%9,CPUTimeRaw%20

Some more details on monitoring jobs can be found in this link.

Finding software

module avail NAME

Here, it's enough if NAME is just a part of the full name.

For example, all of the software below is available as a module


You can use tab to autocomplete the name of a module. Some of the programs are described in more detail at this link.

To list your already loaded modules type

module list

and to clean out all non-default loaded modules

module purge

Note that lots of common utility software doesn't have its own module; it is ready for direct use

