The aim is to have a quick reference in one place. The project/reservation details and some recommendations are specific to the workshop, while others are of a general nature and can be found in more detail elsewhere on the NSC web pages (see links). Note that users outside NAISS are recommended to use the LEONARDO EuroHPC cluster instead.
Some experience using common Unix/Linux commands in a terminal, e.g. ls, cd, cp, mv, mkdir, rm. If you’re completely unfamiliar with these, it’s recommended to try them out before the workshop. Note that there are some differences between a typical desktop Linux and an HPC environment, e.g. you can’t use sudo to install things. By using ThinLinc (see below) the environment looks somewhat similar to a desktop Linux.
It’s also useful to know about grep, less, cat and some basic text editor (nano, gedit, vi, …), as well as a little about the bash shell, Python, Slurm etc., but this is not necessary, since examples will be shown in the hands-on sessions.
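If you want to warm up beforehand, a minimal practice session in a terminal could look like the sketch below (the directory and file names are just placeholders):
mkdir practice_dir          # create a new directory
cd practice_dir             # move into it
cp ~/.bashrc bashrc_copy    # copy a file into it
ls -l                       # list the contents
less bashrc_copy            # page through the file, quit with q
cd ..                       # go back up
rm -r practice_dir          # remove the directory again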
For access to the VASP installations during the hands-on part of the workshop and to the project data space on Tetralith, you need to apply for the following workshop project in SUPR:
NAISS 2024/22-241
After logging in to SUPR, check under “Projects” for the button “Go To Projects Page”.
Participants in the hands-on sessions who don’t already have an account on Tetralith need to set one up properly at the latest the day before the start of the workshop. If you have trouble, don't hesitate to contact the organizer or support@nsc.liu.se in good time. There will most probably not be enough time to set up an account from scratch at the start of the workshop (at least it should not be counted on).
This assumes that you now have an account, have already made your first login to Tetralith and have followed the required steps.
For the hands-on sessions we recommend using the virtual desktop ThinLinc, which is convenient for showing graphics, e.g. for creating plots and for visualization. To do this, you need to download and install a ThinLinc client on your side. Check the instructions at this link on how to set up ThinLinc; scroll down to “Installing the ThinLinc client and connecting to Tetralith”.
If you log in using e.g. ssh or PuTTY you can still run the examples, but it might be more difficult to get graphics to work (it relies on the X server on your computer). If you will not use graphics (or have problems with ThinLinc) it is of course fine to log in this way.
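For reference, a typical ssh login might look like the sketch below (assuming the usual Tetralith login address and your own username; the -X flag requests X forwarding for graphics, which also requires a working X server on your computer):
ssh -X USERNAME@tetralith.nsc.liu.se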
The Tetralith academic cluster has 1832 regular compute nodes, each with 32 cores (16 per CPU) and 96 GiB RAM. There are also 60 fat-memory nodes with 384 GiB RAM.
For the workshop examples it should be sufficient to run on 4 cores on the regular nodes, i.e. -n 4. There are 5 nodes (32 x 5 = 160 cores) reserved for the hands-on sessions.
NSC clusters use the Slurm job scheduler.
To run the hands-on examples there is a compute project for the workshop, naiss2024-22-241, with specially reserved nodes available for (1) Wed 17th 13:00-17:00, vaspworkshop-2024-04-17, and (2) Thu 18th 13:00-17:00, vaspworkshop-2024-04-18. On NSC, you can select a reservation as default by typing e.g.
usereservation vaspworkshop-2024-04-17
and check the available reservations with
listreservations
In this way, you don’t need to specify the reservation for an interactive job or in a job script with --reservation=vaspworkshop-2024-04-17.
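If you prefer not to set a default, the reservation can presumably also be given directly on the command line instead, e.g. for an interactive job during the first session (the options are otherwise the same as in the example further below):
interactive -A naiss2024-22-241 --reservation=vaspworkshop-2024-04-17 -t 1:00:00 -n 4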
Now, to start an interactive job for the hands-on session, type e.g. in a terminal on Tetralith
interactive -A naiss2024-22-241 -t 1:00:00 -n 4
and once the job has started, load a suitable VASP module and start VASP
module load VASP/6.4.3.19032024-omp-hpc1-intel-2023a-eb
mpprun vasp_std
The job in the example can be shut down with Ctrl+D; otherwise it continues until the 1 h is up (-t 1:00:00), even if nothing is actively running.
For the second hands-on session, use the reservation --reservation=vaspworkshop-2024-04-18 instead. Note that on NSC clusters MPI programs are launched with mpprun rather than mpirun; it will automatically pick up the number of cores to run on. Alternatively, one can submit a job to the scheduler instead. The corresponding job script, e.g. called run.sh, would look like this
#!/bin/bash
#SBATCH -A naiss2024-22-241
#SBATCH -t 1:00:00
#SBATCH -n 4
#SBATCH -J vaspjob
module load VASP/6.4.3.19032024-omp-hpc1-intel-2023a-eb
mpprun vasp_std
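If the reservation has not been set as default with usereservation, it can presumably also be requested directly in the job script by adding one more #SBATCH line, e.g. for the first session:
#SBATCH --reservation=vaspworkshop-2024-04-17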
Submit the job script to the queue with
sbatch run.sh
Check the status of the job in the queue, substituting your username
squeue -u USERNAME
This will also show the JOBID. The job can be cancelled, both while running and while waiting in the queue, with
scancel JOBID
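The output of a batch job ends up in a Slurm output file in the submission directory; a quick way to follow a running job is e.g. (the JOBID here is just a placeholder)
tail -f slurm-1234567.out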
An NSC tool to find information about your compute projects on Tetralith, also see projinfo -h
projinfo
Typically you need your JOBID, which is the number shown in the Slurm output file slurm-JOBID.out. For a running job, you can check the queue (as seen above) for the JOBID
squeue -u USERNAME
To check a job after it has already finished (for usage, see seff -h)
seff JOBID
An NSC-specific tool, also see jobload -h
jobload JOBID
An NSC-specific tool, also see lastjobs -h
lastjobs
An NSC-specific tool for logging in to your compute node, also see jobsh -h. To log in to a shared node, type
jobsh JOBID
and if you run on a whole node,
jobsh NODENAME
You can e.g. check the activity of your run with the command top.
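As a sketch, one way to inspect a running job from the inside (USERNAME and JOBID are placeholders):
squeue -u USERNAME    # note the JOBID of your running job
jobsh JOBID           # log in to the corresponding compute node
top                   # check CPU activity; quit with q, log out again with exit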
sacct is a common tool for checking a job and has many different options; for the manual, see man sacct. Some examples are shown below
sacct -j JOBID -X
sacct -j JOBID -X --format=JobID,Submit,Start,End,Elapsed,Nodelist,State,AllocCPUS%9,CPUTimeRaw%20
sacct --user=USERNAME -X --format=Elapsed,State,AllocCPUS%9,CPUTimeRaw%20
Some more details on monitoring jobs can be found at this link.
module avail NAME
Here, it’s enough if NAME is just a part of the full name.
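As a sketch, searching for the available VASP installations could look like this (the exact module names and versions listed depend on what is installed):
module avail vasp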
For example, all of the software below is available as a module
vasp
p4vasp
xmgrace
vesta
ase
cif2cell
gnuplot
xcrysden
You can use tab to autocomplete the name of a module. Some of the programs are described in more detail at this link.
To list your already loaded modules type
module list
and to clean out all non-default loaded modules
module purge
Note that a lot of common utility software doesn’t have its own module; it is ready for direct use
grep
less
cat
nano
gedit
vi
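During the hands-on, such tools can for example be used directly on the VASP input and output files (a sketch, assuming a run directory with an INCAR and a finished OUTCAR):
less INCAR           # inspect the input settings, quit with q
grep TOTEN OUTCAR    # print the total free energy for each step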
Guides, documentation and FAQ.
Applying for projects and login accounts.