If you have hard-coded the number of cores per node, please note that Krypton has 16 cores per compute node, not 8 as on Gimle/Bore/Byvind.
NOTE: It is not recommended to hard-code the number of cores in this way. Please use the relevant SLURM environment variables instead, e.g. SLURM_JOB_CPUS_ON_NODE. For more information, read the sbatch man page.
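As a sketch, a job script that picks up the core count from SLURM instead of hard-coding 8 or 16 could look like this (myapp and its --threads flag are hypothetical stand-ins for your own application; the nproc fallback only matters if you test the script outside a batch job, where SLURM variables are unset):

```shell
#!/bin/bash
#SBATCH -t 00:10:00
# SLURM_JOB_CPUS_ON_NODE holds the number of CPUs allocated to the
# job on this node; fall back to nproc when run outside SLURM.
NCORES=${SLURM_JOB_CPUS_ON_NODE:-$(nproc)}
echo "Starting with $NCORES cores"
# ./myapp --threads=$NCORES    # myapp and --threads are hypothetical
```

With this approach the same script works unchanged on nodes with 8 or 16 cores.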
We are preparing for a future where at least some nodes (for example the nodes with more memory) are shared between multiple independent jobs. To be ready for that, we cannot let the top-level /scratch/local directory be writable.
There is still a local scratch directory available on each node (currently implemented as a subdirectory of /scratch/local). Instead of /scratch/local, use the environment variable $SNIC_TMP, which points to a per-job directory that is created when the job starts and deleted when the job ends.
E.g. if your job script looks like this:

  #!/bin/bash
  #SBATCH -t 00:10:00
  #
  ./myapp --tempdir=/scratch/local
then change it to:

  #!/bin/bash
  #SBATCH -t 00:10:00
  #
  ./myapp --tempdir=$SNIC_TMP
$SNIC_TMP comes from the NSC academic systems operated for SNIC. We decided to reuse that variable name instead of setting up another name just for the sake of it.
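Since the $SNIC_TMP directory is deleted when the job ends, remember to copy any results you need back to shared storage before the script exits. A minimal sketch, where the file names and the echo stand-in for your application are illustrative, and the mktemp fallback is only for trying the script outside SLURM:

```shell
#!/bin/bash
#SBATCH -t 00:10:00
# Run in node-local scratch, then copy results back before the job
# ends, since $SNIC_TMP is removed automatically at job exit.
SUBMITDIR=${SLURM_SUBMIT_DIR:-$PWD}   # directory where sbatch was run
SCRATCH=${SNIC_TMP:-$(mktemp -d)}     # fallback outside a batch job
cd "$SCRATCH"
echo "example result" > result.dat    # stand-in for: ./myapp ...
cp result.dat "$SUBMITDIR"/
cd "$SUBMITDIR"
```

Anything left in $SNIC_TMP when the job ends is gone; only what you copy back survives.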
If you move existing jobs from Gimle and run them on the same number of nodes, the actual run time will almost certainly be shorter, since each Krypton node has twice as many cores. You can either reduce the number of nodes or request a shorter wall time to accommodate this.
Many applications that are installed on Gimle are not yet available on Krypton. Before submitting a job, check that the application is available.
Even if the application is available, the version on Krypton might be different, so you might need to use a different path in your job script. Most module names will also have changed.
You might have to add the -A flag to specify the account name. See the Krypton-specific batch job information.
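For example, the account can be set directly in the job script with an #SBATCH directive; the account name below is a placeholder (use the one for your own project), and myapp stands in for your application:

```shell
#!/bin/bash
#SBATCH -A my-project    # placeholder; replace with your account name
#SBATCH -t 00:10:00
./myapp                  # myapp is hypothetical
```

The same option can also be given on the command line, e.g. sbatch -A my-project jobscript.sh.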