
Dunder User Guide




Contents:

  • Getting an account on Dunder
  • Getting a grid certificate
  • The batch queueing system
  • Storage and file systems
  • Math libraries
  • Quick getting started example

Dunder is a cluster owned by SMHI, used mainly for application development. It is located in NSC's computer facility in Linköping and is maintained by NSC.


Dunder has 50 compute nodes, 1 login node and system/disk servers. Each node is a Dell PowerEdge 1850 server; the nodes are connected by an InfiniBand network.

Node configuration:

  • Login node
  • n1 – n50: compute nodes

Operating system: CentOS 4

CentOS 4 is a 64-bit operating system (a rebuild of Red Hat Enterprise Linux 4); it can run both 64-bit and 32-bit binaries.

Resource manager: Torque (with the Moab scheduler)

Compilers: Intel C/C++/Fortran for Linux

Math library: Intel Math Kernel Library

MPI: Scali MPI Connect

Numerical applications: Matlab



Octave and GnuPlot are recommended for plotting data.

If you wish to have some specific software installed, contact (specifying Dunder + relevant info in the subject header).


We use module to handle the environment when several versions of the same software are installed, e.g. different compilers. This application sets up the correct paths to the binaries, man pages, libraries, etc. for the currently selected module. Running module without arguments lists the available ones. Here is a list of the most useful arguments:

module list

lists currently loaded modules

module avail

lists the available modules for use

module load example

loads the environment specified in the module named example

module unload example

unloads the environment specified in the module named example

A default environment is automatically loaded when you log in. Check it using module list.

Example: Switch from the default 64-bit icc compiler to a 32-bit icc compiler:

[perl@dunder ~]$ module list
Currently loaded modules:
  1) icc
  2) ifort
  3) mkl
  4) pbs
  5) scali
  6) matlab
  7) dotmodules
  8) default

[perl@dunder ~]$ module avail

In directory /etc/cmod/modulefiles:

  +default                     -ifort/8.1.032_EM64T
  +dotmodules                  -ifort/8.1.033_EM64T
  -icc/8.1.027_EM64T           -ifort/8.1.035_IA32
  -icc/8.1.029_EM64T           -ifort/9.0.024_EM64T
  -icc/8.1.030_IA32            -ifort/9.0.024_IA32
  -icc/8.1.032_EM64T           -ifort/9.0.027_EM64T (def)
  -icc/8.1.033_EM64T           -ifort/default
  -icc/8.1.033_IA32            -matlab/default
  -icc/8.1.035_IA32            -mkl/7.2.1p3 (def)
  -icc/9.0.023_EM64T           -mkl/7.2.1p3_IA32
  -icc/9.0.023_IA32            -mkl/default
  -icc/9.0.026_EM64T (def)     -pbs/default
  -icc/default                 -root
  -ifort/8.1.026_IA32          -scali/default
  -ifort/8.1.027_EM64T         -totalview/7.1 (def)
  -ifort/8.1.029_EM64T         -totalview/default

[perl@dunder ~]$ module unload icc
[perl@dunder ~]$ module load icc/8.1.030_IA32
[perl@dunder ~]$ module list
Currently loaded modules:
  1) ifort
  2) mkl
  3) pbs
  4) scali
  5) matlab
  6) dotmodules
  7) default
  8) icc/8.1.030_IA32

Tip: The environment is specified in the files located under /etc/cmod/modulefiles.

Resource Name Environment Variable

If you are using several NSC resources and copying scripts between them, it can be useful for a script to have a way of knowing what resource it is running on. You can use the NSC_RESOURCE_NAME variable for that:

[username@dunder ~]$ echo "Running on $NSC_RESOURCE_NAME"
Running on dunder
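
For instance, a submit script shared between several machines can branch on the variable to pick a machine-specific scratch directory. A minimal sketch (the fallback path, and any resource name other than dunder, are hypothetical):

```shell
#!/bin/bash
# Pick a scratch directory based on which NSC resource we are on.
# /disk/local exists on Dunder; the fallback is a hypothetical
# default for other machines.
case "${NSC_RESOURCE_NAME:-unknown}" in
  dunder) SCRATCH=/disk/local ;;
  *)      SCRATCH=/tmp ;;
esac
echo "Scratch directory: $SCRATCH"
```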

Getting an account on Dunder

  1. Contact (Topic “New account on Dunder” or similar). We already have a list of people who are authorized for an account (supplied by Tomas Wilhelmsson at SMHI).

  2. Include the following information: Name, address, e-mail and telephone number.

  3. We will send an e-mail back containing your account information and password. Please change this password as soon as possible.

  4. Read the security section below.

Getting a grid certificate

In order to get access to NSC's file storage you must have a valid grid certificate. The correct procedure for getting a certificate is described in the NSC storage documentation (please read both "User Guide - Introduction" and "Getting a Grid Certificate").

The layout of the storage system is also described there, along with a detailed tutorial on how to use the storage system, including many examples.

Accessing Dunder

Log in to Dunder using ssh:


Use scp and sftp for transferring files between Dunder and other computers:

Example: copy the file named example to Dunder:

scp ./example

Example: connect to Dunder using sftp:

Connecting to


  • Use different passwords for different systems (Very important). When a system is compromised and passwords stolen, the thing that causes the most grief is when the stolen password can be used for more than one system. A user that has accounts on many different computers and gets his/her shared password stolen will allow the intruders to easily cross administrative domains and further compromise other systems.

  • Do not share user accounts with other people. If they are eligible for an account on Dunder they will have to apply for an account of their own, if not, they should not be able to use Dunder.

  • Do not use weak passwords. Not using a trivial password (your name, account, dog's name, etc.) is VERY important. However, using really hard passwords is not nearly as important as not sharing them between systems.

  • Avoid chains of ssh sessions (see figure above). Logging in to one system and then continuing from that system to a third (as illustrated above) is bad practice and should be avoided.

  • Check “Last login: DATE from MACHINE”. When logging into a system it only takes half a second to read the “last login” information. If you can't verify the information, contact as soon as possible.

SSH public-key authentication

There is an alternative to traditional passwords. This method of authentication is known as key-pair or public-key authentication. While a password is simple to understand (the secret is in your head until you give it to the ssh server, which grants or denies access), a key-pair is somewhat more complicated.

Our recommendation is to use whichever method you feel comfortable with. If you invest some time to learn about key-pairs you will receive several benefits, including better security and easier work flow.

A key-pair is, as the name suggests, a pair of cryptographic keys. One is the private key (this one should be kept secure and protected with a pass phrase); the other is the public key (this one can be passed around freely, as the name suggests).

After you have created the pair, you have to copy the public key to all systems that you want to ssh to. The private key is kept as secure as possible and protected with a good pass phrase. On your laptop/workstation you use a key-agent to hold the private key while you work.

  • Can be much more secure than regular password authentication

  • Can be less secure if used incorrectly (understand before use)

  • Allows multiple logins without reentering password/pass phrase

  • Allows safer use of ssh chains

How to use SSH public-key authentication instead of regular password authentication is described in chapter 4 of SSH tips, tricks & protocol tutorial by Damien Miller.

A short description of the necessary steps involved in using SSH public-key authentication (read Damien Miller's guide above for more details):

  • Generate a key-pair on your desktop/laptop, choose a good pass phrase and make sure the private key is secure (once).

  • Put your public key into ~/.ssh/authorized_keys on desired systems.

  • Load your private key into your key-agent (ssh-add with OpenSSH).

  • Run ssh all you want without reentering your pass phrase and without the risk of anyone stealing your password.
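
The generation step above can be sketched as follows. Purely for illustration, this creates a throw-away key in a temporary directory with an empty pass phrase; for real use, pick a strong pass phrase and keep the key under ~/.ssh:

```shell
# Generate a throw-away RSA key-pair non-interactively (illustration
# only -- real keys should be protected by a strong pass phrase).
tmp=$(mktemp -d)
ssh-keygen -q -t rsa -N "" -f "$tmp/id_rsa"
# The .pub file is the public half; its single line is what you
# append to ~/.ssh/authorized_keys on the remote systems.
wc -l < "$tmp/id_rsa.pub"
rm -rf "$tmp"
```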

The Batch Queueing System

In order to utilize the cluster as efficiently as possible a batch queueing system is used (Torque). Instead of running your applications interactively, you submit "jobs" to the batch queueing system, which automatically starts your job when resources are available. A job can have several attributes; among the most important are:

  • The number of compute nodes that the applications should use.

  • The maximum allowed time for the application to run.

  • How to start the application.

This makes it easier to ensure that the cluster is never idle, and we can easily specify that no more than one job at a time may use a node, preventing inefficient task switching and swapping. By also using a job scheduler (Moab) we can easily implement policies that allow us to prioritize between different types of jobs.

Tip: A graphical overview of the scheduling status is available at:

Submitting Jobs

For a quick introduction to how to compile and run jobs, see Quick getting started example.

There are two ways to submit jobs to the batch queue system: as an interactive job or as a batch job. Interactive jobs are most useful for debugging, as you get interactive access to the input and output of the job while it is running. The normal way to run applications, however, is to submit them as batch jobs, since otherwise you would have to start your application manually.

qsub is the command for submitting jobs, whether batch or interactive.

Important: qsub takes a submit script as a parameter, not standard input or a binary application. The PBS-script must end its lines with a newline (\n), as is the default on Unix platforms. On Windows, lines end with a carriage return (\r) followed by a newline (\n) - this will not work with qsub.
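
If a script has been edited on Windows, the carriage returns can be stripped with tr before submitting. A small self-contained illustration (job.pbs is a scratch file name used only for this example):

```shell
# Simulate a PBS-script saved with Windows (CRLF) line endings:
printf '#PBS -N myjob\r\n' > job.pbs
# Strip the carriage returns so qsub will accept the file:
tr -d '\r' < job.pbs > job.unix && mv job.unix job.pbs
# 14 bytes remain: "#PBS -N myjob" plus a single newline.
wc -c < job.pbs
rm -f job.pbs
```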

Example of useful arguments to qsub (more important arguments are marked bold), read the man page for additional arguments and details:


-I

Run the job interactively.

-j oe

Join standard out (o) and standard error (e) into the same file. By default, standard out and standard error are saved in two different files.

-l nodes=n:ppn=2

n specifies the number of nodes to run the job on; ppn=2 specifies that both processors on each node will be used (if not specified, only one will be used). n is an integer in the interval [1,50].

-l walltime=hh:mm:ss

Your estimation of the maximum execution time for the job.

-m abe

Send mail to the local user when the job begins (b), exits (e) or aborts (a). You may combine these as you wish. If not specified, mail is only sent when the job is aborted.

-M emailaddress(es)

List of e-mail addresses to send mail to. If not specified, mail is sent to the local user.

-N myjobname

Name of the job, consisting of up to 15 printable, non white space characters with the first character alphabetic. If not specified the default name will be jobid.dunder.

Batch Jobs

  1. Create a PBS-script. This is a shell-script with additional declarations of the arguments to qsub; each argument is declared as #PBS pbsargument, e.g. -N myjobname is specified as #PBS -N myjobname.

    Example of a PBS-script named pbsexample.simple using 24 nodes and both processors. The wall clock time is 10 minutes and an e-mail will be sent when the job exits normally or exits with an error:

    # Request 24 nodes using both processors for the job and request 10 minutes of wall-clock time.
    #PBS -l nodes=24:ppn=2,walltime=00:10:00
    # Request regular output (stdout) and error output (stderr) to the same 
    # file.
    #PBS -j oe
    # Send mail when the job exits normally (e) or aborts with an error (a).
    #PBS -m ae
    # Go to the directory from which you submitted the job.
    cd $PBS_O_WORKDIR
    # Start the job with mpprun on the nodes that the batch queue system has
    # allocated for your job.
    mpprun helloworld
  2. Submit the job, by specifying the PBS-script as the only argument to qsub:

    [perl@dunder bin]$ qsub pbsexample.simple 
  3. Check the status of the job with qstat. By specifying qstat -n you get information about the allocated nodes. For even more details, add -f as an argument. You should also try using showq.

    [perl@dunder bin]$ qstat
    Job id           Name             User             Time Use S Queue
    ---------------- ---------------- ---------------- -------- - -----
    5452.dune        pbsexample.simp  perl                    0 Q workq           
    [perl@dunder bin]$ qstat -f
    Job Id: 5452.dune
        Job_Name = pbsexample.simp
        Job_Owner = perl@dunder
        job_state = R
        queue = workq
        server = dune
        Checkpoint = u
        ctime = Fri Oct  7 11:40:39 2005
        Error_Path = dunder:/home/perl/bin/pbsexample.simp.e5452
        exec_host = n50/1+n50/0+n49/1+n49/0+n48/1+n48/0+n47/1+n47/0+n46/1+n46/0+n45
        Hold_Types = n
        Join_Path = oe
        Keep_Files = n
        Mail_Points = ae
        mtime = Fri Oct  7 11:40:41 2005
        Output_Path = dunder:/home/perl/bin/pbsexample.simp.o5452
        Priority = 0
        qtime = Fri Oct  7 11:40:39 2005
        Rerunable = True
        Resource_List.nodect = 24
        Resource_List.nodes = 24:ppn=2
        Resource_List.walltime = 00:10:00
        Variable_List = PBS_O_HOME=/home/perl,PBS_O_LANG=en_US.UTF-8,
        etime = Fri Oct  7 11:40:39 2005
  4. Since we specified -m ae, an e-mail will be sent to the local user at dunder when/if the job exits or aborts. The standard out and standard error are saved in the directory from which the job was submitted, as pbsexample.simp.o5452

[perl@dunder bin]$ more pbsexample.simp.o5452 
Using MPI implementation 'scampi' on 24 node(s), 48 processes.
Taking nodenames from "/var/spool/PBS/aux/5452.dune", number of nodes specified by -np
/opt/scali/bin/mpimon -stdin all  -inherit_limits helloworld  --  n50 1 n50 
1 n49 1 n49 1 n48 1 n48 1 n47 1 n47 1 n46 1 n46 1 n45 1 n45 1 n44 1 n44 1 n43
1 n43 1 n42 1 n42 1 n41 1 n41 1 n40 1 n40 1 n39 1 n39 1 n38 1 n38 1 n37 1 n37
1 n36 1 n36 1 n35 1 n35 1 n34 1 n34 1 n33 1 n33 1 n32 1 n32 1 n31 1 n31 1 n30
1 n30 1 n29 1 n29 1 n28 1 n28 1 n27 1 n27 1
2: running on proc: n49.
4: running on proc: n48.
6: running on proc: n47.
5: running on proc: n48.
7: running on proc: n47.

Interactive Jobs

Submitting an interactive job is done at the command line without a PBS-script. All the arguments you would specify as #PBS pbsarg in a PBS-script are instead supplied directly to qsub on the command line; you also need to add the argument -I, specifying that it is an interactive job.

Example: Interactive version of the PBS-script in Batch Jobs

[perl@dunder bin]$ qsub -I -lnodes=24:ppn=2,walltime=00:10:00
qsub: waiting for job 5453.dune to start
qsub: job 5453.dune ready

[perl@n50 ~]$ qstat
Job id           Name             User             Time Use S Queue
---------------- ---------------- ---------------- -------- - -----
5453.dune        STDIN            perl                    0 R workq           
[perl@n50 ~]$ qstat -n

                                                            Req'd  Req'd   Elap
Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time
--------------- -------- -------- ---------- ------ --- --- ------ ----- - -----
5453.dune       perl     workq    STDIN         --   24  --    --  00:10 R   -- 

[perl@n50 ~]$ env | grep PBS

[perl@n50 ~]$ cd $PBS_O_WORKDIR

[perl@n50 bin]$ pwd   

[perl@n50 bin]$ ls
pbsexample.simple      pbsexample.simp.o5450  
helloworld     pbsexample.simp.o5451  hello_world.c pbsexample.simp.o5452

[perl@n50 bin]$ mpprun helloworld 
Using MPI implementation 'scampi' on 24 node(s), 48 processes.
Taking nodenames from "/var/spool/PBS/aux/5453.dune", number of nodes specified by -np
/opt/scali/bin/mpimon -stdin all  -inherit_limits helloworld  --  
n50 1 n50 1 n49 1 n49 1 n48 1 n48 1 n47 1 n47 1 n46 1 n46 1 n45 1
n45 1 n44 1 n44 1                   n43 1 n43 1 n42 1 n42 1 n41 1
n41 1 n40 1 n40 1 n39 1 n39 1 n38 1 n38 1 n37 1 n37 1 n36 1 n36 1
n35 1 n35 1 n34 1 n34 1 n33 1 n33 1 n32 1 n32 1 n31
1 n31 1 n30 1 n30 1 n29 1 n29 1 n28 1 n28 1 n27 1 n27 1
2: running on proc: n49.
4: running on proc: n48.
6: running on proc: n47.
8: running on proc: n46.
10: running on proc: n45.
<... snip....>

[perl@n50 bin]$ =>> PBS: job killed: walltime 601 exceeded limit 600

qsub: job 5453.dune completed

The output is displayed directly at the shell prompt.

Alternatively, you may run mpprun directly if you want to run an MPI application. mpprun will make an implicit call to qsub with a default wall time of 1 hour and automatically execute the application.

Example: Interactive job by calling mpprun directly.

[perl@dunder example]$ mpprun -np 24 helloworld
Using PBS to spawn an interactive job
spawn /usr/pbs/bin/qsub -I -lwalltime=1:00:00,nodes=24
qsub: waiting for job 5977.dunder to start
qsub: job 5977.dunder ready

[perl@n11 perl]$ exec /usr/local/bin/mpprun -Norder=defult helloworld

Automatic cleanup and file stage in/out

  1. When a job is finished, the batch queueing system kills all running processes belonging to the user that owns the exiting job on the compute node.

  2. Before a new job is started, all contents of /disk/local are removed on the compute node.

If you want to be guaranteed not to lose what you have written to /disk/local, you should use the PBS functionality called stage out to copy your data to another file system (/home or /nobackup/) when your job exits. Since a job can exit in several ways (normally, with a segmentation fault, or killed by PBS), you cannot otherwise be guaranteed that your data will always be copied if you do this yourself (e.g. in your application or in the PBS-script).

  • File stage out can only be done from your master node (i.e. the highest ranking node in the job).

  • PBS file stage in is not available on Dunder, but this can just as well be done manually in the PBS-script or in your application.

Please make sure that it works before using file stage out in your production jobs. See 'man qsub' for more details.

Example: copy /disk/local/data0 and /disk/local/data1 (output written by “mpiapplication”) from the master compute node in my job to dunder:/home/perl when the job exits.

#PBS -l nodes=16:ppn=2,walltime=01:00:00 
#PBS -W stageout=/disk/local/data0@dunder:/home/perl/,/disk/local/data1@dunder:/home/perl/

#Manual stage in:
cp /home/perl/tmp/test.gz /disk/local/

mpprun mpiapplication

Frequently used commands

Read the man-pages for more information about each command listed below.

Frequently used PBS user commands:

qsub

Submits a job to the PBS queuing system.

qstat

Show status of PBS batch jobs.

qdel

Delete a PBS job from the queue.

Less frequently used PBS user commands:

qalter

Modifies the attributes of a job.

qhold

Requests that the PBS server place a hold on a job.

qrerun

Reruns a PBS batch job.

qrls

Release a hold on a PBS batch job.

qsig

Requests that a signal be sent to the session leader of a batch job.

Moab commands:

Moab is the scheduler. It takes care of how the jobs are prioritized, and where these jobs are run. Moab supports advance reservations, QOS levels, backfill, and allocation management.


showq

List all jobs visible to the scheduler.

showbf

Show resources available for immediate access.

showres

Show current reservations.

showstart

Makes a qualified guess about when a job will start.

checkjob

Display numerous scheduling details for a job.

Scheduling policy

  • The maximum allowed walltime is 1 day. This is specified as -lwalltime=24:00:00 in PBS.


On Dunder there is currently no bonus system implemented (one is used on Monolith and Tornado), but the command projinfo is still available, giving a nice overview of your group's usage for the current month.

Example output when using projinfo for a user belonging to the group rossby:

dunder 1% projinfo

Project        Used[h]        Allocated[h]
rossby        38947.1              148608
  sm_wyser     2628.2
  sm_aulle     2210.4
  sm_meier    31504.1
  sm_psamu     2604.5

Interactive development slots

Between 07:00 and 18:00, Monday to Friday, we have reserved 8 nodes for shorter jobs with a walltime of 1 hour or less. This makes it much easier to develop and test applications without having to wait a long time for other large jobs to finish.

Storage & File systems

On Dunder there are different types of storage classes depending on level of reliability, size, speed, location, etc.









/home

backed up

Your home directory. Writing large amounts of data here will affect all logged-in users. Store source code and vital data that must be backed up.

/nobackup/*

2 – 4 TB each, not backed up

If your job writes a large amount of data to disk you should use this area. Move vital data to /home or make a copy on another file system, since this area is not backed up.

Additional storage will be added over time. Check the available file systems with the command df. Each file system is 2 – 4 TB large and consists of 1 or 2 RAID5 volumes of 6 – 7 hard drives each.

/disk/local

local hard drive on each compute node, not backed up

Should be used for temporary storage of your job output data. This is the fastest area, since it is not shared with any other nodes. All contents of /disk/local are erased when the next job starts on the node. Use the stage-out facility in PBS to automatically copy data to another file system when the job exits.

File archive

hierarchical file archive, on the order of hundreds of TB total, not backed up

NSC long-term storage. Large area. Long access time. Accessible using migftp or globus-url-copy.

  • Every user has at least one directory under /nobackup. To check which, use the command “ls -d /nobackup/*/yourusername”. Example:

[perl@dunder ~]$ ls -d /nobackup/*/perl
/nobackup/rossby1/perl /nobackup/rossby2/perl

  • On /home and /nobackup/* there is a file called diskinfo containing each user's total disk usage. This file is updated every night and is much quicker to use than du. Example:

[perl@dunder ~]$ sort -nk1 /home/diskinfo 

Diskinfo for file system/directory /dunder_home
Fri Oct  7 06:05:07 CEST 2005
[Measures in Megabytes of file space]
1       msa
1       sm_lfunk
4       perl
152     sm_wyser
478     sm_ppemb
725     sm_lmeul
1057    sm_uandr
1306    sm_jlang
1337    sm_ngust
3417    sm_towil
11149   cap
16680   faxen
79156   hirlam


For a quick introduction to how to compile and run jobs, see Quick getting started example.

GCC and both 64-bit and 32-bit Intel compilers for Linux, versions 8 and 9, are installed.

We recommend using the 64-bit Intel compilers, version 9 (used as the default).

Note that since you may run both 64-bit and 32-bit binaries on CentOS 4, the corresponding libraries are installed in different locations (/lib64 and /usr/lib64 for the 64-bit versions, /lib and /usr/lib for the 32-bit versions).
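
A quick way to check whether an executable (or library) is a 64-bit or 32-bit binary is the file command. A sketch, illustrated with gcc (on a 64-bit host the default build is 64-bit):

```shell
# Build a trivial program and inspect the resulting binary's class.
cat > hello.c <<'EOF'
#include <stdio.h>
int main(void) { puts("hello"); return 0; }
EOF
gcc hello.c -o hello
file hello | grep -o 'ELF [0-9]*-bit'   # e.g. "ELF 64-bit" on a 64-bit host
rm -f hello hello.c
```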

Compiling MPI applications

NSC Cluster Environment (NCE) is NSC's modification of the GCC and Intel compilers which makes it easier to compile MPI applications. Rather than having to specify a quite long list of libraries to link, which differs between MPI implementations, you only have to compile with this additional argument:


-Nscampi

Compile with ScaMPI support.

The options should be used both at compile-time (to specify the path to the include files) and at link-time (to specify the correct libraries).

Example: compiling the MPI-program, mpiprog.c with icc for ScaMPI (the same goes for gcc, g++, g77 and ifort):

icc mpiprog.c -o mpiprog -Nscampi

Intel compiler, useful compiler options

Below are some useful compiler options, please do "man ifort" or "man icc" for more. For additional information, see:

a) Optimisation

There are three different optimization levels in Intel's compilers:


-O0

Disable optimizations.

-O2

Enable optimizations (DEFAULT).

-O3

Enable -O2 plus more aggressive optimizations that may not improve performance for all programs.

A recommended flag set for general code is -O2 -mp, and for best performance "-O3 -xP", which enables software vectorisation. As always, however, aggressive optimisation runs a higher risk of encountering compiler limitations.

b) Debugging


-g

Generate symbolic debug information.

-traceback

Generate extra information in the object file to allow the display of source file traceback information at runtime when a severe error occurs.

-fpe<n>

Specifies floating-point exception handling at run-time.

-mp

Maintains floating-point precision (while disabling some optimizations).

c) Profiling


-p

Compile and link for function profiling with the UNIX gprof tool.

d) Options that only apply to Fortran programs

-assume byterecl

Specifies (for unformatted data files) that the units for the OPEN statement RECL specifier (record length) value are in bytes, not longwords (four-byte units). For formatted files, the RECL unit is always in bytes.


-r8

Set the default size of REAL to 8 bytes.

-i8

Set the default size of integer variables to 8 bytes.

-zero

Implicitly initialize all data to zero.

-save

Save variables (static allocation), except local variables within a recursive routine; opposite of -auto.

-CB

Performs run-time checks on whether array subscript and substring references are within declared bounds.

e) Large File Support (LFS)

To read/write files larger than 2GB you need to specify some flags at compilation:

Fortran: no additional flags needed.

CC/C++: LFS is obtained by specifying the following flags when compiling and linking:

-D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE
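
The effect of large-file support can be checked with a small self-contained program (shown here with gcc; the defines are the standard glibc large-file macros and work the same with icc):

```shell
cat > lfs_check.c <<'EOF'
#include <sys/types.h>
#include <stdio.h>
/* With _FILE_OFFSET_BITS=64 defined, off_t is 8 bytes even when
 * building 32-bit, so files beyond 2 GB can be addressed. */
int main(void) { printf("off_t is %d bytes\n", (int)sizeof(off_t)); return 0; }
EOF
gcc -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE lfs_check.c -o lfs_check
./lfs_check    # prints "off_t is 8 bytes"
rm -f lfs_check lfs_check.c
```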
f) Miscellaneous options

Little endian to Big endian conversion in Fortran is done through the F_UFMTENDIAN environment variable. When set, the following operations are done:

  • The WRITE operation converts little endian format to big endian format.

  • The READ operation converts big endian format to little endian format.


F_UFMTENDIAN="big"

Convert all files.

F_UFMTENDIAN="big;little:8"

All files except those connected to unit 8 are converted.
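
F_UFMTENDIAN is an ordinary environment variable, set in the shell before the program starts; nothing needs to be recompiled. A sketch (myprog is a hypothetical Fortran binary):

```shell
# Convert all unformatted Fortran I/O except unit 8 for this run:
export F_UFMTENDIAN="big;little:8"
echo "F_UFMTENDIAN is: $F_UFMTENDIAN"
# ./myprog   # hypothetical Fortran program; its runtime reads the variable
```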

g) NCE options (locally supplied by NSC)


Compile with ScaMPI support.

Math libraries

Intel Math Kernel Library

MKL Version 7.2 is installed on Dunder.

For complementary information look at Intel® Math Kernel Library 7.2 for Linux* Technical User Notes.

Additional documentation is available locally on Dunder at: /usr/local/intel/l_mkl_p_7.2.1.003/mkl721/doc/

The Math Kernel Library includes the following groups of routines:

  • Basic Linear Algebra Subprograms (BLAS):

    • vector operations

    • matrix-vector operations

    • matrix-matrix operations

  • Sparse BLAS (basic vector operations on sparse vectors)

  • Fast Fourier transform routines (with Fortran and C interfaces)

  • LAPACK routines for solving systems of linear equations

  • LAPACK routines for solving least-squares problems, eigenvalue and singular value problems, and Sylvester's equations

  • Vector Mathematical Library (VML) functions for computing core mathematical functions on vector arguments (with Fortran and C interfaces).

Full documentation can be found at

Directory structure

MKL is located in $MKL_ROOT, defined at login. Semantically, MKL consists of two parts: LAPACK and processor-specific kernels. The LAPACK library contains LAPACK routines and drivers that were optimized without regard to processor, so that it can be used effectively on processors from Pentium to Xeon. The processor-specific kernels contain BLAS, FFT, CBLAS and VML routines optimized for the specific processor. Threading software is supplied as a separate dynamic link library (libguide) when linking dynamically to MKL.

Linking with MKL

To use the LAPACK and BLAS software you must link two libraries: LAPACK and one of the processor-specific kernels (i.e. libmkl_p4n on Dunder). Please use -L$MKL_ROOT instead of hardcoding the path (e.g. do not use -L/usr/local/intel/l_mkl_p_7.2.008/mkl72/lib/32). This ensures that the correct libraries are used when switching between different mkl modules.

Example (LAPACK library, Xeon EM64T processor kernel):

ld myprog.o -L$MKL_ROOT -lmkl_lapack -lmkl_p4n

Example (dynamic linking; the dispatcher will load the appropriate dynamic kernel for the processor):

ld myprog.o -L$MKL_ROOT -lmkl -lguide -lpthread

Example (Dynamic linking using ifort):

ifort -L$MKL_ROOT -lmkl_lapack -lmkl_p4n example.o -o example

Quick getting started example

  1. Get an account on Dunder.

  2. Log in to Dunder using ssh:

  3. Compile an MPI application using the preferred Intel compiler (see Compiling for more details):

    icc -Nscampi helloworld.c -o helloworld
  4. Run the application as a batch job (see Submitting jobs for more details):

    1. Create a PBS-script. This file contains information about how many nodes you wish to use, how long you expect the job to run, how to start the application, etc.

    2. Submit the job.

      qsub pbsexample
    3. When the job is finished, an e-mail will be sent to the e-mail address specified in ~/.forward, or to the e-mail address specified in the PBS-script (#PBS -M emailaddress).

For more detailed information, read Compiling and The Batch Queueing System.


  • Use mpprun to start your MPI applications.

  • No world-writable home directories; if you need this, create world-writable subdirectories instead.

  • If you want your job to start quickly, use a wall clock time of 1 hour or less, since there are 16 nodes reserved for jobs of 1 hour or less.

  • Use projinfo to see how much CPU time your group has used during the current month.

  • Under /nobackup/*/ and /home there is a file called diskinfo which contains disk usage for each user. Example: sort -nk1 /nobackup/rossby2/diskinfo

  • To plot data have a look at octave or gnuplot.

  • A status overview of Dunder is available at

  • Use the local disks (/disk/local/) for temporary storage instead of the NFS-mounted area /home. /home consists of two RAID 0 disks and is globally mounted over NFS to every compute node, so it will perform poorly compared to the local disks attached to the compute nodes. Note that /disk/local is not backed up and should be considered a temporary scratch area.

  • File transfer is available using scp and sftp.

  • Backup of /home is made every night.

  • For questions, contact

Page last modified: 2009-03-25 09:59
For more information contact us at