Multinode XDS, XDSAPP, autoPROC, XDSGUI

We made small changes to the XDS subroutine forkxds to enable Slurm-scheduled multi-node runs with XDS and the related XDSAPP, XDSGUI and autoPROC software. According to our XDS and XDSAPP benchmark runs with Eiger detector data, multi-node runs work fine provided the total number of cores requested by the Slurm allocation matches the internal software parameters according to

MAXIMUM_NUMBER_OF_JOBS x MAXIMUM_NUMBER_OF_PROCESSORS = number of nodes x cores per node = total number of cores

The total number of cores allocated at the start of each compute job is defined either by the interactive command or by the sbatch script:

A) interactive command:
    interactive --nodes=2 --exclusive -t 00:30:00 -A snic2018-3-XXX
   gives an interactive terminal window with a Slurm allocation of:
    a) 32 cores at NSC Triolith for 30 minutes on project snic2018-3-XXX
    b) 40 cores at LUNARC Aurora for 30 minutes on project snic2018-3-XXX
or
B) sbatch script:
  #!/bin/sh
  #SBATCH --nodes=2 --exclusive
  #SBATCH -t 0:30:00
  #SBATCH -A snic2018-3-XXX
  etc.
  results in a Slurm allocation of:
    a) 32 cores at NSC Triolith for 30 minutes on project snic2018-3-XXX
    b) 40 cores at LUNARC Aurora for 30 minutes on project snic2018-3-XXX

XDS and its derivatives set MAXIMUM_NUMBER_OF_JOBS and MAXIMUM_NUMBER_OF_PROCESSORS via related keywords and options:

  1. XDS and XDSGUI: MAXIMUM_NUMBER_OF_JOBS and MAXIMUM_NUMBER_OF_PROCESSORS in XDS.INP
  2. autoPROC: autoPROC_XdsKeyword_MAXIMUM_NUMBER_OF_JOBS and autoPROC_XdsKeyword_MAXIMUM_NUMBER_OF_PROCESSORS in the sbatch script
  3. XDSAPP GUI: "No. of jobs" and "No. of cpus" in the XDSAPP GUI
  4. XDSAPP script: -j and -c in the sbatch script
  5. XDSME: single node only, since XDSME varies these parameters for each XDS subroutine (INIT, COLSPOT, IDXREF, INTEGRATE, etc.), so they cannot be matched by the Slurm allocation command at the beginning of the XDSME sbatch script
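Inside a running job, Slurm exports the allocation size, so the matching rule can be computed rather than hard-coded. The sketch below is an illustration, not a PReSTO recommendation: the split chosen (two jobs per node, half the cores per job) is one balanced choice, and the fallback values only make the sketch runnable outside Slurm.

```shell
#!/bin/sh
# Derive XDS keywords from the Slurm allocation (sketch).
# SLURM_JOB_NUM_NODES and SLURM_CPUS_ON_NODE are exported by Slurm inside a job;
# the 2 / 16 fallbacks below only make the sketch runnable outside a job.
NODES=${SLURM_JOB_NUM_NODES:-2}
CORES=${SLURM_CPUS_ON_NODE:-16}
# One balanced choice satisfying JOBS x PROCESSORS = NODES x CORES:
JOBS=$((NODES * 2))
PROCS=$((CORES / 2))
echo "MAXIMUM_NUMBER_OF_JOBS=$JOBS"
echo "MAXIMUM_NUMBER_OF_PROCESSORS=$PROCS"
echo "total cores in allocation: $((NODES * CORES))"
```

With the fallback values (2 nodes, 16 cores per node) this prints MAXIMUM_NUMBER_OF_JOBS=4 and MAXIMUM_NUMBER_OF_PROCESSORS=8, one of the valid two-node NSC Triolith combinations.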

Multi-node XDS using sbatch or the interactive-node command line

Using the PReSTO setup, MX software can be launched in three different ways:

  1. sbatch scripts
  2. command-line, e.g.:
  • interactive --nodes=2 --exclusive -t 00:30:00 -A snic2018-3-XXX
  • module load XDSAPP
  • xdsapp
  3. PReSTO menu

However, multi-node runs are only possible when software is launched by (1) sbatch script or (2) from an interactive-node command line, and not from (3) the PReSTO menu, which launches single-node software GUIs.

Software   PReSTO menu       command-line       sbatch script
XDS        NO                YES multi-node     YES multi-node
XDSGUI     YES single-node   YES multi-node     NO
XDSAPP     YES single-node   YES multi-node     YES multi-node
autoPROC   NO                YES multi-node     YES multi-node
XDSME      NO                YES single-node    YES single-node

Table 1. Various ways to launch XDS and its derivatives. XDSAPP can run in multi-node mode from the command line or via an sbatch script; from the PReSTO menu only single-node execution is possible.

Multi-node sbatch script examples

When adapting Slurm allocations according to:

MAXIMUM_NUMBER_OF_JOBS x MAXIMUM_NUMBER_OF_PROCESSORS = number of nodes x cores per node = total number of cores

note that

  • NSC Triolith has 16 cores per node
  • LUNARC Aurora has 20 cores per node

so the intended cluster (NSC Triolith or LUNARC Aurora) is indicated in the examples below.
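Any chosen combination can be checked against the allocation with a little arithmetic. The `check` function below is a hypothetical helper, using the Triolith (16) and Aurora (20) cores-per-node figures quoted above:

```shell
#!/bin/sh
# check JOBS PROCS NODES CORES_PER_NODE:
# verify that JOBS x PROCS equals NODES x CORES_PER_NODE.
check() {
  if [ $(($1 * $2)) -eq $(($3 * $4)) ]; then
    echo "OK: $1 x $2 jobs/processors fits $3 node(s) x $4 cores"
  else
    echo "MISMATCH: $1 x $2 != $3 x $4"
  fi
}
check 4 4 1 16    # NSC Triolith, single node
check 4 5 1 20    # LUNARC Aurora, single node
check 8 4 2 16    # NSC Triolith, two nodes
check 4 10 2 20   # LUNARC Aurora, two nodes
```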

Brief examples when using 1 node with 16 cores at NSC Triolith

sbatch single-node-examples for NSC Triolith:
#SBATCH --nodes=1 --exclusive

Required when using XDS, XDSGUI:
  MAXIMUM_NUMBER_OF_JOBS=4
  MAXIMUM_NUMBER_OF_PROCESSORS=4
  or
  MAXIMUM_NUMBER_OF_JOBS=2
  MAXIMUM_NUMBER_OF_PROCESSORS=8

Required when using autoPROC:
  autoPROC_XdsKeyword_MAXIMUM_NUMBER_OF_JOBS=4 \
  autoPROC_XdsKeyword_MAXIMUM_NUMBER_OF_PROCESSORS=4 \
  or
  autoPROC_XdsKeyword_MAXIMUM_NUMBER_OF_JOBS=2 \
  autoPROC_XdsKeyword_MAXIMUM_NUMBER_OF_PROCESSORS=8 \

Required when using XDSAPP GUI:
  No. of jobs 4
  No. of cpus 4
  or
  No. of jobs 2
  No. of cpus 8

Brief examples when using 1 node with 20 cores at LUNARC Aurora

sbatch single-node-examples for LUNARC Aurora:
#SBATCH --nodes=1 --exclusive

Required when using XDS, XDSGUI:
  MAXIMUM_NUMBER_OF_JOBS=4
  MAXIMUM_NUMBER_OF_PROCESSORS=5
  or
  MAXIMUM_NUMBER_OF_JOBS=5
  MAXIMUM_NUMBER_OF_PROCESSORS=4

Required when using autoPROC:
  autoPROC_XdsKeyword_MAXIMUM_NUMBER_OF_JOBS=4 \
  autoPROC_XdsKeyword_MAXIMUM_NUMBER_OF_PROCESSORS=5 \
  or
  autoPROC_XdsKeyword_MAXIMUM_NUMBER_OF_JOBS=5 \
  autoPROC_XdsKeyword_MAXIMUM_NUMBER_OF_PROCESSORS=4 \

Required when using XDSAPP GUI:
  No. of jobs 4
  No. of cpus 5
  or
  No. of jobs 5
  No. of cpus 4

Brief examples when using 2 nodes with 16 cores at NSC Triolith

sbatch two-node-examples for NSC Triolith:
#SBATCH --nodes=2 --exclusive

Required when using XDS, XDSGUI:
  MAXIMUM_NUMBER_OF_JOBS=8
  MAXIMUM_NUMBER_OF_PROCESSORS=4
  or
  MAXIMUM_NUMBER_OF_JOBS=4
  MAXIMUM_NUMBER_OF_PROCESSORS=8

Required when using autoPROC:
  autoPROC_XdsKeyword_MAXIMUM_NUMBER_OF_JOBS=8 \
  autoPROC_XdsKeyword_MAXIMUM_NUMBER_OF_PROCESSORS=4 \
  or
  autoPROC_XdsKeyword_MAXIMUM_NUMBER_OF_JOBS=4 \
  autoPROC_XdsKeyword_MAXIMUM_NUMBER_OF_PROCESSORS=8 \

Required when using XDSAPP GUI:
  No. of jobs 8
  No. of cpus 4 
  or
  No. of jobs 4
  No. of cpus 8 
  
  

Brief examples when using 2 nodes with 20 cores at LUNARC Aurora

sbatch two-node-examples for LUNARC Aurora:
#SBATCH --nodes=2 --exclusive

Required when using XDS, XDSGUI:
  MAXIMUM_NUMBER_OF_JOBS=8
  MAXIMUM_NUMBER_OF_PROCESSORS=5
  or
  MAXIMUM_NUMBER_OF_JOBS=5
  MAXIMUM_NUMBER_OF_PROCESSORS=8
  or
  MAXIMUM_NUMBER_OF_JOBS=4
  MAXIMUM_NUMBER_OF_PROCESSORS=10

Required when using autoPROC:
  autoPROC_XdsKeyword_MAXIMUM_NUMBER_OF_JOBS=8 \
  autoPROC_XdsKeyword_MAXIMUM_NUMBER_OF_PROCESSORS=5 \
  or
  autoPROC_XdsKeyword_MAXIMUM_NUMBER_OF_JOBS=5 \
  autoPROC_XdsKeyword_MAXIMUM_NUMBER_OF_PROCESSORS=8 \
  or
  autoPROC_XdsKeyword_MAXIMUM_NUMBER_OF_JOBS=4 \
  autoPROC_XdsKeyword_MAXIMUM_NUMBER_OF_PROCESSORS=10 \

Required when using XDSAPP GUI:
  No. of jobs 8
  No. of cpus 5 
  or
  No. of jobs 5
  No. of cpus 8
  or
  No. of jobs 4
  No. of cpus 10 

Example 1: Run 2-node XDSAPP (command-line version) with Eiger data at NSC Triolith, using XDSAPP command-line options

Open a terminal window and submit:
sbatch xdsapp.script, where xdsapp.script is:
#!/bin/sh
#SBATCH -t 0:30:00
#SBATCH --nodes=2 --exclusive
#SBATCH -A snic2018-3-xxx
#SBATCH --mail-type=ALL
#SBATCH --mail-user=name.surname@lu.se
module load XDSAPP
xdsapp --cmd \
--dir /proj/xray/users/x_marmo/test_suite_NSC/eiger/empty/presto/xdsapp_eiger/xdsit \
-j 8 \
-c 4 \
-i /proj/xray/users/x_marmo/test_suite_NSC/eiger/empty/2015_11_10/insu6_1_data_000001.h5

Example 2: Run 4-node XDSAPP (command-line version) with Eiger data at LUNARC Aurora, using XDSAPP command-line options

Open a terminal window and submit:
sbatch xdsapp.script, where xdsapp.script is:
#!/bin/sh
#SBATCH -t 0:30:00
#SBATCH --nodes=4 --exclusive
#SBATCH -A snic2018-3-xxx
#SBATCH --mail-type=ALL
#SBATCH --mail-user=name.surname@lu.se
module load XDSAPP
xdsapp --cmd \
--dir /lunarc/nobackup/users/mochma/test_suite_NSC/eiger/empty/presto/xdsapp_forkxds_4 \
-j 8 \
-c 10 \
-i /lunarc/nobackup/users/mochma/test_suite_NSC/eiger/empty/2015_11_10/insu6_1_data_000001.h5

Example 3: Run 1-node autoPROC using sbatch with Eiger data at LUNARC Aurora. autoPROC can run on multiple nodes; however, many autoPROC subroutines are not parallel, so multi-node autoPROC runs waste compute time without gaining much wall-clock time.

#!/bin/sh
#SBATCH -t 0:30:00
#SBATCH --nodes=1 --exclusive
#SBATCH -A snic2018-3-xxx
#SBATCH --mail-type=ALL
#SBATCH --mail-user=name.surname@lu.se
module load autoPROC
process \
-h5 /lunarc/nobackup/users/mochma/test_suite_NSC/eiger/empty/2015_11_10/insu6_1_master.h5 \
autoPROC_XdsKeyword_LIB=/sw/pkg/presto/software/Neggia/1.0.1-goolf-PReSTO-1.7.20/lib/dectris-neggia.so \
autoPROC_XdsKeyword_MAXIMUM_NUMBER_OF_JOBS=4 \
autoPROC_XdsKeyword_MAXIMUM_NUMBER_OF_PROCESSORS=5 \
-d pro1 > pro1.log 

Example 4: Run 2-node XDS, making your own XDS.INP from scratch, at LUNARC Aurora

A) module load generate_XDS.INP
    generate_XDS.INP insu6_1_master.h5
B) Edit XDS.INP, planning for a 2-node run at LUNARC Aurora (40 cores), i.e. by adding
  MAXIMUM_NUMBER_OF_JOBS=4
  MAXIMUM_NUMBER_OF_PROCESSORS=10
  LIB=/sw/pkg/presto/software/Neggia/1.0.1-goolf-PReSTO-1.7.20/lib/dectris-neggia.so
C) Create and edit xds.script as:
#!/bin/sh
#SBATCH -t 0:15:00
#SBATCH --nodes=2 --exclusive
#SBATCH -A snic2018-3-xxx
#SBATCH --mail-type=ALL
#SBATCH --mail-user=name.surname@lu.se
module load XDS
xds_par
D) Submit the job:
sbatch xds.script
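Step B can also be scripted instead of edited by hand. The heredoc append below is a sketch, assuming generate_XDS.INP has already written XDS.INP in the current directory; the keyword values and the Neggia library path are those used in this example:

```shell
#!/bin/sh
# Append the 2-node LUNARC Aurora settings from step B to XDS.INP.
# (cat >> creates XDS.INP if it does not yet exist.)
cat >> XDS.INP <<'EOF'
MAXIMUM_NUMBER_OF_JOBS=4
MAXIMUM_NUMBER_OF_PROCESSORS=10
LIB=/sw/pkg/presto/software/Neggia/1.0.1-goolf-PReSTO-1.7.20/lib/dectris-neggia.so
EOF
```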

Multi-node runs using interactive from command-line

Example 5: Run 2-node XDSGUI using an interactive node at LUNARC Aurora (40 cores), since XDSGUI cannot be scripted. Single-node XDSGUI can be launched from the PReSTO menu.

A) Get two interactive nodes for 30 min.
    interactive --nodes=2 --exclusive -t 00:30:00 -A snic2018-3-XXX
B) module load XDSGUI
    xdsgui
C) Create XDSGUI output directory under Projects 
D) Generate XDS.INP by pressing the "generate_XDS.INP" button under Frame and pointing to:
  /lunarc/nobackup/users/mochma/test_suite_NSC/eiger/empty/2015_11_10/insu6_1_master.h5
E) Add 
    MAXIMUM_NUMBER_OF_JOBS=4
    MAXIMUM_NUMBER_OF_PROCESSORS=10
    LIB=/sw/pkg/presto/software/Neggia/1.0.1-goolf-PReSTO-1.7.20/lib/dectris-neggia.so
  to XDS.INP
F) Press "Save" button under XDS.INP and press "Run XDS"

Example 6: Run 2-node XDSAPP using an interactive node at LUNARC Aurora (40 cores). Single-node XDSAPP can be launched from the PReSTO menu.

A) Get two interactive nodes for 30 min.
    interactive --nodes=2 --exclusive -t 00:30:00 -A snic2018-3-XXX
B) Check the allocation (optional):
        squeue -u mochma
         JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
        165722      snic _interac   mochma  R       0:08      2 au[118,159]
C) module load XDSAPP
    xdsapp
D) Select the output directory under Settings.
E) Load the dataset by pressing the "Load" button and pointing to:
  /lunarc/nobackup/users/mochma/test_suite_NSC/eiger/empty/2015_11_10/insu6_1_data_000001.h5
F) Under the "You know what you do" Settings, add
    No. of jobs 4
    No. of cpus 10    
G) Press "Do all" button 
