DALTON and LSDALTON installations at NSC


Directory list:

Version     Description
2013.0      release 2013.0

Level of support

Tier 2: NSC has only limited experience with this software, but we will try to help as much as possible. We have run some tests (for example, the test suite that ships with the program), but they may be far from exhaustive. We will try to install and test new versions as soon as we can.

Please see the page describing our software support categories for more information.
You can also contact support@nsc.liu.se for further information.

The DALTON and LSDALTON installations are generally maintained by Radovan Bast from PDC, Stockholm.

The DALTON2013 suite consists of two separate executables, Dalton and lsDalton. The Dalton code is a powerful tool for a wide range of molecular properties at different levels of theory, whereas lsDalton is a linear-scaling HF and DFT code suitable for large molecular systems.

Developer’s homepage: daltonprogram.org

How to run

Example batch script for running DALTON:

#!/bin/bash

#SBATCH --time=10:00:00
#SBATCH --nodes=2
#SBATCH --exclusive
#SBATCH --account=snicxxx-yyy

module load dalton/2013.0-build1

# Space for F77 "WORK array" allocations for each MPI rank
export WORK_MEM_MB=1024
# Convert MB to 8-byte words (1 MB = 131072 words), do not edit
export WRKMEM=$(( $WORK_MEM_MB * 131072 ))

# Place where DALTON writes temporary files
export DALTON_TMPDIR=$SNIC_TMP

# Specify input files
DALTON_INPUT=input.dal
DALTON_MOLECULE=molecule.mol

# Broadcast input files to all nodes
sbcast $DALTON_INPUT ${DALTON_TMPDIR}/DALTON.INP
sbcast $DALTON_MOLECULE ${DALTON_TMPDIR}/MOLECULE.INP

# Start the job ($DALTON_EXE is set when the module is loaded)
cd $SNIC_TMP
mpprun $DALTON_EXE

# Copy output file back
cp DALTON.OUT ${SLURM_SUBMIT_DIR}/${DALTON_INPUT%.dal}_${DALTON_MOLECULE%.mol}.out
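For reference, the WRKMEM conversion in the script above can be checked on its own: 1 MB is 1048576 bytes, and the work array is counted in 8-byte double-precision words, so 1 MB corresponds to 1048576 / 8 = 131072 words.

```shell
# Check of the MB-to-words conversion used in the batch script:
# 1 MB = 1048576 bytes; one double-precision word = 8 bytes,
# so 1048576 / 8 = 131072 words per MB.
WORK_MEM_MB=1024
WRKMEM=$(( WORK_MEM_MB * 131072 ))
echo "$WRKMEM"   # 1024 MB = 134217728 words
```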

Example batch script for running LSDALTON:

#!/bin/bash

#SBATCH --time=10:00:00
#SBATCH --nodes=2
#SBATCH --exclusive
#SBATCH --account=snicxxx-yyy

module load dalton/2013.0-build1

# Space for F77 "WORK array" allocations for each MPI rank
export WORK_MEM_MB=1024
# Convert MB to 8-byte words (1 MB = 131072 words), do not edit
export WRKMEM=$(( $WORK_MEM_MB * 131072 ))

# Place where LSDALTON writes temporary files
export DALTON_TMPDIR=$SNIC_TMP

# Specify input files
DALTON_INPUT=input.dal
DALTON_MOLECULE=molecule.mol

# Broadcast input files to all nodes
sbcast $DALTON_INPUT ${DALTON_TMPDIR}/LSDALTON.INP
sbcast $DALTON_MOLECULE ${DALTON_TMPDIR}/MOLECULE.INP

# Start the job ($LSDALTON_EXE is set when the module is loaded)
cd $SNIC_TMP
mpprun $LSDALTON_EXE

# Copy output file back
cp LSDALTON.OUT ${SLURM_SUBMIT_DIR}/${DALTON_INPUT%.dal}_${DALTON_MOLECULE%.mol}.out

mpprun will automatically start the number of MPI ranks that matches the requested allocation.
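As an illustration of what that means on a Triolith-like system (16 cores per node; the numbers here are examples, not queried from the scheduler), an exclusive two-node allocation yields:

```shell
# Illustrative only: with --nodes=2 --exclusive on 16-core nodes,
# mpprun starts nodes * cores-per-node MPI ranks.
NODES=2
CORES_PER_NODE=16
RANKS=$(( NODES * CORES_PER_NODE ))
echo "$RANKS"   # 32 MPI ranks
```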

NOTE: You will need to change input.dal and molecule.mol to whatever name your input files have.

Remember to change the account string, snicxxx-yyy, to that of the project you want to charge the job to. You can find the account string in NSC Express or with the projinfo command. If you are a member of only one project, you do not need this line at all.

NOTE about allocating memory: Make sure that the total memory allocated (i.e. MPI-ranks-per-node * memory-per-rank) does not exceed the available physical memory. On Triolith, most compute nodes have 32 GB of memory, so if you use all 16 cores you can allocate up to roughly 2000 MB per MPI rank. If you need more than that, either reduce the number of MPI ranks per node or schedule your job on the fat memory compute nodes. You can request the fat memory nodes by adding the --constraint=fat option to sbatch, and control the number of MPI ranks per node with the --ntasks-per-node=<ntasks> option, where <ntasks> is less than 16.
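The check described above can be sketched in shell arithmetic. The node size and rank count below are example values for a standard Triolith node, not queried from the system:

```shell
# Sketch of the memory-budget check: total per-node allocation must
# fit in the node's physical RAM (example values, adjust for your job).
NODE_MEM_MB=32768        # standard Triolith node: 32 GB
RANKS_PER_NODE=16        # all cores used
WORK_MEM_MB=1800         # per-rank WORK array budget
TOTAL_MB=$(( RANKS_PER_NODE * WORK_MEM_MB ))
if [ "$TOTAL_MB" -le "$NODE_MEM_MB" ]; then
    echo "fits: ${TOTAL_MB} MB of ${NODE_MEM_MB} MB"
else
    echo "too much: reduce ranks per node or use --constraint=fat"
fi
```

Note that the ~2000 MB per rank guideline leaves headroom for the operating system and the program's other allocations, so staying somewhat below the raw arithmetic limit is advisable.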

Disclaimer

NSC takes no responsibility for the correctness of results produced with the binaries. Always evaluate the binaries against known results for the systems and properties you are investigating before using them for production jobs.