Dalton

General
Dalton is a powerful quantum chemistry program for calculations of molecular properties with SCF, DFT, MP2, MCSCF, or CC wave functions. The strengths of the program lie mainly in the areas of frequency-dependent magnetic and electric properties and in studies of molecular potential energy surfaces, both for static and dynamical investigations. See the official homepage for details.

Policy
The Dalton quantum chemistry program is available to users at NSC under the condition that published work includes citation of the program (see below) as well as citations to papers describing the implementation in cases when certain features of the code have been used.

Program citation
"DALTON, a molecular electronic structure program, Release 2.0 (2005), see http://www.kjemi.uio.no/software/dalton/dalton.html "
or,
"Dalton, a molecular electronic structure program, Release 1.2 (2001), written by T. Helgaker, H.J.Aa. Jensen, P. Jørgensen, J. Olsen, K. Ruud, H. Ågren, A.A. Auer, K.L. Bak, V. Bakken, O. Christiansen, S. Coriani, P. Dahle, E.K. Dalskov, T. Enevoldsen, B. Fernandez, C. Hättig, K. Hald, A. Halkier, H. Heiberg, H. Hettema, D. Jonsson, S. Kirpekar, R. Kobayashi, H. Koch, K.V. Mikkelsen, P. Norman, M.J. Packer, T.B. Pedersen, T.A. Ruden, A. Sanchez, T. Saue, S.P.A. Sauer, B. Schimmelpfennig, K. O. Sylvester-Hvid, P.R. Taylor, and O. Vahtras"

Manual
A manual is available on the official homepage.

Essentials
To run Dalton, one has to provide two input files called xxxx.dal and yyyy.mol, where xxxx and yyyy are arbitrary strings. The yyyy.mol file typically contains information about the molecular symmetry, the nuclear coordinates, and the basis functions. The xxxx.dal file specifies the approximation to be used (Hartree-Fock, MP2, etc.) and which molecular properties should be calculated. To execute Dalton, one types:

dalton xxxx yyyy

As a result, a file called xxxx_yyyy.out will be produced. Sample input and output files are provided below for each platform.
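As an illustration, the sketch below shows what a minimal Hartree-Fock single-point run could look like. The file names hf and h2o are only examples, and the exact keywords, as well as the format of the accompanying .mol file, should be checked against the manual:

# Minimal Hartree-Fock input (illustrative; see the manual for the full keyword reference)
cat > hf.dal << 'EOF'
**DALTON INPUT
.RUN WAVE FUNCTIONS
**WAVE FUNCTIONS
.HF
**END OF DALTON INPUT
EOF

# With a geometry/basis file h2o.mol prepared as described in the manual:
dalton hf h2o              # results are written to hf_h2o.out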

Resource specifics

Dalton on Neolith
To run Dalton on Neolith using 2 nodes (a total of 16 cores) for 1 hour, submit the shell script below to the queue (the submit command is: sbatch scriptname). The input files xxxx.dal and yyyy.mol are assumed to be located in the submit directory. Temporary files created by the program are placed on the node-local disk /scratch/local in order not to slow down the network file system, and they are removed after the calculation is completed. See output file.

#!/bin/bash
#SBATCH -N 2
#SBATCH -t 1:00:00
#SBATCH -U name_of_account

DALTON=/software/apps/dalton/2.0/dalton/bin/dalton

dalinp=xxxx
molinp=yyyy

$DALTON $dalinp $molinp
#
# End of script
#

Important Notes:

  • There is no need to specify the "-N" flag to Dalton. All cores available to your job will be used.
  • In order to run a serial job, alter the queue-system node directives in the submit script to read:
    #SBATCH -N 1
    #SBATCH -n 1
    
  • The "-U" directive to SLURM is used for the SNAC accounting. If you have only access to one SNAC account, the scheduler will (with some time delay) schedule your job to that account. If you have more than one SNAC account, the "-U" directive is compulsory.

Dalton on Monolith
To run Dalton on Monolith using 4 processors (2 nodes with 2 processors each) for 1 hour, submit the shell script below to the queue (the submit command is: qsub scriptname). The input files xxxx.dal and yyyy.mol are assumed to be located in the submit directory. Temporary files created by the program are placed on the node-local disk /disk/local in order not to slow down the network file system (NFS), and they are removed after the calculation is completed. See output file.

#!/bin/bash
#PBS -l walltime=1:00:00
#PBS -l nodes=2:ppn=2

NProc=`wc -l < $PBS_NODEFILE`    # one line per allocated processor in the PBS node file
#
# Choose version of the program
#
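# (the assignment listed last takes effect; comment out the version you do not want)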
DALTON=/usr/local/chem/dalton/1.2/dalton/dalton
DALTON=/usr/local/chem/dalton/2.0/dalton/bin/dalton

dalinp=xxxx
molinp=yyyy

cd $PBS_O_WORKDIR
time $DALTON -N $NProc $dalinp $molinp
#
# End of script
#
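For reference, a typical submit-and-monitor sequence on Monolith could look as follows (qsub is mentioned above; qstat and qdel are standard PBS commands, and the script name dalton_job.sh is only an example):

qsub dalton_job.sh         # submit the job script; the job id is printed
qstat -u $USER             # list your queued and running jobs
qdel jobid                 # remove a job from the queue if needed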

Dalton on Mozart
To set the shell environment so that Dalton can be executed on Mozart, you need to add the Dalton module. Therefore, before the first use of Dalton, type

echo "dalton" > $HOME/.modules
and the dalton module will be added automatically in future sessions. You will then get the latest version of Dalton as the default program.
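To get the module in your current shell session as well, without logging out and in again, the standard module commands can be used (assuming they are available in your interactive shell):

module avail dalton        # list the available dalton versions
module add dalton          # load the default dalton module in this session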

To run Dalton on Mozart using 1 processor for 1 hour, submit the shell script below to the queue (the submit command is: qsub scriptname). The input files (hf.dal and SC4H4.mol in the example below) are assumed to be located in the submit directory. Temporary files created by the program are placed on the disk /scratch/$USER, and they are removed after the calculation is completed. See output file.

#!/bin/bash
#PBS -l walltime=1:00:00
#PBS -l nodes=1:ppn=1

dalinp=hf
molinp=SC4H4

cd $PBS_O_WORKDIR
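# DALTON is assumed to be set by the dalton module loaded via $HOME/.modules (see above)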
time $DALTON $dalinp $molinp

#
# End of script
#

Note:
On Mozart, we supply Dalton for use on a single processor only. The intention is to encourage use of the distributed-memory machines (Monolith and SweGrid) for parallel jobs and to save the shared-memory SGI machine for memory-intensive calculations. We anticipate that large memory is predominantly required for electron-correlated calculations, and these parts of the code are not parallelized.

Dalton on SweGrid
To run Dalton on SweGrid using 3 processors for 10 minutes, submit the xRSL script below (the submit command is: ngsub -f scriptname). The input files xxxx.dal and yyyy.mol are assumed to be located in the submit directory, and the output file xxxx_yyyy.out is stored on the SweGrid resource after the calculation has finished. The current example also keeps the restart file named xxxx_yyyy.tar.gz. The result files can be retrieved from the SweGrid resource in different ways. In this example, the output and checkpoint files (in addition to the log files) are fetched with the command: ngget jobname.

&(jobname="SC4H4")
 (count=3)
 (cputime="10 minutes")
 (executable="/usr/bin/time")
 (arguments="dalton" "xxxx" "yyyy")
 (inputfiles=("xxxx.dal" "")("yyyy.mol" ""))
 (outputfiles=("xxxx_yyyy.out" "")("xxxx_yyyy.tar.gz" ""))
 (runtimeenvironment="APPS/CHEM/DALTON-2.0-1.0")
 (stdout="stdout")
 (stderr="stderr")
 (gmlog="gmlog")
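Putting the pieces together, a typical SweGrid session could look as follows (ngsub and ngget are mentioned in the text; ngstat is the standard command for checking job status, and the file name dalton.xrsl as well as the job identifier are only examples):

grid-proxy-init            # create a valid proxy (see the notes below)
ngsub -f dalton.xrsl       # submit the job; the job id is printed
ngstat jobid               # check the status of the job
ngget jobid                # fetch output, restart and log files when finished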

Note:

  • Before submission can be done, you must acquire a valid token or proxy for SweGrid (the command to use is grid-proxy-init) [details].
  • The result files are kept on the SweGrid resource for a limited time only. Remember to retrieve result files [details].
  • Timing information in the output file is not correct when running Dalton 1.2.
  • If you have access to a SweGrid storage resource (a grid-ftp server), you may transfer input/output files to this resource before/after the calculation has completed. You typically need to contact the system administrator to be granted access before using an existing grid-ftp server (it is not enough to have regular login access). The xRSL file below provides an example in which the output file xxxx_yyyy.out is transferred to a user directory on the computer green.nsc.liu.se:
    &(jobname="SC4H4")
     (count=3)
     (cputime="10 minutes")
     (executable="/usr/bin/time")
     (arguments="dalton" "xxxx" "yyyy")
     (inputfiles=("xxxx.dal" "")("yyyy.mol" ""))
     (outputfiles=("xxxx_yyyy.out" "gsiftp://green.nsc.liu.se/home/panor/xxxx_yyyy.out"))
     (runtimeenvironment="APPS/CHEM/DALTON-2.0-1.0")
     (stdout="stdout")
     (stderr="stderr")
     (gmlog="gmlog")
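    The transferred output file can later be fetched from the grid-ftp server with a standard Globus client command, for example (the local destination path is only an example):

    globus-url-copy gsiftp://green.nsc.liu.se/home/panor/xxxx_yyyy.out file://$PWD/xxxx_yyyy.out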
    

Dalton on Green
To run Dalton on Green using 4 processors for 1 hour, submit the shell script below to the queue (the submit command is: qsub scriptname). The input files xxxx.dal and yyyy.mol are assumed to be located in the submit directory. Temporary files created by the program are placed on the node-local disk /disk/local in order not to slow down the network file system (NFS), and they are removed after the calculation is completed. See output file.

#!/bin/bash
#PBS -l walltime=1:00:00
#PBS -l nodes=4

NProc=`wc -l < $PBS_NODEFILE`
#
# Choose version of the program
#
DALTON=/usr/local/chem/dalton/1.2.1/dalton/dalton
DALTON=/usr/local/chem/dalton/2.0/dalton/bin/dalton

dalinp=xxxx
molinp=yyyy

cd $PBS_O_WORKDIR
time $DALTON -N $NProc $dalinp $molinp
#
# End of script
#

Note:

  • Timing information in the output file is not correct when running Dalton 1.2.




