Gaussian 03 Rev. E.01


Directory list:

Version   Description
bdist     Binary distribution of Gaussian 03 Rev. E.01

Level of support

Tier 2: NSC has only limited experience with this software, but we will try to help as much as possible. We have run some tests, for example if the program comes with a test suite, but they may be far from exhaustive. We will try to install and test new versions as soon as we can.

Please see the page describing our software support categories for more information.
You can also contact support@nsc.liu.se for further information.

Gaussian 03 has been superseded by Gaussian 09.

You can obtain the Gaussian 03 manual pages here: http://www.gaussian.com/g_dl/g03_man.zip

Please contact NSC Support if you have any questions or problems with running Gaussian 03.

How to run

Example batch script for running Gaussian 03:

#!/bin/bash 

#SBATCH --time=10:00:00                #Requested walltime 
#SBATCH --nodes=2                      #Number of compute nodes to allocate
#SBATCH --exclusive                    #Don't share compute nodes with other jobs (i.e. allocate complete nodes)
#SBATCH --account=liu-2012-00060-20    #Account string for the project that you wish to account the job to

job=jobname
WRKDIR=$(pwd)

#Trap SIGTERM and copy the chk file if the job hits the walltime limit 
trap 'if [ -f ${job}.chk ]; then cp ${job}.chk ${WRKDIR}; else echo "No named chk file"; fi; echo "SIGTERM was trapped"' SIGTERM

#Load the relevant Gaussian module
module load gaussian/G03RevE.01-bdist
. $g03root/g03/bsd/g03.profile 

cd $GAUSS_SCRDIR

if [ -f ${WRKDIR}/${job}.chk ]
then
    cp ${WRKDIR}/${job}.chk .
fi

#Run g03 in the background and wait for it, so that the SIGTERM trap can fire
(time g03 < ${WRKDIR}/${job}.com) > ${WRKDIR}/${job}.out &
wait

g_exit_status=$?

if [ -f ${job}.chk ]
then
    cp ${job}.chk ${WRKDIR}
fi

exit $g_exit_status
#END OF SCRIPT

Note that you must edit the jobname and the account string before submitting!

Important Link 0 commands

NOTE: NSC’s g03 wrapper will by default start two Linda workers per compute node!

%NProcLinda : The number of Linda workers to use for the job.
If NProcLinda is not specified, then the job will only run on one compute node!
The value of NProcLinda should be set to 2x the number of compute nodes that you intend to use for the job.

#SBATCH --nodes=2  
=> %NProcLinda=4
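If you generate the input file from inside the batch script, the Linda worker count can be derived from Slurm's node count instead of being hard-coded. A minimal sketch; SLURM_NNODES is set by Slurm inside a job, and the fallback of 2 below is only for illustration:

```shell
#!/bin/bash
# NSC's g03 wrapper starts two Linda workers per compute node by default,
# so the worker count is simply twice the allocated node count.
NNODES=${SLURM_NNODES:-2}        # set by Slurm inside a job; 2 is only an illustrative fallback
NPROCLINDA=$((2 * NNODES))
echo "%NProcLinda=${NPROCLINDA}" # for --nodes=2 this yields %NProcLinda=4
```

The echoed line can be prepended to the .com file before g03 is started.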

%NProcShared : The number of shared memory threads to use for a shared memory (only) parallel job.
Note that this value should not exceed 8 for this installation of G03!
If NProcShared is not specified then a value of 8 is used by default, so you don’t need to specify NProcShared at all.
OR
%NProcShared : The number of shared memory threads to use for each Linda worker in a Linda parallel job, if NProcLinda is also specified.
Note that this value multiplied by the number of Linda workers per compute node should never exceed the number of physical cores on the compute node. Hence, the default value of 8 should generally be used on Triolith.

%Mem : The amount of memory to allocate for the job.
Note that this value should never exceed the amount of physical memory available on the compute node.
OR
%Mem : The amount of memory to allocate per Linda worker for a Linda parallel job.
Note that this value multiplied by the number of Linda workers per compute node should never exceed the amount of physical memory available on the compute node.
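Putting the Link 0 commands above together, the header of a hypothetical input file for a two-node Linda job (--nodes=2, hence four Linda workers) might look like the sketch below. The route section, memory value, and checkpoint file name are placeholders only; pick a %Mem value such that 2 x %Mem fits within the physical memory of one compute node:

```
%NProcLinda=4
%NProcShared=8
%Mem=14GB
%Chk=jobname.chk
#P B3LYP/6-31G(d) Opt

Example title card

0 1
...molecule specification...
```

With the batch script above, the checkpoint file name should match the jobname variable so that the copy-back logic finds it.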

Disclaimer

NSC takes no responsibility for the correctness of results produced with the binaries! Hence, always evaluate the binaries against known results for the systems and properties you are investigating before using the binaries for production jobs.