Gaussian 09 Rev. C.01


Directory list:

Version  Description
bdist    Binary distribution of Gaussian 09 Rev. C.01 with support for SSE 4.2 instructions
mkl      Gaussian 09 Rev. C.01 built with MKL 10.3.10.319

Level of support

Tier 1 NSC has at least one application expert on staff with knowledge about this software, and we will try to help you with any problems. We have also run some internal tests to make sure that the program is working as intended. NSC also commits to installing updates to this program as they become available.

Please see the page describing our software support categories for more information.
You can also contact support@nsc.liu.se for further information.

How to run

Example batch script for running Gaussian 09:

#!/bin/bash 

#SBATCH --time=10:00:00                #Requested walltime 
#SBATCH --nodes=2                      #Number of compute nodes to allocate
#SBATCH --exclusive                    #Don't share compute nodes with other jobs (i.e. allocate complete nodes)
#SBATCH --account=liu-2012-00060-20    #Account string for the project that you wish to account the job to

job=jobname
WRKDIR=$(pwd)

#Trap SIGTERM and copy the chk file if the job hits the walltime limit 
trap 'if [ -f ${job}.chk ]; then cp ${job}.chk ${WRKDIR}; else echo "No named chk file"; fi; echo "SIGTERM was trapped"' SIGTERM

#Load the relevant Gaussian module
module load gaussian/G09RevC.01-bdist
#If you want to run two Linda workers per compute node, then uncomment the line below
#export NSC_G_LW=2
. $g09root/g09/bsd/g09.profile 

cd $GAUSS_SCRDIR

if [ -f ${WRKDIR}/${job}.chk ]
then
    cp ${WRKDIR}/${job}.chk .
fi

#Run g09 in the background and wait for it, so that the SIGTERM trap above can fire
(time g09 < ${WRKDIR}/${job}.com) > ${WRKDIR}/${job}.out &
wait $!

g_exit_status=$?

if [ -f ${job}.chk ]
then
    cp ${job}.chk ${WRKDIR}
fi

exit $g_exit_status
#END OF SCRIPT

Note that you must edit the jobname and the account string before submitting!
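
As a quick illustration, submitting and monitoring the job could look like the following sketch (the script file name g09job.sh is only a placeholder for whatever you saved the script as):

#Submit the batch script to the queue
sbatch g09job.sh

#Check the status of your queued and running jobs
squeue -u $USER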

Important Link 0 commands

NOTE: NSC’s g09 wrapper will by default start one Linda worker per compute node!

%NProcLinda : The number of Linda workers to use for the job.
If NProcLinda is not specified, then the job will only run on one compute node!
The value of NProcLinda should, in general, be set to the number of compute nodes that you intend to use for the job.

One Linda worker per compute node (default):
#SBATCH --nodes=2
=> %NProcLinda=2

However, if you want to run more than one Linda worker per compute node, then you must adjust the value for NProcLinda accordingly. For example, if you want to run two Linda workers per compute node, then the value for NProcLinda must be twice the number of allocated compute nodes.

Two Linda workers per compute node:
#SBATCH --nodes=2  
=> %NProcLinda=4  
Remember to also uncomment the #export NSC_G_LW=2 line in this case.

%NProcShared : The number of shared memory threads to use for a shared memory (only) parallel job.
Note that this value should never exceed the number of physical cores on the compute node (i.e. 16 in the case of Triolith).
OR
%NProcShared : The number of shared memory threads to use for each Linda worker in a Linda parallel job, if NProcLinda is also specified.
Note that this value multiplied by the number of Linda workers per compute node should never exceed the number of physical cores on the compute node. For example, if you use the default of one Linda worker per compute node, then NProcShared can be set to the number of physical cores on the compute node (i.e. 16 in the case of Triolith). However, if you use two Linda workers per compute node, then NProcShared must be set to half the number of physical cores on the compute node (i.e. 8 in the case of Triolith).
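
For example, for a job on two Triolith nodes with two Linda workers per compute node, the following Link 0 settings keep the total thread count at 16 per node (2 workers x 8 threads each); this is only a sketch of the directives discussed above:

%NProcLinda=4
%NProcShared=8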

%Mem : The amount of memory to allocate for the job.
Note that this value should never exceed the amount of physical memory available on the compute node.
OR
%Mem : The amount of memory to allocate per Linda worker for a Linda parallel job.
Note that this value multiplied by the number of Linda workers per compute node should never exceed the amount of physical memory available on the compute node.
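
Putting these directives together for the default case of one Linda worker per compute node on two 16-core Triolith nodes, the top of the input file could look like the sketch below. The memory value and the route line are placeholders, and %Mem must stay below the physical memory actually available per compute node. The checkpoint file name should match the job variable in the batch script above so that the copy steps find it:

%NProcLinda=2
%NProcShared=16
%Mem=24GB
%Chk=jobname.chk
# B3LYP/6-31G(d) Opt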

Release notes

Changes between Gaussian 09 Revisions B.01 and C.01:

  1. Changes to optimization algorithms and options:

    a. The selection of modes to include when stepping down from a region of wrong curvature during an optimization has been improved. This can also now be controlled by route options:

    Opt=NoDownHill ... don't try to go downhill, just take RFO-like
                       step.
    
    Opt=NGoDown=M ... Mix at most M eigenvectors in taking a downhill
                      step.  The default is 3.
    

    b. Linear bends are handled more reliably, and included in internal coordinates more frequently, than before. This avoids many optimization problems in which nearly linear angles become exactly linear.

    c. The connectivities of reactant and product are now merged in generating the internal coordinates for the TS during QST2 and QST3 optimizations.

    d. The maximum number of steps ever allowed in an optimization (i.e., including later restarts) can be reduced. This is sometimes useful for very large systems in order to reduce memory and disk usage.

    e. The program now checks if the standard orientation of a molecule has flipped by 180 degrees during an optimization and avoids the flip. This avoids jumps when animating optimizations, IRCs, etc. in GaussView and improves SCF convergence.

    f. The memory allocation for generation of internal coordinates is now proportional to the amount of memory provided by %mem. This allows jobs with very large numbers of atoms or internal coordinates, which previously failed, to run if enough memory is provided.

    g. By default, internal coordinates for potential hydrogen bonds are not generated automatically. Bond coordinates are still added to connect otherwise disjoint fragments, so coordinates for hydrogen bonds which connect fragments will still be included.

  2. Single-point BD calculations now default to frozen-core, with the core orbitals uncorrelated but updated using the BD Fock matrix. The previous default was to leave the core orbitals unchanged from the HF values, or from the orbitals read in with BD=Read. The new default produces energies which are independent of the choice of starting orbitals. Gradients with BD still require and default to full rather than frozen-core. The OldFCBD keyword requests the old-style frozen-core.

  3. The memory required by very large ONIOM(MO:MM) and pure MM frequency calculations has been reduced.

  4. On some machines fully direct integral transformation and fully direct MP2 are chosen if there is a large amount of memory, but the semidirect algorithms are faster. Tran=SemiDirect in the Default.Route file now forces the SemiDirect algorithm for MP2 as well as the transformation in higher level post-SCF calculations. (All method keywords such as MP2 are ignored in the Default.Route file, because otherwise they would force that model in all calculations.)
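
     As a sketch, the corresponding Default.Route entry could look like this (check the Gaussian documentation for where the Default.Route file is read from on your system):

     -#- Tran=SemiDirect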

  5. Output=Wfn and Output=WfX with post-SCF methods now default to Density=Current and Pop=NOAB, both of which are necessary for the post-SCF density to be stored in the .wfn/.wfx file. Problems with the orientation of the forces in these files and in generating them with ROHF wavefunctions and/or linearly dependent basis sets have also been fixed. Core densities are stored in the wfx file for calculations which use ECPs, so that AIM and other analysis can be done correctly for these cases.

  6. Several customers have used the file generated for COSMORS as input to their local simulation programs, so this capability has been put back into G09.

  7. Polar=Gamma has been added as a more descriptive option for requesting second hyperpolarizabilities. It is a synonym for Polar=(DCSHG,Cubic).

  8. The definition of improper torsions in the Amber force field is dependent on the ordering of atoms in the molecule. Calculations in the Amber program on typical proteins are consistent because of the standard ordering of atoms within residues and residues within a PDB file, but for general molecules produced with GaussView the results depend on the order of atoms in an arbitrary way. G09 has been changed to average over the 6 possible orders of atoms in an Amber improper torsion, making the results slightly different than the standard Amber force field, but making the energy independent of permutations of atoms in the molecule.

  9. The RevTPSS exchange and correlation functionals have been added.

  10. SDD now defaults to more recent basis sets for Actinides; OldSDD requests the previous default.

  11. Printing during Pop=MK with IOp33 increased has been restored to include the data required for RESP charge fitting. However, G09 can now generate the data file for AnteChamber directly, by setting IOp(6/50=1) in the Pop=MK job, and this is the recommended method for generating input for RESP.
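
     As a sketch, a route section requesting the AnteChamber data file directly could look like the following (the method and basis set are placeholders, not recommendations):

     # HF/6-31G(d) Pop=MK IOp(6/50=1)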

  12. A bug in CIS frequencies with PCM solvation was fixed.

  13. MaxDisk in a Default.Route file now applies to all steps of a compound job; previously, only the first step was defaulted properly.

  14. A bug which prevented reading AlpB parameters for AM1 was fixed.

  15. Convergence during SCVS calculations is now checked more carefully. Refer to the input files for tests 935–939 and 945 for examples of using SCVS.

  16. TB and TW can now be used to specify memory and disk allocations in units of terabytes and terawords, respectively.
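
     For example (hypothetical values, with MaxDisk given on the route line):

     %Mem=1TB
     # MP2/aug-cc-pVTZ MaxDisk=2TW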

  17. Pop=SaveBio in Stable=Opt jobs caused the stability calculation to be wrong or fail. This now works properly, saving the biorthogonal orbitals only after the wavefunction has been made stable.

  18. External point charges now work with symmetry turned on.

  19. A bug in TD-DFT gradients with frozen core was fixed.

  20. Print statements for NMR shielding were fixed to work with more than 999 atoms.

  21. A bug in DFTB using interpolated (not analytic) parameters with d functions was fixed.

  22. A rare problem with uncompleted write statements on slow file systems was fixed.

  23. Problems with some combinations of charge and multiplicity in fragments during Guess=Fragment calculations were fixed.

  24. Printing of Coriolis terms during Freq=VibRot was restored.

  25. Some memory allocation problems for PBC calculations with large unit cells were fixed.

  26. Inconsistencies in how the geometry was modified in some cases by Symm=Loose were fixed.

  27. A bug in the ROMP4 triples energy when NoSymm was specified was fixed.

  28. A warning “The extrapolated energy is higher than the direct energy” is no longer printed unnecessarily by the CBS extrapolation.

  29. ONIOM(MO:MM) jobs which do microiterations and which fail to finish are now restartable.

  30. A bug in reading ECPs with ONIOM when the same ECP was placed on multiple centers was fixed.

  31. The combination of IRC and Freq, which did the frequency calculation at the last point of the IRC rather than the TS, is now rejected.

  32. Several unsupported combinations of Douglas-Kroll-Hess with properties now generate an error message rather than incorrect answers.

  33. A bug in generating the default (Harris) initial guess when using ECPs on charged species was fixed. The quality of the initial guess when using ECPs has also been improved.

  34. Several defaults for whether to use FMM and other integral options have been updated for better performance on current models of CPU.

  35. FormChk now writes -1 rather than ****** to the formatted checkpoint file if the value exceeds 10^13-1. This allows unfchk and other utilities to process the resulting fchk file.

  36. A bug affecting geom=check after numerical frequencies when using ONIOM was fixed.

  37. The Direct option is available for SAC-CI. This requests an integral-direct algorithm suitable for larger molecules.

  38. A %oldchk Link 0 command has been added. The contents of the checkpoint file specified by %oldchk are copied to the checkpoint file of the current job step at the start of the job step. This allows data to be picked up from a previous calculation without modifying that calculation's checkpoint file.
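
     A sketch of typical usage (the file names and the route line are placeholders): the previous job's data are copied from previous.chk into current.chk at the start of the step and then read from there, leaving previous.chk untouched.

     %OldChk=previous.chk
     %Chk=current.chk
     # B3LYP/6-31G(d) Geom=Check Guess=Read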

  39. The combination of BD or W1BD with SCRF, which does not work correctly, is now rejected by the route generator.

  40. A new version of the ATLAS BLAS library is used on most platforms. This fixes several problems when using very large amounts of memory. In the event of such problems, IOp1=NoAssem can now be specified on the route line to turn off use of the ATLAS matrix multiplication routines.

  41. A problem in reported transition moments between excited states computed in SAC-CI jobs was fixed.

  42. Empirical dispersion with DFT and ghost atoms now runs. The combination of empirical dispersion and PBC now produces an error message, since it is not implemented.

  43. “Opt Freq” with ROHF/RODFT now works correctly, doing Freq=Numer with the restricted open-shell wavefunction in the second job step.

  44. Franck-Condon calculations now function correctly for forbidden transitions.

  45. The route generator rejects the combination of TD and double-hybrid DFT methods, which never worked (previously, TD was done based on only the SCF part of the double-hybrid).

  46. IRC=(RCFC,GradientOnly) calculations now correctly use the Hessian from the chk file.

  47. Diffuse (aug-) functions were added for cc-pVDZ for the first transition row.

  48. A memory allocation bug for very large systems which could cause a failure with the message “NIJ > Max2 in MMCore” was fixed.

Corrections to deprecated features:

  1. Problems with setting non-integer nuclear charges in Massage input have been corrected. There is now a ZNuc function in Massage input which changes the nuclear charge but not the atomic number.

  2. A problem with AIM analysis on Windows only was fixed.

Disclaimer

NSC takes no responsibility for the correctness of results produced with the binaries! Hence, always evaluate the binaries against known results for the systems and properties you are investigating before using the binaries for production jobs.