ORCA installations at NSC

Level of support

Tier 2 NSC has only limited experience with this software, but we will try to help as much as possible. We have run some tests, for example running the test suite if the program provides one, but these tests may be far from exhaustive. We will try to install and test new versions as soon as we can.

Please see the page describing our software support categories for more information.
You can also contact support@nsc.liu.se for further information.

The program ORCA is a modern electronic structure program package written by F. Neese, with contributions from many current and former coworkers and several collaborating groups. The binaries of ORCA are available free of charge for academic users for a variety of platforms. ORCA is a flexible, efficient and easy-to-use general purpose tool for quantum chemistry with specific emphasis on spectroscopic properties of open-shell molecules. It features a wide variety of standard quantum chemical methods ranging from semiempirical methods to DFT to single- and multireference correlated ab initio methods. It can also treat environmental and relativistic effects. Due to the user-friendly style, ORCA is considered to be a helpful tool not only for computational chemists, but also for chemists, physicists and biologists that are interested in developing the full information content of their experimental data with help of calculations.

ORCA is able to carry out geometry optimizations and to predict a large number of spectroscopic parameters at different levels of theory. Besides Hartree-Fock theory, density functional theory (DFT) and semiempirical methods, high-level ab initio quantum chemical methods based on configuration interaction and coupled cluster approaches are included in ORCA to an increasing degree.

The ORCA installations at NSC are generally maintained by Johan Raber (raber@nsc.liu.se). More information is available on the ORCA web pages.

How to run

Load the ORCA module corresponding to the version you want to use. To see which versions are available, run

module avail orca

We strongly recommend using the latest version when you have a choice. For instance

module load orca/3.0.1

For interactive work, first allocate a node

interactive -N 1 --exclusive -t 8:00:00 -A <your_project_account>

This allocates one node exclusively for you for 8 hours. The <your_project_account> string is the SNIC or local project name you want to use; if you have only one project, it can be omitted. The “projinfo” command lists the projects you belong to. Note that it may take a while to get a node allocated, depending on your priority and the available resources. Your priority is a function of how much of your allocation you have used in the last 30 days, vis-à-vis the priority of everybody else in the batch queue.

If you only plan a shorter interactive stint, you can use the development nodes of Triolith, which have a wall time limit of only one hour but are usually less loaded and therefore easier to get allocated. This is a good way to do some quick debugging. Allocate like this

interactive -N 1 --exclusive -t 1:00:00 -A <your_project_account> --reservation=devel

After you get a node allocated, run ORCA with

orca.run input.inp > output.out
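If you do not yet have an input file, a minimal example may help to get started. The following is an illustrative sketch (a hypothetical geometry optimization of water at the B3LYP/def2-SVP level; the keywords and coordinates are chosen for demonstration, see the ORCA manual for the full input syntax):

```
! B3LYP def2-SVP Opt

* xyz 0 1
O   0.000000   0.000000   0.000000
H   0.000000   0.757200   0.586500
H   0.000000  -0.757200   0.586500
*
```

The `!` line selects the method, basis set and job type, and the `* xyz charge multiplicity ... *` block gives the Cartesian coordinates.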

A very important flag to “interactive” (and sbatch) to know about is “-C”, which can be used to request a “fat” node, i.e. a node with substantially more memory than the Triolith baseline of 32 GB. On Triolith, the fat nodes are currently equipped with 128 GB of RAM. To get a fat node, add the option “-C fat” to “interactive” or to your batch script.

Running ORCA batch scripts

An example batch script for running ORCA looks like this:

#!/bin/bash
#SBATCH -t 4:00:00
#SBATCH -J jobname
#SBATCH --exclusive
#SBATCH -A SNIC-xxx-yyy

job=input.inp # Your ORCA input file
wd=$(pwd) # Remember the submission directory

# Set a trap for SIGTERM to copy back scratch files if the wall time limit
# is reached. Make sure to edit this to only copy back useful files. Note
# that the trap makes a directory called scratch_${SLURM_JOB_ID} in the
# current working directory and copies the scratch files there. The
# variable ${SLURM_JOB_ID} holds a job identifier that is unique on the
# cluster, so no previous files get overwritten. Modify this to your
# liking. Also note that this only works well if your files can be copied
# back within five minutes (scratch files < ~30 GB). If not, another
# approach is needed.
trap "mkdir ${wd}/scratch_${SLURM_JOB_ID} && cp -r ${SNIC_TMP}/* ${wd}/scratch_${SLURM_JOB_ID}/" SIGTERM

# Copy the input file(s) to the node local scratch directory
cp $job $SNIC_TMP/ # Copies files to the node local disk
cd $SNIC_TMP # Work on the node local disk

module load orca/<desired_version>
orca.run $job > $wd/$job.out # Put output file in the working directory

cp -r $SNIC_TMP/{*.xyz,*.trj,*.prop,*.opt,*.engrad} $wd/ # Copy back scratch files if needed. Please edit to suit you. 
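
The copy-back trap can be tried out locally, without SLURM, with a small sketch like the following (the directory names and job id are made up for the demonstration; mktemp stands in for ${SNIC_TMP} and the submission directory):

```shell
#!/bin/bash
# Stand-alone sketch of the SIGTERM copy-back trap used above.
# SCRATCH plays the role of $SNIC_TMP, WD the submission directory and
# JOB_ID the $SLURM_JOB_ID; all three are made up for this demo.
SCRATCH=$(mktemp -d)
WD=$(mktemp -d)
JOB_ID=demo

# Same pattern as in the batch script: on SIGTERM, save the scratch files.
trap "mkdir ${WD}/scratch_${JOB_ID} && cp -r ${SCRATCH}/* ${WD}/scratch_${JOB_ID}/" SIGTERM

# Simulate a job writing intermediate files on the node local disk ...
echo "intermediate data" > ${SCRATCH}/job.tmp

# ... and the scheduler sending SIGTERM near the wall time limit.
kill -TERM $$

# The trap has run; the file now survives under the working directory.
ls ${WD}/scratch_${JOB_ID}
```

Because the variables are expanded when the trap is defined (double quotes), make sure they are set before the trap line, as in the batch script above.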

Note that you should edit the jobname, account number, scratch and input files, and desired ORCA version before submitting. To get a fat node, add a line saying “#SBATCH -C fat” to the above script. Also, please note that parallelisation is handled directly via the input file and not via an external launcher, so a section such as the following may be needed in your input file

%pal nprocs 32 end # In case of two full Triolith nodes allocated, for instance

ORCA is highly dependent on file I/O performance, and especially for wider jobs (more nodes) it is very beneficial to work on the node local disk, which is why the example batch script does so. It is not strictly necessary, but our (limited) tests show it can give a performance boost of roughly a factor of two.

Visualizing ORCA output

Visualisation of ORCA output can be done with Gabedit. Available versions of Gabedit can be found with “module avail gabedit”. For visualisation work we strongly recommend the VNC solution ThinLinc over X forwarding, although the latter is certainly possible.