Molcas at NSC


Level of support

Tier 3: NSC will not be able to help you much with this program, either because we lack in-house experience with it, or because it is a test installation. In general, installations of this type are untested and will not be updated unless you send a request to NSC.

Please see the page describing our software support categories for more information.
You can also contact support@nsc.liu.se for further information.

Licensing

In order to use Molcas at NSC you need a Molcas license, which you can obtain by following the instructions on the Molcas web site. Without a valid license installed in your ~/.Molcas directory, you will not be able to execute the Molcas binaries at NSC.
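Once you have received the license file from the Molcas team, installing it is a matter of copying it into place. A minimal sketch (the file name license.dat is an assumption; use the name of the file you were actually sent):

@triolith: $ mkdir -p ~/.Molcas
@triolith: $ cp license.dat ~/.Molcas/
@triolith: $ chmod 600 ~/.Molcas/license.dat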

How to run

Given that you have a Molcas input file called “jobname.input”, here’s how you can run Molcas from an interactive session:

@triolith: $ interactive -n4 --ntasks-per-node=2 --exclusive -t 00:30:00  # This drops you in a shell on an allocated node
@node: $ module load molcas/8.0_sp1
@node: $ molcas.run jobname.input > jobname.log 2> jobname.err
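For reference, a minimal jobname.input could look like the sketch below. This is only an illustration; the geometry file water.xyz and the basis set are placeholder choices, not part of the NSC setup:

&GATEWAY
  coord = water.xyz
  basis = ANO-S-MB
&SEWARD
&SCF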

Running it in batch mode is very similar; an example is available below. The “molcas.run” script is an adaptation of the original “molcas” driver, suitable for use in the NSC cluster environment. It sets a number of sensible parameter defaults based on your allocated resources, such as ${CPUS} (default: ${SLURM_NPROCS}), ${MOLCASMEM} (default: 1500 MB) and ${WorkDir} (default: ${SNIC_TMP}). All of these parameters can be overridden by setting the corresponding environment variable in your batch job script. With ${WorkDir} set to ${SNIC_TMP} (the default), the large scratch and output files are written to node-local disk.
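For example, to override the defaults, set the corresponding variables before calling molcas.run. The values below are placeholders, not recommendations:

export CPUS=8               # run Molcas on 8 processes instead of the Slurm-derived default
export MOLCASMEM=2000       # memory per process in MB
export WorkDir=${SNIC_TMP}  # scratch directory (this is already the default)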

Putting all large output files under ${SNIC_TMP} means that you must copy back any files you want to keep before the job ends; otherwise, they will be removed by the cleanup script that runs between jobs. An example of how to accomplish this is shown in the example batch job script below.

Example batch script

A batch script for running Molcas at NSC may look like this:

#!/bin/bash
#SBATCH -n 4
#SBATCH --ntasks-per-node=2
#SBATCH -t 4:00:00
#SBATCH -J jobname
#SBATCH --exclusive
#SBATCH -A SNIC-xxx-yy

WD=$(pwd)
JOB=my_test_job

module load molcas/8.0_sp1
# Trap SIGTERM and copy the large files back from ${SNIC_TMP} if the job hits the walltime limit
trap 'cp ${SNIC_TMP}/* ${WD}/; echo "SIGTERM was trapped"' SIGTERM

# If values other than the defaults are needed, set the following
# environment variables to fit your job requirements:
# export CPUS=...       # Default is your allocated number of CPU cores
# export MOLCASMEM=...  # Default is 75% of available memory per process

molcas.run ${JOB}.input > ${JOB}.log 2> ${JOB}.err
exit_status=$?

cp ${SNIC_TMP}/* ${WD}/

if (( $exit_status != 0 )); then
    echo "Molcas exited with error status $exit_status"
else
    echo "Molcas exited successfully."
fi

exit $exit_status

Note that you should at least edit the job name (-J) and the project account number (-A) before submitting.
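Assuming the script above is saved as run_molcas.sh (a placeholder name), submit and monitor it with the standard Slurm commands:

@triolith: $ sbatch run_molcas.sh
@triolith: $ squeue -u $USER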