Level of support

Tier 2: NSC has only limited experience with this software, but we will try to help as much as possible. We have run some tests (for example, if the program comes with a test suite), but they may be far from exhaustive. We will try to install and test new versions as soon as we can.

Please see the page describing our software support categories for more information.
You can also contact NSC Support for further information.

“Amber” refers to two things: a set of molecular mechanical force fields for the simulation of biomolecules (which are in the public domain, and are used in a variety of simulation programs); and a package of molecular simulation programs which includes source code and demos. This installation contains both of the above.

The Amber 12 user guide is available as a PDF file here.

Please contact NSC Support if you have any questions or problems.

How to run

All executables are located in a path made available by loading an Amber module file; e.g. “module load amber/12” makes all Amber 12 binaries available. How to run these is covered in the Amber user guide, except for the executables suffixed “.run”, which are NSC-specific run scripts that take care of properly launching their corresponding MPI-parallel Amber executables with a minimum of fuss. See the job script below for an example.
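For example, a short interactive session for finding the launcher scripts might look like the following (the module name follows the example above; the installation path follows the pattern mentioned in the batch script comments):

```shell
# Load the Amber module to put its binaries in the path:
module load amber/12

# List the NSC-specific ".run" launcher scripts, one per MPI-capable
# Amber executable (substitute the version number you loaded):
ls /software/apps/amber/12/bin/*.run
```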

Example batch script:


#!/bin/bash
#SBATCH -J jobname                     #Name of the job
#SBATCH --time=10:00:00                #Requested walltime. 10h in this case.
#SBATCH --nodes=2 --exclusive          #Number of compute nodes to allocate
#SBATCH --account=liu-2012-00060-20    #Account string for the project that you wish to account the job to

# Set the working dir to wherever you submit the job from. This example batch
# script assumes that you have your job input files in this directory

module load amber/12 # For instance; load whichever Amber version you want if several are installed.

# If you run qmmm with an external interface, be sure to load that
# module also. If you interface to Gaussian, please uncomment and edit as appropriate the
# line below:
#module load gaussian/G09RevC.01-bdist

# To use the node local disk, which may be very beneficial to
# performance, copy pertinent files there and change directory
# to it (uncomment and edit if this is what you want):
#cp ${WRKDIR}/{mdin,prmtop,inpcrd,restrt} ${SNIC_TMP}/ && cd ${SNIC_TMP}/

#Trap SIGTERM and copy the trajectory file (and other files) if the job hits the walltime limit
trap 'if ls ${SNIC_TMP}/* >/dev/null 2>&1; then cp ${SNIC_TMP}/* ${WRKDIR}/; else echo "No run files found"; fi; echo "SIGTERM was trapped"' SIGTERM

# Run Amber. All Amber executables which are MPI capable have
# corresponding launcher scripts with suffix ".run". To check which
# executables are MPI compiled do a
# "ls /software/apps/amber/<version_number>/bin/*.run"
# To run an Amber MPI executable, uncomment and edit the line below to
# suit your needs (replace <executable> with the launcher you want,
# e.g. one of the ".run" scripts listed above). This is a generic example:
#<executable>.run -i mdin -o mdout -p prmtop -c inpcrd -r restrt

# Copy pertinent files to your working directory when the simulation
# finishes if you ran on the node-local disk. Uncomment and edit:
#cp ${SNIC_TMP}/* ${WRKDIR}/ 

exit 0

Note that you must edit at least the job name, account string, walltime and number of requested nodes in the above script before submitting!
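Once edited, the script is submitted with sbatch, as for any SLURM batch job (the script filename here is an assumption):

```shell
# Submit the job script; SLURM prints the assigned job ID:
sbatch amber_job.sh

# Check the status of your queued and running jobs:
squeue -u $USER
```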


NSC takes no responsibility for the correctness of results produced with the binaries! Hence, always evaluate the binaries against known results for the systems and properties you are investigating before using the binaries for production jobs.
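As a minimal sketch of such a check (the output filename "mdout" is an assumption; "Etot" is the total-energy label in standard Amber output), you can extract the final reported total energy from a test run and compare it by hand against a known reference value:

```shell
# Print the last total-energy line from an Amber output file
# (filename assumed to be "mdout"); compare the value against a
# known reference result before starting production jobs.
grep "Etot" mdout | tail -1
```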