Structural biologists use macromolecular X-ray crystallography (MX), nuclear magnetic resonance (NMR) and cryo-electron microscopy (cryo-EM) to determine 3D structures of macromolecules such as proteins, DNA and RNA. To address complex research questions, structural biology is now stretching towards cellular length scales with techniques such as cryo-electron tomography and X-ray imaging, while capturing dynamics via correlative microscopy and molecular dynamics (MD) simulations. This multi-purpose, multi-technique approach to characterizing how macromolecules and their assemblies interact with viruses and other pathogens in space and time is known as integrated structural biology. Today structural biologists face an increasing amount of experimental raw data collected by modern photon-counting detectors at national and international facilities. Swedish structural biology research groups using the Swedish MAX IV light source for macromolecular X-ray crystallography and the cryo-EM facilities at Science for Life Laboratory are generating large amounts of raw data that require supportive, research-driven computational and storage solutions.
In November 2015, MAX IV and NSC formally decided to develop an HPC platform for MX in support of Swedish MAX IV users, giving the PReSTO project the status of a MAX IV satellite.
Swedish structural biologists can access and use PReSTO by requesting membership in the project (SNIC 2019/3-326):
The PReSTO installation has been performed with EasyBuild for rapid sharing with other Swedish HPC centers coordinated by SNIC. We have installed the majority of MX software on the NSC computer Tetralith, the LUNARC computer Aurora and the MAX IV cluster. The installations at NSC and LUNARC are available through SNIC, while the MAX IV cluster is accessible when you have MX beamtime at MAX IV.
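Since EasyBuild installations are exposed through environment modules, the installed software is typically listed and loaded with the `module` command. A minimal sketch (the module name `CCP4` is only an example; check `module avail` on your cluster for the actual names and versions):

```shell
# Guard so the snippet also runs on machines without a module system;
# on Tetralith and Aurora the "module" command (or shell function) exists.
if command -v module >/dev/null 2>&1; then
    module avail 2>&1 | head -n 5   # peek at the installed modules
    module load CCP4                # module name is an example
    status="module environment loaded"
else
    status="no module system on this machine"
fi
echo "$status"
```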
An HPC cluster consists of a few login nodes (computers) and many compute nodes (computers), where the login nodes are typically used for remote-graphics applications and the compute nodes for heavy computing. We have developed a Linux desktop menu with a "my own Linux computer" look and feel that directs users to run software either on the login node or on the compute nodes. EasyBuild also enables the transfer of environment variables to the compute nodes, which has proven important for MX software that depends on Perl, such as pipedream from Global Phasing.
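The environment handoff mentioned above can be made explicit when submitting batch jobs. A minimal sketch assuming a Slurm scheduler (the module name, walltime and the pipedream invocation are placeholders, not the actual PReSTO settings):

```shell
# Generate a small Slurm batch script; --export=ALL (Slurm's default) makes
# the forwarding of login-node environment variables such as PERL5LIB
# explicit, which matters for Perl-dependent tools like pipedream.
cat > pipedream_job.sh <<'EOF'
#!/bin/bash
#SBATCH -N 1             # one compute node
#SBATCH -t 01:00:00      # walltime
#SBATCH --export=ALL     # forward the submitting shell's environment

module load BUSTER       # placeholder module name
pipedream                # placeholder invocation; real runs need arguments
EOF
echo "wrote pipedream_job.sh"
# Submit with: sbatch pipedream_job.sh
```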
Today at Aurora every user gets 500 GB of disk space under /lunarc/nobackup/users/username, while at Tetralith we have 2.5 TB of disk space under /proj/xray/users/username. If this is too small for your project, or if you plan to run molecular dynamics simulations, apply for your own compute time at NSC Tetralith or LUNARC Aurora, or perhaps at Kebnekaise if you are interested in cryo-EM. PReSTO project membership grants access to MX software; however, your own compute-time allocation is required when combining MX with compute-intensive molecular dynamics or cryo-EM calculations.
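To see how much of that quota you are currently using, a plain `du` call is enough; here `$HOME` stands in for the per-cluster paths quoted above:

```shell
# Substitute projdir with /proj/xray/users/$USER on Tetralith or with
# /lunarc/nobackup/users/$USER on Aurora; $HOME is a portable stand-in.
projdir="$HOME"
usage=$(du -sh "$projdir" 2>/dev/null | cut -f1)
echo "usage in $projdir: $usage"
```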
This homepage contains startup instructions for the EasyBuild installation of MX software that we call PReSTO. We attempt to guide HPC beginners in using the PReSTO environment and simply share links to excellent MX software guides and tutorials written by the MX software developers. Many MX programs can be conveniently launched from the PReSTO menu, which is visible by default at Aurora but requires some copy-paste settings in the Tetralith .bashrc file.
We are indebted to all MX software authors who kindly share their software for academic use in an HPC environment. We list all MX software currently in the PReSTO installation with links to each software home page and its citations. In particular we want to mention Gérard Bricogne and Claus Flensburg from Global Phasing, who have supported the PReSTO project since its very beginning and shared all their MX software for academic HPC use, including autoPROC with STARANISO for convenient elliptical scaling of diffraction data sets. We also want to mention the developers behind XDSAPP version 2.99 (Karine Röwer, Uwe Müller and Manfred Weiss) for sharing a pre-release that can process Eiger data containers directly, and Graeme Winter from the DIALS development team for sharing a BioMAX-specific software patch for DIALS.
explain basic HPC commands
BioMAX data processing scripts/settings/keywords
Triolith multi-node example
login, compute nodes and OpenGL acceleration
Description of issues
MX software home page and citations
Triolith is exchanged for Tetralith
convenient GUI launching at single compute/login node
sending parallel jobs to the queue
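The items above refer to multi-node jobs and sending parallel jobs to the queue; a minimal Slurm sketch of that workflow follows (the job name, module and program names are placeholders, and the account line reuses the SNIC project mentioned earlier; adapt all of them to your own allocation):

```shell
# Create a two-node batch script; every value here is illustrative.
cat > parallel_job.sh <<'EOF'
#!/bin/bash
#SBATCH -J mx_parallel       # job name
#SBATCH -N 2                 # two compute nodes
#SBATCH -t 02:00:00          # walltime
#SBATCH -A snic2019-3-326    # the PReSTO project allocation mentioned above

module load XDS              # placeholder module name
xds_par                      # placeholder parallel program
EOF
echo "job script ready"
# Submit with:  sbatch parallel_job.sh
# Monitor with: squeue -u $USER
```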