Storage for PReSTO pilot SNIC 2019/3-326
SNIC Medium Storage
Martin Moche <Martin.Moche@ki.se>
2019-12-20 – 2021-01-01
10601 20908 20906
In 2013, the Protein Science Facility (PSF) at Karolinska Institutet in Stockholm and the National Supercomputer Centre (NSC) in Linköping started a pilot project to evaluate the performance of Macromolecular X-ray crystallography (MX) applications running in an HPC environment. In 2015, the Swedish light source MAX IV decided to fund a pilot extension called PReSTO (https://www.nsc.liu.se/support/presto/), which aims to support integrated structural biology calculations including MX, Nuclear Magnetic Resonance (NMR) and cryo-electron microscopy (cryo-EM). During 2017-2018 the MX-PReSTO installation was made using EasyBuild, with several advantages over a standard HPC installation:
A) software environments can be brought to the compute nodes
B) software dependencies are visible in easyconfigs.
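To illustrate point B, the sketch below shows the general shape of an EasyBuild easyconfig; the package name, versions and toolchain are hypothetical, not taken from the actual PReSTO repository.

```python
# Illustrative easyconfig sketch (all names and versions are hypothetical).
# EasyBuild reads files like this to build the software and generate a module.
easyblock = 'ConfigureMake'

name = 'mxtool'                 # hypothetical MX package
version = '1.0.0'

homepage = 'https://example.org/mxtool'
description = "Example MX data-processing tool"

toolchain = {'name': 'GCC', 'version': '8.3.0'}

# Dependencies are declared explicitly, which is the visibility
# advantage mentioned above: the full software stack is documented
# in version-controlled easyconfig files.
dependencies = [
    ('zlib', '1.2.11'),
]

sanity_check_paths = {
    'files': ['bin/mxtool'],
    'dirs': [],
}

moduleclass = 'bio'
```

After a successful build, users would load the result with `module load mxtool/1.0.0-GCC-8.3.0` on any compute node.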
In 2018, the Swedish Research Council granted funds (dnr. 2018-06479) for a collaborative extension of MX-PReSTO towards Cryo-EM/SciLifeLab and NMR/Swedish NMR Centre. During 2019, MX-PReSTO (SNIC 2019/3-326) was reinstalled under version control, and we share daily EasyBuild maintenance and operations with local HPC administrators at the MAX IV site. We have also started installing Cryo-EM software in collaboration with staff from SciLifeLab CryoEM in Stockholm and SciLifeLab/HPC2N in Umeå, aiming for a first release of CryoEM-PReSTO in Q1 2020.
Swedish researchers visiting MAX IV receive some training in using MX-PReSTO during BioMAX beamtime. After beamtime, training sessions with users' own data and questions are given on an informal basis, or organized as a seminar or workshop by contacting Martin Moche, as done by UU in 2015 and SLU in 2018. On the Cryo-EM side, training is organized by SciLifeLab staff, and CryoEM-PReSTO might be used in their workflow/training activities during 2020.
ThinLinc is a remote desktop server from Cendio that supports the integrated structural biology workflow by enabling graphical applications, i.e. coot/chimera/ccp4mg/pymol, to run smoothly from a remote computer. The PReSTO homepage (https://www.nsc.liu.se/support/presto/) shows HPC-MX newcomers how to interact with the HPC setup: loading modules, requesting compute time, cores and nodes, writing sbatch scripts, and scheduling and monitoring jobs. It also describes how to use PHENIX with SLURM scheduling, and points to the many MX software developer manuals. To adapt new users to the MX-HPC workflow we also developed a helpful desktop menu that
A) launches MX software on either a compute or a login node with graphics support
B) lets the user select the number of cores and the runtime on compute nodes
C) lets the user select the output directory for software such as hkl2map.
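As a minimal sketch of the sbatch workflow described above, the snippet below writes a small SLURM batch script; the module name, partition defaults and PHENIX command line are illustrative assumptions, not the exact PReSTO configuration.

```shell
# Generate a sketch of an sbatch script for a parallel MX job.
# Module name and phenix.refine arguments are illustrative; the
# account string reuses the project ID from this application.
cat > phenix_job.sh <<'EOF'
#!/bin/bash
#SBATCH --job-name=phenix_refine
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=16      # number of cores chosen by the user
#SBATCH --time=02:00:00         # runtime chosen by the user
#SBATCH --account=snic2019-3-326

module load PReSTO/PHENIX       # illustrative module name
phenix.refine model.pdb data.mtz nproc=16
EOF

# Submit with: sbatch phenix_job.sh
# Monitor with: squeue -u $USER
echo "wrote phenix_job.sh"
```

The desktop menu described above effectively fills in the same choices (cores, runtime, output directory) without requiring newcomers to write such scripts by hand.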
Access to the PReSTO installation is provided via the Swedish National Infrastructure for Computing (SNIC), funded by the Swedish Research Council. MX-PReSTO is available on NSC Tetralith, LUNARC Aurora and the MAX IV cluster. Please acknowledge SNIC (https://www.snic.se/allocations/apply4access/) when using its resources (1-3).