Data storage for Swedish climate modelling and contributions to international projects (S-CMIP) 2021
The central motivation for climate modelling is to describe the response of the Earth's non-linear climate system to changes in forcing and to understand its internal variability and related processes. Scientific progress in these areas, and in the science of prediction and projection methods, will enable actionable information on climate change for both adaptation (adjustment to a new climate) and mitigation (controlling greenhouse gas emissions and their effects). Mitigation science is especially important in light of the short deadlines (a few years) for actions to meet the goals of the international Paris climate agreement.
International climate simulations addressing the above questions are coordinated largely under the framework of the Coupled Model Intercomparison Project, phase 6 (CMIP6, Meehl et al. 2014), which is organized into more specific MIPs. Increasingly, projects not formally coordinated under CMIP also use the CMIP standards and infrastructure, such as data publication to CMIP's data grid, the Earth System Grid Federation (ESGF), which is accessible to the global climate research community and thus facilitates multi-model climate studies. CMIP5 alone has led to more than 1000 peer-reviewed publications. CMIP-based data and studies are major inputs to the UN Intergovernmental Panel on Climate Change (IPCC).
S-CMIP will carry out calculations connected to research projects funded by EU-H2020, Formas and VR. These cover the understanding and modelling of processes in the climate system, exploration of the predictability of climate on time scales of several years, climate on millennial and palaeo time scales, the role of grid resolution, the identification of emission pathways that avoid climate tipping points, and the probability of future climate extremes. All these projects have been externally reviewed, are considered state of the art, and are expected to generate publications by S-CMIP members.
The simulations performed within S-CMIP generate a substantial amount of data that needs to be stored, post-processed and eventually published. This requires access to an infrastructure consisting of large-scale storage connected to computing resources and an ESGF node, such as that currently available at NSC.