Large-scale Simulations in Stability, Transition, Turbulence and Control
Title: |
Large-scale Simulations in Stability, Transition, Turbulence and Control |
DNr: |
NAISS 2024/4-12 |
Project Type: |
NAISS Large Storage |
Principal Investigator: |
Ardeshir Hanifi <hanifi@kth.se> |
Affiliation: |
Kungliga Tekniska högskolan |
Duration: |
2025-01-01 – 2026-01-01 |
Classification: |
20306 10508 10501 |
Homepage: |
https://www.flow.kth.se |
Keywords: |
|
Abstract
This is a large storage application complementary to the large and LUMI compute applications with the same title.
We present a large-level request for storage on high-performance computing (HPC) resources within the National Academic Infrastructure for Supercomputing in Sweden (NAISS), summarizing the projects proposed by the research groups of the KTH Engineering Mechanics department. The group of applicants consists of 7 senior researchers, 2 application experts, and 20 postdocs and PhD students, i.e. a total of 29 researchers. We actively promote collaboration within our large user group to facilitate HPC support and the sharing of simulation methods, codes, data, post-processing and data-management methods, and user experience. We have thus found it more beneficial to apply for a single large-level allocation than for multiple medium-level requests.
A detailed description of our research group and the proposed scientific projects can be found in the complementary large compute application, where we also present the numerical codes used in our research. Closely related to these two applications is our LUMI Sweden request, in which we ask for both compute time and storage space. In the current document we focus on the specific data management plan, which is described in Section 2 below.
Note that we receive dedicated application support through the Swedish e-Science Research Centre (SeRC), the EuroHPC competence centre and two EU Centres of Excellence (CEEC and Excellerat) in the form of two application experts, and we actively develop our codes with efficient I/O operations in mind. In the complementary compute applications we request access to multiple machines (Dardel (PDC), Tetralith (NSC), Alvis (C3SE), LUMI (CSC)), so it is important for us to have a good mix of storage space across the different computing centres. We therefore request storage resources at PDC, NSC, C3SE and CSC. In addition, we intend to provide databases for external users, so we also apply for a share of the Swestore/dCache system.