Large-scale Simulations in Stability, Transition, Turbulence and Control
Title: Large-scale Simulations in Stability, Transition, Turbulence and Control
DNr: NAISS 2023/4-4
Project Type: NAISS Large Storage
Principal Investigator: Philipp Schlatter <pschlatt@mech.kth.se>
Affiliation: Kungliga Tekniska högskolan
Duration: 2024-01-01 – 2025-01-01
Classification: 20306 10508 10501
Homepage: https://www.flow.kth.se
Keywords:

Abstract

This is a large storage application complementary to the large and LUMI compute applications with the same title. We present a large-level request for storage on high-performance computing (HPC) resources within the National Academic Infrastructure for Supercomputing in Sweden (NAISS). The proposed projects of the research groups at the KTH Engineering Mechanics department are summarized. The group of applicants consists of 7 senior researchers, 3 application experts, and 26 postdocs and PhD students, i.e. a total of 36 researchers. We actively promote collaboration within our large user group to facilitate HPC support and the sharing of simulation methods, codes, data, post-processing, data management methods, and user experience. We have thus found it beneficial to apply for a single large-level allocation instead of multiple medium-level requests. A detailed description of our research group and the proposed scientific projects can be found in the complementary large compute application. In this document we also present the numerical codes used in our research. Closely related to these two applications is our LUMI Sweden request, in which we ask for both compute time and storage space. In the current document we focus on our data management plan, which is described in Section 2 below. Note that we receive dedicated application support through the Swedish e-Science Research Centre (SeRC), the EuroHPC competence centre and two EU Centres of Excellence (CEEC and Excellerat), in the form of four application experts, and we actively develop our codes with efficient I/O operations in mind. In the complementary compute applications we request access to multiple machines (Tetralith (NSC), Alvis (C3SE), Dardel (PDC), LUMI (CSC)), so it is important for us to have a good mix of storage space across the different computing centres. The requested storage thus spans the resources at PDC, C3SE, NSC and CSC. In addition, we intend to provide databases for external users, and therefore apply for a share of the SweSTORE/dCache system.
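
As a brief illustration of what is meant by I/O-aware code development, the minimal C sketch below shows a collective MPI-IO write, where each rank stores a contiguous block of a distributed field at a disjoint file offset. It is only a schematic example under assumed settings: the file name, block size, and dummy data are placeholders and do not correspond to any particular solver used in the project.

/* Minimal sketch of collective parallel output with MPI-IO.
 * File name, block size and data are illustrative placeholders. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n_local = 1 << 20;               /* local block size (example) */
    double *buf = malloc(n_local * sizeof(double));
    for (int i = 0; i < n_local; ++i)
        buf[i] = (double)rank;                 /* dummy field data */

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "field.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Each rank writes its block at a disjoint offset; the collective call
     * allows the MPI library to aggregate requests for the file system. */
    MPI_Offset offset = (MPI_Offset)rank * n_local * sizeof(double);
    MPI_File_write_at_all(fh, offset, buf, n_local, MPI_DOUBLE,
                          MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    free(buf);
    MPI_Finalize();
    return 0;
}

Collective writes of this kind, rather than one file per rank, keep the number of files and metadata operations manageable at large core counts, which is one of the considerations behind the storage layout requested here.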