DEEPSEP - Deep reinforcement learning for active flow control in turbulent boundary layers
Ricardo Vinuesa <email@example.com>
Kungliga Tekniska högskolan
2023-09-28 – 2024-04-01
The increase in computational power over the last decades has enabled the use of machine-learning techniques in a wide range of topics. The same growth in computational resources has also made numerical simulations of turbulent flows feasible. The combination of the two is therefore one of the most attractive approaches from the computational-cost point of view.
We propose a project that consists of using deep reinforcement learning (DRL) for active flow control (AFC) in turbulent boundary layers (TBLs). An agent is connected through an artificial neural network (ANN) to a set of flow actuators in the form of jets. The DRL method will be employed to obtain a proper control strategy using proximal-policy optimization (PPO). DRL techniques rely on trial and error, where each trial is known as an episode. After enough episodes, the algorithm is able to find nonlinear patterns in the system that allow it to learn a control strategy. Two inputs from the flow simulation are needed for the agent to choose an action, where an action is a mass-flow rate of the jets. The first is the reward, which is the quantity that the ANN must optimize. In our case we will use two different reward functions: a reduction of the wall-friction coefficient and a reduction of the TBL thickness, the latter to ensure a delay in the detachment of the TBL. The second is a partial description of the flow, known as the state. In the case of the TBL, the state will be the velocity fluctuations at a specific distance from the wall. To run the DRL algorithm, the Tensorforce library, built on top of TensorFlow, will be used. Tensorforce is an open-source DRL framework that implements the algorithms in a modular, flexible fashion. The Tensorforce library has already been used successfully by the research group in other active-flow-control problems (Varela et al., 2022).
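To illustrate the policy update at the core of PPO, the sketch below implements the clipped surrogate objective with NumPy. This is a generic, minimal illustration of the technique, not the Tensorforce implementation that the project would actually rely on; the function name and example values are chosen for demonstration only.

```python
import numpy as np

def ppo_clipped_objective(new_logp, old_logp, advantages, clip_eps=0.2):
    """Clipped surrogate objective of PPO.

    new_logp / old_logp: log-probabilities of the taken actions under the
    current and the behaviour (episode-collection) policy.
    advantages: estimated advantages of those actions.
    The objective is maximized, so a training step would ascend its gradient.
    """
    ratio = np.exp(new_logp - old_logp)  # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # The element-wise minimum makes the update pessimistic, preventing
    # destructively large policy changes between episodes.
    return np.mean(np.minimum(unclipped, clipped))

# Example: for a positive advantage, the probability ratio is capped at
# 1 + clip_eps, so the first sample contributes 1.2 instead of 1.8.
obj = ppo_clipped_objective(
    new_logp=np.log(np.array([0.9, 0.5])),
    old_logp=np.log(np.array([0.5, 0.5])),
    advantages=np.array([1.0, 1.0]),
)
```

In practice, Tensorforce exposes this algorithm as a ready-made agent (created via `Agent.create(agent='ppo', ...)`), so the project would configure that agent rather than reimplement the objective.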
From the physics point of view, substantial preliminary work has already been done to study the TBL under different pressure gradients. A unique high-fidelity database was generated in recent works (Pozuelo et al., 2022), which motivates further research on this topic and offers a great opportunity to study new phenomena. All the knowledge obtained in this previous work will help us to set up the DRL cases properly, regarding both the placement of the actuators and the best partial representation of the flow to define the state. Thanks to this, we will be able to exploit DRL techniques for AFC, focusing the work mainly on the machine-learning part.
Pau Varela, Pol Suárez, Francisco Alcántara-Ávila, Arnau Miró, Jean Rabault, Bernat Font, Luis Miguel García-Cuevas, Oriol Lehmkuhl, and Ricardo Vinuesa. Deep reinforcement learning for flow control exploits different physics for increasing Reynolds-number regimes. 2022.
Ramon Pozuelo, Qiang Li, Philipp Schlatter, and Ricardo Vinuesa. An adverse-pressure-gradient turbulent boundary layer with nearly constant β ≃ 1.4 up to Re_θ ≃ 8700. Journal of Fluid Mechanics, 939:A34, 2022.