
Deep reinforcement learning for turbulent drag reduction in channel flows

We introduce a reinforcement learning (RL) environment to design and benchmark control strategies aimed at reducing drag in turbulent fluid flows enclosed in a channel. The environment provides a framework for computationally efficient, parallelized, high-fidelity fluid simulations, ready to interface with established RL agent programming interfaces. This allows both testing existing deep reinforcement learning (DRL) algorithms against a challenging task and advancing our knowledge of a complex, turbulent physical system that has been a major topic of research for over two centuries and remains, even today, the subject of many unanswered questions. The control is applied in the form of blowing and suction at the wall, while the observable state is configurable, allowing one to choose different variables, such as velocity and pressure, at different locations in the domain. Given the complex nonlinear nature of turbulent flows, the control strategies proposed so far in the literature are physically grounded but too simple. DRL, by contrast, enables leveraging the high-dimensional data that can be sampled from flow simulations to design advanced control strategies. In an effort to establish a benchmark for testing data-driven control strategies, we compare opposition control, a state-of-the-art turbulence-control strategy from the literature, with a commonly used DRL algorithm, deep deterministic policy gradient. Our results show that DRL leads to 43% and 30% drag reduction in a minimal and a larger channel (at a friction Reynolds number of 180), respectively, outperforming classical opposition control by around 20 and 10 percentage points, respectively.
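To make the setup in the abstract concrete, the following minimal Python sketch shows the two ingredients of the benchmark: a Gym-style reset/step interface around a flow solver, and the classical opposition-control baseline. This is not the authors' code; every name here (ChannelFlowEnv, n_actuators, n_sensors, alpha) is hypothetical, and the reward is only a placeholder for a drag-based signal.

import numpy as np


class ChannelFlowEnv:
    """Hypothetical wrapper around a channel-flow solver.

    Actions: blowing/suction velocities at wall actuator points.
    Observations: flow variables (e.g., velocity or pressure) sampled
    at a configurable location in the domain, as the abstract describes.
    """

    def __init__(self, n_actuators=64, n_sensors=64):
        self.n_actuators = n_actuators
        self.n_sensors = n_sensors

    def reset(self):
        # A real environment would (re)initialize the turbulent flow
        # field and return the initial observation of the state.
        return np.zeros(self.n_sensors)

    def step(self, action):
        # Advance the simulation by one control interval. The reward
        # would typically be derived from the wall-shear stress, e.g.,
        # the drag reduction relative to the uncontrolled flow.
        obs = np.zeros(self.n_sensors)
        reward = 0.0  # placeholder for a drag-based reward
        done = False
        return obs, reward, done, {}


def opposition_control(wall_normal_velocity, alpha=1.0):
    """Classical opposition control: blow and suck at the wall with the
    opposite sign of the wall-normal velocity sensed at a detection
    plane a small distance from the wall (often quoted around y+ ~ 15)."""
    return -alpha * wall_normal_velocity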

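The abstract names deep deterministic policy gradient (DDPG) as the DRL algorithm benchmarked against opposition control. As a purely illustrative sketch, a Gym-compatible environment like the one above could be driven by an off-the-shelf DDPG implementation as follows; Stable-Baselines3 is used only as an example of an "established RL agent programming interface", the record does not state which library the authors used, and the environment id "ChannelFlow-v0" is hypothetical.

import gymnasium as gym
from stable_baselines3 import DDPG

env = gym.make("ChannelFlow-v0")  # hypothetical registered environment id
model = DDPG("MlpPolicy", env, learning_rate=1e-3, verbose=1)
model.learn(total_timesteps=100_000)  # illustrative training budget only

# Roll out the learned blowing/suction policy deterministically.
obs, _ = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()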

Bibliographic Details
Main Authors: Guastoni, Luca, Rabault, Jean, Schlatter, Philipp, Azizpour, Hossein, Vinuesa, Ricardo
Format: Online Article Text
Language: English
Published: Springer Berlin Heidelberg 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10090012/
https://www.ncbi.nlm.nih.gov/pubmed/37039923
http://dx.doi.org/10.1140/epje/s10189-023-00285-8
_version_ 1785022875025014784
author Guastoni, Luca
Rabault, Jean
Schlatter, Philipp
Azizpour, Hossein
Vinuesa, Ricardo
author_facet Guastoni, Luca
Rabault, Jean
Schlatter, Philipp
Azizpour, Hossein
Vinuesa, Ricardo
author_sort Guastoni, Luca
collection PubMed
description We introduce a reinforcement learning (RL) environment to design and benchmark control strategies aimed at reducing drag in turbulent fluid flows enclosed in a channel. The environment provides a framework for computationally efficient, parallelized, high-fidelity fluid simulations, ready to interface with established RL agent programming interfaces. This allows both testing existing deep reinforcement learning (DRL) algorithms against a challenging task and advancing our knowledge of a complex, turbulent physical system that has been a major topic of research for over two centuries and remains, even today, the subject of many unanswered questions. The control is applied in the form of blowing and suction at the wall, while the observable state is configurable, allowing one to choose different variables, such as velocity and pressure, at different locations in the domain. Given the complex nonlinear nature of turbulent flows, the control strategies proposed so far in the literature are physically grounded but too simple. DRL, by contrast, enables leveraging the high-dimensional data that can be sampled from flow simulations to design advanced control strategies. In an effort to establish a benchmark for testing data-driven control strategies, we compare opposition control, a state-of-the-art turbulence-control strategy from the literature, with a commonly used DRL algorithm, deep deterministic policy gradient. Our results show that DRL leads to 43% and 30% drag reduction in a minimal and a larger channel (at a friction Reynolds number of 180), respectively, outperforming classical opposition control by around 20 and 10 percentage points, respectively.
format Online
Article
Text
id pubmed-10090012
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Springer Berlin Heidelberg
record_format MEDLINE/PubMed
spelling pubmed-10090012 2023-04-13 Deep reinforcement learning for turbulent drag reduction in channel flows Guastoni, Luca Rabault, Jean Schlatter, Philipp Azizpour, Hossein Vinuesa, Ricardo Eur Phys J E Soft Matter Regular Article - Flowing Matter We introduce a reinforcement learning (RL) environment to design and benchmark control strategies aimed at reducing drag in turbulent fluid flows enclosed in a channel. The environment provides a framework for computationally efficient, parallelized, high-fidelity fluid simulations, ready to interface with established RL agent programming interfaces. This allows both testing existing deep reinforcement learning (DRL) algorithms against a challenging task and advancing our knowledge of a complex, turbulent physical system that has been a major topic of research for over two centuries and remains, even today, the subject of many unanswered questions. The control is applied in the form of blowing and suction at the wall, while the observable state is configurable, allowing one to choose different variables, such as velocity and pressure, at different locations in the domain. Given the complex nonlinear nature of turbulent flows, the control strategies proposed so far in the literature are physically grounded but too simple. DRL, by contrast, enables leveraging the high-dimensional data that can be sampled from flow simulations to design advanced control strategies. In an effort to establish a benchmark for testing data-driven control strategies, we compare opposition control, a state-of-the-art turbulence-control strategy from the literature, with a commonly used DRL algorithm, deep deterministic policy gradient. Our results show that DRL leads to 43% and 30% drag reduction in a minimal and a larger channel (at a friction Reynolds number of 180), respectively, outperforming classical opposition control by around 20 and 10 percentage points, respectively. Springer Berlin Heidelberg 2023-04-11 2023 /pmc/articles/PMC10090012/ /pubmed/37039923 http://dx.doi.org/10.1140/epje/s10189-023-00285-8 Text en © The Author(s) 2023, corrected publication 2023 https://creativecommons.org/licenses/by/4.0/ Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
spellingShingle Regular Article - Flowing Matter
Guastoni, Luca
Rabault, Jean
Schlatter, Philipp
Azizpour, Hossein
Vinuesa, Ricardo
Deep reinforcement learning for turbulent drag reduction in channel flows
title Deep reinforcement learning for turbulent drag reduction in channel flows
title_full Deep reinforcement learning for turbulent drag reduction in channel flows
title_fullStr Deep reinforcement learning for turbulent drag reduction in channel flows
title_full_unstemmed Deep reinforcement learning for turbulent drag reduction in channel flows
title_short Deep reinforcement learning for turbulent drag reduction in channel flows
title_sort deep reinforcement learning for turbulent drag reduction in channel flows
topic Regular Article - Flowing Matter
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10090012/
https://www.ncbi.nlm.nih.gov/pubmed/37039923
http://dx.doi.org/10.1140/epje/s10189-023-00285-8
work_keys_str_mv AT guastoniluca deepreinforcementlearningforturbulentdragreductioninchannelflows
AT rabaultjean deepreinforcementlearningforturbulentdragreductioninchannelflows
AT schlatterphilipp deepreinforcementlearningforturbulentdragreductioninchannelflows
AT azizpourhossein deepreinforcementlearningforturbulentdragreductioninchannelflows
AT vinuesaricardo deepreinforcementlearningforturbulentdragreductioninchannelflows