A reinforcement learning approach to airfoil shape optimization
Shape optimization is an indispensable step in any aerodynamic design. However, the inherent complexity and non-linearity associated with fluid mechanics as well as the high-dimensional design space intrinsic to such problems make airfoil shape optimization a challenging task. Current approaches rel...
Main Authors: | Dussauge, Thomas P., Sung, Woong Je, Pinon Fischer, Olivia J., Mavris, Dimitri N. |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Nature Publishing Group UK, 2023 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10276028/ https://www.ncbi.nlm.nih.gov/pubmed/37328498 http://dx.doi.org/10.1038/s41598-023-36560-z |
_version_ | 1785059989497315328 |
---|---|
author | Dussauge, Thomas P. Sung, Woong Je Pinon Fischer, Olivia J. Mavris, Dimitri N. |
author_facet | Dussauge, Thomas P. Sung, Woong Je Pinon Fischer, Olivia J. Mavris, Dimitri N. |
author_sort | Dussauge, Thomas P. |
collection | PubMed |
description | Shape optimization is an indispensable step in any aerodynamic design. However, the inherent complexity and non-linearity associated with fluid mechanics, as well as the high-dimensional design space intrinsic to such problems, make airfoil shape optimization a challenging task. Current approaches relying on gradient-based or gradient-free optimizers are data-inefficient in that they do not leverage accumulated knowledge, and are computationally expensive when integrating Computational Fluid Dynamics (CFD) simulation tools. Supervised learning approaches have addressed these limitations but are constrained by user-provided data. Reinforcement learning (RL) provides a data-driven approach with generative capabilities. We formulate airfoil design as a Markov decision process (MDP) and investigate a Deep Reinforcement Learning (DRL) approach to airfoil shape optimization. A custom RL environment is developed that allows the agent to successively modify the shape of an initially provided 2D airfoil and to observe the associated changes in aerodynamic metrics such as lift-to-drag ratio (L/D), lift coefficient (C_l) and drag coefficient (C_d). The learning abilities of the DRL agent are demonstrated through various experiments in which the agent's objective (maximizing L/D, maximizing C_l or minimizing C_d) as well as the initial airfoil shape are varied. Results show that the DRL agent is able to generate high-performing airfoils within a limited number of learning iterations. The strong resemblance between the artificially produced shapes and those found in the literature highlights the rationality of the decision-making policy learned by the agent. Overall, the presented approach demonstrates the relevance of DRL to airfoil shape optimization and brings forward a successful application of DRL to a physics-based aerodynamics problem. [See the illustrative environment sketch after this record.] |
format | Online Article Text |
id | pubmed-10276028 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Nature Publishing Group UK |
record_format | MEDLINE/PubMed |
spelling | pubmed-102760282023-06-18 A reinforcement learning approach to airfoil shape optimization Dussauge, Thomas P. Sung, Woong Je Pinon Fischer, Olivia J. Mavris, Dimitri N. Sci Rep Article Nature Publishing Group UK 2023-06-16 /pmc/articles/PMC10276028/ /pubmed/37328498 http://dx.doi.org/10.1038/s41598-023-36560-z Text en © The Author(s) 2023. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Dussauge, Thomas P. Sung, Woong Je Pinon Fischer, Olivia J. Mavris, Dimitri N. A reinforcement learning approach to airfoil shape optimization |
title | A reinforcement learning approach to airfoil shape optimization |
title_full | A reinforcement learning approach to airfoil shape optimization |
title_fullStr | A reinforcement learning approach to airfoil shape optimization |
title_full_unstemmed | A reinforcement learning approach to airfoil shape optimization |
title_short | A reinforcement learning approach to airfoil shape optimization |
title_sort | reinforcement learning approach to airfoil shape optimization |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10276028/ https://www.ncbi.nlm.nih.gov/pubmed/37328498 http://dx.doi.org/10.1038/s41598-023-36560-z |
work_keys_str_mv | AT dussaugethomasp areinforcementlearningapproachtoairfoilshapeoptimization AT sungwoongje areinforcementlearningapproachtoairfoilshapeoptimization AT pinonfischeroliviaj areinforcementlearningapproachtoairfoilshapeoptimization AT mavrisdimitrin areinforcementlearningapproachtoairfoilshapeoptimization AT dussaugethomasp reinforcementlearningapproachtoairfoilshapeoptimization AT sungwoongje reinforcementlearningapproachtoairfoilshapeoptimization AT pinonfischeroliviaj reinforcementlearningapproachtoairfoilshapeoptimization AT mavrisdimitrin reinforcementlearningapproachtoairfoilshapeoptimization |
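The abstract above outlines the core idea: airfoil design cast as a Markov decision process in which an agent repeatedly perturbs a 2D shape and observes the resulting aerodynamic metrics (L/D, C_l, C_d). The sketch below is a minimal, hypothetical illustration of such an environment, not the authors' implementation: the Gym-style reset/step interface, the control-point parameterization, the reward shaping, and the surrogate `_lift_to_drag()` evaluator (standing in for the aerodynamic solver used in the paper) are all assumptions made for illustration.

```python
# Illustrative sketch only: a minimal Gym-style environment in which an agent
# nudges airfoil control points and is rewarded on a surrogate L/D estimate.
# The parameterization, reward shaping, and surrogate evaluator are assumptions;
# the paper couples the environment to an actual aerodynamic solver.
import numpy as np


class AirfoilEnv:
    """Agent successively perturbs 2D airfoil control points; reward tracks L/D."""

    def __init__(self, n_ctrl=8, max_steps=50, step_size=0.01, seed=0):
        self.n_ctrl = n_ctrl          # control points per surface (upper/lower)
        self.max_steps = max_steps    # episode length
        self.step_size = step_size    # max vertical displacement per action
        self.rng = np.random.default_rng(seed)
        self.reset()

    def reset(self):
        # Start from a thin symmetric airfoil (placeholder for e.g. a NACA seed shape).
        x = np.linspace(0.0, 1.0, self.n_ctrl)
        half_thickness = 0.06 * np.sin(np.pi * x)   # crude half-thickness distribution
        self.upper = half_thickness.copy()
        self.lower = -half_thickness.copy()
        self.t = 0
        self.prev_ld = self._lift_to_drag()
        return self._observation()

    def step(self, action):
        # action: vertical displacements for all 2*n_ctrl control points, in [-1, 1].
        action = np.clip(np.asarray(action, dtype=float), -1.0, 1.0) * self.step_size
        self.upper += action[: self.n_ctrl]
        self.lower += action[self.n_ctrl:]
        self.upper = np.maximum(self.upper, self.lower + 1e-3)  # keep surfaces from crossing
        ld = self._lift_to_drag()
        reward = ld - self.prev_ld                              # reward improvement in L/D
        self.prev_ld = ld
        self.t += 1
        done = self.t >= self.max_steps
        return self._observation(), reward, done, {"L/D": ld}

    def _observation(self):
        # Observation: current shape plus its aerodynamic summary.
        return np.concatenate([self.upper, self.lower, [self.prev_ld]])

    def _lift_to_drag(self):
        # Placeholder surrogate: rewards camber, penalizes thickness. In practice
        # this would be a call to a solver returning C_l and C_d for the shape.
        camber = np.mean((self.upper + self.lower) / 2.0)
        thickness = np.mean(self.upper - self.lower)
        cl = 2.0 * np.pi * (0.05 + camber)   # thin-airfoil-style lift proxy
        cd = 0.01 + 0.5 * thickness ** 2     # crude drag proxy
        return cl / cd


if __name__ == "__main__":
    env = AirfoilEnv()
    obs = env.reset()
    for _ in range(env.max_steps):
        action = env.rng.uniform(-1.0, 1.0, size=2 * env.n_ctrl)  # random policy stand-in
        obs, reward, done, info = env.step(action)
        if done:
            break
    print(f"final L/D estimate: {info['L/D']:.2f}")
```

In a real setup the surrogate would be replaced by an aerodynamic solver evaluating C_l and C_d for the current shape, and the random-action loop above would be replaced by a DRL agent (for example, a policy-gradient learner) trained against this environment.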