
A reinforcement learning approach to airfoil shape optimization

Bibliographic Details
Main Authors: Dussauge, Thomas P., Sung, Woong Je, Pinon Fischer, Olivia J., Mavris, Dimitri N.
Format: Online Article, Text
Language: English
Published: Nature Publishing Group UK 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10276028/
https://www.ncbi.nlm.nih.gov/pubmed/37328498
http://dx.doi.org/10.1038/s41598-023-36560-z
Description
Summary: Shape optimization is an indispensable step in any aerodynamic design. However, the inherent complexity and non-linearity of fluid mechanics, together with the high-dimensional design space intrinsic to such problems, make airfoil shape optimization a challenging task. Current approaches relying on gradient-based or gradient-free optimizers are data-inefficient, in that they do not leverage accumulated knowledge, and are computationally expensive when integrating Computational Fluid Dynamics (CFD) simulation tools. Supervised learning approaches have addressed these limitations but are constrained by user-provided data. Reinforcement learning (RL) provides a data-driven approach with generative capabilities. We formulate airfoil design as a Markov decision process (MDP) and investigate a Deep Reinforcement Learning (DRL) approach to airfoil shape optimization. A custom RL environment is developed that allows the agent to successively modify the shape of an initially provided 2D airfoil and to observe the associated changes in aerodynamic metrics such as the lift-to-drag ratio (L/D), lift coefficient (C_l), and drag coefficient (C_d). The learning abilities of the DRL agent are demonstrated through various experiments in which the agent's objective (maximizing L/D, maximizing C_l, or minimizing C_d) as well as the initial airfoil shape are varied. Results show that the DRL agent is able to generate high-performing airfoils within a limited number of learning iterations. The strong resemblance between the artificially produced shapes and those found in the literature highlights the rationality of the decision-making policy learned by the agent. Overall, the presented approach demonstrates the relevance of DRL to airfoil shape optimization and brings forward a successful application of DRL to a physics-based aerodynamics problem.
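
The abstract describes a custom RL environment in which an agent iteratively deforms a 2D airfoil and observes the resulting aerodynamic metrics. The sketch below illustrates what such an MDP formulation might look like using the Gymnasium API; it is a minimal, hypothetical rendering, not the paper's actual environment. The shape parameterization (vertical displacements of a few control points), the reward (the change in L/D after each step), and the evaluate_airfoil stand-in for a CFD or panel-method solver are all assumptions made for illustration.

```python
# Minimal sketch of an airfoil-shaping RL environment in the spirit of the
# abstract. All names (AirfoilEnv, evaluate_airfoil) and the action/observation
# layout are hypothetical; the paper's parameterization and CFD coupling differ.
import numpy as np
import gymnasium as gym
from gymnasium import spaces


def evaluate_airfoil(control_points: np.ndarray) -> tuple[float, float]:
    """Placeholder aerodynamic evaluator returning (C_l, C_d).

    In the paper's setting this role is played by a flow solver; here it is
    a toy model so the sketch runs on its own.
    """
    camber = float(np.mean(control_points))
    c_l = 0.2 + 2.0 * camber            # toy lift model
    c_d = 0.01 + 0.5 * camber ** 2      # toy drag model (always > 0)
    return c_l, c_d


class AirfoilEnv(gym.Env):
    """Agent perturbs a 2D airfoil's control points; the reward is the
    improvement in lift-to-drag ratio L/D, one of the objectives in the
    abstract."""

    def __init__(self, n_points: int = 8, max_steps: int = 50):
        self.n_points = n_points
        self.max_steps = max_steps
        # Action: small vertical displacement of each control point.
        self.action_space = spaces.Box(-0.01, 0.01, shape=(n_points,), dtype=np.float32)
        # Observation: current control points plus the latest (C_l, C_d).
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(n_points + 2,), dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.points = np.zeros(self.n_points, dtype=np.float32)  # initial airfoil
        self.steps = 0
        c_l, c_d = evaluate_airfoil(self.points)
        self.prev_ld = c_l / c_d
        return self._obs(c_l, c_d), {}

    def step(self, action):
        self.points = self.points + action.astype(np.float32)
        self.steps += 1
        c_l, c_d = evaluate_airfoil(self.points)
        ld = c_l / c_d
        reward = ld - self.prev_ld      # reward = change in L/D this step
        self.prev_ld = ld
        truncated = self.steps >= self.max_steps
        return self._obs(c_l, c_d), reward, False, truncated, {}

    def _obs(self, c_l, c_d):
        return np.concatenate([self.points, [c_l, c_d]]).astype(np.float32)


if __name__ == "__main__":
    env = AirfoilEnv()
    obs, info = env.reset(seed=0)
    obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
```

In practice the placeholder evaluator would be replaced by a CFD or panel-method solver, and the environment would be trained with a standard DRL algorithm; the abstract does not specify which algorithm the authors used.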