Learning in continuous action space for developing high dimensional potential energy models
| Main Authors: | Manna, Sukriti; Loeffler, Troy D.; Batra, Rohit; Banik, Suvo; Chan, Henry; Varughese, Bilvin; Sasikumar, Kiran; Sternberg, Michael; Peterka, Tom; Cherukara, Mathew J.; Gray, Stephen K.; Sumpter, Bobby G.; Sankaranarayanan, Subramanian K. R. S. |
|---|---|
| Format: | Online Article Text |
| Language: | English |
| Published: | Nature Publishing Group UK, 2022 |
| Subjects: | Article |
| Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8766468/ https://www.ncbi.nlm.nih.gov/pubmed/35042872 http://dx.doi.org/10.1038/s41467-021-27849-6 |
_version_ | 1784634538225303552 |
author | Manna, Sukriti Loeffler, Troy D. Batra, Rohit Banik, Suvo Chan, Henry Varughese, Bilvin Sasikumar, Kiran Sternberg, Michael Peterka, Tom Cherukara, Mathew J. Gray, Stephen K. Sumpter, Bobby G. Sankaranarayanan, Subramanian K. R. S. |
author_facet | Manna, Sukriti Loeffler, Troy D. Batra, Rohit Banik, Suvo Chan, Henry Varughese, Bilvin Sasikumar, Kiran Sternberg, Michael Peterka, Tom Cherukara, Mathew J. Gray, Stephen K. Sumpter, Bobby G. Sankaranarayanan, Subramanian K. R. S. |
author_sort | Manna, Sukriti |
collection | PubMed |
description | Reinforcement learning (RL) approaches that combine a tree search with deep learning have found remarkable success in searching exorbitantly large, albeit discrete action spaces, as in chess, Shogi and Go. Many real-world materials discovery and design applications, however, involve multi-dimensional search problems and learning domains that have continuous action spaces. Exploring high-dimensional potential energy models of materials is an example. Traditionally, these searches are time-consuming (often several years for a single bulk system) and driven by human intuition and/or expertise and more recently by global/local optimization searches that have issues with convergence and/or do not scale well with the search dimensionality. Here, in a departure from discrete action and other gradient-based approaches, we introduce an RL strategy based on decision trees that incorporates modified rewards for improved exploration, efficient sampling during playouts and a “window scaling scheme” for enhanced exploitation, to enable efficient and scalable search for continuous action space problems. Using high-dimensional artificial landscapes and control RL problems, we successfully benchmark our approach against popular global optimization schemes and state-of-the-art policy-gradient methods, respectively. We demonstrate its efficacy to parameterize potential models (physics-based and high-dimensional neural networks) for 54 different elemental systems across the periodic table as well as alloys. We analyze error trends across different elements in the latent space and trace their origin to elemental structural diversity and the smoothness of the element energy surface. Broadly, our RL strategy will be applicable to many other physical science problems involving search over continuous action spaces. |
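The “window scaling scheme” mentioned in the abstract (progressively shrinking the sampling window around the best action found so far, shifting from exploration toward exploitation) can be illustrated with a minimal sketch. The function names, window half-width, decay factor, and the sphere test landscape below are illustrative assumptions, not the paper's actual implementation, which embeds this idea inside a decision-tree search with playouts.

```python
import random

def windowed_search(f, dim, lo, hi, iters=60, samples=32, decay=0.9, seed=0):
    """Sketch of a window-scaling continuous search: sample candidate
    actions uniformly in a window centered on the best point found so
    far, then shrink the window each iteration."""
    rng = random.Random(seed)
    best = [rng.uniform(lo, hi) for _ in range(dim)]
    best_val = f(best)
    w = (hi - lo) / 2.0              # initial window half-width (exploration)
    for _ in range(iters):
        for _ in range(samples):
            # Perturb each coordinate within the current window, clamped to bounds.
            cand = [min(hi, max(lo, x + rng.uniform(-w, w))) for x in best]
            val = f(cand)
            if val < best_val:       # greedy update of the incumbent
                best, best_val = cand, val
        w *= decay                   # window scaling: exploit more, explore less
    return best, best_val

# Illustrative artificial landscape: the sphere function, whose minimum is 0 at the origin.
sphere = lambda x: sum(xi * xi for xi in x)
best, best_val = windowed_search(sphere, dim=3, lo=-5.0, hi=5.0)
```

On a smooth convex landscape like this, the shrinking window concentrates samples near the optimum; the paper's contribution is making this kind of continuous-action refinement work inside a tree search on rugged potential-energy fitting objectives.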
format | Online Article Text |
id | pubmed-8766468 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Nature Publishing Group UK |
record_format | MEDLINE/PubMed |
spelling | pubmed-87664682022-02-04 Learning in continuous action space for developing high dimensional potential energy models Manna, Sukriti Loeffler, Troy D. Batra, Rohit Banik, Suvo Chan, Henry Varughese, Bilvin Sasikumar, Kiran Sternberg, Michael Peterka, Tom Cherukara, Mathew J. Gray, Stephen K. Sumpter, Bobby G. Sankaranarayanan, Subramanian K. R. S. Nat Commun Article Reinforcement learning (RL) approaches that combine a tree search with deep learning have found remarkable success in searching exorbitantly large, albeit discrete action spaces, as in chess, Shogi and Go. Many real-world materials discovery and design applications, however, involve multi-dimensional search problems and learning domains that have continuous action spaces. Exploring high-dimensional potential energy models of materials is an example. Traditionally, these searches are time-consuming (often several years for a single bulk system) and driven by human intuition and/or expertise and more recently by global/local optimization searches that have issues with convergence and/or do not scale well with the search dimensionality. Here, in a departure from discrete action and other gradient-based approaches, we introduce an RL strategy based on decision trees that incorporates modified rewards for improved exploration, efficient sampling during playouts and a “window scaling scheme” for enhanced exploitation, to enable efficient and scalable search for continuous action space problems. Using high-dimensional artificial landscapes and control RL problems, we successfully benchmark our approach against popular global optimization schemes and state-of-the-art policy-gradient methods, respectively. We demonstrate its efficacy to parameterize potential models (physics-based and high-dimensional neural networks) for 54 different elemental systems across the periodic table as well as alloys.
We analyze error trends across different elements in the latent space and trace their origin to elemental structural diversity and the smoothness of the element energy surface. Broadly, our RL strategy will be applicable to many other physical science problems involving search over continuous action spaces. Nature Publishing Group UK 2022-01-18 /pmc/articles/PMC8766468/ /pubmed/35042872 http://dx.doi.org/10.1038/s41467-021-27849-6 Text en © This is a U.S. Government work and not under copyright protection in the US; foreign copyright protection may apply 2022 https://creativecommons.org/licenses/by/4.0/ Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ (https://creativecommons.org/licenses/by/4.0/) . |
spellingShingle | Article Manna, Sukriti Loeffler, Troy D. Batra, Rohit Banik, Suvo Chan, Henry Varughese, Bilvin Sasikumar, Kiran Sternberg, Michael Peterka, Tom Cherukara, Mathew J. Gray, Stephen K. Sumpter, Bobby G. Sankaranarayanan, Subramanian K. R. S. Learning in continuous action space for developing high dimensional potential energy models |
title | Learning in continuous action space for developing high dimensional potential energy models |
title_full | Learning in continuous action space for developing high dimensional potential energy models |
title_fullStr | Learning in continuous action space for developing high dimensional potential energy models |
title_full_unstemmed | Learning in continuous action space for developing high dimensional potential energy models |
title_short | Learning in continuous action space for developing high dimensional potential energy models |
title_sort | learning in continuous action space for developing high dimensional potential energy models |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8766468/ https://www.ncbi.nlm.nih.gov/pubmed/35042872 http://dx.doi.org/10.1038/s41467-021-27849-6 |
work_keys_str_mv | AT mannasukriti learningincontinuousactionspacefordevelopinghighdimensionalpotentialenergymodels AT loefflertroyd learningincontinuousactionspacefordevelopinghighdimensionalpotentialenergymodels AT batrarohit learningincontinuousactionspacefordevelopinghighdimensionalpotentialenergymodels AT baniksuvo learningincontinuousactionspacefordevelopinghighdimensionalpotentialenergymodels AT chanhenry learningincontinuousactionspacefordevelopinghighdimensionalpotentialenergymodels AT varughesebilvin learningincontinuousactionspacefordevelopinghighdimensionalpotentialenergymodels AT sasikumarkiran learningincontinuousactionspacefordevelopinghighdimensionalpotentialenergymodels AT sternbergmichael learningincontinuousactionspacefordevelopinghighdimensionalpotentialenergymodels AT peterkatom learningincontinuousactionspacefordevelopinghighdimensionalpotentialenergymodels AT cherukaramathewj learningincontinuousactionspacefordevelopinghighdimensionalpotentialenergymodels AT graystephenk learningincontinuousactionspacefordevelopinghighdimensionalpotentialenergymodels AT sumpterbobbyg learningincontinuousactionspacefordevelopinghighdimensionalpotentialenergymodels AT sankaranarayanansubramaniankrs learningincontinuousactionspacefordevelopinghighdimensionalpotentialenergymodels |