On Entropy Regularized Path Integral Control for Trajectory Optimization
| Main Authors: | Lefebvre, Tom; Crevecoeur, Guillaume |
|---|---|
| Format: | Online Article Text |
| Language: | English |
| Published: | MDPI, 2020 |
| Subjects: | Article |
| Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7597248/ https://www.ncbi.nlm.nih.gov/pubmed/33286889 http://dx.doi.org/10.3390/e22101120 |
_version_ | 1783602301541285888 |
author | Lefebvre, Tom Crevecoeur, Guillaume |
author_facet | Lefebvre, Tom Crevecoeur, Guillaume |
author_sort | Lefebvre, Tom |
collection | PubMed |
description | In this article, we present a generalized view on Path Integral Control (PIC) methods. PIC refers to a particular class of policy search methods that are closely tied to the setting of Linearly Solvable Optimal Control (LSOC), a restricted subclass of nonlinear Stochastic Optimal Control (SOC) problems. This class is unique in the sense that it can be solved explicitly, yielding a formal optimal state trajectory distribution. In this contribution, we first review the PIC theory and discuss related algorithms tailored to policy search in general. We identify a generic design strategy that relies on the existence of an optimal state trajectory distribution and finds a parametric policy by minimizing the cross-entropy between the optimal state trajectory distribution and the state trajectory distribution induced by a parametric stochastic policy. Inspired by this observation, we then formulate a SOC problem that shares traits with the LSOC setting yet covers a less restrictive class of problem formulations. We refer to this SOC problem as Entropy Regularized Trajectory Optimization. The problem is closely related to the Entropy Regularized Stochastic Optimal Control setting that has lately received considerable attention from the Reinforcement Learning (RL) community. We analyze the theoretical convergence behavior of the resulting state trajectory distribution sequence and draw connections with stochastic search methods tailored to classic optimization problems. Finally, we derive explicit updates and compare the implied Entropy Regularized PIC with earlier work in the context of both PIC and RL for derivative-free trajectory optimization. |
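The abstract compresses several technical steps. As a hedged illustration of the machinery it names, the entropy regularized iteration over state trajectory distributions and the cross-entropy policy projection can be sketched as follows; the notation (trajectory cost C(τ), temperature λ, iterate p_k, policy-induced distribution p_θ) is ours, chosen to match standard PIC expositions, and is not necessarily the paper's:

```latex
% Sketch of the entropy regularized trajectory update suggested by the
% abstract; all symbols are illustrative notation, not the paper's own.
\begin{align*}
  % KL-regularized cost minimization over trajectory distributions p
  p_{k+1} &= \arg\min_{p}\; \mathbb{E}_{p}\!\left[C(\tau)\right]
            + \lambda\, D_{\mathrm{KL}}\!\left(p \,\|\, p_k\right) \\
  % closed-form exponentiated-cost solution of the step above
  p_{k+1}(\tau) &\propto p_k(\tau)\,\exp\!\left(-\tfrac{1}{\lambda}\, C(\tau)\right) \\
  % cross-entropy (maximum likelihood) projection onto the policy class
  \theta_{k+1} &= \arg\max_{\theta}\; \mathbb{E}_{p_{k+1}}\!\left[\log p_{\theta}(\tau)\right]
\end{align*}
```

A sample-based reading of these updates yields a derivative-free trajectory optimizer in the PI²/CEM family mentioned at the end of the abstract. The following minimal sketch assumes a fixed-covariance Gaussian policy over an open-loop control sequence and a user-supplied cost function; the function name and interface are hypothetical:

```python
import numpy as np

def entropy_regularized_pic_step(theta, sigma, cost_fn, n_samples=64,
                                 lam=1.0, rng=None):
    """One sample-based update in the spirit of entropy regularized PIC.

    Hypothetical sketch: `theta` parametrizes an open-loop control
    sequence, Gaussian perturbations play the role of the stochastic
    policy, and exponentiated costs provide the importance weights of
    the (approximate) optimal trajectory distribution.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Sample control perturbations from the current Gaussian policy.
    eps = rng.normal(scale=sigma, size=(n_samples,) + theta.shape)
    costs = np.array([cost_fn(theta + e) for e in eps])
    # Exponentiated-cost weights; subtract the minimum for stability.
    w = np.exp(-(costs - costs.min()) / lam)
    w /= w.sum()
    # Weighted mean shift = cross-entropy projection for a
    # fixed-covariance Gaussian policy.
    return theta + np.tensordot(w, eps, axes=1)

# Toy usage: drive a 10-step control sequence toward a quadratic optimum.
theta = np.zeros(10)
for _ in range(50):
    theta = entropy_regularized_pic_step(
        theta, sigma=0.3, cost_fn=lambda u: np.sum((u - 1.0) ** 2))
```

The softmax weights implement the exponentiated-cost reweighting from the LaTeX sketch, and the weighted mean shift is the maximum likelihood update that minimizes the cross-entropy for this restricted policy class.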
format | Online Article Text |
id | pubmed-7597248 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2020 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-7597248 2020-11-09 On Entropy Regularized Path Integral Control for Trajectory Optimization Lefebvre, Tom; Crevecoeur, Guillaume. Entropy (Basel), Article. (Abstract as in the description field above.) MDPI 2020-10-03 /pmc/articles/PMC7597248/ /pubmed/33286889 http://dx.doi.org/10.3390/e22101120 Text en © 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Lefebvre, Tom Crevecoeur, Guillaume On Entropy Regularized Path Integral Control for Trajectory Optimization |
title | On Entropy Regularized Path Integral Control for Trajectory Optimization |
title_full | On Entropy Regularized Path Integral Control for Trajectory Optimization |
title_fullStr | On Entropy Regularized Path Integral Control for Trajectory Optimization |
title_full_unstemmed | On Entropy Regularized Path Integral Control for Trajectory Optimization |
title_short | On Entropy Regularized Path Integral Control for Trajectory Optimization |
title_sort | on entropy regularized path integral control for trajectory optimization |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7597248/ https://www.ncbi.nlm.nih.gov/pubmed/33286889 http://dx.doi.org/10.3390/e22101120 |
work_keys_str_mv | AT lefebvretom onentropyregularizedpathintegralcontrolfortrajectoryoptimization AT crevecoeurguillaume onentropyregularizedpathintegralcontrolfortrajectoryoptimization |