Entropic Regularization of Markov Decision Processes

Bibliographic Details
Main Authors: Belousov, Boris; Peters, Jan
Format: Online Article Text
Language: English
Published: MDPI 2019
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7515171/
https://www.ncbi.nlm.nih.gov/pubmed/33267388
http://dx.doi.org/10.3390/e21070674
_version_ 1783586757826052096
author Belousov, Boris
Peters, Jan
collection PubMed
description An optimal feedback controller for a given Markov decision process (MDP) can in principle be synthesized by value or policy iteration. However, if the system dynamics and the reward function are unknown, a learning agent must discover an optimal controller via direct interaction with the environment. Such interactive data gathering commonly leads to divergence towards dangerous or uninformative regions of the state space unless additional regularization measures are taken. Prior works proposed bounding the information loss, measured by the Kullback–Leibler (KL) divergence, at every policy improvement step to eliminate instability in the learning dynamics. In this paper, we consider a broader family of f-divergences, and more concretely α-divergences, which inherit the beneficial property of providing the policy improvement step in closed form while at the same time yielding a corresponding dual objective for policy evaluation. This entropic proximal policy optimization view gives a unified perspective on compatible actor-critic architectures. In particular, common least-squares value function estimation coupled with advantage-weighted maximum likelihood policy improvement is shown to correspond to the Pearson χ²-divergence penalty. Other actor-critic pairs arise for various choices of the penalty-generating function f. On a concrete instantiation of our framework with the α-divergence, we carry out an asymptotic analysis of the solutions for different values of α and demonstrate the effects of the divergence function choice on standard reinforcement learning problems.
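
For illustration only, and not taken from the article itself: the closed-form policy improvement mentioned in the abstract can be sketched in the simplest possible setting, a single state with a discrete action set and known advantage estimates. Under a KL penalty the improved policy re-weights the previous one exponentially in the advantages, while under a Pearson χ² penalty the re-weighting is linear in the advantages with clipping at zero; the penalty strength eta, the action probabilities, and the advantage values below are hypothetical.

```python
# Illustrative sketch only (not the authors' code): closed-form policy improvement
# for a single state with a discrete action set under an f-divergence penalty
# against the previous policy. Penalty strength eta, action probabilities, and
# advantage values are hypothetical.
import numpy as np


def kl_improvement(pi_old, advantages, eta):
    """KL penalty: exponential advantage weighting, pi_new(a) proportional to pi_old(a) * exp(A(a) / eta)."""
    weights = pi_old * np.exp(advantages / eta)
    return weights / weights.sum()


def pearson_chi2_improvement(pi_old, advantages, eta, tol=1e-10):
    """Pearson chi^2 penalty: linear advantage weighting with clipping,
    pi_new(a) proportional to pi_old(a) * max(0, 1 + (A(a) - lam) / (2 * eta)),
    where the constant lam is chosen by bisection so the weights sum to 1."""
    def total_mass(lam):
        return (pi_old * np.maximum(0.0, 1.0 + (advantages - lam) / (2.0 * eta))).sum()

    lo, hi = advantages.min() - 2.0 * eta, advantages.max() + 2.0 * eta  # brackets the root
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        if total_mass(lam) > 1.0:
            lo = lam  # weights too large -> increase lam
        else:
            hi = lam
    lam = 0.5 * (lo + hi)
    weights = pi_old * np.maximum(0.0, 1.0 + (advantages - lam) / (2.0 * eta))
    return weights / weights.sum()


if __name__ == "__main__":
    pi_old = np.array([0.25, 0.25, 0.25, 0.25])      # previous policy over 4 actions
    advantages = np.array([1.0, 0.0, -0.5, 2.0])     # hypothetical advantage estimates
    print(kl_improvement(pi_old, advantages, eta=1.0))
    print(pearson_chi2_improvement(pi_old, advantages, eta=1.0))
```

In this toy setting, large eta keeps both updates close to the previous policy, while small eta concentrates probability on the highest-advantage action; different penalty functions f interpolate between these behaviors in different ways, which is the divergence-dependence the abstract refers to.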
format Online
Article
Text
id pubmed-7515171
institution National Center for Biotechnology Information
language English
publishDate 2019
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-7515171 2020-11-09 Entropic Regularization of Markov Decision Processes Belousov, Boris; Peters, Jan. Entropy (Basel), Article. MDPI 2019-07-10 /pmc/articles/PMC7515171/ /pubmed/33267388 http://dx.doi.org/10.3390/e21070674 Text en © 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
title Entropic Regularization of Markov Decision Processes
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7515171/
https://www.ncbi.nlm.nih.gov/pubmed/33267388
http://dx.doi.org/10.3390/e21070674