Learning Dynamics and Control of a Stochastic System under Limited Sensing Capabilities

Bibliographic Details
Main Authors: Zadenoori, Mohammad Amin, Vicario, Enrico
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9230096/
https://www.ncbi.nlm.nih.gov/pubmed/35746272
http://dx.doi.org/10.3390/s22124491
author Zadenoori, Mohammad Amin
Vicario, Enrico
collection PubMed
description The operation of a variety of natural or man-made systems subject to uncertainty is maintained within a range of safe behavior through run-time sensing of the system state and control actions selected according to some strategy. When the system is observed from an external perspective, the control strategy may not be known and must instead be reconstructed by joint observation of the applied control actions and the corresponding evolution of the system state. This is largely hindered by limitations in the sensing of the system state and by different levels of noise. We address the problem of optimal selection of control actions for a stochastic system with unknown dynamics operating under a controller with unknown strategy, for which we can observe trajectories made of the sequence of control actions and noisy observations of the system state, labeled by the exact value of some reward function. To this end, we present an approach to train an Input–Output Hidden Markov Model (IO-HMM) as the generative stochastic model that describes the state dynamics of a Partially Observable Markov Decision Process (POMDP), using a novel optimization objective adapted from the literature. The learning task is constrained by two restrictions: the only available sensed data are a limited number of trajectories of applied actions, noisy observations of the system state, and reward values; and the high failure costs prevent interaction with the online environment, ruling out exploratory testing. Traditionally, stochastic generative models have been used to learn the underlying system dynamics and select appropriate actions in the defined task. However, current state-of-the-art techniques, in which the state dynamics of the POMDP are first learned and strategies are then optimized over them, frequently fail because the model that best fits the data may not be well suited for control. By using the aforementioned optimization objective, we tackle the problems related to model mis-specification. The proposed methodology is illustrated in a failure-avoidance scenario for a multi-component system. The quality of the decision making is evaluated by the reward collected on the test data and compared against the usual approach in the previous literature.
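For readers unfamiliar with the model class, the sketch below illustrates what an action-conditioned hidden Markov filter over a discrete POMDP looks like: transitions depend on the input action, observations are a noisy function of the hidden state, and a belief over states is maintained from the observed action/observation trajectory. The state, action, and observation sizes, the random parameters, and the one-step greedy action rule are assumptions made purely for illustration; they do not reproduce the paper's optimization objective or controller.

```python
# Illustrative sketch only: a minimal action-conditioned HMM (IO-HMM-style) filter
# over a discrete POMDP, with a one-step greedy action choice from the filtered belief.
# Sizes, parameters, and the greedy rule are assumptions, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, n_obs = 3, 2, 3          # assumed sizes

# Action-conditioned transitions T[a, s, s'], noisy observation model O[s, o],
# and an assumed per-(state, action) reward table R.
T = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))
O = rng.dirichlet(np.ones(n_obs), size=n_states)
R = rng.normal(size=(n_states, n_actions))

def filter_belief(belief, action, obs):
    """One step of the forward (filtering) recursion, conditioned on the input action."""
    predicted = belief @ T[action]            # predict next-state distribution
    updated = predicted * O[:, obs]           # weight by observation likelihood
    return updated / updated.sum()

def greedy_action(belief):
    """Pick the action with the highest expected immediate reward under the belief."""
    return int(np.argmax(belief @ R))

# Filter a short trajectory of (action, noisy observation) pairs, then choose an action.
belief = np.full(n_states, 1.0 / n_states)
for action, obs in [(0, 2), (1, 0), (0, 1)]:
    belief = filter_belief(belief, action, obs)
print("filtered belief:", belief)
print("greedy next action:", greedy_action(belief))
```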
format Online
Article
Text
id pubmed-9230096
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-9230096 2022-06-25 Learning Dynamics and Control of a Stochastic System under Limited Sensing Capabilities Zadenoori, Mohammad Amin Vicario, Enrico Sensors (Basel) Article MDPI 2022-06-14 /pmc/articles/PMC9230096/ /pubmed/35746272 http://dx.doi.org/10.3390/s22124491 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title Learning Dynamics and Control of a Stochastic System under Limited Sensing Capabilities
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9230096/
https://www.ncbi.nlm.nih.gov/pubmed/35746272
http://dx.doi.org/10.3390/s22124491