Optimal Tracking Control of a Nonlinear Multiagent System Using Q-Learning via Event-Triggered Reinforcement Learning
This article offers an optimal tracking control method that combines an event-triggered technique with the internal reinforcement Q-learning (IrQL) algorithm to address the tracking control problem for unknown nonlinear multiagent systems (MASs). Relying on the internal reinforcement reward (IRR) form...
Main Authors: | Wang, Ziwei; Wang, Xin; Tang, Yijie; Liu, Ying; Hu, Jun |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI, 2023 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9955809/ https://www.ncbi.nlm.nih.gov/pubmed/36832665 http://dx.doi.org/10.3390/e25020299 |
_version_ | 1784894438249594880 |
---|---|
author | Wang, Ziwei; Wang, Xin; Tang, Yijie; Liu, Ying; Hu, Jun |
author_facet | Wang, Ziwei; Wang, Xin; Tang, Yijie; Liu, Ying; Hu, Jun |
author_sort | Wang, Ziwei |
collection | PubMed |
description | This article offers an optimal tracking control method that combines an event-triggered technique with the internal reinforcement Q-learning (IrQL) algorithm to address the tracking control problem for unknown nonlinear multiagent systems (MASs). Relying on the internal reinforcement reward (IRR) formula, a Q-learning function is calculated, and the iterative IrQL method is then developed. In contrast to time-triggered mechanisms, the event-triggered algorithm reduces the transmission rate and computational load, since the controller is updated only when the predetermined triggering conditions are met. In addition, to implement the suggested scheme, a neural reinforce-critic-actor (RCA) network structure is created that can assess the performance indices and learn the event-triggering mechanism online. The strategy is intended to be data-driven, without requiring in-depth knowledge of the system dynamics. An event-triggered weight-tuning rule is developed that modifies the parameters of the actor neural network (ANN) only in response to triggering events. In addition, a Lyapunov-based convergence study of the reinforce-critic-actor neural network (NN) is presented. Lastly, an example demonstrates the feasibility and efficiency of the suggested approach. |
format | Online Article Text |
id | pubmed-9955809 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-9955809 2023-02-25 Optimal Tracking Control of a Nonlinear Multiagent System Using Q-Learning via Event-Triggered Reinforcement Learning Wang, Ziwei Wang, Xin Tang, Yijie Liu, Ying Hu, Jun Entropy (Basel) Article This article offers an optimal tracking control method that combines an event-triggered technique with the internal reinforcement Q-learning (IrQL) algorithm to address the tracking control problem for unknown nonlinear multiagent systems (MASs). Relying on the internal reinforcement reward (IRR) formula, a Q-learning function is calculated, and the iterative IrQL method is then developed. In contrast to time-triggered mechanisms, the event-triggered algorithm reduces the transmission rate and computational load, since the controller is updated only when the predetermined triggering conditions are met. In addition, to implement the suggested scheme, a neural reinforce-critic-actor (RCA) network structure is created that can assess the performance indices and learn the event-triggering mechanism online. The strategy is intended to be data-driven, without requiring in-depth knowledge of the system dynamics. An event-triggered weight-tuning rule is developed that modifies the parameters of the actor neural network (ANN) only in response to triggering events. In addition, a Lyapunov-based convergence study of the reinforce-critic-actor neural network (NN) is presented. Lastly, an example demonstrates the feasibility and efficiency of the suggested approach. MDPI 2023-02-05 /pmc/articles/PMC9955809/ /pubmed/36832665 http://dx.doi.org/10.3390/e25020299 Text en © 2023 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Wang, Ziwei Wang, Xin Tang, Yijie Liu, Ying Hu, Jun Optimal Tracking Control of a Nonlinear Multiagent System Using Q-Learning via Event-Triggered Reinforcement Learning |
title | Optimal Tracking Control of a Nonlinear Multiagent System Using Q-Learning via Event-Triggered Reinforcement Learning |
title_full | Optimal Tracking Control of a Nonlinear Multiagent System Using Q-Learning via Event-Triggered Reinforcement Learning |
title_fullStr | Optimal Tracking Control of a Nonlinear Multiagent System Using Q-Learning via Event-Triggered Reinforcement Learning |
title_full_unstemmed | Optimal Tracking Control of a Nonlinear Multiagent System Using Q-Learning via Event-Triggered Reinforcement Learning |
title_short | Optimal Tracking Control of a Nonlinear Multiagent System Using Q-Learning via Event-Triggered Reinforcement Learning |
title_sort | optimal tracking control of a nonlinear multiagent system using q-learning via event-triggered reinforcement learning |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9955809/ https://www.ncbi.nlm.nih.gov/pubmed/36832665 http://dx.doi.org/10.3390/e25020299 |
work_keys_str_mv | AT wangziwei optimaltrackingcontrolofanonlinearmultiagentsystemusingqlearningviaeventtriggeredreinforcementlearning AT wangxin optimaltrackingcontrolofanonlinearmultiagentsystemusingqlearningviaeventtriggeredreinforcementlearning AT tangyijie optimaltrackingcontrolofanonlinearmultiagentsystemusingqlearningviaeventtriggeredreinforcementlearning AT liuying optimaltrackingcontrolofanonlinearmultiagentsystemusingqlearningviaeventtriggeredreinforcementlearning AT hujun optimaltrackingcontrolofanonlinearmultiagentsystemusingqlearningviaeventtriggeredreinforcementlearning |
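
The abstract above centers on an event-triggered update rule: the controller (actor) parameters are adjusted only when a predetermined triggering condition is violated, which reduces transmission and computation compared with updating at every time step. The Python sketch below is a minimal illustration of that general idea on a toy scalar tracking problem; the dynamics, the constant threshold, and the one-step gradient update are all assumptions made for illustration, not the authors' IrQL/RCA implementation.

```python
# Illustrative, self-contained sketch of an event-triggered actor update.
# NOTE: the scalar dynamics, constant threshold, and one-step gradient rule below
# are assumptions made for illustration; they are NOT the paper's IrQL/RCA method.
import numpy as np

rng = np.random.default_rng(0)

# Toy surrogate: scalar tracking-error dynamics x_{k+1} = a*x_k + b*u_k + noise.
a, b = 0.95, 0.1
w = 0.0            # actor weight; control law u = -w * x, tuned only at events
x = 1.0            # current tracking error
x_hat = x          # last state broadcast to the controller (held between events)
threshold = 0.05   # event-triggering threshold (assumed constant for simplicity)
lr = 0.5           # actor learning rate
events = 0

for k in range(200):
    if abs(x - x_hat) > threshold:        # triggering condition violated -> event
        x_hat = x                         # transmit the fresh state
        # One-step-ahead squared-error gradient step on the actor weight
        # (a simplified stand-in for the paper's reinforce-critic-actor tuning).
        pred_next = a * x_hat + b * (-w * x_hat)
        w += lr * b * x_hat * pred_next   # gradient descent on 0.5 * pred_next**2
        events += 1
    u = -w * x_hat                        # controller holds the last triggered law
    x = a * x + b * u + 0.01 * rng.standard_normal()

print(f"events: {events} / 200 steps, final |tracking error| = {abs(x):.3f}")
```

The point of the sketch is the `if abs(x - x_hat) > threshold` guard: between events the controller keeps applying the input law computed from the last broadcast state, so weight updates and state transmissions occur at only a fraction of the time steps, which is the saving the abstract attributes to event triggering.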