
Speeding Task Allocation Search for Reconfigurations in Adaptive Distributed Embedded Systems Using Deep Reinforcement Learning

A Critical Adaptive Distributed Embedded System (CADES) is a group of interconnected nodes that must carry out a set of tasks to achieve a common goal, while fulfilling several requirements associated with their critical (e.g., hard real-time requirements) and adaptive nature. In these systems, a key challenge is to solve, in a timely manner, the combinatorial optimization problem involved in finding the best way to allocate the tasks to the available nodes (i.e., the task allocation), taking into account aspects such as the computational costs of the tasks and the computational capacity of the nodes. This problem is not trivial and there is no known polynomial-time algorithm to find the optimal solution. Several studies have proposed Deep Reinforcement Learning (DRL) approaches to solve combinatorial optimization problems and, in this work, we explore the application of such approaches to the task allocation problem in CADESs. We first discuss the potential advantages of using a DRL-based approach over several heuristic-based approaches to allocate tasks in CADESs, and we then demonstrate how a DRL-based approach can achieve results similar to those of the best-performing heuristic in terms of optimality of the allocation, while requiring less time to generate such an allocation.
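
The abstract frames task allocation as a capacity-constrained assignment problem: each task has a computational cost and each node a computational capacity. As a purely illustrative sketch (not the paper's DRL approach; the function names and the specific best-fit-decreasing heuristic are assumptions), the following Python snippet shows the kind of greedy heuristic baseline that learned allocation policies are typically compared against:

# Illustrative sketch only: a simple best-fit-decreasing heuristic for
# allocating tasks (with computational costs) to nodes (with capacities).
# This is NOT the paper's DRL method; names and logic are assumptions.

def allocate_tasks(task_costs, node_capacities):
    """Return {task_index: node_index} or None if no feasible allocation is found."""
    remaining = list(node_capacities)          # free capacity left on each node
    allocation = {}
    # Place the most expensive tasks first (best-fit decreasing).
    for task in sorted(range(len(task_costs)), key=lambda t: -task_costs[t]):
        # Nodes that can still host this task.
        feasible = [n for n, free in enumerate(remaining) if free >= task_costs[task]]
        if not feasible:
            return None                        # heuristic failed; an exact search would be needed
        # Pick the node whose remaining capacity fits the task most tightly.
        best = min(feasible, key=lambda n: remaining[n] - task_costs[task])
        remaining[best] -= task_costs[task]
        allocation[task] = best
    return allocation

if __name__ == "__main__":
    # Toy instance: 5 tasks, 3 nodes.
    print(allocate_tasks([4, 3, 3, 2, 1], [6, 5, 4]))

Because the underlying problem is NP-hard, such a heuristic can fail to find a feasible allocation even when one exists, which motivates comparing heuristics against learned policies on both allocation quality and search time, as the abstract describes.
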

Bibliographic Details
Main Authors: Rotaeche, Ramón; Ballesteros, Alberto; Proenza, Julián
Format: Online Article (Text)
Language: English
Published in: Sensors (Basel)
Published: MDPI, 3 January 2023
Subjects: Article
Collection: PubMed (National Center for Biotechnology Information)
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9823603/
https://www.ncbi.nlm.nih.gov/pubmed/36617145
http://dx.doi.org/10.3390/s23010548
License: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).