
Multi-Agent Dynamic Resource Allocation in 6G in-X Subnetworks with Limited Sensing Information

In this paper, we investigate dynamic resource selection in dense deployments of the recent 6G mobile in-X subnetworks (inXSs). We cast resource selection in inXSs as a multi-objective optimization problem involving maximization of the minimum capacity per inXS while minimizing overhead from intra-subnetwork signaling. Since inXSs are expected to be autonomous, selection decisions are made by each inXS based on its local information without signaling from other inXSs. A multi-agent Q-learning (MAQL) method based on limited sensing information (SI) is then developed, resulting in low intra-subnetwork SI signaling. We further propose a rule-based algorithm termed Q-Heuristics for performing resource selection based on similar limited information as the MAQL method. We perform simulations with a focus on joint channel and transmit power selection. The results indicate that: (1) appropriate settings of Q-learning parameters lead to fast convergence of the MAQL method even with two-level quantization of the SI, and (2) the proposed MAQL approach has significantly better performance and is more robust to sensing and switching delays than the best baseline heuristic. The proposed Q-Heuristic shows similar performance to the baseline greedy method at the 50th percentile of the per-user capacity and slightly better at lower percentiles. The Q-Heuristic method shows high robustness to sensing interval, quantization threshold and switching delay.
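
The abstract describes a multi-agent Q-learning scheme in which each subnetwork autonomously picks a channel and transmit power level from two-level quantized local sensing information. The following is a minimal, illustrative sketch of such an agent under stated assumptions: the class name SubnetworkQAgent, the quantization threshold, the (channel, power) action encoding, and the reward signal are all assumptions made here for illustration and are not taken from the paper.

```python
import random
from collections import defaultdict

class SubnetworkQAgent:
    """Tabular Q-learning agent for one in-X subnetwork (illustrative sketch).

    State  : tuple of 0/1 flags, one per channel, obtained by two-level
             quantization of locally sensed interference power.
    Action : (channel index, power level index) pair.
    """

    def __init__(self, n_channels, n_power_levels,
                 alpha=0.1, gamma=0.9, epsilon=0.1):
        self.actions = [(c, p) for c in range(n_channels)
                        for p in range(n_power_levels)]
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        # Q-table keyed by (state, action); unseen entries default to 0.0.
        self.q = defaultdict(float)

    def quantize(self, sensed_interference, threshold):
        """Two-level quantization of per-channel sensing information."""
        return tuple(int(i > threshold) for i in sensed_interference)

    def select_action(self, state):
        """Epsilon-greedy selection of a (channel, power) pair."""
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        """Standard one-step Q-learning update."""
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])
```

In a deployment along these lines, each subnetwork would run its own agent, quantize its sensed interference once per sensing interval, and receive a reward derived, for example, from its achieved capacity; these details are likewise assumptions, since the record does not specify the paper's exact state, action, or reward design.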

Bibliographic Details
Main Authors: Adeogun, Ramoni; Berardinelli, Gilberto
Format: Online Article (Text)
Language: English
Published: Sensors (Basel), MDPI, 2022-07-05
Subjects: Article
License: © 2022 by the authors; Licensee MDPI, Basel, Switzerland. Open access under the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9269819/
https://www.ncbi.nlm.nih.gov/pubmed/35808557
http://dx.doi.org/10.3390/s22135062