Joint Deep Reinforcement Learning and Unsupervised Learning for Channel Selection and Power Control in D2D Networks
Device-to-device (D2D) technology enables direct communication between devices, which can effectively alleviate the shortage of spectrum resources in 5G communication. Because channels are shared among multiple D2D user pairs, serious interference can arise between D2D user...
Main Authors: | Sun, Ming; Jin, Yanhui; Wang, Shumei; Mei, Erzhuang |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI, 2022 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9777944/ https://www.ncbi.nlm.nih.gov/pubmed/36554127 http://dx.doi.org/10.3390/e24121722 |
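The abstract in this record describes each transmitter acting as an independent learning agent that selects a channel from locally observed information. As a minimal sketch of that idea only — a tabular Q-learning stand-in for the paper's DQN, with illustrative channel gains, noise, and power values that are not taken from the article:

```python
import math
import random

def pair_rate(i, channels, powers, gain, cross, noise):
    """Shannon rate of D2D pair i; co-channel pairs interfere with each other."""
    interference = sum(
        cross * powers[j]
        for j in range(len(channels))
        if j != i and channels[j] == channels[i]
    )
    sinr = gain * powers[i] / (noise + interference)
    return math.log2(1.0 + sinr)

def train(n_pairs=2, n_channels=2, episodes=2000, seed=0):
    rng = random.Random(seed)
    gain, cross, noise, power = 1.0, 0.5, 0.1, 1.0   # illustrative values
    powers = [power] * n_pairs
    # One Q-table per agent: q[i][c] = estimated rate of agent i picking channel c.
    q = [[0.0] * n_channels for _ in range(n_pairs)]
    alpha, eps = 0.1, 1.0
    for _ in range(episodes):
        eps = max(0.05, eps * 0.995)                 # decaying exploration
        choices = [
            rng.randrange(n_channels) if rng.random() < eps
            else max(range(n_channels), key=lambda c: q[i][c])
            for i in range(n_pairs)
        ]
        for i in range(n_pairs):                     # reward is purely local: own rate
            r = pair_rate(i, choices, powers, gain, cross, noise)
            q[i][choices[i]] += alpha * (r - q[i][choices[i]])
    return q

q = train()
greedy = [max(range(2), key=lambda c: q[i][c]) for i in range(2)]
```

With two pairs and two channels, agents that learn to occupy different channels avoid all mutual interference, which is the anti-coordination behavior the distributed scheme relies on; the paper's DQN replaces the table with a neural network over richer local state.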
field | value
---|---
_version_ | 1784856232604991488
author | Sun, Ming; Jin, Yanhui; Wang, Shumei; Mei, Erzhuang
author_sort | Sun, Ming |
collection | PubMed |
description | Device-to-device (D2D) technology enables direct communication between devices, which can effectively alleviate the shortage of spectrum resources in 5G communication. Because channels are shared among multiple D2D user pairs, serious interference can arise between them. To reduce interference, increase network capacity, and improve wireless spectrum utilization, this paper proposed a distributed resource allocation algorithm that combines a deep Q network (DQN) with an unsupervised learning network. First, a DQN algorithm was constructed to solve channel allocation in a dynamic, unknown environment in a distributed manner. Then, a deep power control neural network with an unsupervised learning strategy was constructed to output an optimized channel power control scheme that maximizes the spectrum transmit sum-rate through the corresponding constraint processing. In contrast to traditional centralized approaches, which require the collection of instantaneous global network information, the proposed algorithm used each transmitter as a learning agent that performs channel selection and power control from a small amount of locally collected state information. Simulation results showed that the proposed algorithm was more effective at increasing convergence speed and maximizing the transmit sum-rate than other traditional centralized and distributed algorithms. |
format | Online Article Text |
id | pubmed-9777944 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-9777944. 2022-12-23. Joint Deep Reinforcement Learning and Unsupervised Learning for Channel Selection and Power Control in D2D Networks. Sun, Ming; Jin, Yanhui; Wang, Shumei; Mei, Erzhuang. Entropy (Basel), Article. MDPI 2022-11-24. /pmc/articles/PMC9777944/ /pubmed/36554127 http://dx.doi.org/10.3390/e24121722. Text en. © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
title | Joint Deep Reinforcement Learning and Unsupervised Learning for Channel Selection and Power Control in D2D Networks |
title_sort | joint deep reinforcement learning and unsupervised learning for channel selection and power control in d2d networks |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9777944/ https://www.ncbi.nlm.nih.gov/pubmed/36554127 http://dx.doi.org/10.3390/e24121722 |
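The record's abstract also mentions a second stage: an unsupervised power-control network trained to maximize the spectrum transmit sum-rate under power constraints. The network architecture is not given in this record; the sketch below shows only the underlying optimization that such a network approximates, as projected gradient ascent on the sum-rate with finite-difference gradients (the channel-gain matrix, noise level, and power limit are illustrative assumptions, and clipping to `[0, p_max]` stands in for the paper's "constraint processing"):

```python
import math

def sum_rate(p, G, noise):
    """Sum of Shannon rates; G[i][j] is the gain from transmitter j to receiver i."""
    n = len(p)
    total = 0.0
    for i in range(n):
        interference = sum(G[i][j] * p[j] for j in range(n) if j != i)
        total += math.log2(1.0 + G[i][i] * p[i] / (noise + interference))
    return total

def maximize_sum_rate(G, noise=0.1, p_max=1.0, lr=0.05, steps=300, eps=1e-4):
    n = len(G)
    p = [0.5 * p_max] * n                      # start every pair at half power
    for _ in range(steps):
        grad = []
        for i in range(n):                     # central finite-difference gradient
            hi, lo = p[:], p[:]
            hi[i] += eps
            lo[i] -= eps
            grad.append((sum_rate(hi, G, noise) - sum_rate(lo, G, noise)) / (2 * eps))
        # ascent step, then projection onto the feasible box [0, p_max]
        p = [min(p_max, max(0.0, p[i] + lr * grad[i])) for i in range(n)]
    return p

G = [[1.0, 0.3], [0.3, 1.0]]                   # assumed 2-pair interference channel
p0 = [0.5, 0.5]
p_star = maximize_sum_rate(G)
```

In the paper's unsupervised formulation, the same objective is used directly as the training loss (negative sum-rate), so the network learns a power-control policy without labeled optimal powers; this sketch replaces the network with a classical optimizer to make the objective and the constraint handling concrete.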