Application of deep neural network and deep reinforcement learning in wireless communication
OBJECTIVE: To explore the application of deep neural networks (DNNs) and deep reinforcement learning (DRL) in wireless communication and accelerate the development of the wireless communication industry. METHOD: This study proposes a simple cognitive radio scenario consisting of only one primary user and one secondary user, in which the secondary user attempts to share spectrum resources with the primary user. An intelligent power control algorithm model based on DNNs and DRL is constructed and then simulated on the MATLAB platform. RESULTS: In the performance analysis of the model under different strategies, the second power control strategy is more conservative than the first and requires more iterations both for the loss function to converge and to reach a given success rate. The two strategies show the same trend in the average number of transmissions, and the success rate can reach 1. Compared with the traditional distributed clustering and power control (DCPC) algorithm, the algorithm proposed in this study converges faster: the DRL-based DQN algorithm needs only a few steps to converge, which verifies its effectiveness. CONCLUSION: Applying DNNs and DRL to algorithm models constructed for wireless scenarios yields a higher success rate and faster convergence, providing an experimental basis for the improvement of future wireless communication networks.
Main Authors: Li, Ming; Li, Hui
Format: Online Article Text
Language: English
Published: Public Library of Science, 2020
Subjects: Research Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7332070/ https://www.ncbi.nlm.nih.gov/pubmed/32614858 http://dx.doi.org/10.1371/journal.pone.0235447
_version_ | 1783553455416147968 |
author | Li, Ming Li, Hui |
author_facet | Li, Ming Li, Hui |
author_sort | Li, Ming |
collection | PubMed |
description | OBJECTIVE: To explore the application of deep neural networks (DNNs) and deep reinforcement learning (DRL) in wireless communication and accelerate the development of the wireless communication industry. METHOD: This study proposes a simple cognitive radio scenario consisting of only one primary user and one secondary user, in which the secondary user attempts to share spectrum resources with the primary user. An intelligent power control algorithm model based on DNNs and DRL is constructed and then simulated on the MATLAB platform. RESULTS: In the performance analysis of the model under different strategies, the second power control strategy is more conservative than the first and requires more iterations both for the loss function to converge and to reach a given success rate. The two strategies show the same trend in the average number of transmissions, and the success rate can reach 1. Compared with the traditional distributed clustering and power control (DCPC) algorithm, the algorithm proposed in this study converges faster: the DRL-based DQN algorithm needs only a few steps to converge, which verifies its effectiveness. CONCLUSION: Applying DNNs and DRL to algorithm models constructed for wireless scenarios yields a higher success rate and faster convergence, providing an experimental basis for the improvement of future wireless communication networks. |
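The power control setting described in the abstract (one primary user and one secondary user sharing spectrum, with the secondary user learning its transmit power via DRL) can be illustrated with a minimal sketch. The paper's actual DQN architecture, state space, and reward function are not given in this record, so the following uses a single-state, bandit-style Q-learning update with made-up channel gains, noise power, and SINR targets; only the overall idea, a secondary user learning a power level that satisfies both links, follows the abstract.

```python
import random

# Illustrative sketch only: all numeric values (gains, noise, SINR target)
# are assumptions, not taken from the paper.
POWERS = [0.0, 0.5, 1.0, 1.5, 2.0]   # candidate secondary-user powers (W)
P_PRIMARY = 1.0                       # fixed primary-user transmit power (W)
G_PP, G_SS, G_PS, G_SP = 1.0, 1.0, 0.3, 0.3  # direct and cross channel gains
NOISE = 0.1                           # receiver noise power
SINR_TARGET = 1.5                     # SINR required on both links

def sinr_primary(p_s):
    # Primary link SINR: interfered by the secondary user's power p_s.
    return G_PP * P_PRIMARY / (NOISE + G_SP * p_s)

def sinr_secondary(p_s):
    # Secondary link SINR: interfered by the primary user's fixed power.
    return G_SS * p_s / (NOISE + G_PS * P_PRIMARY)

def reward(p_s):
    # +1 when both links meet the SINR target, -1 otherwise.
    ok = sinr_primary(p_s) >= SINR_TARGET and sinr_secondary(p_s) >= SINR_TARGET
    return 1.0 if ok else -1.0

def train(episodes=2000, alpha=0.1, eps=0.1, seed=0):
    # Epsilon-greedy Q-learning over a single state (a stateless bandit),
    # standing in for the DQN used in the paper.
    rng = random.Random(seed)
    q = [0.0] * len(POWERS)
    for _ in range(episodes):
        a = rng.randrange(len(POWERS)) if rng.random() < eps else q.index(max(q))
        q[a] += alpha * (reward(POWERS[a]) - q[a])  # incremental value update
    return POWERS[q.index(max(q))]   # learned transmit power
```

Under these assumed numbers, only intermediate powers satisfy both links: too little power starves the secondary link, too much degrades the primary link, so the learner settles on a middle power level.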
format | Online Article Text |
id | pubmed-7332070 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2020 |
publisher | Public Library of Science |
record_format | MEDLINE/PubMed |
spelling | pubmed-73320702020-07-15 Application of deep neural network and deep reinforcement learning in wireless communication Li, Ming Li, Hui PLoS One Research Article OBJECTIVE: To explore the application of deep neural networks (DNNs) and deep reinforcement learning (DRL) in wireless communication and accelerate the development of the wireless communication industry. METHOD: This study proposes a simple cognitive radio scenario consisting of only one primary user and one secondary user, in which the secondary user attempts to share spectrum resources with the primary user. An intelligent power control algorithm model based on DNNs and DRL is constructed and then simulated on the MATLAB platform. RESULTS: In the performance analysis of the model under different strategies, the second power control strategy is more conservative than the first and requires more iterations both for the loss function to converge and to reach a given success rate. The two strategies show the same trend in the average number of transmissions, and the success rate can reach 1. Compared with the traditional distributed clustering and power control (DCPC) algorithm, the algorithm proposed in this study converges faster: the DRL-based DQN algorithm needs only a few steps to converge, which verifies its effectiveness. CONCLUSION: Applying DNNs and DRL to algorithm models constructed for wireless scenarios yields a higher success rate and faster convergence, providing an experimental basis for the improvement of future wireless communication networks.
Public Library of Science 2020-07-02 /pmc/articles/PMC7332070/ /pubmed/32614858 http://dx.doi.org/10.1371/journal.pone.0235447 Text en © 2020 Li, Li http://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/) , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. |
spellingShingle | Research Article Li, Ming Li, Hui Application of deep neural network and deep reinforcement learning in wireless communication |
title | Application of deep neural network and deep reinforcement learning in wireless communication |
title_full | Application of deep neural network and deep reinforcement learning in wireless communication |
title_fullStr | Application of deep neural network and deep reinforcement learning in wireless communication |
title_full_unstemmed | Application of deep neural network and deep reinforcement learning in wireless communication |
title_short | Application of deep neural network and deep reinforcement learning in wireless communication |
title_sort | application of deep neural network and deep reinforcement learning in wireless communication |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7332070/ https://www.ncbi.nlm.nih.gov/pubmed/32614858 http://dx.doi.org/10.1371/journal.pone.0235447 |
work_keys_str_mv | AT liming applicationofdeepneuralnetworkanddeepreinforcementlearninginwirelesscommunication AT lihui applicationofdeepneuralnetworkanddeepreinforcementlearninginwirelesscommunication |