
Simplified Deep Reinforcement Learning Approach for Channel Prediction in Power Domain NOMA System

In this work, the impact of implementing Deep Reinforcement Learning (DRL) in predicting the channel parameters for user devices in a Power Domain Non-Orthogonal Multiple Access (PD-NOMA) system is investigated. In the channel prediction process, a DRL scheme based on the deep Q-network (DQN) algorithm is developed and incorporated into the NOMA system.


Bibliographic Details
Main Authors: Gaballa, Mohamed, Abbod, Maysam
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10647647/
https://www.ncbi.nlm.nih.gov/pubmed/37960708
http://dx.doi.org/10.3390/s23219010
_version_ 1785135156166656000
author Gaballa, Mohamed
Abbod, Maysam
author_facet Gaballa, Mohamed
Abbod, Maysam
author_sort Gaballa, Mohamed
collection PubMed
description In this work, the impact of implementing Deep Reinforcement Learning (DRL) in predicting the channel parameters for user devices in a Power Domain Non-Orthogonal Multiple Access (PD-NOMA) system is investigated. In the channel prediction process, a DRL scheme based on the deep Q-network (DQN) algorithm is developed and incorporated into the NOMA system so that the DQN model can be employed to estimate the channel coefficients for each user device. The DQN scheme is structured as a simplified approach that efficiently predicts the channel parameters of each user in order to maximize the downlink sum rate of all users in the system. To approximate the channel parameters of each user device, the proposed DQN approach is first initialized using random channel statistics and is then updated dynamically through interaction with the environment. The predicted channel parameters are utilized at the receiver side to recover the desired data. Furthermore, this work examines how the channel estimation process based on the simplified DQN algorithm and the power allocation policy can be integrated for multiuser detection in the examined NOMA system. Simulation results based on several performance metrics demonstrate that the proposed simplified DQN algorithm is competitive for channel parameter estimation when compared with benchmark channel estimation schemes such as a deep neural network (DNN) based on long short-term memory (LSTM), a reinforcement learning (RL)-based Q-learning algorithm, and a minimum mean square error (MMSE) estimator.
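As a rough illustration of the idea described in the abstract, the following is a minimal, hypothetical sketch (not the authors' implementation) of a DQN agent that predicts quantized channel gains for a two-user PD-NOMA downlink. The gain grid (LEVELS), the Rayleigh-fading generator, the network size, the hyperparameters, and the negative-squared-error reward are all illustrative assumptions; the paper instead ties its reward to the downlink sum rate and couples the prediction with a power allocation policy.

```python
# Hypothetical sketch of DQN-based channel-gain prediction for PD-NOMA.
# Assumptions (not from the paper): Rayleigh block fading, an 8-level gain
# grid, and a negative squared prediction error as the reward.
import numpy as np
import torch
import torch.nn as nn

K = 2                                   # number of NOMA users
LEVELS = np.linspace(0.05, 2.0, 8)      # candidate |h|^2 values (assumed grid)
N_ACTIONS = len(LEVELS) ** K            # one action = one gain level per user
GAMMA, EPS, LR = 0.9, 0.1, 1e-3

def decode_action(a):
    """Map a flat action index to one candidate channel gain per user."""
    i, j = divmod(a, len(LEVELS))
    return np.array([LEVELS[i], LEVELS[j]])

def rayleigh_gains():
    """Draw |h|^2 for K users under unit-variance Rayleigh fading."""
    h = (np.random.randn(K) + 1j * np.random.randn(K)) / np.sqrt(2)
    return np.abs(h) ** 2

# Small Q-network: input = previous slot's gains, output = Q-value per action.
qnet = nn.Sequential(nn.Linear(K, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
opt = torch.optim.Adam(qnet.parameters(), lr=LR)
loss_fn = nn.MSELoss()

state = rayleigh_gains()                # initialized from random channel draws
for step in range(5000):
    s = torch.tensor(state, dtype=torch.float32)
    if np.random.rand() < EPS:          # epsilon-greedy exploration
        action = np.random.randint(N_ACTIONS)
    else:
        action = int(qnet(s).argmax())

    true_gains = rayleigh_gains()       # channel realization for this slot
    pred_gains = decode_action(action)
    reward = -float(np.sum((pred_gains - true_gains) ** 2))

    # One-step TD target (no replay buffer or target network in this sketch).
    s_next = torch.tensor(true_gains, dtype=torch.float32)
    with torch.no_grad():
        target = reward + GAMMA * qnet(s_next).max()
    q_sa = qnet(s)[action]
    loss = loss_fn(q_sa, target)
    opt.zero_grad()
    loss.backward()
    opt.step()

    state = true_gains                  # the new channel becomes the next state
```

A replay buffer and a target network, which a full DQN would normally include, are omitted to keep the loop short; the sketch only shows an agent initialized from random channel draws and updated through interaction with the channel environment, as the abstract describes, and it does not model the SIC-based multiuser detection or the power allocation policy that the paper integrates with the predicted gains.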
format Online
Article
Text
id pubmed-10647647
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-10647647 2023-11-06 Simplified Deep Reinforcement Learning Approach for Channel Prediction in Power Domain NOMA System Gaballa, Mohamed; Abbod, Maysam. Sensors (Basel), Article. MDPI 2023-11-06 /pmc/articles/PMC10647647/ /pubmed/37960708 http://dx.doi.org/10.3390/s23219010 Text en © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Gaballa, Mohamed
Abbod, Maysam
Simplified Deep Reinforcement Learning Approach for Channel Prediction in Power Domain NOMA System
title Simplified Deep Reinforcement Learning Approach for Channel Prediction in Power Domain NOMA System
title_full Simplified Deep Reinforcement Learning Approach for Channel Prediction in Power Domain NOMA System
title_fullStr Simplified Deep Reinforcement Learning Approach for Channel Prediction in Power Domain NOMA System
title_full_unstemmed Simplified Deep Reinforcement Learning Approach for Channel Prediction in Power Domain NOMA System
title_short Simplified Deep Reinforcement Learning Approach for Channel Prediction in Power Domain NOMA System
title_sort simplified deep reinforcement learning approach for channel prediction in power domain noma system
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10647647/
https://www.ncbi.nlm.nih.gov/pubmed/37960708
http://dx.doi.org/10.3390/s23219010
work_keys_str_mv AT gaballamohamed simplifieddeepreinforcementlearningapproachforchannelpredictioninpowerdomainnomasystem
AT abbodmaysam simplifieddeepreinforcementlearningapproachforchannelpredictioninpowerdomainnomasystem