Energy-Efficient UAV Movement Control for Fair Communication Coverage: A Deep Reinforcement Learning Approach

Bibliographic Details
Main Authors: Nemer, Ibrahim A., Sheltami, Tarek R., Belhaiza, Slim, Mahmoud, Ashraf S.
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8915037/
https://www.ncbi.nlm.nih.gov/pubmed/35271067
http://dx.doi.org/10.3390/s22051919
author Nemer, Ibrahim A.
Sheltami, Tarek R.
Belhaiza, Slim
Mahmoud, Ashraf S.
collection PubMed
description Unmanned Aerial Vehicles (UAVs) are considered an important element in wireless communication networks due to their agility, mobility, and ability to be deployed as mobile base stations (BSs) that improve communication quality and coverage. UAVs can provide communication services for ground users in different scenarios, such as transportation systems, disaster situations, emergency cases, and surveillance. However, covering a specific area in a dynamic environment for a long time with UAVs is challenging due to their limited energy resources, short communication range, and flight regulations. Hence, a distributed solution is needed to overcome these limitations and to handle the interactions among UAVs, which lead to a large state space. In this paper, we introduce a novel distributed control solution that places a group of UAVs in the candidate area to improve the coverage score with minimum energy consumption and a high fairness value. The new algorithm is called the state-based game with actor–critic (SBG-AC). To simplify the complex interactions in the problem, we model SBG-AC using a state-based potential game. Then, we merge SBG-AC with an actor–critic algorithm to ensure convergence of the model, to control each UAV in a distributed way, and to provide learning capability in dynamic environments. Simulation results show that SBG-AC outperforms the distributed DRL and the DRL-EC3 in terms of fairness, coverage score, and energy consumption.
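The abstract does not spell out the reward that SBG-AC optimizes, only that it balances coverage score, fairness, and energy consumption. As a rough illustration of how such a per-UAV, per-step objective is commonly assembled, the sketch below assumes Jain's fairness index over accumulated per-cell coverage times, the covered fraction of the target area, and an energy penalty with a hypothetical weight w_energy; the function names and weighting are illustrative, not taken from the paper.

import numpy as np

def jain_fairness(coverage_times):
    # Jain's fairness index over per-cell coverage times: (sum x)^2 / (n * sum x^2).
    x = np.asarray(coverage_times, dtype=float)
    if not np.any(x):
        return 0.0
    return (x.sum() ** 2) / (len(x) * np.sum(x ** 2))

def step_reward(covered_fraction, coverage_times, energy_used, w_energy=0.1):
    # Hypothetical per-step reward: favor high, fairly distributed coverage
    # and penalize the energy the UAV spent during this time step.
    return jain_fairness(coverage_times) * covered_fraction - w_energy * energy_used

Under these assumptions, a step with full, perfectly even coverage and no energy use would score 1.0, while uneven coverage or battery drain pulls the reward down; an actor–critic learner would then be trained on this signal for each UAV.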
format Online
Article
Text
id pubmed-8915037
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-8915037 2022-03-12 Energy-Efficient UAV Movement Control for Fair Communication Coverage: A Deep Reinforcement Learning Approach. Nemer, Ibrahim A.; Sheltami, Tarek R.; Belhaiza, Slim; Mahmoud, Ashraf S. Sensors (Basel), Article. MDPI 2022-03-01 /pmc/articles/PMC8915037/ /pubmed/35271067 http://dx.doi.org/10.3390/s22051919 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title Energy-Efficient UAV Movement Control for Fair Communication Coverage: A Deep Reinforcement Learning Approach
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8915037/
https://www.ncbi.nlm.nih.gov/pubmed/35271067
http://dx.doi.org/10.3390/s22051919