
A Reinforcement Learning Routing Protocol for UAV Aided Public Safety Networks

In Public Safety Networks (PSNs), the conservation of on-scene device energy is critical to ensure long-term connectivity for first responders. Given the limited transmit power of these devices, connectivity can be maintained by enabling continuous cooperation among on-scene devices through multipath routing. In t...

Full description

Bibliographic Details
Main Authors: Minhas, Hassan Ishtiaq, Ahmad, Rizwan, Ahmed, Waqas, Waheed, Maham, Alam, Muhammad Mahtab, Gul, Sufi Tabassum
Format: Online Article Text
Language: English
Published: MDPI 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8232606/
https://www.ncbi.nlm.nih.gov/pubmed/34203912
http://dx.doi.org/10.3390/s21124121
author Minhas, Hassan Ishtiaq
Ahmad, Rizwan
Ahmed, Waqas
Waheed, Maham
Alam, Muhammad Mahtab
Gul, Sufi Tabassum
author_sort Minhas, Hassan Ishtiaq
collection PubMed
description In Public Safety Networks (PSNs), the conservation of on-scene device energy is critical to ensure long-term connectivity for first responders. Given the limited transmit power of on-scene devices, this connectivity can be maintained by enabling continuous cooperation among them through multipath routing. In this paper, we present a Reinforcement Learning (RL) and Unmanned Aerial Vehicle (UAV) aided multipath routing scheme for PSNs. The aim is to increase network lifetime by improving the Energy Efficiency (EE) of the PSN. First, network configurations are generated using different clustering schemes. RL is then applied to configure the routing topology, considering both the immediate energy cost and the total distance cost of the transmission path. The performance of these schemes is analyzed in terms of throughput, energy consumption, number of dead nodes, delay, packet delivery ratio, number of Cluster Head (CH) changes, number of control packets, and EE. The results showed an improvement of approximately [Formula: see text] in the EE of the clustering scheme when compared with non-clustering schemes. Furthermore, the impact of UAV trajectory and the number of UAVs is jointly analyzed by considering various trajectory scenarios around the disaster area. The EE can be further improved by [Formula: see text] using Two UAVs on Opposite Axes of the building moving in Opposite directions (TUOAO) when compared to a single-UAV scheme. The results showed that although the numbers of control packets in the single- and two-UAV scenarios are comparable, the total numbers of CH changes differ significantly.
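The abstract describes applying RL to pick routes using a reward that weighs both the immediate energy cost and the total distance cost of the path. The record does not include the paper's actual formulation, so the following is only an illustrative sketch of that idea: tabular Q-learning over next-hop choices on a toy topology, with assumed cost weights, learning parameters, and node names.

```python
import random

random.seed(0)  # make the toy run reproducible

ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration rate

# Toy topology (assumed, not from the paper):
# node -> {neighbor: (immediate_energy_cost, distance_to_sink)}
TOPOLOGY = {
    "A": {"B": (2.0, 5.0), "C": (1.0, 7.0)},
    "B": {"SINK": (3.0, 0.0)},
    "C": {"SINK": (1.5, 0.0)},
}

# Q-table: one value per (node, next-hop) pair
Q = {node: {nbr: 0.0 for nbr in nbrs} for node, nbrs in TOPOLOGY.items()}

def reward(energy_cost, distance):
    # Joint cost as a negative weighted sum of energy and remaining
    # distance; the 0.6/0.4 weights are an illustrative assumption.
    return -(0.6 * energy_cost + 0.4 * distance)

def choose_next_hop(node):
    # Epsilon-greedy action selection over the node's neighbors
    if random.random() < EPSILON:
        return random.choice(list(Q[node]))
    return max(Q[node], key=Q[node].get)

def train(episodes=500):
    for _ in range(episodes):
        node = "A"
        while node != "SINK":
            nxt = choose_next_hop(node)
            energy, dist = TOPOLOGY[node][nxt]
            r = reward(energy, dist)
            # Terminal sink has no outgoing actions, so its value is 0
            future = max(Q[nxt].values()) if nxt in Q else 0.0
            Q[node][nxt] += ALPHA * (r + GAMMA * future - Q[node][nxt])
            node = nxt

train()
best_route = max(Q["A"], key=Q["A"].get)
```

In this toy instance, routing via "C" is cheaper overall (lower combined energy-plus-distance cost), so the learned policy at "A" prefers it; the paper's scheme additionally layers clustering and UAV relays on top of such a cost-aware route choice.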
format Online
Article
Text
id pubmed-8232606
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-8232606 2021-06-26 A Reinforcement Learning Routing Protocol for UAV Aided Public Safety Networks Minhas, Hassan Ishtiaq; Ahmad, Rizwan; Ahmed, Waqas; Waheed, Maham; Alam, Muhammad Mahtab; Gul, Sufi Tabassum. Sensors (Basel), Article.
MDPI 2021-06-15 /pmc/articles/PMC8232606/ /pubmed/34203912 http://dx.doi.org/10.3390/s21124121 Text en © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title A Reinforcement Learning Routing Protocol for UAV Aided Public Safety Networks
topic Article