Deep Reinforcement Learning-Based Resource Allocation for Cellular Vehicular Network Mode 3 with Underlay Approach

Bibliographic Details
Main Authors: Fu, Jinjuan, Qin, Xizhong, Huang, Yan, Tang, Li, Liu, Yan
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8914637/
https://www.ncbi.nlm.nih.gov/pubmed/35271024
http://dx.doi.org/10.3390/s22051874
_version_ 1784667767091232768
author Fu, Jinjuan
Qin, Xizhong
Huang, Yan
Tang, Li
Liu, Yan
author_facet Fu, Jinjuan
Qin, Xizhong
Huang, Yan
Tang, Li
Liu, Yan
author_sort Fu, Jinjuan
collection PubMed
description Vehicle-to-vehicle (V2V) communication has attracted increasing attention because it can improve road safety and traffic efficiency. In the underlay approach of mode 3, V2V links must reuse spectrum resources already occupied by vehicle-to-infrastructure (V2I) links, which causes interference to the V2I links. The problem is therefore how to allocate wireless resources flexibly so that the throughput of the V2I links is improved while the low-latency requirements of the V2V links are still met. This paper proposes a V2V resource allocation framework based on deep reinforcement learning, in which the base station (BS) uses a double deep Q network to allocate resources intelligently. In particular, to reduce the signaling overhead incurred when the BS acquires channel state information (CSI) in mode 3, the BS optimizes the resource allocation strategy based on partial CSI within the proposed framework. Simulation results indicate that the proposed scheme meets the low-latency requirements of the V2V links while increasing the capacity of the V2I links compared with other methods. In addition, the proposed partial-CSI design achieves performance comparable to that obtained with complete CSI.
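The description above refers to a double deep Q network (double DQN) that the BS uses to assign spectrum resources to V2V links. As a point of reference only, the sketch below shows the core double-DQN target computation for a per-link agent choosing one V2I resource block to reuse; the network sizes, the observation dimension `obs_dim`, the number of resource blocks `n_rb`, and the reward design are illustrative assumptions, not details taken from the paper.

```python
# Illustrative double-DQN update for resource-block selection (not the
# authors' implementation; obs_dim, n_rb, and the reward design are assumed).
import random
from collections import deque

import torch
import torch.nn as nn

obs_dim = 16   # assumed size of the partial-CSI observation fed to the agent
n_rb = 20      # assumed number of V2I resource blocks a V2V link may reuse
gamma = 0.99   # discount factor

def make_q_net() -> nn.Sequential:
    """Small fully connected Q-network mapping an observation to per-RB values."""
    return nn.Sequential(
        nn.Linear(obs_dim, 128), nn.ReLU(),
        nn.Linear(128, 128), nn.ReLU(),
        nn.Linear(128, n_rb),
    )

online_net, target_net = make_q_net(), make_q_net()
target_net.load_state_dict(online_net.state_dict())
optimizer = torch.optim.Adam(online_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)   # experience replay of (s, a, r, s', done) tuples

def double_dqn_step(batch_size: int = 32) -> None:
    """One gradient step: the online net selects the next action, the target net evaluates it."""
    if len(replay) < batch_size:
        return
    s, a, r, s2, d = zip(*random.sample(replay, batch_size))
    s, s2 = torch.stack(s), torch.stack(s2)
    a = torch.tensor(a, dtype=torch.int64)
    r = torch.tensor(r, dtype=torch.float32)
    d = torch.tensor(d, dtype=torch.float32)

    with torch.no_grad():
        # Double DQN: arg-max action from the online net, value from the target net.
        best_a = online_net(s2).argmax(dim=1, keepdim=True)
        q_next = target_net(s2).gather(1, best_a).squeeze(1)
        target = r + gamma * (1.0 - d) * q_next

    q = online_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    loss = nn.functional.smooth_l1_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In a scheme like the one described, the reward would typically combine the V2I sum rate with a term tied to whether each V2V payload is delivered within its latency budget, and the target network would be synchronized with the online network every few hundred steps; both are standard double-DQN practices rather than details confirmed by this record.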
format Online
Article
Text
id pubmed-8914637
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-8914637 2022-03-12 Deep Reinforcement Learning-Based Resource Allocation for Cellular Vehicular Network Mode 3 with Underlay Approach. Fu, Jinjuan; Qin, Xizhong; Huang, Yan; Tang, Li; Liu, Yan. Sensors (Basel), Article. Vehicle-to-vehicle (V2V) communication has attracted increasing attention because it can improve road safety and traffic efficiency. In the underlay approach of mode 3, V2V links must reuse spectrum resources already occupied by vehicle-to-infrastructure (V2I) links, which causes interference to the V2I links. The problem is therefore how to allocate wireless resources flexibly so that the throughput of the V2I links is improved while the low-latency requirements of the V2V links are still met. This paper proposes a V2V resource allocation framework based on deep reinforcement learning, in which the base station (BS) uses a double deep Q network to allocate resources intelligently. In particular, to reduce the signaling overhead incurred when the BS acquires channel state information (CSI) in mode 3, the BS optimizes the resource allocation strategy based on partial CSI within the proposed framework. Simulation results indicate that the proposed scheme meets the low-latency requirements of the V2V links while increasing the capacity of the V2I links compared with other methods. In addition, the proposed partial-CSI design achieves performance comparable to that obtained with complete CSI. MDPI 2022-02-27 /pmc/articles/PMC8914637/ /pubmed/35271024 http://dx.doi.org/10.3390/s22051874 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Fu, Jinjuan
Qin, Xizhong
Huang, Yan
Tang, Li
Liu, Yan
Deep Reinforcement Learning-Based Resource Allocation for Cellular Vehicular Network Mode 3 with Underlay Approach
title Deep Reinforcement Learning-Based Resource Allocation for Cellular Vehicular Network Mode 3 with Underlay Approach
title_full Deep Reinforcement Learning-Based Resource Allocation for Cellular Vehicular Network Mode 3 with Underlay Approach
title_fullStr Deep Reinforcement Learning-Based Resource Allocation for Cellular Vehicular Network Mode 3 with Underlay Approach
title_full_unstemmed Deep Reinforcement Learning-Based Resource Allocation for Cellular Vehicular Network Mode 3 with Underlay Approach
title_short Deep Reinforcement Learning-Based Resource Allocation for Cellular Vehicular Network Mode 3 with Underlay Approach
title_sort deep reinforcement learning-based resource allocation for cellular vehicular network mode 3 with underlay approach
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8914637/
https://www.ncbi.nlm.nih.gov/pubmed/35271024
http://dx.doi.org/10.3390/s22051874
work_keys_str_mv AT fujinjuan deepreinforcementlearningbasedresourceallocationforcellularvehicularnetworkmode3withunderlayapproach
AT qinxizhong deepreinforcementlearningbasedresourceallocationforcellularvehicularnetworkmode3withunderlayapproach
AT huangyan deepreinforcementlearningbasedresourceallocationforcellularvehicularnetworkmode3withunderlayapproach
AT tangli deepreinforcementlearningbasedresourceallocationforcellularvehicularnetworkmode3withunderlayapproach
AT liuyan deepreinforcementlearningbasedresourceallocationforcellularvehicularnetworkmode3withunderlayapproach