
Federated Deep Reinforcement Learning Based Task Offloading with Power Control in Vehicular Edge Computing

Vehicular edge computing (VEC) is a promising technology for supporting computation-intensive vehicular applications with low latency at the network edge. Vehicles offload their tasks to VEC servers (VECSs) to improve the quality of service (QoS) of these applications. However, the high density of vehicles and VECSs and the mobility of vehicles increase channel interference and deteriorate channel conditions, resulting in increased power consumption and latency. Therefore, we propose a task offloading method with power control that accounts for dynamic channel interference and channel conditions in a vehicular environment. The objective is to maximize the throughput of the VEC system under the power constraints of each vehicle. We leverage deep reinforcement learning (DRL) to achieve strong performance in complex environments with high-dimensional inputs. However, most conventional methods adopt a multi-agent DRL approach in which each agent makes decisions using only local information, which can lead to poor performance, whereas single-agent DRL approaches require excessive data exchange because all data must be concentrated at a single agent. To address these challenges, we adopt a federated deep reinforcement learning method that combines the centralized and distributed approaches within the deep deterministic policy gradient (DDPG) framework. Experimental results demonstrate the effectiveness of the proposed method in terms of the throughput and queueing delay of vehicles in dynamic vehicular networks.
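To make the abstract's approach concrete, below is a minimal, hypothetical sketch of the federated step that combines centralized aggregation with distributed, per-vehicle DDPG training: each vehicle updates its own actor/critic weights locally and a central aggregator (e.g., a VECS) averages them FedAvg-style. Everything in the sketch (names, dimensions, the toy local update) is an assumption made for illustration, not the authors' implementation.

    # Illustrative sketch (not the paper's code): FedAvg-style aggregation of
    # per-vehicle DDPG actor/critic parameters.
    # All names (init_params, local_update, fed_avg, NUM_VEHICLES) are hypothetical.
    import numpy as np

    NUM_VEHICLES = 4          # assumed number of vehicle agents
    OBS_DIM, ACT_DIM = 6, 2   # e.g., channel/queue features -> offload ratio, tx power

    def init_params(rng):
        # One weight matrix each for a (toy) linear actor and critic.
        return {"actor": rng.normal(0, 0.1, (OBS_DIM, ACT_DIM)),
                "critic": rng.normal(0, 0.1, (OBS_DIM + ACT_DIM, 1))}

    def local_update(params, rng):
        # Stand-in for several local DDPG updates on the vehicle's own transitions;
        # here the weights are merely perturbed to keep the sketch self-contained.
        return {k: v + 0.01 * rng.normal(size=v.shape) for k, v in params.items()}

    def fed_avg(param_list):
        # Central aggregator averages the vehicles' model weights element-wise.
        return {k: np.mean([p[k] for p in param_list], axis=0) for k in param_list[0]}

    rng = np.random.default_rng(0)
    global_params = init_params(rng)
    for _ in range(3):  # three federation rounds
        local_params = [local_update(global_params, rng) for _ in range(NUM_VEHICLES)]
        global_params = fed_avg(local_params)  # broadcast back to vehicles next round
    print({k: v.shape for k, v in global_params.items()})

The sketch shows only the weight-aggregation loop; the per-vehicle DDPG updates, the offloading and power-control action space, and the throughput-based reward are specified in the article itself.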


Bibliographic Details
Main Authors: Moon, Sungwon, Lim, Yujin
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9782689/
https://www.ncbi.nlm.nih.gov/pubmed/36559963
http://dx.doi.org/10.3390/s22249595
_version_ 1784857403658862592
author Moon, Sungwon
Lim, Yujin
author_facet Moon, Sungwon
Lim, Yujin
author_sort Moon, Sungwon
collection PubMed
description Vehicular edge computing (VEC) is a promising technology for supporting computation-intensive vehicular applications with low latency at the network edge. Vehicles offload their tasks to VEC servers (VECSs) to improve the quality of service (QoS) of these applications. However, the high density of vehicles and VECSs and the mobility of vehicles increase channel interference and deteriorate channel conditions, resulting in increased power consumption and latency. Therefore, we propose a task offloading method with power control that accounts for dynamic channel interference and channel conditions in a vehicular environment. The objective is to maximize the throughput of the VEC system under the power constraints of each vehicle. We leverage deep reinforcement learning (DRL) to achieve strong performance in complex environments with high-dimensional inputs. However, most conventional methods adopt a multi-agent DRL approach in which each agent makes decisions using only local information, which can lead to poor performance, whereas single-agent DRL approaches require excessive data exchange because all data must be concentrated at a single agent. To address these challenges, we adopt a federated deep reinforcement learning method that combines the centralized and distributed approaches within the deep deterministic policy gradient (DDPG) framework. Experimental results demonstrate the effectiveness of the proposed method in terms of the throughput and queueing delay of vehicles in dynamic vehicular networks.
format Online
Article
Text
id pubmed-9782689
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-9782689 2022-12-24 Federated Deep Reinforcement Learning Based Task Offloading with Power Control in Vehicular Edge Computing Moon, Sungwon Lim, Yujin Sensors (Basel) Article Vehicular edge computing (VEC) is a promising technology for supporting computation-intensive vehicular applications with low latency at the network edge. Vehicles offload their tasks to VEC servers (VECSs) to improve the quality of service (QoS) of these applications. However, the high density of vehicles and VECSs and the mobility of vehicles increase channel interference and deteriorate channel conditions, resulting in increased power consumption and latency. Therefore, we propose a task offloading method with power control that accounts for dynamic channel interference and channel conditions in a vehicular environment. The objective is to maximize the throughput of the VEC system under the power constraints of each vehicle. We leverage deep reinforcement learning (DRL) to achieve strong performance in complex environments with high-dimensional inputs. However, most conventional methods adopt a multi-agent DRL approach in which each agent makes decisions using only local information, which can lead to poor performance, whereas single-agent DRL approaches require excessive data exchange because all data must be concentrated at a single agent. To address these challenges, we adopt a federated deep reinforcement learning method that combines the centralized and distributed approaches within the deep deterministic policy gradient (DDPG) framework. Experimental results demonstrate the effectiveness of the proposed method in terms of the throughput and queueing delay of vehicles in dynamic vehicular networks. MDPI 2022-12-07 /pmc/articles/PMC9782689/ /pubmed/36559963 http://dx.doi.org/10.3390/s22249595 Text en © 2022 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Moon, Sungwon
Lim, Yujin
Federated Deep Reinforcement Learning Based Task Offloading with Power Control in Vehicular Edge Computing
title Federated Deep Reinforcement Learning Based Task Offloading with Power Control in Vehicular Edge Computing
title_full Federated Deep Reinforcement Learning Based Task Offloading with Power Control in Vehicular Edge Computing
title_fullStr Federated Deep Reinforcement Learning Based Task Offloading with Power Control in Vehicular Edge Computing
title_full_unstemmed Federated Deep Reinforcement Learning Based Task Offloading with Power Control in Vehicular Edge Computing
title_short Federated Deep Reinforcement Learning Based Task Offloading with Power Control in Vehicular Edge Computing
title_sort federated deep reinforcement learning based task offloading with power control in vehicular edge computing
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9782689/
https://www.ncbi.nlm.nih.gov/pubmed/36559963
http://dx.doi.org/10.3390/s22249595
work_keys_str_mv AT moonsungwon federateddeepreinforcementlearningbasedtaskoffloadingwithpowercontrolinvehicularedgecomputing
AT limyujin federateddeepreinforcementlearningbasedtaskoffloadingwithpowercontrolinvehicularedgecomputing