
DRL-OS: A Deep Reinforcement Learning-Based Offloading Scheduler in Mobile Edge Computing

Hardware bottlenecks can throttle smart device (SD) performance when executing computation-intensive and delay-sensitive applications. Task offloading can therefore be used to transfer computation-intensive tasks to an external server or processor in Mobile Edge Computing. However, an offloaded task can become useless when its processing is significantly delayed or its deadline expires. Because task processing via offloading is uncertain, it is challenging for each SD to decide whether to compute a task locally, offload it, or drop it. This study proposes a deep-reinforcement-learning-based offloading scheduler (DRL-OS) that considers the energy balance when selecting how to handle a task: local computing, offloading, or dropping. The proposed DRL-OS is based on the double dueling deep Q-network (D3QN) and selects an appropriate action by learning from the task size, deadline, queue state, and residual battery charge. The average battery level, drop rate, and average latency of the DRL-OS were measured in simulations to analyze the scheduler performance. The DRL-OS exhibits a lower average battery level (up to 54%) and a lower drop rate (up to 42.5%) than existing schemes. The scheduler also achieves a lower average latency of 0.01 to over 0.25 s, despite subtle case-wise differences in the average latency.
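As a rough illustration of the approach the abstract describes, the sketch below implements a dueling Q-network with a double-DQN target over a four-feature state (task size, time to deadline, queue length, residual battery) and the three scheduling actions (local, offload, drop). This is a minimal PyTorch sketch under those assumptions, not the authors' implementation; all names, layer sizes, and hyperparameters are hypothetical.

```python
# Hypothetical sketch of a D3QN-style decision module for the scheduler
# described in the abstract; the 4-feature state and all sizes are
# assumptions, not taken from the paper.
import random

import torch
import torch.nn as nn

ACTIONS = ["local", "offload", "drop"]  # the three scheduling decisions


class DuelingQNet(nn.Module):
    """Dueling head: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""

    def __init__(self, state_dim: int = 4, n_actions: int = len(ACTIONS)):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU())
        self.value = nn.Linear(64, 1)               # state value V(s)
        self.advantage = nn.Linear(64, n_actions)   # advantages A(s, a)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.trunk(state)
        v, a = self.value(h), self.advantage(h)
        return v + a - a.mean(dim=-1, keepdim=True)


def select_action(net: DuelingQNet, state: torch.Tensor, eps: float = 0.05) -> str:
    """Epsilon-greedy choice among local computing, offloading, and dropping."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    with torch.no_grad():
        return ACTIONS[net(state).argmax(dim=-1).item()]


def double_dqn_target(online: DuelingQNet, target: DuelingQNet,
                      reward: torch.Tensor, next_state: torch.Tensor,
                      gamma: float = 0.99) -> torch.Tensor:
    """'Double' part: the online net picks the next action, the target net evaluates it."""
    with torch.no_grad():
        best = online(next_state).argmax(dim=-1, keepdim=True)
        return reward + gamma * target(next_state).gather(-1, best).squeeze(-1)


# Example state: normalized task size, time to deadline, queue length, battery.
state = torch.tensor([[0.6, 0.2, 0.4, 0.8]])
net = DuelingQNet()
print(select_action(net, state))
```

The dueling decomposition separates how good a state is (V) from how much each action matters (A), which is the "dueling" half of D3QN; selecting the next action with the online network while evaluating it with the target network is the "double" half, which reduces Q-value overestimation.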


Bibliographic Details
Main Authors: Lim, Ducsun; Lee, Wooyeob; Kim, Won-Tae; Joe, Inwhee
Format: Online Article (Text)
Language: English
Published: Sensors (Basel), MDPI, 26 November 2022
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9740101/
https://www.ncbi.nlm.nih.gov/pubmed/36501914
http://dx.doi.org/10.3390/s22239212
License: © 2022 by the authors. Open access under the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).