A Parallelizable Task Offloading Model with Trajectory-Prediction for Mobile Edge Networks
As an emerging computing model, edge computing greatly expands the collaboration capabilities of servers. It makes full use of the available resources around users to quickly complete task requests from terminal devices. Task offloading is a common solution for improving the ef...
Main Authors: | Han, Pu; Han, Lin; Yuan, Bo; Pan, Jeng-Shyang; Shang, Jiandong |
Format: | Online Article Text |
Language: | English |
Published: | MDPI 2022 |
Subjects: | |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9602031/ https://www.ncbi.nlm.nih.gov/pubmed/37420485 http://dx.doi.org/10.3390/e24101464 |
_version_ | 1784817212286042112 |
---|---|
author | Han, Pu Han, Lin Yuan, Bo Pan, Jeng-Shyang Shang, Jiandong |
author_facet | Han, Pu Han, Lin Yuan, Bo Pan, Jeng-Shyang Shang, Jiandong |
author_sort | Han, Pu |
collection | PubMed |
description | As an emerging computing model, edge computing greatly expands the collaboration capabilities of servers. It makes full use of the available resources around users to quickly complete task requests from terminal devices. Task offloading is a common solution for improving the efficiency of task execution on edge networks. However, the peculiarities of edge networks, especially the random access of mobile devices, bring unpredictable challenges to task offloading in a mobile edge network. In this paper, we propose a trajectory prediction model for moving targets in edge networks that does not rely on users' historical paths, which represent their habitual movement trajectories. We also put forward a mobility-aware parallelizable task offloading strategy based on the trajectory prediction model and parallel mechanisms of tasks. In our experiments, we compared the hit ratio of the prediction model, the network bandwidth, and the task execution efficiency of the edge networks using the EUA data set. Experimental results show that our model performs much better than the random strategy, the parallel strategy without position prediction, and the non-parallel strategy based on position prediction. The task offloading hit rate is closely related to the user's moving speed: when the speed is less than 12.96 m/s, the hit rate can reach more than 80%. Meanwhile, we also find that the bandwidth occupancy is significantly related to the degree of task parallelism and the number of services running on servers in the network. As the number of parallel activities grows, the parallel strategy can boost network bandwidth utilization by more than eight times compared to a non-parallel policy. (An illustrative sketch of this prediction-then-offload idea appears after the record fields below.) |
format | Online Article Text |
id | pubmed-9602031 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-9602031 2022-10-27 A Parallelizable Task Offloading Model with Trajectory-Prediction for Mobile Edge Networks Han, Pu Han, Lin Yuan, Bo Pan, Jeng-Shyang Shang, Jiandong Entropy (Basel) Article As an emerging computing model, edge computing greatly expands the collaboration capabilities of servers. It makes full use of the available resources around users to quickly complete task requests from terminal devices. Task offloading is a common solution for improving the efficiency of task execution on edge networks. However, the peculiarities of edge networks, especially the random access of mobile devices, bring unpredictable challenges to task offloading in a mobile edge network. In this paper, we propose a trajectory prediction model for moving targets in edge networks that does not rely on users' historical paths, which represent their habitual movement trajectories. We also put forward a mobility-aware parallelizable task offloading strategy based on the trajectory prediction model and parallel mechanisms of tasks. In our experiments, we compared the hit ratio of the prediction model, the network bandwidth, and the task execution efficiency of the edge networks using the EUA data set. Experimental results show that our model performs much better than the random strategy, the parallel strategy without position prediction, and the non-parallel strategy based on position prediction. The task offloading hit rate is closely related to the user's moving speed: when the speed is less than 12.96 m/s, the hit rate can reach more than 80%. Meanwhile, we also find that the bandwidth occupancy is significantly related to the degree of task parallelism and the number of services running on servers in the network. As the number of parallel activities grows, the parallel strategy can boost network bandwidth utilization by more than eight times compared to a non-parallel policy. MDPI 2022-10-14 /pmc/articles/PMC9602031/ /pubmed/37420485 http://dx.doi.org/10.3390/e24101464 Text en © 2022 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Han, Pu Han, Lin Yuan, Bo Pan, Jeng-Shyang Shang, Jiandong A Parallelizable Task Offloading Model with Trajectory-Prediction for Mobile Edge Networks |
title | A Parallelizable Task Offloading Model with Trajectory-Prediction for Mobile Edge Networks |
title_full | A Parallelizable Task Offloading Model with Trajectory-Prediction for Mobile Edge Networks |
title_fullStr | A Parallelizable Task Offloading Model with Trajectory-Prediction for Mobile Edge Networks |
title_full_unstemmed | A Parallelizable Task Offloading Model with Trajectory-Prediction for Mobile Edge Networks |
title_short | A Parallelizable Task Offloading Model with Trajectory-Prediction for Mobile Edge Networks |
title_sort | parallelizable task offloading model with trajectory-prediction for mobile edge networks |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9602031/ https://www.ncbi.nlm.nih.gov/pubmed/37420485 http://dx.doi.org/10.3390/e24101464 |
work_keys_str_mv | AT hanpu aparallelizabletaskoffloadingmodelwithtrajectorypredictionformobileedgenetworks AT hanlin aparallelizabletaskoffloadingmodelwithtrajectorypredictionformobileedgenetworks AT yuanbo aparallelizabletaskoffloadingmodelwithtrajectorypredictionformobileedgenetworks AT panjengshyang aparallelizabletaskoffloadingmodelwithtrajectorypredictionformobileedgenetworks AT shangjiandong aparallelizabletaskoffloadingmodelwithtrajectorypredictionformobileedgenetworks AT hanpu parallelizabletaskoffloadingmodelwithtrajectorypredictionformobileedgenetworks AT hanlin parallelizabletaskoffloadingmodelwithtrajectorypredictionformobileedgenetworks AT yuanbo parallelizabletaskoffloadingmodelwithtrajectorypredictionformobileedgenetworks AT panjengshyang parallelizabletaskoffloadingmodelwithtrajectorypredictionformobileedgenetworks AT shangjiandong parallelizabletaskoffloadingmodelwithtrajectorypredictionformobileedgenetworks |
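The sketch referenced in the abstract above is a minimal, hypothetical Python illustration of the general idea described there: predict where a moving user will be next, pick the edge server nearest that predicted position, and greedily place parallelizable sub-tasks there. The constant-velocity predictor, the function names (predict_next_position, nearest_server, offload), and the capacity-based greedy rule are all assumptions made for illustration only; they are not the authors' actual trajectory prediction model or offloading algorithm.

```python
import math

def predict_next_position(positions, dt):
    """Extrapolate the user's next position from the last two observed
    positions using a constant-velocity assumption (a stand-in for the
    paper's trajectory prediction model, which works without historical paths)."""
    (x1, y1), (x2, y2) = positions[-2], positions[-1]
    vx, vy = (x2 - x1) / dt, (y2 - y1) / dt
    return (x2 + vx * dt, y2 + vy * dt)

def nearest_server(position, servers):
    """Return the name of the edge server whose location is closest to the
    predicted user position; servers maps name -> (x, y)."""
    return min(servers, key=lambda name: math.dist(position, servers[name]))

def offload(subtasks, preferred, capacity):
    """Greedily assign parallelizable sub-tasks, preferring the server the
    user is predicted to reach so results are local when the user arrives;
    overflow goes to whichever server has the most remaining capacity."""
    plan = []
    for task in subtasks:
        target = preferred if capacity[preferred] >= task["load"] else \
                 max(capacity, key=capacity.get)
        capacity[target] -= task["load"]
        plan.append((task["name"], target))
    return plan

# Hypothetical example: two edge servers and a user moving east at 10 m/s.
servers = {"s1": (0.0, 0.0), "s2": (100.0, 0.0)}
pos = predict_next_position([(10.0, 0.0), (20.0, 0.0)], dt=1.0)   # -> (30.0, 0.0)
plan = offload([{"name": "t1", "load": 2}, {"name": "t2", "load": 3}],
               nearest_server(pos, servers), {"s1": 4, "s2": 10})
print(plan)   # [('t1', 's1'), ('t2', 's2')]
```

The paper's strategy also accounts for the degree of task parallelism and network bandwidth; this sketch only captures the predict-then-place control flow under the stated assumptions.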