
Mobility-Included DNN Partition Offloading from Mobile Devices to Edge Clouds

The latest results in Deep Neural Networks (DNNs) have greatly improved the accuracy and performance of a variety of intelligent applications. However, running such computation-intensive DNN-based applications on resource-constrained mobile devices leads to long latency and high energy consumption. The traditional approach is to run DNNs in the central cloud, but this requires significant amounts of data to be transferred to the cloud over the wireless network and also results in long latency. To solve this problem, offloading part of the DNN computation to edge clouds has been proposed, realizing collaborative execution between mobile devices and edge clouds. In addition, the mobility of mobile devices can easily cause computation offloading to fail. In this paper, we develop a mobility-included DNN partition offloading algorithm (MDPO) to adapt to user mobility. The objective of MDPO is to minimize the total latency of completing a DNN job while the mobile user is moving. The MDPO algorithm is suitable for DNNs with both chain and graph topologies. We evaluate the performance of our proposed MDPO against local-only and edge-only execution; experiments show that MDPO significantly reduces total latency, improves DNN performance, and adjusts well to different network conditions.
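To illustrate the idea of partition offloading described in the abstract, the following is a minimal sketch (not the authors' MDPO algorithm) of choosing a single cut point in a chain-topology DNN: the first k layers run on the mobile device, the intermediate output is uploaded, and the remaining layers run on the edge cloud. All function names and latency numbers are illustrative assumptions, not taken from the paper.

```python
def best_partition(local_ms, edge_ms, upload_ms):
    """Pick the cut point k that minimizes end-to-end latency for a chain DNN.

    local_ms[i]  -- latency of layer i on the mobile device (ms)
    edge_ms[i]   -- latency of layer i on the edge cloud (ms)
    upload_ms[k] -- time to upload the intermediate at cut point k (ms);
                    length n + 1, with upload_ms[0] the input upload time
                    (edge-only) and upload_ms[n] = 0 (local-only, no upload)

    Returns (k, total_ms): layers [0, k) run locally, [k, n) on the edge.
    """
    n = len(local_ms)
    best = None
    for k in range(n + 1):
        total = sum(local_ms[:k]) + upload_ms[k] + sum(edge_ms[k:])
        if best is None or total < best[1]:
            best = (k, total)
    return best

# Illustrative numbers: a bulky raw input (expensive to upload) but a small
# intermediate after layer 1 makes a mid-network cut the cheapest option.
print(best_partition([10, 10, 10], [2, 2, 2], [50, 5, 5, 0]))
```

With these made-up numbers, edge-only execution (k = 0) costs 56 ms and local-only (k = 3) costs 30 ms, while cutting after layer 1 costs only 19 ms, mirroring the abstract's claim that partition offloading can beat both local-only and edge-only execution.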


Bibliographic Details
Main Authors: Tian, Xianzhong; Zhu, Juan; Xu, Ting; Li, Yanjun
Format: Online Article Text
Language: English
Published: MDPI, 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7795226/
https://www.ncbi.nlm.nih.gov/pubmed/33401409
http://dx.doi.org/10.3390/s21010229
Journal: Sensors (Basel)
Published online: 2021-01-01
PMC ID: PMC7795226; PubMed ID: 33401409
License: © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).