Multi-Model Running Latency Optimization in an Edge Computing Paradigm
Recent advances in both lightweight deep learning algorithms and edge computing increasingly enable multiple model inference tasks to be conducted concurrently on resource-constrained edge devices, allowing one goal to be achieved collaboratively rather than pursuing high quality in each standalone task. However, the high overall running latency of multi-model inference negatively affects real-time applications. To combat this, the algorithms should be optimized to minimize the latency of multi-model deployment without compromising safety in critical situations. This work focuses on a real-time task scheduling strategy for multi-model deployment and investigates model inference using the Open Neural Network Exchange (ONNX) runtime engine. An application deployment strategy is then proposed based on container technology, with inference tasks scheduled to different containers according to the scheduling strategy. Experimental results show that the proposed solution significantly reduces the overall running latency in real-time applications.
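As a rough illustration of the approach the abstract describes, the sketch below loads several ONNX models with the ONNX Runtime engine and schedules their inference tasks concurrently, so the overall running latency approaches that of the slowest single model rather than the sum over all models. This is a minimal sketch, not the authors' implementation: the model files (`detector.onnx`, `segmenter.onnx`), the float32 dummy inputs, and the thread pool standing in for the paper's per-container scheduling are all assumptions made for illustration.

```python
# Minimal sketch of concurrent multi-model inference with ONNX Runtime.
# Model paths and input shapes are hypothetical placeholders.
import time
from concurrent.futures import ThreadPoolExecutor

import numpy as np
import onnxruntime as ort

MODEL_PATHS = ["detector.onnx", "segmenter.onnx"]  # hypothetical models

# One inference session per model; in the paper each task runs in its own
# container, which this sketch approximates with a thread pool.
sessions = [
    ort.InferenceSession(path, providers=["CPUExecutionProvider"])
    for path in MODEL_PATHS
]

def run_once(session: ort.InferenceSession) -> float:
    """Run one inference on a dummy input and return its latency in seconds."""
    inp = session.get_inputs()[0]
    # Replace dynamic dimensions (strings/None) with 1 to build a dummy tensor;
    # assumes the model takes a single float32 input.
    shape = [d if isinstance(d, int) else 1 for d in inp.shape]
    x = np.random.rand(*shape).astype(np.float32)
    start = time.perf_counter()
    session.run(None, {inp.name: x})
    return time.perf_counter() - start

# Scheduling the inference tasks concurrently rather than serially is what
# reduces the overall (end-to-end) running latency.
with ThreadPoolExecutor(max_workers=len(sessions)) as pool:
    latencies = list(pool.map(run_once, sessions))

print("per-model latency (s):", latencies)
print("overall latency ~ max, not sum:", max(latencies))
```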
Main Authors: | Li, Peisong; Wang, Xinheng; Huang, Kaizhu; Huang, Yi; Li, Shancang; Iqbal, Muddesar
---|---|
Format: | Online Article Text
Language: | English
Published: | MDPI, 2022
Subjects: | Article
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9415810/ https://www.ncbi.nlm.nih.gov/pubmed/36015856 http://dx.doi.org/10.3390/s22166097
author | Li, Peisong; Wang, Xinheng; Huang, Kaizhu; Huang, Yi; Li, Shancang; Iqbal, Muddesar
---|---|
collection | PubMed |
description | Recent advances in both lightweight deep learning algorithms and edge computing increasingly enable multiple model inference tasks to be conducted concurrently on resource-constrained edge devices, allowing one goal to be achieved collaboratively rather than pursuing high quality in each standalone task. However, the high overall running latency of multi-model inference negatively affects real-time applications. To combat this, the algorithms should be optimized to minimize the latency of multi-model deployment without compromising safety in critical situations. This work focuses on a real-time task scheduling strategy for multi-model deployment and investigates model inference using the Open Neural Network Exchange (ONNX) runtime engine. An application deployment strategy is then proposed based on container technology, with inference tasks scheduled to different containers according to the scheduling strategy. Experimental results show that the proposed solution significantly reduces the overall running latency in real-time applications.
format | Online Article Text |
id | pubmed-9415810 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-9415810 2022-08-27. Li, Peisong; Wang, Xinheng; Huang, Kaizhu; Huang, Yi; Li, Shancang; Iqbal, Muddesar. Multi-Model Running Latency Optimization in an Edge Computing Paradigm. Sensors (Basel), Article. MDPI, 2022-08-15. /pmc/articles/PMC9415810/ /pubmed/36015856 http://dx.doi.org/10.3390/s22166097. Text en. © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title | Multi-Model Running Latency Optimization in an Edge Computing Paradigm |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9415810/ https://www.ncbi.nlm.nih.gov/pubmed/36015856 http://dx.doi.org/10.3390/s22166097 |