A Joint Model Provisioning and Request Dispatch Solution for Low-Latency Inference Services on Edge

With the advancement of machine learning, a growing number of mobile users rely on machine learning inference to make time-sensitive and safety-critical decisions. Therefore, the demand for high-quality and low-latency inference services at the network edge has become the key to modern intelligen...

Bibliographic Details

Main Authors: Prasad, Anish; Mofjeld, Carl; Peng, Yang
Format: Online Article Text
Language: English
Published: MDPI 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8513104/
https://www.ncbi.nlm.nih.gov/pubmed/34640914
http://dx.doi.org/10.3390/s21196594