
MobilePrune: Neural Network Compression via ℓ(0) Sparse Group Lasso on the Mobile System

It is hard to directly deploy deep learning models on today’s smartphones due to the substantial computational costs introduced by millions of parameters. To compress the model, we develop an ℓ(0)-based sparse group lasso model called MobilePrune, which can generate extremely compact neural network models for both desktop and mobile platforms. We adopt the group lasso penalty to enforce sparsity at the group level, which benefits General Matrix Multiply (GEMM), and develop the first algorithm that optimizes the ℓ(0) norm exactly with a global convergence guarantee in the deep learning context. MobilePrune also allows complex group structures (e.g., trees and overlapping groups) to be applied to the group penalty to suit DNN models with more intricate architectures. Empirically, we observe substantial reductions in compression ratio and computational cost for various popular deep learning models on multiple benchmark datasets compared to state-of-the-art methods. More importantly, the compressed models are deployed on the Android system to confirm that our approach achieves lower response delay and battery consumption on mobile phones.
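
As a reading aid (not taken from the paper itself), the penalized objective the abstract describes can be sketched in a standard sparse group lasso form; the exact formulation, weights, and group definitions in the paper may differ:

    % Hypothetical sketch; \lambda_1, \lambda_2, and the group set \mathcal{G} are assumed notation.
    \min_{W}\; \mathcal{L}(W) \;+\; \lambda_1 \,\|W\|_0 \;+\; \lambda_2 \sum_{g \in \mathcal{G}} \sqrt{|g|}\, \|W_g\|_2

Here \mathcal{L}(W) is the training loss, the ℓ(0) term counts (and thus prunes) individual nonzero weights, and the ℓ2 group term drives whole groups W_g (e.g., rows, columns, or filters) to zero, which is what lets GEMM skip entire blocks of the weight matrix.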

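For intuition only, the sketch below shows one proximal-gradient-style pruning step built from the two textbook proximal operators the objective above suggests: hard thresholding for the ℓ(0) term and row-wise block soft-thresholding for the group term, applied sequentially as an approximation. The paper derives an exact algorithm for the combined penalty; this composition, and the names prox_l0_group, lam1, and lam2, are illustrative assumptions rather than the authors' method.

    import numpy as np

    def prox_l0_group(V, lam1, lam2, step):
        # Hypothetical sketch: sequential, approximate prox for
        # lam1*||W||_0 + lam2*sum_g ||W_g||_2 (MobilePrune's exact
        # combined operator differs).
        # l0 prox = hard threshold: keep an entry only if keeping it
        # costs less than zeroing it, i.e., v^2 > 2*step*lam1.
        W = np.where(V**2 > 2.0 * step * lam1, V, 0.0)
        # Group lasso prox = block soft-threshold, one row per group:
        # shrink each row's l2 norm by step*lam2, zeroing small rows.
        norms = np.linalg.norm(W, axis=1, keepdims=True)
        scale = np.maximum(1.0 - step * lam2 / np.maximum(norms, 1e-12), 0.0)
        return W * scale

    # Toy usage: one proximal-gradient iteration on a random weight matrix.
    rng = np.random.default_rng(0)
    W = rng.normal(size=(4, 6))
    grad = rng.normal(size=(4, 6))  # stand-in for a real loss gradient
    step = 0.1
    W = prox_l0_group(W - step * grad, lam1=0.05, lam2=0.5, step=step)
    print(W)  # many entries, and possibly whole rows, are now exactly zero

Zeroing whole rows (groups) is what makes the resulting matrices friendly to GEMM: pruned rows can be dropped from the multiplication entirely rather than merely stored as scattered zeros.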

Bibliographic Details
Main Authors: Shao, Yubo, Zhao, Kaikai, Cao, Zhiwen, Peng, Zhehao, Peng, Xingang, Li, Pan, Wang, Yijie, Ma, Jianzhu
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9185446/
https://www.ncbi.nlm.nih.gov/pubmed/35684708
http://dx.doi.org/10.3390/s22114081
author Shao, Yubo
Zhao, Kaikai
Cao, Zhiwen
Peng, Zhehao
Peng, Xingang
Li, Pan
Wang, Yijie
Ma, Jianzhu
author_facet Shao, Yubo
Zhao, Kaikai
Cao, Zhiwen
Peng, Zhehao
Peng, Xingang
Li, Pan
Wang, Yijie
Ma, Jianzhu
author_sort Shao, Yubo
collection PubMed
description It is hard to directly deploy deep learning models on today’s smartphones due to the substantial computational costs introduced by millions of parameters. To compress the model, we develop an ℓ(0)-based sparse group lasso model called MobilePrune, which can generate extremely compact neural network models for both desktop and mobile platforms. We adopt the group lasso penalty to enforce sparsity at the group level, which benefits General Matrix Multiply (GEMM), and develop the first algorithm that optimizes the ℓ(0) norm exactly with a global convergence guarantee in the deep learning context. MobilePrune also allows complex group structures (e.g., trees and overlapping groups) to be applied to the group penalty to suit DNN models with more intricate architectures. Empirically, we observe substantial reductions in compression ratio and computational cost for various popular deep learning models on multiple benchmark datasets compared to state-of-the-art methods. More importantly, the compressed models are deployed on the Android system to confirm that our approach achieves lower response delay and battery consumption on mobile phones.
format Online
Article
Text
id pubmed-9185446
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-9185446 2022-06-11 MobilePrune: Neural Network Compression via ℓ(0) Sparse Group Lasso on the Mobile System Shao, Yubo Zhao, Kaikai Cao, Zhiwen Peng, Zhehao Peng, Xingang Li, Pan Wang, Yijie Ma, Jianzhu Sensors (Basel) Article It is hard to directly deploy deep learning models on today’s smartphones due to the substantial computational costs introduced by millions of parameters. To compress the model, we develop an ℓ(0)-based sparse group lasso model called MobilePrune, which can generate extremely compact neural network models for both desktop and mobile platforms. We adopt the group lasso penalty to enforce sparsity at the group level, which benefits General Matrix Multiply (GEMM), and develop the first algorithm that optimizes the ℓ(0) norm exactly with a global convergence guarantee in the deep learning context. MobilePrune also allows complex group structures (e.g., trees and overlapping groups) to be applied to the group penalty to suit DNN models with more intricate architectures. Empirically, we observe substantial reductions in compression ratio and computational cost for various popular deep learning models on multiple benchmark datasets compared to state-of-the-art methods. More importantly, the compressed models are deployed on the Android system to confirm that our approach achieves lower response delay and battery consumption on mobile phones. MDPI 2022-05-27 /pmc/articles/PMC9185446/ /pubmed/35684708 http://dx.doi.org/10.3390/s22114081 Text en © 2022 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Shao, Yubo
Zhao, Kaikai
Cao, Zhiwen
Peng, Zhehao
Peng, Xingang
Li, Pan
Wang, Yijie
Ma, Jianzhu
MobilePrune: Neural Network Compression via ℓ(0) Sparse Group Lasso on the Mobile System
title MobilePrune: Neural Network Compression via ℓ(0) Sparse Group Lasso on the Mobile System
title_full MobilePrune: Neural Network Compression via ℓ(0) Sparse Group Lasso on the Mobile System
title_fullStr MobilePrune: Neural Network Compression via ℓ(0) Sparse Group Lasso on the Mobile System
title_full_unstemmed MobilePrune: Neural Network Compression via ℓ(0) Sparse Group Lasso on the Mobile System
title_short MobilePrune: Neural Network Compression via ℓ(0) Sparse Group Lasso on the Mobile System
title_sort mobileprune: neural network compression via ℓ(0) sparse group lasso on the mobile system
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9185446/
https://www.ncbi.nlm.nih.gov/pubmed/35684708
http://dx.doi.org/10.3390/s22114081
work_keys_str_mv AT shaoyubo mobilepruneneuralnetworkcompressionvial0sparsegrouplassoonthemobilesystem
AT zhaokaikai mobilepruneneuralnetworkcompressionvial0sparsegrouplassoonthemobilesystem
AT caozhiwen mobilepruneneuralnetworkcompressionvial0sparsegrouplassoonthemobilesystem
AT pengzhehao mobilepruneneuralnetworkcompressionvial0sparsegrouplassoonthemobilesystem
AT pengxingang mobilepruneneuralnetworkcompressionvial0sparsegrouplassoonthemobilesystem
AT lipan mobilepruneneuralnetworkcompressionvial0sparsegrouplassoonthemobilesystem
AT wangyijie mobilepruneneuralnetworkcompressionvial0sparsegrouplassoonthemobilesystem
AT majianzhu mobilepruneneuralnetworkcompressionvial0sparsegrouplassoonthemobilesystem