MobilePrune: Neural Network Compression via ℓ₀ Sparse Group Lasso on the Mobile System
| Field | Value |
|---|---|
| Main Authors | |
| Format | Online Article Text |
| Language | English |
| Published | MDPI, 2022 |
| Subjects | |
| Online Access | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9185446/ · https://www.ncbi.nlm.nih.gov/pubmed/35684708 · http://dx.doi.org/10.3390/s22114081 |

Summary: It is hard to deploy deep learning models directly on today's smartphones because of the substantial computational cost introduced by millions of parameters. To compress such models, we develop an ℓ₀-based sparse group lasso model called MobilePrune, which generates extremely compact neural network models for both desktop and mobile platforms. We adopt a group lasso penalty to enforce sparsity at the group level, which benefits General Matrix Multiply (GEMM), and we develop the first algorithm that optimizes the ℓ₀ norm exactly with a global convergence guarantee in the deep learning context. MobilePrune also allows complicated group structures (i.e., trees and overlapping groups) in the group penalty to suit DNN models with more complex architectures. Empirically, we observe substantial gains in compression ratio and reductions in computational cost for various popular deep learning models on multiple benchmark datasets compared to state-of-the-art methods. More importantly, the compressed models are deployed on the Android system to confirm that our approach achieves lower response delay and battery consumption on mobile phones.
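The record contains no code, so the sketch below is only an illustration of the penalty the summary describes: an ℓ₀ term on individual weights plus a group lasso term over rows, so that a zeroed row removes a whole output neuron, which is the structured sparsity GEMM benefits from. All function names and hyperparameter values are assumptions made for this example, and applying the two proximal operators sequentially is a simplification; MobilePrune's stated contribution is solving the combined ℓ₀ + group-lasso operator exactly, which this sketch does not reproduce.

```python
import torch

# Illustrative hyperparameters (assumptions, not values from the paper).
LAM_L0 = 1e-4     # weight on the l0 (entry-level) penalty
LAM_GROUP = 1e-3  # weight on the group-lasso (row-level) penalty
STEP = 0.1        # proximal step size

def prox_l0(w: torch.Tensor, t: float) -> torch.Tensor:
    """Exact proximal operator of t * ||w||_0: hard thresholding.
    An entry survives only if keeping it costs less than the penalty,
    i.e. if its squared magnitude exceeds 2t."""
    return w * (w.abs() > (2.0 * t) ** 0.5)

def prox_group_lasso(w: torch.Tensor, t: float) -> torch.Tensor:
    """Exact proximal operator of t * sum_g ||w_g||_2 with rows as groups:
    shrink each row's l2 norm by t and zero rows whose norm falls below t."""
    norms = w.norm(dim=1, keepdim=True)
    return w * torch.clamp(1.0 - t / (norms + 1e-12), min=0.0)

@torch.no_grad()
def prune_step(weight: torch.Tensor) -> None:
    """One simplified proximal update for the combined penalty.
    Sequential composition of the two operators is an approximation
    of the exact combined operator that the paper claims to solve."""
    w = prox_l0(weight, STEP * LAM_L0)
    w = prox_group_lasso(w, STEP * LAM_GROUP)
    weight.copy_(w)

# Usage: interleave with ordinary SGD fine-tuning steps.
layer = torch.nn.Linear(256, 128)
prune_step(layer.weight)  # zeroed rows can be dropped from the GEMM entirely
print((layer.weight.norm(dim=1) == 0).sum().item(), "rows pruned")
```

Treating rows as groups matches the summary's motivation: entry-level ℓ₀ sparsity alone leaves scattered zeros that dense GEMM kernels cannot exploit, whereas an entirely zero row lets the corresponding output column be removed from the matrix multiply.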