
Implementation of Lightweight Convolutional Neural Networks via Layer-Wise Differentiable Compression

Convolutional neural networks (CNNs) have achieved significant breakthroughs in various domains, such as natural language processing (NLP) and computer vision. However, performance improvements are often accompanied by large model sizes and computation costs, which make these models unsuitable for resource-constrained devices. Consequently, there is an urgent need to compress CNNs to reduce model size and computation costs. This paper proposes a layer-wise differentiable compression (LWDC) algorithm for compressing CNNs structurally. A differentiable selection operator OS is embedded in the model so that it can be compressed and trained simultaneously by gradient descent in one pass. In contrast to most existing methods, which prune parameters from redundant operators, our method directly replaces the original bulky operators with more lightweight ones; it only requires specifying the set of lightweight operators and the regularization factor in advance, rather than a compression rate for each layer. The compressed model produced by our method is generic and does not need any special hardware or software support. Experimental results on CIFAR-10, CIFAR-100 and ImageNet demonstrate the effectiveness of our method: LWDC achieves more significant compression than state-of-the-art methods in most cases, with lower performance degradation. The impact of the lightweight operators and the regularization factor on compression rate and accuracy is also evaluated.


Bibliographic Details
Main Authors: Diao, Huabin, Hao, Yuexing, Xu, Shaoyun, Li, Gongyan
Format: Online Article Text
Language: English
Published: MDPI 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8155900/
https://www.ncbi.nlm.nih.gov/pubmed/34065680
http://dx.doi.org/10.3390/s21103464
_version_ 1783699311517761536
author Diao, Huabin
Hao, Yuexing
Xu, Shaoyun
Li, Gongyan
author_facet Diao, Huabin
Hao, Yuexing
Xu, Shaoyun
Li, Gongyan
author_sort Diao, Huabin
collection PubMed
description Convolutional neural networks (CNNs) have achieved significant breakthroughs in various domains, such as natural language processing (NLP) and computer vision. However, performance improvements are often accompanied by large model sizes and computation costs, which make these models unsuitable for resource-constrained devices. Consequently, there is an urgent need to compress CNNs to reduce model size and computation costs. This paper proposes a layer-wise differentiable compression (LWDC) algorithm for compressing CNNs structurally. A differentiable selection operator OS is embedded in the model so that it can be compressed and trained simultaneously by gradient descent in one pass. In contrast to most existing methods, which prune parameters from redundant operators, our method directly replaces the original bulky operators with more lightweight ones; it only requires specifying the set of lightweight operators and the regularization factor in advance, rather than a compression rate for each layer. The compressed model produced by our method is generic and does not need any special hardware or software support. Experimental results on CIFAR-10, CIFAR-100 and ImageNet demonstrate the effectiveness of our method: LWDC achieves more significant compression than state-of-the-art methods in most cases, with lower performance degradation. The impact of the lightweight operators and the regularization factor on compression rate and accuracy is also evaluated.
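The abstract does not specify the exact form of the selection operator OS, but a common way to make operator selection differentiable is to output a softmax-weighted mixture over a set of candidate operators, with a cost regularizer that pushes the selection toward lighter ones. The sketch below illustrates that idea only; every name in it (`DifferentiableSelector`, `discretize`, the toy operators and their costs) is a hypothetical illustration, not the authors' implementation, and the gradient updates to the selection logits are omitted.

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

class DifferentiableSelector:
    """Toy sketch of a layer-wise differentiable operator selection.

    Each candidate is a (function, cost) pair. The layer's output is the
    softmax-weighted mixture of all candidates, so the selection logits
    `alpha` could receive gradients alongside the model weights.
    """
    def __init__(self, candidates):
        self.ops = [op for op, _ in candidates]
        self.costs = np.array([c for _, c in candidates], dtype=float)
        self.alpha = np.zeros(len(candidates))  # selection logits

    def forward(self, x):
        # Soft mixture of candidate operators (differentiable in alpha).
        w = softmax(self.alpha)
        return sum(wi * op(x) for wi, op in zip(w, self.ops))

    def regularization(self, factor):
        # Expected operator cost, scaled by the regularization factor;
        # a larger factor favors cheaper (more lightweight) operators.
        return factor * float(softmax(self.alpha) @ self.costs)

    def discretize(self):
        # After training, keep only the highest-weighted operator.
        return self.ops[int(np.argmax(self.alpha))]

# Toy usage: a "bulky" operator (cost 9) vs. a "lightweight" one (cost 1).
sel = DifferentiableSelector([(lambda x: 2 * x, 9.0), (lambda x: x, 1.0)])
out = sel.forward(np.ones(3))          # softmax-weighted mixture
reg = sel.regularization(0.1)          # expected-cost penalty
```

After training, discretizing the selection (keeping only the winning operator per layer) mirrors the abstract's claim that the compressed model is an ordinary network needing no special hardware or software support.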
format Online
Article
Text
id pubmed-8155900
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-81559002021-05-28 Implementation of Lightweight Convolutional Neural Networks via Layer-Wise Differentiable Compression Diao, Huabin Hao, Yuexing Xu, Shaoyun Li, Gongyan Sensors (Basel) Article Convolutional neural networks (CNNs) have achieved significant breakthroughs in various domains, such as natural language processing (NLP) and computer vision. However, performance improvements are often accompanied by large model sizes and computation costs, which make these models unsuitable for resource-constrained devices. Consequently, there is an urgent need to compress CNNs to reduce model size and computation costs. This paper proposes a layer-wise differentiable compression (LWDC) algorithm for compressing CNNs structurally. A differentiable selection operator OS is embedded in the model so that it can be compressed and trained simultaneously by gradient descent in one pass. In contrast to most existing methods, which prune parameters from redundant operators, our method directly replaces the original bulky operators with more lightweight ones; it only requires specifying the set of lightweight operators and the regularization factor in advance, rather than a compression rate for each layer. The compressed model produced by our method is generic and does not need any special hardware or software support. Experimental results on CIFAR-10, CIFAR-100 and ImageNet demonstrate the effectiveness of our method: LWDC achieves more significant compression than state-of-the-art methods in most cases, with lower performance degradation. The impact of the lightweight operators and the regularization factor on compression rate and accuracy is also evaluated. MDPI 2021-05-16 /pmc/articles/PMC8155900/ /pubmed/34065680 http://dx.doi.org/10.3390/s21103464 Text en © 2021 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland.
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Diao, Huabin
Hao, Yuexing
Xu, Shaoyun
Li, Gongyan
Implementation of Lightweight Convolutional Neural Networks via Layer-Wise Differentiable Compression
title Implementation of Lightweight Convolutional Neural Networks via Layer-Wise Differentiable Compression
title_full Implementation of Lightweight Convolutional Neural Networks via Layer-Wise Differentiable Compression
title_fullStr Implementation of Lightweight Convolutional Neural Networks via Layer-Wise Differentiable Compression
title_full_unstemmed Implementation of Lightweight Convolutional Neural Networks via Layer-Wise Differentiable Compression
title_short Implementation of Lightweight Convolutional Neural Networks via Layer-Wise Differentiable Compression
title_sort implementation of lightweight convolutional neural networks via layer-wise differentiable compression
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8155900/
https://www.ncbi.nlm.nih.gov/pubmed/34065680
http://dx.doi.org/10.3390/s21103464
work_keys_str_mv AT diaohuabin implementationoflightweightconvolutionalneuralnetworksvialayerwisedifferentiablecompression
AT haoyuexing implementationoflightweightconvolutionalneuralnetworksvialayerwisedifferentiablecompression
AT xushaoyun implementationoflightweightconvolutionalneuralnetworksvialayerwisedifferentiablecompression
AT ligongyan implementationoflightweightconvolutionalneuralnetworksvialayerwisedifferentiablecompression