Coarse-Grained Pruning of Neural Network Models Based on Blocky Sparse Structure
Deep neural networks achieve excellent performance in many research fields. However, many deep neural network models are over-parameterized, and computing their weight matrices consumes considerable time and computing resources. To address these problems, this paper proposes a novel block-based division method and a coarse-grained block pruning strategy to simplify and compress the fully connected structure; the pruned weight matrices, which retain a blocky structure, are then stored in Block Sparse Row (BSR) format to accelerate their computation. First, the weight matrices are divided into square sub-blocks based on spatial aggregation. Second, a coarse-grained block pruning procedure scales down the model parameters. Finally, the BSR storage format, which is well suited to block-sparse matrix storage and computation, is used to store the remaining dense weight blocks and speed up the calculation. Experiments on the MNIST and Fashion-MNIST datasets explore how accuracy varies with pruning granularity and sparsity. The results show that the coarse-grained block pruning method compresses the network and reduces the computational cost without greatly degrading classification accuracy. An experiment on the CIFAR-10 dataset shows that the block pruning strategy also combines well with convolutional networks.
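The abstract describes a three-step pipeline: divide each fully connected layer's weight matrix into square sub-blocks, prune whole blocks at once, and store the surviving dense blocks in BSR format. The sketch below illustrates that pipeline with NumPy/SciPy. It is a minimal illustration, not the authors' code: the block size, the target sparsity, the mean-absolute-weight saliency criterion, and the function name `block_prune_to_bsr` are assumptions made here for demonstration.

```python
import numpy as np
from scipy.sparse import bsr_matrix

def block_prune_to_bsr(W, block_size=16, sparsity=0.8):
    """Prune whole square blocks of a weight matrix, return it in BSR format.

    Hypothetical illustration: block saliency is taken as the block's mean
    absolute weight, and `sparsity` is the fraction of blocks zeroed out.
    Both dimensions of W must be divisible by `block_size`.
    """
    rows, cols = W.shape
    assert rows % block_size == 0 and cols % block_size == 0
    br, bc = rows // block_size, cols // block_size

    # Step 1: view W as a (br, bc) grid of (block_size, block_size) sub-blocks.
    blocks = W.reshape(br, block_size, bc, block_size).swapaxes(1, 2)

    # Step 2: coarse-grained pruning -- one saliency score per block; the
    # lowest-scoring blocks are zeroed out in their entirety.
    saliency = np.abs(blocks).mean(axis=(2, 3))          # shape (br, bc)
    k = int(sparsity * br * bc)                          # blocks to drop
    threshold = np.partition(saliency.ravel(), k)[k] if k > 0 else -np.inf
    keep = saliency >= threshold                         # block-level mask
    pruned = np.where(keep[:, :, None, None], blocks, 0.0)
    W_pruned = pruned.swapaxes(1, 2).reshape(rows, cols)

    # Step 3: BSR stores only the surviving dense blocks plus block-level
    # indices, so a blocky sparsity pattern is cheap to store and multiply.
    return bsr_matrix(W_pruned, blocksize=(block_size, block_size))

# Example: prune a 512x256 fully connected layer, then run a forward pass.
rng = np.random.default_rng(0)
W = rng.standard_normal((512, 256)).astype(np.float32)
W_bsr = block_prune_to_bsr(W, block_size=16, sparsity=0.8)

x = rng.standard_normal(256).astype(np.float32)
y = W_bsr @ x                      # sparse block mat-vec replaces dense W @ x

print(W_bsr.data.shape)            # (kept_blocks, 16, 16): only dense blocks
print(W_bsr.nnz / W.size)          # stored fraction, roughly 1 - sparsity
```

Training-time details (when pruning happens, whether the kept blocks are fine-tuned afterwards) are not specified in the abstract and are omitted here. The point of the sketch is the storage trade-off: a blocky mask lets BSR keep one index per surviving block rather than one per surviving element, which is what makes the subsequent matrix-vector products faster than with element-wise sparse formats.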
Main Authors: | Huang, Lan; Zeng, Jia; Sun, Shiqi; Wang, Wencong; Wang, Yan; Wang, Kangping |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI, 2021 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8391831/ https://www.ncbi.nlm.nih.gov/pubmed/34441182 http://dx.doi.org/10.3390/e23081042 |
author | Huang, Lan; Zeng, Jia; Sun, Shiqi; Wang, Wencong; Wang, Yan; Wang, Kangping
collection | PubMed |
description | Deep neural networks achieve excellent performance in many research fields. However, many deep neural network models are over-parameterized, and computing their weight matrices consumes considerable time and computing resources. To address these problems, this paper proposes a novel block-based division method and a coarse-grained block pruning strategy to simplify and compress the fully connected structure; the pruned weight matrices, which retain a blocky structure, are then stored in Block Sparse Row (BSR) format to accelerate their computation. First, the weight matrices are divided into square sub-blocks based on spatial aggregation. Second, a coarse-grained block pruning procedure scales down the model parameters. Finally, the BSR storage format, which is well suited to block-sparse matrix storage and computation, is used to store the remaining dense weight blocks and speed up the calculation. Experiments on the MNIST and Fashion-MNIST datasets explore how accuracy varies with pruning granularity and sparsity. The results show that the coarse-grained block pruning method compresses the network and reduces the computational cost without greatly degrading classification accuracy. An experiment on the CIFAR-10 dataset shows that the block pruning strategy also combines well with convolutional networks. |
format | Online Article Text |
id | pubmed-8391831 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-8391831 (2021-08-28). Entropy (Basel), Article. MDPI, published 2021-08-13. /pmc/articles/PMC8391831/ /pubmed/34441182 http://dx.doi.org/10.3390/e23081042 Text en. © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
title | Coarse-Grained Pruning of Neural Network Models Based on Blocky Sparse Structure |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8391831/ https://www.ncbi.nlm.nih.gov/pubmed/34441182 http://dx.doi.org/10.3390/e23081042 |