
Random pruning: channel sparsity by expectation scaling factor

Bibliographic Details
Main Authors: Sun, Chuanmeng, Chen, Jiaxin, Li, Yong, Wang, Wenbo, Ma, Tiehua
Format: Online Article Text
Language: English
Published: PeerJ Inc. 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10495938/
https://www.ncbi.nlm.nih.gov/pubmed/37705629
http://dx.doi.org/10.7717/peerj-cs.1564
_version_ 1785104999850704896
author Sun, Chuanmeng
Chen, Jiaxin
Li, Yong
Wang, Wenbo
Ma, Tiehua
author_facet Sun, Chuanmeng
Chen, Jiaxin
Li, Yong
Wang, Wenbo
Ma, Tiehua
author_sort Sun, Chuanmeng
collection PubMed
description Pruning is an efficient method for deep neural network model compression and acceleration. However, existing pruning strategies, at both the filter level and the channel level, often introduce a large amount of computation and adopt complex methods for finding sub-networks. It is found that there is a linear relationship between the sum of the matrix elements of a channel in convolutional neural networks (CNNs) and the expectation scaling ratio of the image pixel distribution, which reflects how the expectation of the pixel distribution changes from the input data to the feature map. This implies that channels with similar expectation scaling factors cause similar expectation changes to the input data, thus producing redundant feature maps. Thus, this article proposes a new structured pruning method called EXP. In the proposed method, channels with similar expectation scaling factors are randomly removed in each convolutional layer, so the whole network achieves random sparsity and yields non-redundant, non-unique sub-networks. Experiments on pruning various networks show that EXP achieves a significant reduction in FLOPs. For example, on the CIFAR-10 dataset, EXP reduces the FLOPs of the ResNet-56 model by 71.9% with a 0.23% loss in Top-1 accuracy. On ILSVRC-2012, it reduces the FLOPs of the ResNet-50 model by 60.0% with a 1.13% loss in Top-1 accuracy. Our code is available at: https://github.com/EXP-Pruning/EXP_Pruning and DOI: 10.5281/zenodo.8141065.
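
The method described above boils down to: estimate each channel's expectation scaling factor from the sum of its kernel weights, group channels whose factors are similar, and randomly remove all but one channel per group. The following is a minimal PyTorch sketch of that idea, not the authors' implementation (their code is at the repository and DOI cited above); the use of the kernel-weight sum as the scaling factor, the similarity tolerance tau, and all function names here are illustrative assumptions.

import torch
import torch.nn as nn

def expectation_scaling_factors(conv: nn.Conv2d) -> torch.Tensor:
    # Sum of the matrix elements of each output channel's kernel; per the
    # abstract, the expectation of the feature map scales roughly linearly
    # with this sum relative to the input's pixel-distribution expectation.
    return conv.weight.detach().sum(dim=(1, 2, 3))

def channels_to_prune(conv: nn.Conv2d, tau: float = 0.05, seed: int = 0) -> list:
    # Group channels whose scaling factors lie within tau of each other and
    # randomly keep one channel per group; the rest are marked for removal.
    # tau and this grouping rule are assumptions, not the paper's exact rule.
    g = torch.Generator().manual_seed(seed)
    factors = expectation_scaling_factors(conv)
    order = torch.argsort(factors).tolist()
    prune, group = [], [order[0]]
    for idx in order[1:]:
        if abs(factors[idx] - factors[group[-1]]).item() <= tau:
            group.append(idx)  # similar factor: redundant candidate
        else:
            keep = group[int(torch.randint(len(group), (1,), generator=g))]
            prune += [c for c in group if c != keep]
            group = [idx]
    keep = group[int(torch.randint(len(group), (1,), generator=g))]
    prune += [c for c in group if c != keep]
    return sorted(prune)

# Example: list the channels one layer would lose at tolerance 0.05.
layer = nn.Conv2d(3, 16, kernel_size=3)
print(channels_to_prune(layer, tau=0.05))

Keeping one randomly chosen survivor per group is what makes the resulting sub-network non-unique: different seeds produce different but, per the abstract, equally non-redundant sub-networks.
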
format Online
Article
Text
id pubmed-10495938
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher PeerJ Inc.
record_format MEDLINE/PubMed
spelling pubmed-10495938 2023-09-13 Random pruning: channel sparsity by expectation scaling factor Sun, Chuanmeng Chen, Jiaxin Li, Yong Wang, Wenbo Ma, Tiehua PeerJ Comput Sci Artificial Intelligence Pruning is an efficient method for deep neural network model compression and acceleration. However, existing pruning strategies, at both the filter level and the channel level, often introduce a large amount of computation and adopt complex methods for finding sub-networks. It is found that there is a linear relationship between the sum of the matrix elements of a channel in convolutional neural networks (CNNs) and the expectation scaling ratio of the image pixel distribution, which reflects how the expectation of the pixel distribution changes from the input data to the feature map. This implies that channels with similar expectation scaling factors cause similar expectation changes to the input data, thus producing redundant feature maps. Thus, this article proposes a new structured pruning method called EXP. In the proposed method, channels with similar expectation scaling factors are randomly removed in each convolutional layer, so the whole network achieves random sparsity and yields non-redundant, non-unique sub-networks. Experiments on pruning various networks show that EXP achieves a significant reduction in FLOPs. For example, on the CIFAR-10 dataset, EXP reduces the FLOPs of the ResNet-56 model by 71.9% with a 0.23% loss in Top-1 accuracy. On ILSVRC-2012, it reduces the FLOPs of the ResNet-50 model by 60.0% with a 1.13% loss in Top-1 accuracy. Our code is available at: https://github.com/EXP-Pruning/EXP_Pruning and DOI: 10.5281/zenodo.8141065. PeerJ Inc. 2023-09-05 /pmc/articles/PMC10495938/ /pubmed/37705629 http://dx.doi.org/10.7717/peerj-cs.1564 Text en © 2023 Sun et al. https://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, reproduction and adaptation in any medium and for any purpose provided that it is properly attributed. For attribution, the original author(s), title, publication source (PeerJ Computer Science) and either DOI or URL of the article must be cited.
spellingShingle Artificial Intelligence
Sun, Chuanmeng
Chen, Jiaxin
Li, Yong
Wang, Wenbo
Ma, Tiehua
Random pruning: channel sparsity by expectation scaling factor
title Random pruning: channel sparsity by expectation scaling factor
title_full Random pruning: channel sparsity by expectation scaling factor
title_fullStr Random pruning: channel sparsity by expectation scaling factor
title_full_unstemmed Random pruning: channel sparsity by expectation scaling factor
title_short Random pruning: channel sparsity by expectation scaling factor
title_sort random pruning: channel sparsity by expectation scaling factor
topic Artificial Intelligence
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10495938/
https://www.ncbi.nlm.nih.gov/pubmed/37705629
http://dx.doi.org/10.7717/peerj-cs.1564
work_keys_str_mv AT sunchuanmeng randompruningchannelsparsitybyexpectationscalingfactor
AT chenjiaxin randompruningchannelsparsitybyexpectationscalingfactor
AT liyong randompruningchannelsparsitybyexpectationscalingfactor
AT wangwenbo randompruningchannelsparsitybyexpectationscalingfactor
AT matiehua randompruningchannelsparsitybyexpectationscalingfactor