Differentiable Network Pruning via Polarization of Probabilistic Channelwise Soft Masks
Channel pruning has been demonstrated as a highly effective approach to compress large convolutional neural networks. Existing differentiable channel pruning methods usually use deterministic soft masks to scale the channelwise outputs and explore an appropriate threshold on the masks to remove unim...
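The abstract describes scaling channelwise outputs with soft masks and removing channels whose mask falls below a threshold. A minimal illustrative sketch of that general idea (not the paper's actual method; the array shapes, mask values, and threshold here are assumptions for illustration):

```python
import numpy as np

def apply_soft_mask(feature_map, mask):
    """Scale each channel of a (C, H, W) feature map by its soft mask value."""
    # Broadcasting multiplies every spatial position of channel c by mask[c].
    return feature_map * mask[:, None, None]

def kept_channels(mask, threshold=0.05):
    """Indices of channels whose mask exceeds the pruning threshold."""
    return np.flatnonzero(mask > threshold)

# Toy example: 4 channels, 2x2 spatial map (hypothetical values)
fm = np.ones((4, 2, 2))
mask = np.array([0.9, 0.01, 0.5, 0.02])
scaled = apply_soft_mask(fm, mask)   # channel 0 scaled to 0.9, channel 1 to 0.01, ...
keep = kept_channels(mask)           # channels 0 and 2 survive the threshold
```

Differentiable pruning methods learn such masks jointly with the network weights; the paper's contribution concerns making the masks probabilistic and polarizing them so the kept/removed decision is less threshold-sensitive.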
Main Authors: Ma, Ming; Wang, Jiapeng; Yu, Zhenhua
Format: Online Article Text
Language: English
Published: Hindawi, 2022
Online Access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9098282/
https://www.ncbi.nlm.nih.gov/pubmed/35571691
http://dx.doi.org/10.1155/2022/7775419
Similar Items
- A Soft-Pruning Method Applied During Training of Spiking Neural Networks for In-memory Computing Applications
  by: Shi, Yuhan, et al.
  Published: (2019)
- Probabilistic logics and probabilistic networks
  by: Haenni, Rolf, et al.
  Published: (2014)
- Enhanced Permutation Tests via Multiple Pruning
  by: Leem, Sangseob, et al.
  Published: (2020)
- MobilePrune: Neural Network Compression via ℓ(0) Sparse Group Lasso on the Mobile System
  by: Shao, Yubo, et al.
  Published: (2022)
- Differential Evolution Based Layer-Wise Weight Pruning for Compressing Deep Neural Networks
  by: Wu, Tao, et al.
  Published: (2021)