Random pruning: channel sparsity by expectation scaling factor
Pruning is an efficient method for compressing and accelerating deep neural network models. However, existing pruning strategies, at both the filter and the channel level, often introduce substantial computation and rely on complex procedures for finding sub-networks. It is found that there...
Main Authors: Sun, Chuanmeng; Chen, Jiaxin; Li, Yong; Wang, Wenbo; Ma, Tiehua
Format: Online Article (Text)
Language: English
Published: PeerJ Inc., 2023
Online Access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10495938/
https://www.ncbi.nlm.nih.gov/pubmed/37705629
http://dx.doi.org/10.7717/peerj-cs.1564
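
The abstract above concerns channel-level pruning. For context only, the following is a minimal sketch of one common channel-pruning baseline (L1-norm magnitude pruning in PyTorch); it is not the paper's expectation-scaling-factor method, and the helper name prune_channels_by_l1 and the toy layer are illustrative assumptions.

# Minimal sketch of generic channel-level pruning (NOT the paper's
# expectation-scaling-factor method): rank the output channels of a
# Conv2d layer by the L1 norm of their filters and zero out the weakest.
import torch
import torch.nn as nn

def prune_channels_by_l1(conv: nn.Conv2d, sparsity: float) -> torch.Tensor:
    """Zero the `sparsity` fraction of output channels with the smallest
    L1 filter norm. Returns a boolean mask of the kept channels."""
    with torch.no_grad():
        # L1 norm of each output channel's filter: shape (out_channels,)
        norms = conv.weight.abs().sum(dim=(1, 2, 3))
        n_prune = int(sparsity * conv.out_channels)
        # Indices of the weakest channels (ascending by norm)
        prune_idx = torch.argsort(norms)[:n_prune]
        mask = torch.ones(conv.out_channels, dtype=torch.bool)
        mask[prune_idx] = False
        conv.weight[~mask] = 0.0
        if conv.bias is not None:
            conv.bias[~mask] = 0.0
    return mask

# Usage: prune 50% of the channels of a toy layer.
conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
kept = prune_channels_by_l1(conv, sparsity=0.5)
print(f"kept {int(kept.sum())}/{conv.out_channels} channels")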
Similar Items
- Addressing Label Sparsity With Class-Level Common Sense for Google Maps
  by: Welty, Chris, et al.
  Published: (2022)
- Ps and Qs: Quantization-Aware Pruning for Efficient Low Latency Neural Network Inference
  by: Hawks, Benjamin, et al.
  Published: (2021)
- A Synaptic Pruning-Based Spiking Neural Network for Hand-Written Digits Classification
  by: Faghihi, Faramarz, et al.
  Published: (2022)
- Implementation of a Commitment Machine for an Adaptive and Robust Expected Shortfall Estimation
  by: Bagnato, Marco, et al.
  Published: (2021)
- Psychological assessment of AI-based decision support systems: tool development and expected benefits
  by: Buschmeyer, Katharina, et al.
  Published: (2023)