Rethinking Weight Decay for Efficient Neural Network Pruning
Introduced in the late 1980s for generalization purposes, pruning has now become a staple for compressing deep neural networks. Despite many innovations in recent decades, pruning approaches still face core issues that hinder their performance or scalability. Drawing inspiration from early work in t...
| Main authors: | Tessier, Hugo; Gripon, Vincent; Léonardon, Mathieu; Arzel, Matthieu; Hannagan, Thomas; Bertrand, David |
|---|---|
| Format: | Online Article Text |
| Language: | English |
| Published: | MDPI, 2022 |
| Subjects: | |
| Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8950981/ https://www.ncbi.nlm.nih.gov/pubmed/35324619 http://dx.doi.org/10.3390/jimaging8030064 |
Similar items
- Differential Evolution Based Layer-Wise Weight Pruning for Compressing Deep Neural Networks
  by: Wu, Tao, et al.
  Published: (2021)
- Ps and Qs: Quantization-Aware Pruning for Efficient Low Latency Neural Network Inference
  by: Hawks, Benjamin, et al.
  Published: (2021)
- Unsupervised Adaptive Weight Pruning for Energy-Efficient Neuromorphic Systems
  by: Guo, Wenzhe, et al.
  Published: (2020)
- Quantization and Deployment of Deep Neural Networks on Microcontrollers
  by: Novac, Pierre-Emmanuel, et al.
  Published: (2021)
- Rethinking arithmetic for deep neural networks
  by: Constantinides, G. A.
  Published: (2020)