Differential Evolution Based Layer-Wise Weight Pruning for Compressing Deep Neural Networks
Deep neural networks have evolved significantly in the past decades and are now able to achieve better processing of sensor data. Nonetheless, most deep models verify the ruling maxim in deep learning—bigger is better—so they have very complex structures. As the models become more complex, t...
Main authors: Wu, Tao; Li, Xiaoyang; Zhou, Deyun; Li, Na; Shi, Jiao
Format: Online Article Text
Language: English
Published: MDPI, 2021
Online access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7865320/
https://www.ncbi.nlm.nih.gov/pubmed/33525527
http://dx.doi.org/10.3390/s21030880
Similar items
- Optimizing the Deep Neural Networks by Layer-Wise Refined Pruning and the Acceleration on FPGA
  by: Li, Hengyi, et al.
  Published: (2022)
- Evolutionary Multi-Objective One-Shot Filter Pruning for Designing Lightweight Convolutional Neural Network
  by: Wu, Tao, et al.
  Published: (2021)
- Implementation of Lightweight Convolutional Neural Networks via Layer-Wise Differentiable Compression
  by: Diao, Huabin, et al.
  Published: (2021)
- Weight Pruning-UNet: Weight Pruning UNet with Depth-wise Separable Convolutions for Semantic Segmentation of Kidney Tumors
  by: Rao, Patike Kiran, et al.
  Published: (2022)
- Towards Efficient Federated Learning: Layer-Wise Pruning-Quantization Scheme and Coding Design
  by: Zhu, Zheqi, et al.
  Published: (2023)