Optimizing the Deep Neural Networks by Layer-Wise Refined Pruning and the Acceleration on FPGA
To accelerate the practical applications of artificial intelligence, this paper proposes a highly efficient layer-wise refined pruning method for deep neural networks at the software level and accelerates the inference process at the hardware level on a field-programmable gate array (FPGA). The refine...
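The record carries only this truncated abstract, not the paper's implementation. As a rough, non-authoritative illustration of the general idea behind layer-wise pruning (generic per-layer magnitude pruning, not the authors' refined method or their FPGA accelerator), the sketch below applies a different sparsity target to each layer of a toy PyTorch model; the architecture, layer indices, and sparsity values are all assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy CNN standing in for the networks discussed in the paper (hypothetical).
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * 32 * 32, 10),
)

# Illustrative per-layer sparsity targets: early convolutions are pruned less
# aggressively than the large fully connected layer.
layer_sparsity = {0: 0.3, 2: 0.5, 5: 0.8}

for idx, amount in layer_sparsity.items():
    module = model[idx]
    # L1-magnitude unstructured pruning of this layer's weight tensor.
    prune.l1_unstructured(module, name="weight", amount=amount)
    # Fold the pruning mask into the weights permanently.
    prune.remove(module, "weight")

# Report the achieved sparsity of each pruned layer.
for idx in layer_sparsity:
    w = model[idx].weight
    sparsity = float((w == 0).sum()) / w.numel()
    print(f"layer {idx}: {sparsity:.1%} of weights zeroed")
```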
Main Authors: Li, Hengyi; Yue, Xuebin; Wang, Zhichen; Chai, Zhilei; Wang, Wenwen; Tomiyama, Hiroyuki; Meng, Lin
Format: Online Article Text
Language: English
Published: Hindawi, 2022
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9177312/ | https://www.ncbi.nlm.nih.gov/pubmed/35694575 | http://dx.doi.org/10.1155/2022/8039281
Similar Items
- Differential Evolution Based Layer-Wise Weight Pruning for Compressing Deep Neural Networks
  by: Wu, Tao, et al.
  Published: (2021)
- Towards Efficient Federated Learning: Layer-Wise Pruning-Quantization Scheme and Coding Design
  by: Zhu, Zheqi, et al.
  Published: (2023)
- Lightweight image steganalysis with block-wise pruning
  by: Hong, Eungi, et al.
  Published: (2023)
- Author Correction: Lightweight image steganalysis with block-wise pruning
  by: Hong, Eungi, et al.
  Published: (2023)
- Weight Pruning-UNet: Weight Pruning UNet with Depth-wise Separable Convolutions for Semantic Segmentation of Kidney Tumors
  by: Rao, Patike Kiran, et al.
  Published: (2022)