A Hardware-Friendly High-Precision CNN Pruning Method and Its FPGA Implementation

To address the problems of large storage requirements, computational pressure, untimely data supply from off-chip memory, and low computational efficiency during hardware deployment caused by the large number of convolutional neural network (CNN) parameters, we developed an innovative hardware-friendly CNN pruning method called KRP, which prunes the convolutional kernel on a row scale. A new retraining method based on LR tracking was used to obtain a CNN model with both a high pruning rate and high accuracy. Furthermore, we designed a high-performance convolutional computation module on the FPGA platform to help deploy KRP-pruned models. The results of comparative experiments on CNNs such as VGG and ResNet showed that KRP achieves higher accuracy than most pruning methods. At the same time, the KRP method, together with the GSNQ quantization method developed in our previous study, forms a high-precision hardware-friendly network compression framework that can achieve “lossless” CNN compression with a 27× reduction in network model storage. The results of the comparative experiments on the FPGA showed that the KRP pruning method not only requires much less storage space, but also helps to reduce on-chip hardware resource consumption by more than half and effectively improves the parallelism of the model on FPGAs, demonstrating strong hardware friendliness. This study provides more ideas for the application of CNNs in the field of edge computing.
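The abstract only sketches KRP at a high level (pruning convolutional kernels at the granularity of whole kernel rows). The short Python/NumPy sketch below illustrates that general idea; it is not the authors' implementation, and the function name prune_kernel_rows, the L1-norm row-ranking criterion, and the rows_to_keep parameter are illustrative assumptions rather than details taken from the paper.

import numpy as np

def prune_kernel_rows(weights, rows_to_keep=1):
    """Illustrative row-scale kernel pruning (not the paper's exact KRP code).

    weights: conv layer weights of shape (out_ch, in_ch, k, k).
    For every k x k kernel, rows are ranked by L1 norm and only the
    rows_to_keep strongest rows are retained; the remaining rows are
    zeroed, giving a regular sparsity pattern that is easy to exploit
    in hardware.
    """
    pruned = weights.copy()
    out_ch, in_ch, k, _ = pruned.shape
    for o in range(out_ch):
        for i in range(in_ch):
            kernel = pruned[o, i]                         # view of one (k, k) kernel
            row_norms = np.abs(kernel).sum(axis=1)        # L1 norm of each row
            keep = np.argsort(row_norms)[-rows_to_keep:]  # indices of strongest rows
            mask = np.zeros(k, dtype=bool)
            mask[keep] = True
            kernel[~mask, :] = 0.0                        # zero out whole rows
    return pruned

# Example: keep only the single strongest row of every 3x3 kernel,
# i.e. roughly a 2/3 pruning rate on the kernel weights.
w = np.random.randn(64, 32, 3, 3).astype(np.float32)
w_pruned = prune_kernel_rows(w, rows_to_keep=1)
print("sparsity:", 1.0 - np.count_nonzero(w_pruned) / w_pruned.size)

Because entire rows are removed rather than scattered individual weights, the surviving weights of each kernel stay contiguous, which is the property the paper exploits for efficient FPGA storage and parallel convolution.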


Bibliographic Details
Main authors: Sui, Xuefu; Lv, Qunbo; Zhi, Liangjie; Zhu, Baoyu; Yang, Yuanbo; Zhang, Yu; Tan, Zheng
Format: Online Article Text
Language: English
Published: MDPI, 2023
Subjects:
Online access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9862432/
https://www.ncbi.nlm.nih.gov/pubmed/36679624
http://dx.doi.org/10.3390/s23020824
Journal: Sensors (Basel)
Published online: 11 January 2023
License: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).