Resource-constrained FPGA/DNN co-design
| Main Authors: |  |
| --- | --- |
| Format: | Online Article Text |
| Language: | English |
| Published: | Springer London, 2021 |
| Subjects: |  |
| Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8122185/ https://www.ncbi.nlm.nih.gov/pubmed/34025038 http://dx.doi.org/10.1007/s00521-021-06113-4 |
| Summary: | Deep neural networks (DNNs) have demonstrated superior performance in most learning tasks. However, a DNN typically contains a large number of parameters and operations, requiring a high-end processing platform for high-speed execution. To address this challenge, hardware-and-software co-design strategies, which involve joint DNN optimization and hardware implementation, can be applied. These strategies reduce the number of parameters and operations of the DNN so that it fits into a low-resource processing platform. In this paper, a DNN model is used for the analysis of the data captured using an electrochemical method to determine the concentration of a neurotransmitter and the recording electrode. Next, a DNN miniaturization algorithm, combining pruning and compression, is introduced to reduce the DNN's resource utilization. Here, the DNN is transformed to have sparse parameters by pruning a percentage of its weights. The Lempel–Ziv–Welch algorithm is then applied to compress the sparse DNN. Next, a DNN overlay is developed, combining the decompression of the DNN parameters and DNN inference, to allow the execution of the DNN on an FPGA on the PYNQ-Z2 board. This approach avoids the need for a complex quantization algorithm. It compresses the DNN by a factor of 6.18, leading to a reduction of about 50% in resource utilization on the FPGA. |
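The summary describes a pipeline of magnitude pruning followed by Lempel–Ziv–Welch (LZW) compression of the sparse parameters, with no quantization step. The paper's own implementation is not reproduced here; the sketch below is a minimal Python illustration of that combination, where the function names, the 80% sparsity level, the toy weight matrix, and the fixed-width code accounting are all assumptions made for the example, not details from the paper.

```python
import numpy as np


def prune_by_magnitude(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude `sparsity` fraction of the weights
    (unstructured magnitude pruning)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # The k-th smallest magnitude becomes the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned


def lzw_compress(data: bytes) -> list[int]:
    """Classic LZW: grow a dictionary of previously seen byte strings and
    emit one integer code per longest matching prefix."""
    dictionary = {bytes([i]): i for i in range(256)}
    next_code = 256
    w = b""
    codes: list[int] = []
    for b in data:
        wb = w + bytes([b])
        if wb in dictionary:
            w = wb
        else:
            codes.append(dictionary[w])
            dictionary[wb] = next_code
            next_code += 1
            w = bytes([b])
    if w:
        codes.append(dictionary[w])
    return codes


# Toy weight matrix. No quantization is applied: the parameters stay
# float32, and the long zero runs produced by pruning are what LZW exploits.
rng = np.random.default_rng(0)
w = rng.standard_normal((128, 128)).astype(np.float32)
w_sparse = prune_by_magnitude(w, sparsity=0.8)  # assumed sparsity level

raw = w_sparse.tobytes()
codes = lzw_compress(raw)
code_bits = max(codes).bit_length()  # fixed-width codes, for simple accounting
ratio = (len(raw) * 8) / (len(codes) * code_bits)
print(f"LZW compression ratio on pruned weights: {ratio:.2f}x")
```

In the paper the matching decompression step runs inside the DNN overlay on the PYNQ-Z2 FPGA, ahead of inference; the reported 6.18x compression factor and the roughly 50% resource reduction come from the authors' pipeline, not from this sketch.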