Quantization-Aware NN Layers with High-throughput FPGA Implementation for Edge AI

Bibliographic Details
Main Authors: Pistellato, Mara, Bergamasco, Filippo, Bigaglia, Gianluca, Gasparetto, Andrea, Albarelli, Andrea, Boschetti, Marco, Passerone, Roberto
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10222267/
https://www.ncbi.nlm.nih.gov/pubmed/37430583
http://dx.doi.org/10.3390/s23104667
_version_ 1785049656463458304
author Pistellato, Mara
Bergamasco, Filippo
Bigaglia, Gianluca
Gasparetto, Andrea
Albarelli, Andrea
Boschetti, Marco
Passerone, Roberto
author_sort Pistellato, Mara
collection PubMed
description Over the past few years, several applications have been extensively exploiting the advantages of deep learning, in particular when using convolutional neural networks (CNNs). The intrinsic flexibility of such models makes them widely adopted in a variety of practical applications, from medical to industrial. In this latter scenario, however, using consumer Personal Computer (PC) hardware is not always suitable for the potential harsh conditions of the working environment and the strict timing that industrial applications typically have. Therefore, the design of custom FPGA (Field Programmable Gate Array) solutions for network inference is gaining massive attention from researchers and companies as well. In this paper, we propose a family of network architectures composed of three kinds of custom layers working with integer arithmetic with a customizable precision (down to just two bits). Such layers are designed to be effectively trained on classical GPUs (Graphics Processing Units) and then synthesized to FPGA hardware for real-time inference. The idea is to provide a trainable quantization layer, called Requantizer, acting both as a non-linear activation for neurons and a value rescaler to match the desired bit precision. This way, the training is not only quantization-aware, but also capable of estimating the optimal scaling coefficients to accommodate both the non-linear nature of the activations and the constraints imposed by the limited precision. In the experimental section, we test the performance of this kind of model while working both on classical PC hardware and a case-study implementation of a signal peak detection device running on a real FPGA. We employ TensorFlow Lite for training and comparison, and use Xilinx FPGAs and Vivado for synthesis and implementation. 
The results show an accuracy of the quantized networks close to the floating point version, without the need for representative data for calibration as in other approaches, and performance that is better than dedicated peak detection algorithms. The FPGA implementation is able to run in real time at a rate of four gigapixels per second with moderate hardware resources, while achieving a sustained efficiency of 0.5 TOPS/W (tera operations per second per watt), in line with custom integrated hardware accelerators.
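The Requantizer described in the abstract acts both as a non-linear activation and a rescaler that maps values onto a limited integer range (down to two bits). As a rough illustration of that idea only, the sketch below implements a generic round-and-clamp requantization step in NumPy; the function name `requantize`, the fixed `scale` parameter, and the signed-range convention are assumptions for this example, not the authors' implementation (which learns the scaling coefficients during training).

```python
import numpy as np

def requantize(x, scale, bits=2):
    # Illustrative requantization step (not the paper's code):
    # rescale the input, then round and clamp it to the signed integer
    # range representable with `bits` bits. In the paper the scale is a
    # trainable parameter; here it is fixed for demonstration.
    qmax = 2 ** (bits - 1) - 1    # e.g. +1 for 2-bit signed values
    qmin = -(2 ** (bits - 1))     # e.g. -2 for 2-bit signed values
    q = np.clip(np.round(x / scale), qmin, qmax)
    return q * scale              # dequantized value seen by the next layer

# With scale=0.5 and 2 bits, outputs are limited to {-1.0, -0.5, 0.0, 0.5}
print(requantize(np.array([-1.3, -0.2, 0.4, 2.7]), scale=0.5, bits=2))
```

The clamping makes the operation behave like a saturating non-linearity, which is why a single layer can double as both activation and bit-width adapter; making `scale` trainable (e.g. via a straight-through gradient estimator) is what turns this into the quantization-aware scheme the abstract describes.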
format Online
Article
Text
id pubmed-10222267
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-10222267 2023-05-28 Quantization-Aware NN Layers with High-throughput FPGA Implementation for Edge AI Pistellato, Mara Bergamasco, Filippo Bigaglia, Gianluca Gasparetto, Andrea Albarelli, Andrea Boschetti, Marco Passerone, Roberto Sensors (Basel) Article MDPI 2023-05-11 /pmc/articles/PMC10222267/ /pubmed/37430583 http://dx.doi.org/10.3390/s23104667 Text en © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title Quantization-Aware NN Layers with High-throughput FPGA Implementation for Edge AI
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10222267/
https://www.ncbi.nlm.nih.gov/pubmed/37430583
http://dx.doi.org/10.3390/s23104667