
Design Space Exploration of a Sparse MobileNetV2 Using High-Level Synthesis and Sparse Matrix Techniques on FPGAs

Convolution Neural Networks (CNNs) are gaining ground in deep learning and Artificial Intelligence (AI) domains, and they can benefit from rapid prototyping in order to produce efficient and low-power hardware designs. The inference process of a Deep Neural Network (DNN) is considered a computationally intensive process that requires hardware accelerators to operate in real-world scenarios due to the low latency requirements of real-time applications. As a result, High-Level Synthesis (HLS) tools are gaining popularity since they provide attractive ways to reduce design time complexity directly in register transfer level (RTL). In this paper, we implement a MobileNetV2 model using a state-of-the-art HLS tool in order to conduct a design space exploration and to provide insights on complex hardware designs which are tailored for DNN inference. Our goal is to combine design methodologies with sparsification techniques to produce hardware accelerators that achieve comparable error metrics within the same order of magnitude with the corresponding state-of-the-art systems while also significantly reducing the inference latency and resource utilization. Toward this end, we apply sparse matrix techniques on a MobileNetV2 model for efficient data representation, and we evaluate our designs in two different weight pruning approaches. Experimental results are evaluated with respect to the CIFAR-10 data set using several different design methodologies in order to fully explore their effects on the performance of the model under examination.
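The record itself contains no code, but the abstract's two key ingredients, weight pruning and a sparse matrix representation of the pruned weights, can be illustrated with a short sketch. The C++ example below applies magnitude-based pruning to a toy weight matrix, packs the surviving weights into Compressed Sparse Row (CSR) form, and runs a CSR matrix-vector product of the kind a sparse accelerator kernel would perform. The threshold, matrix sizes, and all names (CsrMatrix, prune_to_csr, spmv) are hypothetical illustrations and are not taken from the paper, which does not specify its pruning criteria or sparse format in this record.

```cpp
#include <cstddef>
#include <cmath>
#include <iostream>
#include <vector>

// Compressed Sparse Row (CSR) storage for a pruned weight matrix:
// only non-zero weights are kept, together with their column indices
// and one row-pointer entry per row (plus a final sentinel).
struct CsrMatrix {
    std::size_t rows = 0, cols = 0;
    std::vector<float> values;   // non-zero weights
    std::vector<int>   col_idx;  // column of each non-zero
    std::vector<int>   row_ptr;  // start of each row in values/col_idx
};

// Magnitude-based pruning (hypothetical criterion): drop every weight whose
// absolute value falls below `threshold`, then pack the survivors into CSR.
CsrMatrix prune_to_csr(const std::vector<std::vector<float>>& dense, float threshold) {
    CsrMatrix m;
    m.rows = dense.size();
    m.cols = dense.empty() ? 0 : dense[0].size();
    m.row_ptr.push_back(0);
    for (const auto& row : dense) {
        for (std::size_t j = 0; j < row.size(); ++j) {
            if (std::fabs(row[j]) >= threshold) {
                m.values.push_back(row[j]);
                m.col_idx.push_back(static_cast<int>(j));
            }
        }
        m.row_ptr.push_back(static_cast<int>(m.values.size()));
    }
    return m;
}

// CSR sparse matrix-vector product: y = W_sparse * x.
// Only the stored non-zeros are multiplied, which is where the latency and
// resource savings of a sparse accelerator come from.
std::vector<float> spmv(const CsrMatrix& m, const std::vector<float>& x) {
    std::vector<float> y(m.rows, 0.0f);
    for (std::size_t i = 0; i < m.rows; ++i) {
        for (int k = m.row_ptr[i]; k < m.row_ptr[i + 1]; ++k) {
            y[i] += m.values[k] * x[m.col_idx[k]];
        }
    }
    return y;
}

int main() {
    // Toy 3x4 weight matrix; in the paper the weights would instead come
    // from a trained (and pruned) MobileNetV2 layer.
    std::vector<std::vector<float>> w = {
        {0.90f, 0.02f, 0.00f, -0.75f},
        {0.01f, 0.00f, 0.40f,  0.03f},
        {0.00f, 0.60f, 0.00f,  0.00f},
    };
    CsrMatrix sparse = prune_to_csr(w, 0.1f);  // hypothetical threshold
    std::vector<float> x = {1.0f, 2.0f, 3.0f, 4.0f};
    std::vector<float> y = spmv(sparse, x);
    std::cout << "non-zeros kept: " << sparse.values.size() << "\n";
    for (float v : y) std::cout << v << " ";
    std::cout << "\n";
    return 0;
}
```

The appeal of a CSR-style layout in this setting is that both storage and multiply-accumulate work scale with the number of surviving weights rather than with the full dense layer size: at, say, 90% sparsity only about one tenth of the original multiplications remain, at the cost of storing one column index per non-zero and one row pointer per row.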


Bibliographic Details
Main Authors: Tragoudaras, Antonios, Stoikos, Pavlos, Fanaras, Konstantinos, Tziouvaras, Athanasios, Floros, George, Dimitriou, Georgios, Kolomvatsos, Kostas, Stamoulis, Georgios
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9229434/
https://www.ncbi.nlm.nih.gov/pubmed/35746100
http://dx.doi.org/10.3390/s22124318
author Tragoudaras, Antonios
Stoikos, Pavlos
Fanaras, Konstantinos
Tziouvaras, Athanasios
Floros, George
Dimitriou, Georgios
Kolomvatsos, Kostas
Stamoulis, Georgios
collection PubMed
description Convolution Neural Networks (CNNs) are gaining ground in deep learning and Artificial Intelligence (AI) domains, and they can benefit from rapid prototyping in order to produce efficient and low-power hardware designs. The inference process of a Deep Neural Network (DNN) is considered a computationally intensive process that requires hardware accelerators to operate in real-world scenarios due to the low latency requirements of real-time applications. As a result, High-Level Synthesis (HLS) tools are gaining popularity since they provide attractive ways to reduce design time complexity directly in register transfer level (RTL). In this paper, we implement a MobileNetV2 model using a state-of-the-art HLS tool in order to conduct a design space exploration and to provide insights on complex hardware designs which are tailored for DNN inference. Our goal is to combine design methodologies with sparsification techniques to produce hardware accelerators that achieve comparable error metrics within the same order of magnitude with the corresponding state-of-the-art systems while also significantly reducing the inference latency and resource utilization. Toward this end, we apply sparse matrix techniques on a MobileNetV2 model for efficient data representation, and we evaluate our designs in two different weight pruning approaches. Experimental results are evaluated with respect to the CIFAR-10 data set using several different design methodologies in order to fully explore their effects on the performance of the model under examination.
format Online
Article
Text
id pubmed-9229434
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-9229434 2022-06-25
Design Space Exploration of a Sparse MobileNetV2 Using High-Level Synthesis and Sparse Matrix Techniques on FPGAs
Tragoudaras, Antonios
Stoikos, Pavlos
Fanaras, Konstantinos
Tziouvaras, Athanasios
Floros, George
Dimitriou, Georgios
Kolomvatsos, Kostas
Stamoulis, Georgios
Sensors (Basel)
Article
Convolution Neural Networks (CNNs) are gaining ground in deep learning and Artificial Intelligence (AI) domains, and they can benefit from rapid prototyping in order to produce efficient and low-power hardware designs. The inference process of a Deep Neural Network (DNN) is considered a computationally intensive process that requires hardware accelerators to operate in real-world scenarios due to the low latency requirements of real-time applications. As a result, High-Level Synthesis (HLS) tools are gaining popularity since they provide attractive ways to reduce design time complexity directly in register transfer level (RTL). In this paper, we implement a MobileNetV2 model using a state-of-the-art HLS tool in order to conduct a design space exploration and to provide insights on complex hardware designs which are tailored for DNN inference. Our goal is to combine design methodologies with sparsification techniques to produce hardware accelerators that achieve comparable error metrics within the same order of magnitude with the corresponding state-of-the-art systems while also significantly reducing the inference latency and resource utilization. Toward this end, we apply sparse matrix techniques on a MobileNetV2 model for efficient data representation, and we evaluate our designs in two different weight pruning approaches. Experimental results are evaluated with respect to the CIFAR-10 data set using several different design methodologies in order to fully explore their effects on the performance of the model under examination.
MDPI 2022-06-07 /pmc/articles/PMC9229434/ /pubmed/35746100 http://dx.doi.org/10.3390/s22124318 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title Design Space Exploration of a Sparse MobileNetV2 Using High-Level Synthesis and Sparse Matrix Techniques on FPGAs
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9229434/
https://www.ncbi.nlm.nih.gov/pubmed/35746100
http://dx.doi.org/10.3390/s22124318