Energy-Efficient Hardware Implementation of Fully Connected Artificial Neural Networks Using Approximate Arithmetic Blocks
In this paper, we explore efficient hardware implementation of feedforward artificial neural networks (ANNs) using approximate adders and multipliers. Due to a large area requirement in a parallel architecture, the ANNs are implemented under the time-multiplexed architecture where computing resources are re-used in the multiply accumulate (MAC) blocks.
Main Authors: | Esmali Nojehdeh, Mohammadreza; Altun, Mustafa |
Format: | Online Article Text |
Language: | English |
Published: | Springer US, 2023 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10123482/ https://www.ncbi.nlm.nih.gov/pubmed/37359149 http://dx.doi.org/10.1007/s00034-023-02363-w |
_version_ | 1785029668608409600 |
author | Esmali Nojehdeh, Mohammadreza; Altun, Mustafa
author_facet | Esmali Nojehdeh, Mohammadreza; Altun, Mustafa
author_sort | Esmali Nojehdeh, Mohammadreza |
collection | PubMed |
description | In this paper, we explore efficient hardware implementation of feedforward artificial neural networks (ANNs) using approximate adders and multipliers. Due to a large area requirement in a parallel architecture, the ANNs are implemented under the time-multiplexed architecture where computing resources are re-used in the multiply accumulate (MAC) blocks. The efficient hardware implementation of ANNs is realized by replacing the exact adders and multipliers in the MAC blocks with approximate ones, taking the hardware accuracy into account. Additionally, an algorithm to determine the approximation level of multipliers and adders based on the expected accuracy is proposed. As an application, the MNIST and SVHN databases are considered. To examine the efficiency of the proposed method, various architectures and structures of ANNs are realized. Experimental results show that the ANNs designed using the proposed approximate multiplier have a smaller area and consume less energy than those designed using previously proposed prominent approximate multipliers. It is also observed that the use of both approximate adders and multipliers yields, respectively, up to 50% and 10% reduction in energy consumption and area of the ANN design with a small deviation or better hardware accuracy when compared to the exact adders and multipliers. |
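The description above outlines the core idea: a single time-multiplexed MAC unit is reused across a neuron's inputs, and its exact adder and multiplier are replaced by approximate ones. The following is a minimal behavioral sketch of that idea; the truncation-based multiplier, the OR-based lower-bit adder, the bit widths, and the `trunc_bits` parameter are illustrative assumptions for this sketch, not the specific approximate blocks proposed in the paper.

```python
# Behavioral sketch of a time-multiplexed MAC built from approximate
# arithmetic. The approximation scheme (operand truncation, carry-free
# OR for the low bits) is an assumption for illustration only.

def approx_multiply(a: int, b: int, trunc_bits: int = 4) -> int:
    """Approximate unsigned multiply: drop the low `trunc_bits` bits of
    each operand before multiplying, then shift the product back."""
    a_hi = a >> trunc_bits
    b_hi = b >> trunc_bits
    return (a_hi * b_hi) << (2 * trunc_bits)

def approx_add(x: int, y: int, trunc_bits: int = 4) -> int:
    """Approximate adder: the low `trunc_bits` bits are OR-ed instead of
    added, so no carry propagates through the least-significant segment."""
    mask = (1 << trunc_bits) - 1
    low = (x & mask) | (y & mask)                       # cheap, carry-free low part
    high = ((x >> trunc_bits) + (y >> trunc_bits)) << trunc_bits
    return high | low

def neuron_mac(inputs, weights, trunc_bits: int = 4) -> int:
    """Time-multiplexed MAC: one multiplier and one adder are reused
    across all input/weight pairs of a neuron (one pair per 'cycle')."""
    acc = 0
    for x, w in zip(inputs, weights):
        acc = approx_add(acc, approx_multiply(x, w, trunc_bits), trunc_bits)
    return acc

# Example: 8-bit inputs and weights, exact vs. approximate dot product.
inputs  = [200, 35, 120, 90]
weights = [ 17, 64,   3, 55]
exact  = sum(x * w for x, w in zip(inputs, weights))
approx = neuron_mac(inputs, weights)
print(exact, approx)  # the approximate result deviates; the error grows with trunc_bits
```

In the paper's setting, the degree of approximation (here the hypothetical `trunc_bits` knob) would be chosen by the proposed algorithm so that the hardware accuracy stays within the expected accuracy.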
format | Online Article Text |
id | pubmed-10123482 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Springer US |
record_format | MEDLINE/PubMed |
spelling | pubmed-10123482 2023-04-25 Energy-Efficient Hardware Implementation of Fully Connected Artificial Neural Networks Using Approximate Arithmetic Blocks Esmali Nojehdeh, Mohammadreza; Altun, Mustafa Circuits Syst Signal Process Article In this paper, we explore efficient hardware implementation of feedforward artificial neural networks (ANNs) using approximate adders and multipliers. Due to a large area requirement in a parallel architecture, the ANNs are implemented under the time-multiplexed architecture where computing resources are re-used in the multiply accumulate (MAC) blocks. The efficient hardware implementation of ANNs is realized by replacing the exact adders and multipliers in the MAC blocks with approximate ones, taking the hardware accuracy into account. Additionally, an algorithm to determine the approximation level of multipliers and adders based on the expected accuracy is proposed. As an application, the MNIST and SVHN databases are considered. To examine the efficiency of the proposed method, various architectures and structures of ANNs are realized. Experimental results show that the ANNs designed using the proposed approximate multiplier have a smaller area and consume less energy than those designed using previously proposed prominent approximate multipliers. It is also observed that the use of both approximate adders and multipliers yields, respectively, up to 50% and 10% reduction in energy consumption and area of the ANN design with a small deviation or better hardware accuracy when compared to the exact adders and multipliers. Springer US 2023-04-24 /pmc/articles/PMC10123482/ /pubmed/37359149 http://dx.doi.org/10.1007/s00034-023-02363-w Text en © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2023. Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law. This article is made available via the PMC Open Access Subset for unrestricted research re-use and secondary analysis in any form or by any means with acknowledgement of the original source. These permissions are granted for the duration of the World Health Organization (WHO) declaration of COVID-19 as a global pandemic. |
spellingShingle | Article Esmali Nojehdeh, Mohammadreza Altun, Mustafa Energy-Efficient Hardware Implementation of Fully Connected Artificial Neural Networks Using Approximate Arithmetic Blocks |
title | Energy-Efficient Hardware Implementation of Fully Connected Artificial Neural Networks Using Approximate Arithmetic Blocks |
title_full | Energy-Efficient Hardware Implementation of Fully Connected Artificial Neural Networks Using Approximate Arithmetic Blocks |
title_fullStr | Energy-Efficient Hardware Implementation of Fully Connected Artificial Neural Networks Using Approximate Arithmetic Blocks |
title_full_unstemmed | Energy-Efficient Hardware Implementation of Fully Connected Artificial Neural Networks Using Approximate Arithmetic Blocks |
title_short | Energy-Efficient Hardware Implementation of Fully Connected Artificial Neural Networks Using Approximate Arithmetic Blocks |
title_sort | energy-efficient hardware implementation of fully connected artificial neural networks using approximate arithmetic blocks |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10123482/ https://www.ncbi.nlm.nih.gov/pubmed/37359149 http://dx.doi.org/10.1007/s00034-023-02363-w |
work_keys_str_mv | AT esmalinojehdehmohammadreza energyefficienthardwareimplementationoffullyconnectedartificialneuralnetworksusingapproximatearithmeticblocks AT altunmustafa energyefficienthardwareimplementationoffullyconnectedartificialneuralnetworksusingapproximatearithmeticblocks |