
On the Reduction of Computational Complexity of Deep Convolutional Neural Networks †

Deep convolutional neural networks (ConvNets), which are at the heart of many new emerging applications, achieve remarkable performance in audio and visual recognition tasks. Unfortunately, achieving accuracy often implies significant computational costs, limiting deployability. In modern ConvNets it is typical for the convolution layers to consume the vast majority of computational resources during inference. This has made the acceleration of these layers an important research area in academia and industry. In this paper, we examine the effects of co-optimizing the internal structures of the convolutional layers and underlying implementation of fundamental convolution operation. We demonstrate that a combination of these methods can have a big impact on the overall speedup of a ConvNet, achieving a ten-fold increase over baseline. We also introduce a new class of fast one-dimensional (1D) convolutions for ConvNets using the Toom–Cook algorithm. We show that our proposed scheme is mathematically well-grounded, robust, and does not require any time-consuming retraining, while still achieving speedups solely from convolutional layers with no loss in baseline accuracy.


Bibliographic Details
Main Authors: Maji, Partha; Mullins, Robert
Format: Online Article Text
Language: English
Published: MDPI 2018
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7512823/
https://www.ncbi.nlm.nih.gov/pubmed/33265396
http://dx.doi.org/10.3390/e20040305
_version_ 1783586246755352576
author Maji, Partha
Mullins, Robert
author_facet Maji, Partha
Mullins, Robert
author_sort Maji, Partha
collection PubMed
description Deep convolutional neural networks (ConvNets), which are at the heart of many new emerging applications, achieve remarkable performance in audio and visual recognition tasks. Unfortunately, achieving accuracy often implies significant computational costs, limiting deployability. In modern ConvNets it is typical for the convolution layers to consume the vast majority of computational resources during inference. This has made the acceleration of these layers an important research area in academia and industry. In this paper, we examine the effects of co-optimizing the internal structures of the convolutional layers and underlying implementation of fundamental convolution operation. We demonstrate that a combination of these methods can have a big impact on the overall speedup of a ConvNet, achieving a ten-fold increase over baseline. We also introduce a new class of fast one-dimensional (1D) convolutions for ConvNets using the Toom–Cook algorithm. We show that our proposed scheme is mathematically well-grounded, robust, and does not require any time-consuming retraining, while still achieving speedups solely from convolutional layers with no loss in baseline accuracy.
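The abstract above mentions a class of fast 1D convolutions built on the Toom–Cook algorithm. The paper itself is not reproduced in this record, but the core idea behind Toom–Cook convolution (a linear convolution is a polynomial product, which can be computed by evaluating both polynomials at a few points, multiplying pointwise, and interpolating) can be sketched briefly. The function names and the Toom-2 point set {0, 1, ∞} below are illustrative choices for the smallest instance of the scheme, not the authors' actual construction:

```python
def toom2_linear_conv(a, b):
    """Linear convolution of two length-2 sequences (degree-1 polynomials)
    via Toom-2 evaluation/interpolation: evaluate both polynomials at the
    points 0, 1, and infinity, multiply pointwise, then interpolate.
    This uses 3 multiplications instead of the naive 4."""
    a0, a1 = a
    b0, b1 = b
    # Evaluation phase: 3 pointwise products
    p0 = a0 * b0                  # value at x = 0
    p1 = (a0 + a1) * (b0 + b1)    # value at x = 1
    p_inf = a1 * b1               # "value at infinity": leading coefficients
    # Interpolation phase: recover c(x) = c0 + c1*x + c2*x^2
    return [p0, p1 - p0 - p_inf, p_inf]

def naive_conv(a, b):
    """Reference O(n*m) convolution used to check the fast version."""
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

print(toom2_linear_conv([1, 2], [3, 4]))  # -> [3, 10, 8]
print(naive_conv([1, 2], [3, 4]))         # -> [3, 10, 8]
```

Larger Toom–Cook instances follow the same pattern with more evaluation points; the savings in multiplications are what makes such schemes attractive for accelerating convolutional layers.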
format Online
Article
Text
id pubmed-7512823
institution National Center for Biotechnology Information
language English
publishDate 2018
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-75128232020-11-09 On the Reduction of Computational Complexity of Deep Convolutional Neural Networks † Maji, Partha Mullins, Robert Entropy (Basel) Article Deep convolutional neural networks (ConvNets), which are at the heart of many new emerging applications, achieve remarkable performance in audio and visual recognition tasks. Unfortunately, achieving accuracy often implies significant computational costs, limiting deployability. In modern ConvNets it is typical for the convolution layers to consume the vast majority of computational resources during inference. This has made the acceleration of these layers an important research area in academia and industry. In this paper, we examine the effects of co-optimizing the internal structures of the convolutional layers and underlying implementation of fundamental convolution operation. We demonstrate that a combination of these methods can have a big impact on the overall speedup of a ConvNet, achieving a ten-fold increase over baseline. We also introduce a new class of fast one-dimensional (1D) convolutions for ConvNets using the Toom–Cook algorithm. We show that our proposed scheme is mathematically well-grounded, robust, and does not require any time-consuming retraining, while still achieving speedups solely from convolutional layers with no loss in baseline accuracy. MDPI 2018-04-23 /pmc/articles/PMC7512823/ /pubmed/33265396 http://dx.doi.org/10.3390/e20040305 Text en © 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Maji, Partha
Mullins, Robert
On the Reduction of Computational Complexity of Deep Convolutional Neural Networks †
title On the Reduction of Computational Complexity of Deep Convolutional Neural Networks †
title_full On the Reduction of Computational Complexity of Deep Convolutional Neural Networks †
title_fullStr On the Reduction of Computational Complexity of Deep Convolutional Neural Networks †
title_full_unstemmed On the Reduction of Computational Complexity of Deep Convolutional Neural Networks †
title_short On the Reduction of Computational Complexity of Deep Convolutional Neural Networks †
title_sort on the reduction of computational complexity of deep convolutional neural networks †
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7512823/
https://www.ncbi.nlm.nih.gov/pubmed/33265396
http://dx.doi.org/10.3390/e20040305
work_keys_str_mv AT majipartha onthereductionofcomputationalcomplexityofdeepconvolutionalneuralnetworks
AT mullinsrobert onthereductionofcomputationalcomplexityofdeepconvolutionalneuralnetworks