
Joint design and compression of convolutional neural networks as a Bi-level optimization problem

Bibliographic Details
Main Authors: Louati, Hassen; Bechikh, Slim; Louati, Ali; Aldaej, Abdulaziz; Said, Lamjed Ben
Format: Online Article Text
Language: English
Published: Springer London, 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9112272/
https://www.ncbi.nlm.nih.gov/pubmed/35599971
http://dx.doi.org/10.1007/s00521-022-07331-0
collection PubMed
description Over the last decade, deep neural networks have shown great success in machine learning and computer vision. The convolutional neural network (CNN) is currently one of the most successful models, having been applied in a wide variety of domains, including pattern recognition, medical diagnosis and signal processing. Despite CNNs' impressive performance, their architectural design remains a significant challenge for researchers and practitioners. Hyperparameter selection is extremely important for these networks because the search space grows exponentially with the number of layers. Moreover, all existing classical and evolutionary pruning methods take as input an already pre-trained or designed architecture; none of them takes pruning into account during the design process. Yet, to evaluate the quality and possible compactness of any generated architecture, filter pruning should be applied before evaluation on the data set to compute the classification error. For instance, a medium-quality architecture in terms of classification could become a very light and accurate architecture after pruning, and vice versa; many such cases are possible, and their number is huge. This motivated us to frame the whole process as a bi-level optimization problem in which: (1) architecture generation is performed at the upper level (with minimum NB and NNB), while (2) filter-pruning optimization is performed at the lower level. Motivated by the success of evolutionary algorithms (EAs) in bi-level optimization, we use the recently proposed co-evolutionary migration-based algorithm (CEMBA) as a search engine to address our bi-level architectural optimization problem.
The performance of our suggested technique, called Bi-CNN-D-C (Bi-level convolutional neural network design and compression), is evaluated on the widely used image classification benchmarks CIFAR-10, CIFAR-100 and ImageNet. Our approach is validated by a set of comparative experiments against relevant state-of-the-art architectures.
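The core idea of the abstract — a candidate architecture is scored only after its filter pruning has been optimized at the lower level — can be illustrated with a toy nested evolutionary loop. This is a minimal sketch, not the paper's CEMBA algorithm: the fitness functions, the blocks-to-filters mapping, and all parameters here are invented stand-ins for illustration.

```python
import random

random.seed(0)

# Toy stand-ins for the paper's two decision levels:
# upper level = architecture size (number of conv blocks),
# lower level = a binary filter-pruning mask for that architecture.

def lower_level_fitness(mask):
    # Hypothetical proxy score: best when roughly half the filters
    # survive (a stand-in for the accuracy/compactness trade-off).
    kept = sum(mask)
    return -abs(kept - len(mask) // 2)

def optimize_pruning(n_filters, generations=20, pop_size=8):
    # Lower-level search: evolve a pruning mask for a FIXED architecture.
    pop = [[random.randint(0, 1) for _ in range(n_filters)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lower_level_fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        for p in parents:
            child = p[:]
            child[random.randrange(n_filters)] ^= 1  # bit-flip mutation
            children.append(child)
        pop = parents + children
    best = max(pop, key=lower_level_fitness)
    return best, lower_level_fitness(best)

def upper_level_search(candidate_blocks=(2, 4, 6, 8)):
    # Upper-level search: each candidate architecture is evaluated only
    # AFTER its pruning has been optimized at the lower level.
    best_arch, best_score = None, float("-inf")
    for n_blocks in candidate_blocks:
        n_filters = 4 * n_blocks               # toy mapping: blocks -> filters
        _, pruned_score = optimize_pruning(n_filters)
        score = pruned_score - 0.1 * n_blocks  # prefer smaller architectures
        if score > best_score:
            best_arch, best_score = n_blocks, score
    return best_arch

print(upper_level_search())
```

The point of the nesting is the one made in the abstract: ranking architectures by their un-pruned quality can be misleading, so the upper level compares architectures only in their pruned form.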
id pubmed-9112272
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
spelling pubmed-9112272 2022-05-17
Journal: Neural Comput Appl (Original Article)
Springer London, 2022-05-17
License: © The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2022. This article is made available via the PMC Open Access Subset for unrestricted research re-use and secondary analysis in any form or by any means with acknowledgement of the original source. These permissions are granted for the duration of the World Health Organization (WHO) declaration of COVID-19 as a global pandemic.
topic Original Article