Efficient shallow learning as an alternative to deep learning

The realization of complex classification tasks requires training of deep learning (DL) architectures consisting of tens or even hundreds of convolutional and fully connected hidden layers, which is far from the reality of the human brain. According to the DL rationale, the first convolutional layer...

Bibliographic Details
Main Authors: Meir, Yuval, Tevet, Ofek, Tzach, Yarden, Hodassman, Shiri, Gross, Ronit D., Kanter, Ido
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10119101/
https://www.ncbi.nlm.nih.gov/pubmed/37080998
http://dx.doi.org/10.1038/s41598-023-32559-8
_version_ 1785028951023812608
author Meir, Yuval
Tevet, Ofek
Tzach, Yarden
Hodassman, Shiri
Gross, Ronit D.
Kanter, Ido
author_facet Meir, Yuval
Tevet, Ofek
Tzach, Yarden
Hodassman, Shiri
Gross, Ronit D.
Kanter, Ido
author_sort Meir, Yuval
collection PubMed
description The realization of complex classification tasks requires training of deep learning (DL) architectures consisting of tens or even hundreds of convolutional and fully connected hidden layers, which is far from the reality of the human brain. According to the DL rationale, the first convolutional layer reveals localized patterns in the input and large-scale patterns in the following layers, until it reliably characterizes a class of inputs. Here, we demonstrate that with a fixed ratio between the depths of the first and second convolutional layers, the error rates of the generalized shallow LeNet architecture, consisting of only five layers, decay as a power law with the number of filters in the first convolutional layer. The extrapolation of this power law indicates that the generalized LeNet can achieve small error rates that were previously obtained for the CIFAR-10 database using DL architectures. A power law with a similar exponent also characterizes the generalized VGG-16 architecture. However, this results in a significantly increased number of operations required to achieve a given error rate with respect to LeNet. This power law phenomenon governs various generalized LeNet and VGG-16 architectures, hinting at its universal behavior and suggesting a quantitative hierarchical time–space complexity among machine learning architectures. Additionally, the conservation law along the convolutional layers, which is the square-root of their size times their depth, is found to asymptotically minimize error rates. The efficient shallow learning that is demonstrated in this study calls for further quantitative examination using various databases and architectures and its accelerated implementation using future dedicated hardware developments.
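The description above reports two quantitative observations: with the depth of the second convolutional layer kept at a fixed ratio to the first, the test error of a five-layer generalized LeNet decays as a power law in the number of first-layer filters, error(d1) ≈ A · d1^(-ρ), and extrapolating that fit predicts the error attainable at larger widths. The following is a minimal sketch, assuming PyTorch and NumPy, of how such a scan could be set up: a five-layer LeNet-style network for 32x32x3 inputs (e.g. CIFAR-10) parameterized by d1, and a log-log linear fit used for extrapolation. The kernel sizes, the pooling, the depth_ratio value, and the example error rates are illustrative assumptions, not values taken from the paper.

import numpy as np
import torch
import torch.nn as nn


class GeneralizedLeNet(nn.Module):
    """Five-layer LeNet-style network (2 conv + 3 fully connected) for 32x32x3 inputs.

    d1 sets the number of filters in the first convolutional layer; the second
    convolutional layer keeps a fixed depth ratio to the first, mirroring the
    scaling scheme described in the abstract. Kernel sizes and the default
    ratio are assumptions for illustration only.
    """

    def __init__(self, d1: int, depth_ratio: float = 2.5, num_classes: int = 10):
        super().__init__()
        d2 = max(1, int(round(depth_ratio * d1)))
        self.features = nn.Sequential(
            nn.Conv2d(3, d1, kernel_size=5),    # 32x32 -> 28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                    # 28x28 -> 14x14
            nn.Conv2d(d1, d2, kernel_size=5),   # 14x14 -> 10x10
            nn.ReLU(),
            nn.MaxPool2d(2),                    # 10x10 -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(d2 * 5 * 5, 120),
            nn.ReLU(),
            nn.Linear(120, 84),
            nn.ReLU(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))


def fit_power_law(d1_values, error_rates):
    """Fit error(d1) ~ A * d1**(-rho) by linear regression in log-log space."""
    log_d1 = np.log(np.asarray(d1_values, dtype=float))
    log_err = np.log(np.asarray(error_rates, dtype=float))
    slope, intercept = np.polyfit(log_d1, log_err, 1)
    return float(np.exp(intercept)), float(-slope)  # (A, rho)


if __name__ == "__main__":
    # Hypothetical error rates for increasing d1, used only to demonstrate the fit;
    # in practice each value would come from training GeneralizedLeNet(d1) on CIFAR-10.
    d1_values = [6, 12, 24, 48, 96]
    error_rates = [0.35, 0.29, 0.24, 0.20, 0.165]
    A, rho = fit_power_law(d1_values, error_rates)
    print(f"error(d1) ~ {A:.3f} * d1^(-{rho:.3f})")
    # Extrapolate the fitted law to a wider first layer, as the paper does.
    print(f"extrapolated error at d1 = 512: {A * 512 ** (-rho):.3f}")

The fit itself is architecture-agnostic; the same two pieces could be reused to examine the abstract's claim that the generalized VGG-16 follows a power law with a similar exponent while requiring more operations to reach a given error rate.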
format Online
Article
Text
id pubmed-10119101
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Nature Publishing Group UK
record_format MEDLINE/PubMed
spelling pubmed-10119101 2023-04-22 Efficient shallow learning as an alternative to deep learning Meir, Yuval Tevet, Ofek Tzach, Yarden Hodassman, Shiri Gross, Ronit D. Kanter, Ido Sci Rep Article The realization of complex classification tasks requires training of deep learning (DL) architectures consisting of tens or even hundreds of convolutional and fully connected hidden layers, which is far from the reality of the human brain. According to the DL rationale, the first convolutional layer reveals localized patterns in the input and large-scale patterns in the following layers, until it reliably characterizes a class of inputs. Here, we demonstrate that with a fixed ratio between the depths of the first and second convolutional layers, the error rates of the generalized shallow LeNet architecture, consisting of only five layers, decay as a power law with the number of filters in the first convolutional layer. The extrapolation of this power law indicates that the generalized LeNet can achieve small error rates that were previously obtained for the CIFAR-10 database using DL architectures. A power law with a similar exponent also characterizes the generalized VGG-16 architecture. However, this results in a significantly increased number of operations required to achieve a given error rate with respect to LeNet. This power law phenomenon governs various generalized LeNet and VGG-16 architectures, hinting at its universal behavior and suggesting a quantitative hierarchical time–space complexity among machine learning architectures. Additionally, the conservation law along the convolutional layers, which is the square-root of their size times their depth, is found to asymptotically minimize error rates. The efficient shallow learning that is demonstrated in this study calls for further quantitative examination using various databases and architectures and its accelerated implementation using future dedicated hardware developments. Nature Publishing Group UK 2023-04-20 /pmc/articles/PMC10119101/ /pubmed/37080998 http://dx.doi.org/10.1038/s41598-023-32559-8 Text en © The Author(s) 2023 https://creativecommons.org/licenses/by/4.0/ Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ (https://creativecommons.org/licenses/by/4.0/) .
spellingShingle Article
Meir, Yuval
Tevet, Ofek
Tzach, Yarden
Hodassman, Shiri
Gross, Ronit D.
Kanter, Ido
Efficient shallow learning as an alternative to deep learning
title Efficient shallow learning as an alternative to deep learning
title_full Efficient shallow learning as an alternative to deep learning
title_fullStr Efficient shallow learning as an alternative to deep learning
title_full_unstemmed Efficient shallow learning as an alternative to deep learning
title_short Efficient shallow learning as an alternative to deep learning
title_sort efficient shallow learning as an alternative to deep learning
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10119101/
https://www.ncbi.nlm.nih.gov/pubmed/37080998
http://dx.doi.org/10.1038/s41598-023-32559-8
work_keys_str_mv AT meiryuval efficientshallowlearningasanalternativetodeeplearning
AT tevetofek efficientshallowlearningasanalternativetodeeplearning
AT tzachyarden efficientshallowlearningasanalternativetodeeplearning
AT hodassmanshiri efficientshallowlearningasanalternativetodeeplearning
AT grossronitd efficientshallowlearningasanalternativetodeeplearning
AT kanterido efficientshallowlearningasanalternativetodeeplearning