
Rosenblatt’s First Theorem and Frugality of Deep Learning

Rosenblatt’s first theorem about the omnipotence of shallow networks states that elementary perceptrons can solve any classification problem if there are no discrepancies in the training set. Minsky and Papert considered elementary perceptrons with restrictions on the neural inputs: a bounded nu...

Full description

Bibliographic Details
Main Authors: Kirdin, Alexander, Sidorov, Sergey, Zolotykh, Nikolai
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9689667/
https://www.ncbi.nlm.nih.gov/pubmed/36359726
http://dx.doi.org/10.3390/e24111635
author Kirdin, Alexander
Sidorov, Sergey
Zolotykh, Nikolai
author_sort Kirdin, Alexander
collection PubMed
description Rosenblatt’s first theorem about the omnipotence of shallow networks states that elementary perceptrons can solve any classification problem if there are no discrepancies in the training set. Minsky and Papert considered elementary perceptrons with restrictions on the neural inputs: a bounded number of connections or a relatively small diameter of the receptive field for each neuron at the hidden layer. They proved that under these constraints, an elementary perceptron cannot solve some problems, such as the connectivity of input images or the parity of pixels in them. In this note, we demonstrated Rosenblatt’s first theorem at work, showed how an elementary perceptron can solve a version of the travel maze problem, and analysed the complexity of that solution. We also constructed a deep network algorithm for the same problem; it is much more efficient. The shallow network uses an exponentially large number of neurons on the hidden layer (Rosenblatt’s A-elements), whereas for the deep network, second-order polynomial complexity is sufficient. We demonstrated that for the same complex problem, the deep network can be much smaller, and revealed a heuristic behind this effect.
format Online
Article
Text
id pubmed-9689667
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-9689667 2022-11-25 Rosenblatt’s First Theorem and Frugality of Deep Learning Kirdin, Alexander Sidorov, Sergey Zolotykh, Nikolai Entropy (Basel) Article MDPI 2022-11-10 /pmc/articles/PMC9689667/ /pubmed/36359726 http://dx.doi.org/10.3390/e24111635 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title Rosenblatt’s First Theorem and Frugality of Deep Learning
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9689667/
https://www.ncbi.nlm.nih.gov/pubmed/36359726
http://dx.doi.org/10.3390/e24111635
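
As context for the abstract's central contrast (a shallow elementary perceptron with exponentially many A-elements versus a polynomial-size deep network), the sketch below illustrates Rosenblatt's elementary-perceptron setting: a fixed layer of randomly wired threshold A-elements feeding a single trainable output R-element. This is a hypothetical toy, not the maze construction from the paper; the 3-bit parity task, the hidden-layer size n_hidden, and the random wiring are all assumptions made for illustration.

# A minimal sketch (assumed for illustration, not the authors' construction)
# of a Rosenblatt elementary perceptron: a fixed layer of random threshold
# A-elements followed by one trainable threshold output (R-element).
import numpy as np

rng = np.random.default_rng(0)

# Toy task (assumed): 3-bit parity, a problem that perceptrons with limited
# receptive fields cannot solve but an unrestricted one can.
X = np.array([[int(b) for b in f"{i:03b}"] for i in range(8)], dtype=float)
y = X.sum(axis=1).astype(int) % 2             # parity labels in {0, 1}

n_hidden = 64                                 # number of A-elements (assumed)
W = rng.normal(size=(n_hidden, X.shape[1]))   # fixed random connections
b = rng.normal(size=n_hidden)                 # fixed random thresholds
A = (X @ W.T + b > 0).astype(float)           # binary A-element outputs

# Only the output weights are trained, using the classic perceptron rule.
# If the random A-elements separate the training set (very likely for
# enough of them), this loop terminates with zero errors.
w_out, theta = np.zeros(n_hidden), 0.0
for epoch in range(1000):
    errors = 0
    for a, t in zip(A, y):
        pred = int(a @ w_out - theta > 0)
        if pred != t:
            w_out += (t - pred) * a           # perceptron update
            theta -= (t - pred)
            errors += 1
    if errors == 0:                           # training set fully separated
        break

print("predictions:", ((A @ w_out - theta) > 0).astype(int))
print("targets:    ", y)

By Rosenblatt's first theorem, some number of A-elements always suffices for a consistent training set; the paper's point is that for problems like the travel maze that number grows exponentially with input size, whereas a deep, structured network achieves the same result with second-order polynomial resources.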