Doing the Impossible: Why Neural Networks Can Be Trained at All


Bibliographic Details
Main Authors: Hodas, Nathan O., Stinis, Panos
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2018
Subjects: Psychology
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6052125/
https://www.ncbi.nlm.nih.gov/pubmed/30050485
http://dx.doi.org/10.3389/fpsyg.2018.01185
collection PubMed
description As deep neural networks grow in size, from thousands to millions to billions of weights, the performance of those networks becomes limited by our ability to accurately train them. A common naive question arises: if we have a system with billions of degrees of freedom, don't we also need billions of samples to train it? Of course, the success of deep learning indicates that reliable models can be learned with reasonable amounts of data. Similar questions arise in protein folding, spin glasses and biological neural networks. With effectively infinite potential folding/spin/wiring configurations, how does the system find the precise arrangement that leads to useful and robust results? Simple sampling of the possible configurations until an optimal one is reached is not a viable option even if one waited for the age of the universe. On the contrary, there appears to be a mechanism in the above phenomena that forces them to achieve configurations that live on a low-dimensional manifold, avoiding the curse of dimensionality. In the current work we use the concept of mutual information between successive layers of a deep neural network to elucidate this mechanism and suggest possible ways of exploiting it to accelerate training. We show that adding structure to the neural network leads to higher mutual information between layers. High mutual information between layers implies that the effective number of free parameters is exponentially smaller than the raw number of tunable weights, providing insight into why neural networks with far more weights than training points can be reliably trained.
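The abstract's central quantity, the mutual information between activations of successive layers, can be illustrated with a minimal histogram-based estimator. This is a hypothetical sketch for intuition only, not the authors' code: the function `mutual_information`, the toy "layers" `h1` and `h2`, and the bin count are all assumptions made here.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram-based estimate (in nats) of MI between two 1-D activation vectors."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()                  # joint distribution p(x, y)
    px = pxy.sum(axis=1, keepdims=True)        # marginal p(x), column vector
    py = pxy.sum(axis=0, keepdims=True)        # marginal p(y), row vector
    nz = pxy > 0                               # avoid log(0) on empty bins
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

# Toy "network": layer 2 is a noisy deterministic function of layer 1,
# mimicking the structured coupling the abstract describes.
rng = np.random.default_rng(0)
h1 = rng.normal(size=10_000)                       # layer-1 activations
h2 = np.tanh(h1) + 0.1 * rng.normal(size=10_000)   # layer-2 depends on layer 1
h_indep = rng.normal(size=10_000)                  # unrelated activations

mi_coupled = mutual_information(h1, h2)
mi_indep = mutual_information(h1, h_indep)
```

When layers are strongly coupled, `mi_coupled` is large, meaning knowing one layer pins down the other, so the layers do not vary independently; for unrelated signals `mi_indep` stays near zero. This is the sense in which high inter-layer mutual information reduces the effective number of free parameters.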
id pubmed-6052125
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
Published online: 2018-07-12. Copyright © 2018 Hodas and Stinis.
http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
topic Psychology