High-dimensional dynamics of generalization error in neural networks


Bibliographic Details
Main Authors: Advani, Madhu S., Saxe, Andrew M., Sompolinsky, Haim
Format: Online Article Text
Language: English
Published: Pergamon Press 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7685244/
https://www.ncbi.nlm.nih.gov/pubmed/33022471
http://dx.doi.org/10.1016/j.neunet.2020.08.022
_version_ 1783613148797861888
author Advani, Madhu S.
Saxe, Andrew M.
Sompolinsky, Haim
author_facet Advani, Madhu S.
Saxe, Andrew M.
Sompolinsky, Haim
author_sort Advani, Madhu S.
collection PubMed
description We perform an analysis of the average generalization dynamics of large neural networks trained using gradient descent. We study the practically relevant “high-dimensional” regime where the number of free parameters in the network is on the order of, or even larger than, the number of examples in the dataset. Using random matrix theory and exact solutions in linear models, we derive the generalization error and training error dynamics of learning and analyze how they depend on the dimensionality of the data and the signal-to-noise ratio of the learning problem. We find that the dynamics of gradient descent learning naturally protect against overtraining and overfitting in large networks. Overtraining is worst at intermediate network sizes, when the effective number of free parameters equals the number of samples, and thus can be reduced by making a network smaller or larger. Additionally, in the high-dimensional regime, low generalization error requires starting with small initial weights. We then turn to non-linear neural networks and show that making networks very large does not harm their generalization performance. On the contrary, it can in fact reduce overtraining, even without early stopping or regularization of any sort. We identify two novel phenomena underlying this behavior in overcomplete models: first, there is a frozen subspace of the weights in which no learning occurs under gradient descent; and second, the statistical properties of the high-dimensional regime yield better-conditioned input correlations which protect against overtraining. We demonstrate that standard application of theories such as Rademacher complexity is inaccurate in predicting the generalization performance of deep neural networks, and derive an alternative bound which incorporates the frozen subspace and conditioning effects and qualitatively matches the behavior observed in simulation.
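The frozen-subspace claim in the description can be checked directly on a toy linear model: the gradient of the squared loss, X^T(Xw - y)/N, always lies in the row space of the training inputs X, so any weight component orthogonal to that space is never updated, and with more parameters than examples (P > N) at least P - N such directions exist. The sketch below is not the authors' code; the sizes, noise level, learning rate, and step count are illustrative assumptions. It trains an overcomplete linear model by plain gradient descent from small initial weights and tracks the training error, the test error, and the drift of the weights inside the frozen subspace.

import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (assumptions, not from the paper): P parameters, N examples, P > N.
P, N = 200, 100
noise = 0.5  # standard deviation of the label noise (assumed)

w_star = rng.standard_normal(P) / np.sqrt(P)     # teacher weights
X = rng.standard_normal((N, P))                  # training inputs
y = X @ w_star + noise * rng.standard_normal(N)  # noisy training labels
X_test = rng.standard_normal((2000, P))
y_test = X_test @ w_star                         # noise-free test targets

w = 0.01 * rng.standard_normal(P)  # small initial weights, per the abstract

# rank(X) <= N < P, so the null space of X has at least P - N dimensions.
# The gradient X^T (X w - y) / N lies in the row space of X, so these
# null-space directions form a frozen subspace that is never updated.
_, _, Vt = np.linalg.svd(X)          # rows of Vt are right singular vectors
frozen = Vt[N:]                      # orthonormal vectors in the null space
frozen_at_init = frozen @ w          # frozen coordinates at initialization

lr = 0.1  # stable: below 2 / lambda_max(X^T X / N) for these sizes
for step in range(2001):
    w -= lr * X.T @ (X @ w - y) / N  # full-batch gradient descent
    if step % 400 == 0:
        train = np.mean((X @ w - y) ** 2)
        test = np.mean((X_test @ w - y_test) ** 2)
        drift = np.max(np.abs(frozen @ w - frozen_at_init))
        print(f"step {step:4d}  train {train:.4f}  test {test:.4f}  "
              f"frozen drift {drift:.1e}")

The printed drift stays at floating-point noise while the training error falls, which is the frozen-subspace effect; watching the train/test gap over steps likewise exposes any overtraining, since each eigendirection of the input correlation matrix X^T X / N converges at a rate set by its own eigenvalue.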
format Online
Article
Text
id pubmed-7685244
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher Pergamon Press
record_format MEDLINE/PubMed
spelling pubmed-7685244 2020-12-07 High-dimensional dynamics of generalization error in neural networks Advani, Madhu S. Saxe, Andrew M. Sompolinsky, Haim Neural Netw Article Pergamon Press 2020-12 /pmc/articles/PMC7685244/ /pubmed/33022471 http://dx.doi.org/10.1016/j.neunet.2020.08.022 Text en © 2020 The Authors. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Advani, Madhu S.
Saxe, Andrew M.
Sompolinsky, Haim
High-dimensional dynamics of generalization error in neural networks
title High-dimensional dynamics of generalization error in neural networks
title_full High-dimensional dynamics of generalization error in neural networks
title_fullStr High-dimensional dynamics of generalization error in neural networks
title_full_unstemmed High-dimensional dynamics of generalization error in neural networks
title_short High-dimensional dynamics of generalization error in neural networks
title_sort high-dimensional dynamics of generalization error in neural networks
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7685244/
https://www.ncbi.nlm.nih.gov/pubmed/33022471
http://dx.doi.org/10.1016/j.neunet.2020.08.022
work_keys_str_mv AT advanimadhus highdimensionaldynamicsofgeneralizationerrorinneuralnetworks
AT saxeandrewm highdimensionaldynamicsofgeneralizationerrorinneuralnetworks
AT sompolinskyhaim highdimensionaldynamicsofgeneralizationerrorinneuralnetworks