Dynamics of stochastic gradient descent for two-layer neural networks in the teacher–student setup
Deep neural networks achieve stellar generalisation even when they have enough parameters to easily fit all their training data. We study this phenomenon by analysing the dynamics and the performance of over-parameterised two-layer neural networks in the teacher–student setup, where one network, the...
Main Authors: | Goldt, Sebastian; Advani, Madhu S; Saxe, Andrew M; Krzakala, Florent; Zdeborová, Lenka |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | IOP Publishing and SISSA, 2020 |
Subjects: | Paper |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8252911/ https://www.ncbi.nlm.nih.gov/pubmed/34262607 http://dx.doi.org/10.1088/1742-5468/abc61e |
_version_ | 1783717398073835520 |
---|---|
author | Goldt, Sebastian; Advani, Madhu S; Saxe, Andrew M; Krzakala, Florent; Zdeborová, Lenka |
author_facet | Goldt, Sebastian; Advani, Madhu S; Saxe, Andrew M; Krzakala, Florent; Zdeborová, Lenka |
author_sort | Goldt, Sebastian |
collection | PubMed |
description | Deep neural networks achieve stellar generalisation even when they have enough parameters to easily fit all their training data. We study this phenomenon by analysing the dynamics and the performance of over-parameterised two-layer neural networks in the teacher–student setup, where one network, the student, is trained on data generated by another network, called the teacher. We show how the dynamics of stochastic gradient descent (SGD) is captured by a set of differential equations and prove that this description is asymptotically exact in the limit of large inputs. Using this framework, we calculate the final generalisation error of student networks that have more parameters than their teachers. We find that the final generalisation error of the student increases with network size when training only the first layer, but stays constant or even decreases with size when training both layers. We show that these different behaviours have their root in the different solutions SGD finds for different activation functions. Our results indicate that achieving good generalisation in neural networks goes beyond the properties of SGD alone and depends on the interplay of at least the algorithm, the model architecture, and the data set. |
format | Online Article Text |
id | pubmed-8252911 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2020 |
publisher | IOP Publishing and SISSA |
record_format | MEDLINE/PubMed |
spelling | pubmed-8252911 2021-07-12 Dynamics of stochastic gradient descent for two-layer neural networks in the teacher–student setup Goldt, Sebastian Advani, Madhu S Saxe, Andrew M Krzakala, Florent Zdeborová, Lenka J Stat Mech Paper Deep neural networks achieve stellar generalisation even when they have enough parameters to easily fit all their training data. We study this phenomenon by analysing the dynamics and the performance of over-parameterised two-layer neural networks in the teacher–student setup, where one network, the student, is trained on data generated by another network, called the teacher. We show how the dynamics of stochastic gradient descent (SGD) is captured by a set of differential equations and prove that this description is asymptotically exact in the limit of large inputs. Using this framework, we calculate the final generalisation error of student networks that have more parameters than their teachers. We find that the final generalisation error of the student increases with network size when training only the first layer, but stays constant or even decreases with size when training both layers. We show that these different behaviours have their root in the different solutions SGD finds for different activation functions. Our results indicate that achieving good generalisation in neural networks goes beyond the properties of SGD alone and depends on the interplay of at least the algorithm, the model architecture, and the data set. IOP Publishing and SISSA 2020-12 2020-12-21 /pmc/articles/PMC8252911/ /pubmed/34262607 http://dx.doi.org/10.1088/1742-5468/abc61e Text en © 2020 IOP Publishing Ltd and SISSA Medialab srl https://creativecommons.org/licenses/by/4.0/ Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 licence (https://creativecommons.org/licenses/by/4.0/). Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI. |
spellingShingle | Paper Goldt, Sebastian Advani, Madhu S Saxe, Andrew M Krzakala, Florent Zdeborová, Lenka Dynamics of stochastic gradient descent for two-layer neural networks in the teacher–student setup |
title | Dynamics of stochastic gradient descent for two-layer neural networks in the teacher–student setup |
title_full | Dynamics of stochastic gradient descent for two-layer neural networks in the teacher–student setup |
title_fullStr | Dynamics of stochastic gradient descent for two-layer neural networks in the teacher–student setup |
title_full_unstemmed | Dynamics of stochastic gradient descent for two-layer neural networks in the teacher–student setup |
title_short | Dynamics of stochastic gradient descent for two-layer neural networks in the teacher–student setup |
title_sort | dynamics of stochastic gradient descent for two-layer neural networks in the teacher–student setup |
topic | Paper |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8252911/ https://www.ncbi.nlm.nih.gov/pubmed/34262607 http://dx.doi.org/10.1088/1742-5468/abc61e |
work_keys_str_mv | AT goldtsebastian dynamicsofstochasticgradientdescentfortwolayerneuralnetworksintheteacherstudentsetup AT advanimadhus dynamicsofstochasticgradientdescentfortwolayerneuralnetworksintheteacherstudentsetup AT saxeandrewm dynamicsofstochasticgradientdescentfortwolayerneuralnetworksintheteacherstudentsetup AT krzakalaflorent dynamicsofstochasticgradientdescentfortwolayerneuralnetworksintheteacherstudentsetup AT zdeborovalenka dynamicsofstochasticgradientdescentfortwolayerneuralnetworksintheteacherstudentsetup |
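The abstract describes training an over-parameterised two-layer "student" network with online SGD on data labelled by a fixed "teacher" network, and compares training only the first layer against training both layers. The sketch below is not the authors' code; it is a minimal illustration of that setup under stated assumptions: i.i.d. Gaussian inputs, an erf-type activation, squared-error loss, and illustrative sizes, learning rates, and scalings.

```python
import numpy as np
from scipy.special import erf

# Minimal sketch (assumptions, not the paper's code) of the teacher-student
# setup: a fixed two-layer teacher with M hidden units labels i.i.d. Gaussian
# inputs, and a wider student with K >= M hidden units learns by online SGD.
rng = np.random.default_rng(0)
N, M, K = 500, 2, 4            # input dimension, teacher width, student width (illustrative)
lr, steps = 0.1, 100_000       # online SGD: one fresh sample per step (illustrative)
train_both_layers = True       # False: train the first layer only, as compared in the abstract

def g(x):                      # activation; the paper compares several choices
    return erf(x / np.sqrt(2))

def dg(x):                     # derivative of erf(x / sqrt(2))
    return np.sqrt(2.0 / np.pi) * np.exp(-x ** 2 / 2)

# teacher parameters are fixed; student parameters are trained
w_teacher = rng.normal(size=(M, N)); v_teacher = rng.normal(size=M)
w_student = rng.normal(size=(K, N)); v_student = rng.normal(size=K)

def net(x, w, v):
    return v @ g(w @ x / np.sqrt(N))

for _ in range(steps):
    x = rng.normal(size=N)                 # fresh Gaussian input (online learning)
    y = net(x, w_teacher, v_teacher)       # label generated by the teacher
    pre = w_student @ x / np.sqrt(N)
    err = v_student @ g(pre) - y           # prediction error on this sample
    # SGD step on the squared error for the first-layer weights
    w_student -= lr / N * err * (v_student * dg(pre))[:, None] * x[None, :]
    if train_both_layers:                  # optionally also train the second layer
        v_student -= lr / N * err * g(pre)

# Monte-Carlo estimate of the generalisation error on fresh inputs
X_test = rng.normal(size=(2000, N))
eg = np.mean([(net(x, w_student, v_student) - net(x, w_teacher, v_teacher)) ** 2
              for x in X_test]) / 2
print(f"estimated generalisation error: {eg:.4f}")
```

Toggling `train_both_layers` reproduces, in spirit, the comparison the abstract draws between training only the first layer and training both layers of the over-parameterised student.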