Spectral bias and task-model alignment explain generalization in kernel regression and infinitely wide neural networks

A theoretical understanding of generalization remains an open problem for many machine learning models, including deep networks where overparameterization leads to better performance, contradicting the conventional wisdom from classical statistics. Here, we investigate generalization error for kernel regression, which, besides being a popular machine learning method, also describes certain infinitely overparameterized neural networks. We use techniques from statistical mechanics to derive an analytical expression for generalization error applicable to any kernel and data distribution. We present applications of our theory to real and synthetic datasets, and for many kernels including those that arise from training deep networks in the infinite-width limit. We elucidate an inductive bias of kernel regression to explain data with simple functions, characterize whether a kernel is compatible with a learning task, and show that more data may impair generalization when noisy or not expressible by the kernel, leading to non-monotonic learning curves with possibly many peaks.
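
The spectral-bias and task-model-alignment claims in the abstract lend themselves to a quick numerical illustration. Below is a minimal sketch, not the paper's derivation or code: plain kernel ridge regression with an RBF kernel on synthetic 1-D data, comparing how quickly a smooth, kernel-aligned target and a high-frequency, poorly aligned target are learned as the training set grows. All names and values here (`rbf_kernel`, `bandwidth`, the ridge constant) are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the authors' code): kernel ridge
# regression with an RBF kernel, comparing learning speed for a target
# aligned with the kernel's dominant eigenmodes vs. a misaligned one.
import numpy as np

rng = np.random.default_rng(0)
bandwidth = 0.5   # RBF length scale (assumed value)
ridge = 1e-6      # small regularizer; near the interpolation limit

def rbf_kernel(x, z):
    """Gram matrix K[i, j] = exp(-(x_i - z_j)^2 / (2 * bandwidth^2))."""
    d = x[:, None] - z[None, :]
    return np.exp(-d**2 / (2 * bandwidth**2))

def kernel_ridge_predict(x_train, y_train, x_test):
    """Standard kernel ridge regression: f(x) = k(x, X) (K + ridge*I)^-1 y."""
    K = rbf_kernel(x_train, x_train)
    alpha = np.linalg.solve(K + ridge * np.eye(len(x_train)), y_train)
    return rbf_kernel(x_test, x_train) @ alpha

def smooth(x):    # low-frequency target: well aligned with the RBF spectrum
    return np.sin(2 * np.pi * x)

def wiggly(x):    # high-frequency target: weight on small-eigenvalue modes
    return np.sin(20 * np.pi * x)

x_test = np.linspace(0.0, 1.0, 500)
for n in (10, 40, 160, 640):
    x_train = rng.uniform(0.0, 1.0, n)
    mses = [np.mean((kernel_ridge_predict(x_train, f(x_train), x_test)
                     - f(x_test)) ** 2) for f in (smooth, wiggly)]
    print(f"n={n:4d}  aligned-target MSE={mses[0]:.2e}  misaligned MSE={mses[1]:.2e}")
```

On typical runs the aligned target's error collapses at small n while the misaligned target's error stays near its initial scale until n is well past the target's frequency content: the qualitative spectral-bias picture the paper quantifies. Adding label noise to y_train in the same loop is one way to probe the abstract's further point that more data may impair generalization when it is noisy.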

Bibliographic Details
Main Authors: Canatar, Abdulkadir; Bordelon, Blake; Pehlevan, Cengiz
Format: Online Article Text
Language: English
Journal: Nat Commun
Published: Nature Publishing Group UK, 18 May 2021
Subjects: Article
Rights: © The Author(s) 2021. Open Access under the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/)
Collection: PubMed (National Center for Biotechnology Information); MEDLINE/PubMed record pubmed-8131612
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8131612/
https://www.ncbi.nlm.nih.gov/pubmed/34006842
http://dx.doi.org/10.1038/s41467-021-23103-1