Simple, fast, and flexible framework for matrix completion with infinite width neural networks

Matrix completion problems arise in many applications including recommendation systems, computer vision, and genomics. Increasingly larger neural networks have been successful in many of these applications but at considerable computational costs. Remarkably, taking the width of a neural network to infinity allows for improved computational performance. In this work, we develop an infinite width neural network framework for matrix completion that is simple, fast, and flexible. Simplicity and speed come from the connection between the infinite width limit of neural networks and kernels known as neural tangent kernels (NTK). In particular, we derive the NTK for fully connected and convolutional neural networks for matrix completion. The flexibility stems from a feature prior, which allows encoding relationships between coordinates of the target matrix, akin to semisupervised learning. The effectiveness of our framework is demonstrated through competitive results for virtual drug screening and image inpainting/reconstruction. We also provide an implementation in Python to make our framework accessible on standard hardware to a broad audience.
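
For readers who want a concrete feel for the approach, the following is a minimal Python sketch of matrix completion via kernel ridge regression with a neural tangent kernel. It is not the authors' released implementation: the closed-form kernel below is the standard NTK of a one-hidden-layer ReLU network, and the one-hot row/column featurization (a simple stand-in for the paper's feature prior), the regularization value, and the names relu_ntk and complete_matrix are illustrative assumptions.

```python
import numpy as np

def relu_ntk(X, Z):
    # Closed-form NTK of an infinitely wide one-hidden-layer ReLU network
    # (standard result, stated up to constant scaling).
    nx = np.linalg.norm(X, axis=1, keepdims=True)          # (a, 1)
    nz = np.linalg.norm(Z, axis=1, keepdims=True)          # (b, 1)
    dot = X @ Z.T                                          # (a, b)
    u = np.clip(dot / (nx * nz.T), -1.0, 1.0)              # cosine similarities
    k0 = (np.pi - np.arccos(u)) / np.pi                    # arc-cosine kernel, order 0
    k1 = (u * (np.pi - np.arccos(u)) + np.sqrt(1.0 - u**2)) / np.pi  # order 1
    return nx * nz.T * k1 + dot * k0

def complete_matrix(M, mask, reg=1e-6):
    # Featurize entry (i, j) as concatenated one-hot row/column indicators
    # (one simple choice of feature prior), fit kernel ridge regression on
    # the observed entries, and predict the missing ones.
    n, m = M.shape
    rows, cols = np.indices((n, m))
    feats = np.concatenate([np.eye(n)[rows.ravel()],
                            np.eye(m)[cols.ravel()]], axis=1)  # (n*m, n+m)
    obs = mask.ravel().astype(bool)
    K = relu_ntk(feats[obs], feats[obs])
    alpha = np.linalg.solve(K + reg * np.eye(K.shape[0]),
                            M.ravel()[obs].astype(float))
    pred = relu_ntk(feats, feats[obs]) @ alpha
    out = M.astype(float).ravel()
    out[~obs] = pred[~obs]                                 # fill only missing entries
    return out.reshape(n, m)

# Example: impute the missing entries of a low-rank matrix from ~60% of its entries.
rng = np.random.default_rng(0)
M = np.outer(rng.normal(size=8), rng.normal(size=6))
mask = rng.random(M.shape) < 0.6
M_hat = complete_matrix(M * mask, mask)
```

Because the infinite-width limit reduces training to solving one linear system in the kernel matrix, there is no gradient descent and no width hyperparameter, which is the source of the speed and simplicity claimed in the abstract.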

Bibliographic Details
Main Authors: Radhakrishnan, Adityanarayanan; Stefanakis, George; Belkin, Mikhail; Uhler, Caroline
Format: Online Article Text
Language: English
Published: National Academy of Sciences, 2022
Subjects: Physical Sciences
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9169779/
https://www.ncbi.nlm.nih.gov/pubmed/35412891
http://dx.doi.org/10.1073/pnas.2115064119
Collection: PubMed (National Center for Biotechnology Information)
Journal: Proc Natl Acad Sci U S A
Published Online: 2022-04-11; Issue Date: 2022-04-19
Copyright: © 2022 the Author(s). Published by PNAS. This article is distributed under the Creative Commons Attribution-NonCommercial-NoDerivatives License 4.0 (CC BY-NC-ND), https://creativecommons.org/licenses/by-nc-nd/4.0/