
Neural representational geometry underlies few-shot concept learning


Bibliographic Details
Main Authors: Sorscher, Ben; Ganguli, Surya; Sompolinsky, Haim
Format: Online Article Text
Language: English
Published: National Academy of Sciences, 2022
Subjects: Biological Sciences
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9618072/
https://www.ncbi.nlm.nih.gov/pubmed/36251997
http://dx.doi.org/10.1073/pnas.2200800119
Full Description:
Understanding the neural basis of the remarkable human cognitive capacity to learn novel concepts from just one or a few sensory experiences constitutes a fundamental problem. We propose a simple, biologically plausible, mathematically tractable, and computationally powerful neural mechanism for few-shot learning of naturalistic concepts. We posit that the concepts that can be learned from few examples are defined by tightly circumscribed manifolds in the neural firing-rate space of higher-order sensory areas. We further posit that a single plastic downstream readout neuron learns to discriminate new concepts based on few examples using a simple plasticity rule. We demonstrate the computational power of our proposal by showing that it can achieve high few-shot learning accuracy on natural visual concepts using both macaque inferotemporal cortex representations and deep neural network (DNN) models of these representations and can even learn novel visual concepts specified only through linguistic descriptors. Moreover, we develop a mathematical theory of few-shot learning that links neurophysiology to predictions about behavioral outcomes by delineating several fundamental and measurable geometric properties of neural representations that can accurately predict the few-shot learning performance of naturalistic concepts across all our numerical simulations. This theory reveals, for instance, that high-dimensional manifolds enhance the ability to learn new concepts from few examples. Intriguingly, we observe striking mismatches between the geometry of manifolds in the primate visual pathway and in trained DNNs. We discuss testable predictions of our theory for psychophysics and neurophysiological experiments.
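The readout mechanism summarized in the abstract — a single downstream neuron that learns a new concept from a few examples via a simple plasticity rule — can be illustrated with a minimal prototype-classifier sketch. This is a hypothetical toy model, not the authors' code: concept manifolds are idealized as isotropic Gaussian point clouds of fixed total radius in an N-dimensional firing-rate space, and the readout weights are simply the difference of the few-shot class means. Consistent with the abstract's claim that high-dimensional manifolds aid few-shot learning, spreading the same manifold radius over more dimensions raises accuracy.

```python
# Hypothetical toy model (not the authors' code): few-shot "prototype"
# readout on idealized concept manifolds in firing-rate space.
import numpy as np

def few_shot_accuracy(n_dim, k_shot=1, radius=2.0, signal=1.0,
                      n_test=200, n_trials=50, seed=0):
    """Average 2-way few-shot accuracy of a prototype readout neuron."""
    rng = np.random.default_rng(seed)
    u = np.zeros(n_dim)
    u[0] = 1.0
    mu_a, mu_b = 0.5 * signal * u, -0.5 * signal * u  # concept centroids
    s = radius / np.sqrt(n_dim)  # per-axis spread: total manifold radius ~ `radius`

    accs = []
    for _ in range(n_trials):
        # "Plasticity rule": readout weights point from one few-shot
        # prototype (mean of k_shot noisy examples) to the other.
        proto_a = (mu_a + s * rng.normal(size=(k_shot, n_dim))).mean(axis=0)
        proto_b = (mu_b + s * rng.normal(size=(k_shot, n_dim))).mean(axis=0)
        w = proto_a - proto_b
        b = -w @ (proto_a + proto_b) / 2  # threshold at the prototype midpoint

        # Held-out examples drawn from each concept manifold.
        test_a = mu_a + s * rng.normal(size=(n_test, n_dim))
        test_b = mu_b + s * rng.normal(size=(n_test, n_dim))
        hits = (test_a @ w + b > 0).sum() + (test_b @ w + b <= 0).sum()
        accs.append(hits / (2 * n_test))
    return float(np.mean(accs))

# Same manifold radius spread over more dimensions -> easier few-shot learning.
acc_low = few_shot_accuracy(n_dim=50)
acc_high = few_shot_accuracy(n_dim=500)
print(f"1-shot accuracy, D=50:  {acc_low:.3f}")
print(f"1-shot accuracy, D=500: {acc_high:.3f}")
```

The dimension names, parameters, and Gaussian-manifold assumption are illustrative choices; the paper's actual theory characterizes accuracy through measured geometric quantities of real neural and DNN representations.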
Record ID: pubmed-9618072
Record Format: MEDLINE/PubMed
Institution: National Center for Biotechnology Information
Journal: Proc Natl Acad Sci U S A (Biological Sciences)
Published online: 2022-10-17; available in PMC: 2022-10-25
License: Copyright © 2022 the Author(s). Published by PNAS. This open access article is distributed under the Creative Commons Attribution-NonCommercial-NoDerivatives License 4.0 (CC BY-NC-ND): https://creativecommons.org/licenses/by-nc-nd/4.0/