Unsupervised neural network models of the ventral visual stream

Deep neural networks currently provide the best quantitative models of the response patterns of neurons throughout the primate ventral visual stream. However, such networks have remained implausible as a model of the development of the ventral stream, in part because they are trained with supervised methods requiring many more labels than are accessible to infants during development. Here, we report that recent rapid progress in unsupervised learning has largely closed this gap. We find that neural network models learned with deep unsupervised contrastive embedding methods achieve neural prediction accuracy in multiple ventral visual cortical areas that equals or exceeds that of models derived using today’s best supervised methods and that the mapping of these neural network models’ hidden layers is neuroanatomically consistent across the ventral stream. Strikingly, we find that these methods produce brain-like representations even when trained solely with real human child developmental data collected from head-mounted cameras, despite the fact that these datasets are noisy and limited. We also find that semisupervised deep contrastive embeddings can leverage small numbers of labeled examples to produce representations with substantially improved error-pattern consistency to human behavior. Taken together, these results illustrate a use of unsupervised learning to provide a quantitative model of a multiarea cortical brain system and present a strong candidate for a biologically plausible computational theory of primate sensory learning.
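
The abstract refers to "deep unsupervised contrastive embedding methods." As a rough, generic illustration of that family (this is not the authors' code; the function name, temperature value, and batching scheme are our own), here is a minimal NT-Xent-style contrastive loss in PyTorch:

```python
# Minimal sketch of a contrastive embedding objective (NT-Xent / InfoNCE style),
# illustrating the family of methods the abstract refers to. Not the authors'
# code; all names and hyperparameters here are illustrative.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.1):
    """z1, z2: [N, D] embeddings of two augmented views of the same N images."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)           # [2N, D]
    sim = z @ z.t() / temperature            # [2N, 2N] scaled cosine similarities
    n = z1.shape[0]
    # Mask out self-similarity so each row's softmax ranges over the other 2N-1 embeddings.
    sim.fill_diagonal_(float('-inf'))
    # The positive for row i is the other augmented view of the same image.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Usage: embed two augmentations of a batch with the same network, then
# loss = nt_xent_loss(model(aug1(x)), model(aug2(x))).
```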

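The "neural prediction accuracy" the abstract mentions is conventionally measured by fitting a regularized linear map from a model layer's activations to recorded neural responses and scoring the fit on held-out images. A generic sketch of that idea, assuming ridge regression and a simple train/test split (the paper's exact fitting and cross-validation procedure may differ):

```python
# Generic sketch of measuring "neural prediction accuracy": fit a regularized
# linear map from a model layer's features to each recorded neuron, then score
# predictions on held-out images. Illustrative only; the paper's procedure may differ.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

def neural_predictivity(features, responses, alpha=1.0, seed=0):
    """features: [images, units] model activations; responses: [images, neurons]."""
    Xtr, Xte, Ytr, Yte = train_test_split(features, responses,
                                          test_size=0.25, random_state=seed)
    Yhat = Ridge(alpha=alpha).fit(Xtr, Ytr).predict(Xte)
    # Pearson r per neuron between predicted and actual held-out responses.
    rs = [np.corrcoef(Yhat[:, i], Yte[:, i])[0, 1] for i in range(Yte.shape[1])]
    return float(np.median(rs))
```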

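For the semisupervised result, the abstract says deep contrastive embeddings "leverage small numbers of labeled examples." One plausible way to combine the two training signals, reusing nt_xent_loss from the first sketch (the paper's actual semisupervised scheme may differ, and the loss weighting and names here are hypothetical):

```python
# Minimal sketch of a semisupervised objective: the contrastive loss above on
# unlabeled images plus a cross-entropy term on a small labeled subset.
# Illustrative only; `model`, `head`, and the weighting are our own choices.
import torch.nn.functional as F

def semisupervised_loss(model, head, x_unlab_views, x_lab, y_lab, w_sup=1.0):
    """x_unlab_views: (aug1, aug2) unlabeled batches; x_lab/y_lab: few labeled examples."""
    v1, v2 = x_unlab_views
    contrastive = nt_xent_loss(model(v1), model(v2))          # unsupervised term
    supervised = F.cross_entropy(head(model(x_lab)), y_lab)   # few-label term
    return contrastive + w_sup * supervised
```
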
Bibliographic Details
Main Authors: Zhuang, Chengxu; Yan, Siming; Nayebi, Aran; Schrimpf, Martin; Frank, Michael C.; DiCarlo, James J.; Yamins, Daniel L. K.
Format: Online Article Text
Language: English
Published in: Proc Natl Acad Sci U S A, National Academy of Sciences, 2021-01-19 (online 2021-01-11)
Subjects: Biological Sciences
Collection: PubMed (PMC7826371)
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7826371/
https://www.ncbi.nlm.nih.gov/pubmed/33431673
http://dx.doi.org/10.1073/pnas.2014196118
License: Copyright © 2021 the Author(s). Published by PNAS. This open access article is distributed under the Creative Commons Attribution-NonCommercial-NoDerivatives License 4.0 (CC BY-NC-ND): https://creativecommons.org/licenses/by-nc-nd/4.0/