Efficient probabilistic inference in generic neural networks trained with non-probabilistic feedback

Animals perform near-optimal probabilistic inference in a wide range of psychophysical tasks. Probabilistic inference requires trial-to-trial representation of the uncertainties associated with task variables and subsequent use of this representation. Previous work has implemented such computations using neural networks with hand-crafted and task-dependent operations. We show that generic neural networks trained with a simple error-based learning rule perform near-optimal probabilistic inference in nine common psychophysical tasks. In a probabilistic categorization task, error-based learning in a generic network simultaneously explains a monkey’s learning curve and the evolution of qualitative aspects of its choice behavior. In all tasks, the number of neurons required for a given level of performance grows sublinearly with the input population size, a substantial improvement on previous implementations of probabilistic inference. The trained networks develop a novel sparsity-based probabilistic population code. Our results suggest that probabilistic inference emerges naturally in generic neural networks trained with error-based learning rules.
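
As a concrete illustration of the abstract's central claim, the minimal sketch below trains a generic network with nothing but error-based feedback on a toy cue-combination task and compares it against the Bayes-optimal observer. This is not the paper's setup (the paper feeds networks Poisson population codes); here the network is simply given two noisy cues and their noise levels, and every architecture choice, range, and hyperparameter is an illustrative assumption.

import numpy as np

rng = np.random.default_rng(0)

def make_batch(n):
    # Latent stimulus and per-trial cue noise levels (illustrative ranges).
    s = rng.uniform(-1.0, 1.0, size=(n, 1))
    sd1 = rng.uniform(0.1, 1.0, size=(n, 1))
    sd2 = rng.uniform(0.1, 1.0, size=(n, 1))
    x1 = s + sd1 * rng.standard_normal((n, 1))
    x2 = s + sd2 * rng.standard_normal((n, 1))
    # The network sees the cues and their reliabilities, never a posterior.
    return np.hstack([x1, x2, sd1, sd2]), s

# Generic one-hidden-layer network; nothing in its architecture is
# specialized for probabilistic computation.
n_in, n_hid = 4, 64
W1 = rng.standard_normal((n_in, n_hid)) / np.sqrt(n_in)
b1 = np.zeros(n_hid)
W2 = rng.standard_normal((n_hid, 1)) / np.sqrt(n_hid)
b2 = np.zeros(1)
lr = 0.05

for step in range(5000):
    X, s = make_batch(128)
    h = np.tanh(X @ W1 + b1)
    y = h @ W2 + b2
    err = y - s                      # non-probabilistic, error-based feedback
    # Gradients of the mean squared-error loss (plain backprop).
    dW2 = h.T @ err / len(X)
    db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h**2)
    dW1 = X.T @ dh / len(X)
    db1 = dh.mean(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

# Compare to the Bayes-optimal estimate for this generative model:
# the inverse-variance-weighted average of the two cues (flat-prior
# approximation, ignoring the bounded stimulus range).
X, s = make_batch(2000)
y = np.tanh(X @ W1 + b1) @ W2 + b2
w1, w2 = 1.0 / X[:, 2:3]**2, 1.0 / X[:, 3:4]**2
bayes = (w1 * X[:, 0:1] + w2 * X[:, 1:2]) / (w1 + w2)
print("network MSE:", float(np.mean((y - s) ** 2)))
print("Bayes MSE:  ", float(np.mean((bayes - s) ** 2)))

Under this simplified generative model, the trained network's mean squared error typically lands close to that of the reliability-weighted Bayesian estimate, echoing (but in no way reproducing) the paper's finding that near-optimal probabilistic inference can emerge from purely non-probabilistic feedback.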

Bibliographic Details
Main Authors: Orhan, A. Emin; Ma, Wei Ji
Format: Online Article Text
Language: English
Journal: Nat Commun
Published: Nature Publishing Group UK, 2017-07-26
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5527101/
https://www.ncbi.nlm.nih.gov/pubmed/28743932
http://dx.doi.org/10.1038/s41467-017-00181-8
License: © The Author(s) 2017. Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.