Local minimization of prediction errors drives learning of invariant object representations in a generative network model of visual perception

Bibliographic Details
Main Authors: Brucklacher, Matthias; Bohté, Sander M.; Mejias, Jorge F.; Pennartz, Cyriel M. A.
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2023
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10561268/
https://www.ncbi.nlm.nih.gov/pubmed/37818157
http://dx.doi.org/10.3389/fncom.2023.1207361
_version_ 1785117885386981376
author Brucklacher, Matthias
Bohté, Sander M.
Mejias, Jorge F.
Pennartz, Cyriel M. A.
author_sort Brucklacher, Matthias
collection PubMed
description The ventral visual processing hierarchy of the cortex needs to fulfill at least two key functions: perceived objects must be mapped to high-level representations invariant to the precise viewing conditions, and a generative model must be learned that allows, for instance, occluded information to be filled in, guided by visual experience. Here, we show how a multilayered predictive coding network can learn to recognize objects from the bottom up and to generate specific representations via a top-down pathway through a single learning rule: the local minimization of prediction errors. Trained on sequences of continuously transformed objects, neurons in the highest network area become tuned to object identity invariant to precise position, comparable to inferotemporal neurons in macaques. Drawing on this, the dynamic properties of invariant object representations reproduce experimentally observed hierarchies of timescales from low to high levels of the ventral processing stream. The predicted faster decorrelation of error-neuron activity, compared to that of representation neurons, is of relevance for the experimental search for neural correlates of prediction errors. Lastly, the generative capacity of the network is confirmed by reconstructing specific object images, robust to partial occlusion of the inputs. By learning invariance from temporal continuity within a generative model, the approach generalizes the predictive coding framework to dynamic inputs in a more biologically plausible way than self-supervised networks with non-local error backpropagation. This was achieved simply by shifting the training paradigm to dynamic inputs, with little change in architecture and learning rule from static, input-reconstructing Hebbian predictive coding networks.
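To make the learning scheme in the abstract concrete, below is a minimal illustrative sketch in Python/NumPy of hierarchical predictive coding with purely local, Hebbian-like weight updates, trained on a toy sequence of continuously shifted inputs. It is not the authors' implementation: the two-area Rao-and-Ballard-style dynamics, the layer sizes, the learning rates, and the moving-bar stimulus are assumptions chosen for brevity.

# Minimal sketch of hierarchical predictive coding with purely local learning.
# Illustration only, NOT the authors' code: layer sizes, rates, and the toy
# "moving bar" stimulus are assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_mid, n_top = 64, 32, 8                 # hypothetical layer sizes
W1 = rng.normal(0.0, 0.1, (n_in, n_mid))       # top-down weights: mid -> input prediction
W2 = rng.normal(0.0, 0.1, (n_mid, n_top))      # top-down weights: top -> mid prediction
eta_r, eta_w = 0.05, 0.01                      # inference / learning rates (assumptions)


def settle(x, r1, r2, n_steps=30):
    """Relax representation neurons by gradient descent on the local prediction errors."""
    for _ in range(n_steps):
        e0 = x - W1 @ r1                       # error neurons in the input area
        e1 = r1 - W2 @ r2                      # error neurons in the middle area
        r1 = r1 + eta_r * (W1.T @ e0 - e1)     # driven by the error below, corrected by the local error
        r2 = r2 + eta_r * (W2.T @ e1)          # driven only by the error in the area below
    return r1, r2, e0, e1


# A crude stand-in for "sequences of continuously transformed objects":
# a bar pattern that shifts by one pixel per frame.
bar = np.zeros(n_in)
bar[:8] = 1.0

for epoch in range(20):
    # Representations are carried over between frames of a sequence, so slowly
    # changing high-level activity can link successive views of the same object.
    r1, r2 = np.zeros(n_mid), np.zeros(n_top)
    for shift in range(n_in - 8):
        frame = np.roll(bar, shift)
        r1, r2, e0, e1 = settle(frame, r1, r2)
        # Hebbian weight updates: each uses only the activities at the two ends
        # of the synapse (an error neuron and a representation neuron).
        W1 += eta_w * np.outer(e0, r1)
        W2 += eta_w * np.outer(e1, r2)

r1, r2, e0, e1 = settle(bar, np.zeros(n_mid), np.zeros(n_top))
print("reconstruction error after training:", float(np.linalg.norm(e0)))

The point of the sketch is that no quantity other than pre- and postsynaptic activity enters any weight change, which is what makes the rule local, in contrast to error backpropagation.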
format Online
Article
Text
id pubmed-10561268
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-10561268 2023-10-10 Front Comput Neurosci Neuroscience Frontiers Media S.A. 2023-09-25 /pmc/articles/PMC10561268/ /pubmed/37818157 http://dx.doi.org/10.3389/fncom.2023.1207361 Text en
Copyright © 2023 Brucklacher, Bohté, Mejias and Pennartz. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
title Local minimization of prediction errors drives learning of invariant object representations in a generative network model of visual perception
topic Neuroscience
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10561268/
https://www.ncbi.nlm.nih.gov/pubmed/37818157
http://dx.doi.org/10.3389/fncom.2023.1207361