Learning cortical hierarchies with temporal Hebbian updates

A key driver of mammalian intelligence is the ability to represent incoming sensory information across multiple levels of abstraction. For example, in the visual ventral stream, incoming signals are first represented as low-level edge filters and then transformed into high-level object representations. Similar hierarchical structures routinely emerge in artificial neural networks (ANNs) trained for object recognition tasks, suggesting that similar structures may underlie biological neural networks. However, the classical ANN training algorithm, backpropagation, is considered biologically implausible, and alternative biologically plausible training methods have therefore been developed, such as Equilibrium Propagation, Deep Feedback Control, Supervised Predictive Coding, and Dendritic Error Backpropagation. Several of these models propose that local errors are calculated for each neuron by comparing apical and somatic activities. Yet from a neuroscience perspective, it is not clear how a neuron could compare such compartmental signals. Here, we propose a solution to this problem: we let the apical feedback signal change the postsynaptic firing rate and combine this with a differential Hebbian update, a rate-based version of classical spike-timing-dependent plasticity (STDP). We prove that weight updates of this form minimize two alternative loss functions, the inference latency and the amount of top-down feedback required, and we prove these losses to be equivalent to the error-based losses used in machine learning. Moreover, we show that differential Hebbian updates work similarly well in other feedback-based deep learning frameworks such as Predictive Coding and Equilibrium Propagation. Finally, our work removes a key requirement of biologically plausible models for deep learning and proposes a learning mechanism that explains how temporal Hebbian learning rules can implement supervised hierarchical learning.
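
The learning rule described in the abstract admits a compact numerical illustration. The sketch below is not the authors' implementation; the single-layer setup, the learning rate eta, the nudge strength beta, and all variable names are illustrative assumptions. It shows the key idea: apical feedback nudges the postsynaptic firing rate toward a target, and a differential Hebbian update, presynaptic rate times the temporal change in postsynaptic rate, then shifts the feedforward weights in the direction of that nudge, so the neuron never needs to compare apical and somatic signals explicitly.

    import numpy as np

    # Sketch of a rate-based differential Hebbian update (an illustration,
    # not the authors' code). A single layer maps presynaptic rates x to
    # postsynaptic rates r = W @ x; top-down feedback nudges r toward a
    # target, and the weight update is (pre rate) x (change in post rate).
    rng = np.random.default_rng(0)
    n_in, n_out = 5, 3
    W = rng.normal(scale=0.1, size=(n_out, n_in))  # feedforward weights
    x = rng.random(n_in)        # presynaptic rates for one stimulus
    target = rng.random(n_out)  # top-down target rates

    eta = 0.1   # learning rate (assumed value)
    beta = 0.5  # apical nudge strength (assumed value)

    for trial in range(300):
        r0 = W @ x                      # somatic rate at stimulus onset
        r1 = r0 + beta * (target - r0)  # rate after the apical feedback nudge
        # Differential Hebbian update: integrating pre * dr/dt over the
        # trial leaves pre * (r1 - r0), i.e., a delta-rule-like change
        # driven purely by the feedback-induced rate change.
        W += eta * np.outer(r1 - r0, x)

    print("feedforward output:", np.round(W @ x, 3))
    print("target rates:      ", np.round(target, 3))

Under these assumed constants the feedforward output converges toward the target, since the nudge, and hence the update, shrinks as the error shrinks; the error reaches the synapse only through the temporal change in the postsynaptic rate, which is exactly the quantity an STDP-like rule is sensitive to.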


Bibliographic Details
Main authors: Aceituno, Pau Vilimelis; Farinha, Matilde Tristany; Loidl, Reinhard; Grewe, Benjamin F.
Format: Online article (text)
Language: English
Published: Frontiers Media S.A., 2023 (Front Comput Neurosci, 24 May 2023)
Subjects: Neuroscience
Online access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10244748/
https://www.ncbi.nlm.nih.gov/pubmed/37293353
http://dx.doi.org/10.3389/fncom.2023.1136010
License: Copyright © 2023 Aceituno, Farinha, Loidl and Grewe. Open access under the Creative Commons Attribution License (CC BY): https://creativecommons.org/licenses/by/4.0/