
Learning on Arbitrary Graph Topologies via Predictive Coding



Bibliographic Details
Main Authors: Salvatori, Tommaso; Pinchetti, Luca; Millidge, Beren; Song, Yuhang; Bao, Tianyi; Bogacz, Rafal; Lukasiewicz, Thomas
Format: Online Article Text
Language: English
Published: 2022
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7614467/
https://www.ncbi.nlm.nih.gov/pubmed/37090087
Description: Training with backpropagation (BP) in standard deep learning consists of two main steps: a forward pass that maps a data point to its prediction, and a backward pass that propagates the error of this prediction back through the network. This process is highly effective when the goal is to minimize a specific objective function. However, it does not allow training on networks with cyclic or backward connections. This is an obstacle to reaching brain-like capabilities, as the highly complex heterarchical structure of the neural connections in the neocortex is potentially fundamental for its effectiveness. In this paper, we show how predictive coding (PC), a theory of information processing in the cortex, can be used to perform inference and learning on arbitrary graph topologies. We experimentally show how this formulation, called PC graphs, can be used to flexibly perform different tasks with the same network by simply stimulating specific neurons. This enables the model to be queried on stimuli with different structures, such as partial images, images with labels, or images without labels. We conclude by investigating how the topology of the graph influences the final performance, and by comparing against simple baselines trained with BP.
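The abstract's core idea — minimizing prediction errors by relaxing node activities, on a graph that may contain cycles or backward edges — can be sketched with the standard predictive-coding equations. This is a minimal illustrative sketch, not the authors' implementation: the class and method names (`PCGraph`, `infer`, `learn`) and all hyperparameters are assumptions, and the update rules are the generic PC energy-descent equations applied to an arbitrary weighted adjacency matrix.

```python
import numpy as np

def f(x):                                   # activation function (assumed tanh)
    return np.tanh(x)

def df(x):                                  # its derivative
    return 1.0 - np.tanh(x) ** 2

class PCGraph:
    """Illustrative predictive coding on an arbitrary directed graph.

    W[i, j] != 0 means node j sends a prediction to node i;
    cycles and backward connections are allowed.
    """
    def __init__(self, n_nodes, edges, seed=0):
        rng = np.random.default_rng(seed)
        self.W = np.zeros((n_nodes, n_nodes))
        for i, j in edges:                  # edge j -> i
            self.W[i, j] = rng.normal(scale=0.1)

    def energy(self, x):
        e = x - self.W @ f(x)               # per-node prediction errors
        return 0.5 * np.sum(e ** 2)

    def infer(self, x, clamped, steps=200, lr=0.1):
        """Relax unclamped node values by gradient descent on the energy.

        Clamping different subsets of nodes (e.g. pixels vs. a label)
        is what lets one network answer different queries.
        """
        free = ~clamped
        for _ in range(steps):
            e = x - self.W @ f(x)
            # dE/dx_i = e_i - f'(x_i) * sum_k W[k, i] * e_k
            grad = e - df(x) * (self.W.T @ e)
            x[free] -= lr * grad[free]
        return x

    def learn(self, x, lr=0.01):
        """Local weight update after inference has settled:
        dE/dW[i, j] = -e_i * f(x_j), restricted to existing edges."""
        e = x - self.W @ f(x)
        mask = self.W != 0                  # keep the graph topology fixed
        self.W += lr * np.outer(e, f(x)) * mask
```

A training step would clamp the stimulus nodes, call `infer` to settle the remaining activities, then call `learn`; at query time, a different subset of nodes (a partial image, or an image without its label) can be clamped instead, with no change to the network.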
Record ID: pubmed-7614467
Institution: National Center for Biotechnology Information
Record format: MEDLINE/PubMed
Published in: Adv Neural Inf Process Syst, November 2022. This work is licensed under a CC BY 4.0 International license (https://creativecommons.org/licenses/by/4.0/).