
Scaling Equilibrium Propagation to Deep ConvNets by Drastically Reducing Its Gradient Estimator Bias



Bibliographic Details
Main Authors: Laborieux, Axel; Ernoult, Maxence; Scellier, Benjamin; Bengio, Yoshua; Grollier, Julie; Querlioz, Damien
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2021
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7930909/
https://www.ncbi.nlm.nih.gov/pubmed/33679315
http://dx.doi.org/10.3389/fnins.2021.633674
_version_ 1783660180498546688
author Laborieux, Axel
Ernoult, Maxence
Scellier, Benjamin
Bengio, Yoshua
Grollier, Julie
Querlioz, Damien
author_facet Laborieux, Axel
Ernoult, Maxence
Scellier, Benjamin
Bengio, Yoshua
Grollier, Julie
Querlioz, Damien
author_sort Laborieux, Axel
collection PubMed
description Equilibrium Propagation is a biologically inspired algorithm that trains convergent recurrent neural networks with a local learning rule. This approach constitutes a major step toward enabling learning-capable neuromorphic systems and comes with strong theoretical guarantees. Equilibrium Propagation operates in two phases: the network is first allowed to evolve freely and is then “nudged” toward a target; the weights of the network are then updated based solely on the states of the neurons that they connect. The weight updates of Equilibrium Propagation have been shown mathematically to approach those provided by Backpropagation Through Time (BPTT), the mainstream approach for training recurrent neural networks, when nudging is performed with infinitesimally small strength. In practice, however, the standard implementation of Equilibrium Propagation does not scale to visual tasks harder than MNIST. In this work, we show that a bias in the gradient estimate of Equilibrium Propagation, inherent in the use of finite nudging, is responsible for this phenomenon, and that canceling it allows the training of deep convolutional neural networks. We show that this bias can be greatly reduced by using symmetric nudging (one positive nudging phase and one negative). We also generalize Equilibrium Propagation to the case of cross-entropy loss (as opposed to squared error). As a result of these advances, we achieve a test error of 11.7% on CIFAR-10, which approaches that achieved by BPTT and constitutes a major improvement over standard Equilibrium Propagation, which yields 86% test error. We also apply these techniques to train an architecture with unidirectional forward and backward connections, yielding a 13.2% test error. These results highlight Equilibrium Propagation as a compelling biologically plausible approach to computing error gradients in deep neuromorphic systems.
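A hedged sketch of the two estimators contrasted above, in the primitive-function notation standard in the Equilibrium Propagation literature (Φ is the primitive function, s_*^β the fixed point reached under nudging strength β, L* the loss at the free fixed point; the nudged dynamics are assumed to be s_{t+1} = ∂Φ/∂s − β ∂ℓ/∂s, so signs may differ under other conventions):

    \widehat{\nabla}^{\mathrm{EP}}_{\theta}(\beta)
      = \frac{1}{\beta}\left(\frac{\partial \Phi}{\partial \theta}\bigl(x, s_{*}^{\beta}, \theta\bigr)
        - \frac{\partial \Phi}{\partial \theta}\bigl(x, s_{*}^{0}, \theta\bigr)\right)
      = -\frac{\partial \mathcal{L}^{*}}{\partial \theta} + O(\beta),

    \widehat{\nabla}^{\mathrm{sym}}_{\theta}(\beta)
      = \frac{1}{2\beta}\left(\frac{\partial \Phi}{\partial \theta}\bigl(x, s_{*}^{\beta}, \theta\bigr)
        - \frac{\partial \Phi}{\partial \theta}\bigl(x, s_{*}^{-\beta}, \theta\bigr)\right)
      = -\frac{\partial \mathcal{L}^{*}}{\partial \theta} + O(\beta^{2}).

The symmetric form uses one positive and one negative nudging phase; the odd O(β) term cancels, which is the bias reduction the abstract credits for scaling to CIFAR-10.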
format Online
Article
Text
id pubmed-7930909
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-7930909 2021-03-05 Scaling Equilibrium Propagation to Deep ConvNets by Drastically Reducing Its Gradient Estimator Bias Laborieux, Axel Ernoult, Maxence Scellier, Benjamin Bengio, Yoshua Grollier, Julie Querlioz, Damien Front Neurosci Neuroscience Equilibrium Propagation is a biologically inspired algorithm that trains convergent recurrent neural networks with a local learning rule. This approach constitutes a major step toward enabling learning-capable neuromorphic systems and comes with strong theoretical guarantees. Equilibrium Propagation operates in two phases: the network is first allowed to evolve freely and is then “nudged” toward a target; the weights of the network are then updated based solely on the states of the neurons that they connect. The weight updates of Equilibrium Propagation have been shown mathematically to approach those provided by Backpropagation Through Time (BPTT), the mainstream approach for training recurrent neural networks, when nudging is performed with infinitesimally small strength. In practice, however, the standard implementation of Equilibrium Propagation does not scale to visual tasks harder than MNIST. In this work, we show that a bias in the gradient estimate of Equilibrium Propagation, inherent in the use of finite nudging, is responsible for this phenomenon, and that canceling it allows the training of deep convolutional neural networks. We show that this bias can be greatly reduced by using symmetric nudging (one positive nudging phase and one negative). We also generalize Equilibrium Propagation to the case of cross-entropy loss (as opposed to squared error). As a result of these advances, we achieve a test error of 11.7% on CIFAR-10, which approaches that achieved by BPTT and constitutes a major improvement over standard Equilibrium Propagation, which yields 86% test error. We also apply these techniques to train an architecture with unidirectional forward and backward connections, yielding a 13.2% test error. These results highlight Equilibrium Propagation as a compelling biologically plausible approach to computing error gradients in deep neuromorphic systems. Frontiers Media S.A. 2021-02-18 /pmc/articles/PMC7930909/ /pubmed/33679315 http://dx.doi.org/10.3389/fnins.2021.633674 Text en Copyright © 2021 Laborieux, Ernoult, Scellier, Bengio, Grollier and Querlioz. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
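The mechanism behind this bias cancellation is the same as replacing a one-sided finite difference by a centered one. A minimal, self-contained numerical illustration of that effect (an analogy only, written for this record; it is not the paper's training code, and f is an arbitrary toy function):

    # One-sided vs. centered difference: the centered estimate of f'(x) has
    # O(beta^2) error, mirroring the symmetric EP estimator's reduced bias.
    import numpy as np

    def f(x):
        return np.exp(x)  # toy function with known derivative f'(x) = exp(x)

    x, true_grad = 1.0, np.exp(1.0)
    for beta in (0.5, 0.1, 0.02):
        one_sided = (f(x + beta) - f(x)) / beta               # "free phase + positive nudge"
        symmetric = (f(x + beta) - f(x - beta)) / (2 * beta)  # "positive + negative nudge"
        print(f"beta={beta:<5} one-sided err={abs(one_sided - true_grad):.2e} "
              f"symmetric err={abs(symmetric - true_grad):.2e}")

Shrinking beta by a factor of 5 cuts the one-sided error roughly 5-fold but the symmetric error roughly 25-fold, reflecting the O(β) versus O(β²) behavior noted above.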
spellingShingle Neuroscience
Laborieux, Axel
Ernoult, Maxence
Scellier, Benjamin
Bengio, Yoshua
Grollier, Julie
Querlioz, Damien
Scaling Equilibrium Propagation to Deep ConvNets by Drastically Reducing Its Gradient Estimator Bias
title Scaling Equilibrium Propagation to Deep ConvNets by Drastically Reducing Its Gradient Estimator Bias
title_full Scaling Equilibrium Propagation to Deep ConvNets by Drastically Reducing Its Gradient Estimator Bias
title_fullStr Scaling Equilibrium Propagation to Deep ConvNets by Drastically Reducing Its Gradient Estimator Bias
title_full_unstemmed Scaling Equilibrium Propagation to Deep ConvNets by Drastically Reducing Its Gradient Estimator Bias
title_short Scaling Equilibrium Propagation to Deep ConvNets by Drastically Reducing Its Gradient Estimator Bias
title_sort scaling equilibrium propagation to deep convnets by drastically reducing its gradient estimator bias
topic Neuroscience
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7930909/
https://www.ncbi.nlm.nih.gov/pubmed/33679315
http://dx.doi.org/10.3389/fnins.2021.633674
work_keys_str_mv AT laborieuxaxel scalingequilibriumpropagationtodeepconvnetsbydrasticallyreducingitsgradientestimatorbias
AT ernoultmaxence scalingequilibriumpropagationtodeepconvnetsbydrasticallyreducingitsgradientestimatorbias
AT scellierbenjamin scalingequilibriumpropagationtodeepconvnetsbydrasticallyreducingitsgradientestimatorbias
AT bengioyoshua scalingequilibriumpropagationtodeepconvnetsbydrasticallyreducingitsgradientestimatorbias
AT grollierjulie scalingequilibriumpropagationtodeepconvnetsbydrasticallyreducingitsgradientestimatorbias
AT querliozdamien scalingequilibriumpropagationtodeepconvnetsbydrasticallyreducingitsgradientestimatorbias