Deep Learning With Asymmetric Connections and Hebbian Updates

We show that deep networks can be trained using Hebbian updates yielding similar performance to ordinary back-propagation on challenging image datasets. To overcome the unrealistic symmetry in connections between layers, implicit in back-propagation, the feedback weights are separate from the feedforward weights. The feedback weights are also updated with a local rule, the same as the feedforward weights—a weight is updated solely based on the product of activity of the units it connects. With fixed feedback weights as proposed in Lillicrap et al. (2016) performance degrades quickly as the depth of the network increases. If the feedforward and feedback weights are initialized with the same values, as proposed in Zipser and Rumelhart (1990), they remain the same throughout training thus precisely implementing back-propagation. We show that even when the weights are initialized differently and at random, and the algorithm is no longer performing back-propagation, performance is comparable on challenging datasets. We also propose a cost function whose derivative can be represented as a local Hebbian update on the last layer. Convolutional layers are updated with tied weights across space, which is not biologically plausible. We show that similar performance is achieved with untied layers, also known as locally connected layers, corresponding to the connectivity implied by the convolutional layers, but where weights are untied and updated separately. In the linear case we show theoretically that the convergence of the error to zero is accelerated by the update of the feedback weights.
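The scheme described in the abstract can be made concrete with a few lines of NumPy. The fragment below is only an illustration, not the author's code: the layer widths, ReLU nonlinearity, learning rate, Gaussian initialization, and the softmax cross-entropy loss at the top layer are all placeholder assumptions (the paper proposes its own cost function with the same locality property). What it does show is the core mechanism: errors are propagated through separate feedback matrices B rather than the transposed feedforward weights, and each B receives the same local pre-times-post update as the corresponding feedforward matrix.

    # Minimal sketch of training with asymmetric connections and local updates.
    # All architectural choices here are illustrative assumptions, not taken from the paper.
    import numpy as np

    rng = np.random.default_rng(0)

    def relu(x):
        return np.maximum(x, 0.0)

    def relu_grad(x):
        return (x > 0).astype(x.dtype)

    sizes = [784, 256, 128, 10]  # e.g., flattened 28x28 images, 10 classes (assumed)
    W = [rng.normal(0.0, 0.05, (m, n)) for n, m in zip(sizes[:-1], sizes[1:])]  # feedforward weights
    B = [rng.normal(0.0, 0.05, (n, m)) for n, m in zip(sizes[:-1], sizes[1:])]  # feedback weights, independent random init
    lr = 0.01

    def train_step(x, y_onehot):
        # Forward pass: h[l] is the activity of layer l, a[l] its pre-activation.
        h, a = [x], []
        for l, Wl in enumerate(W):
            a.append(Wl @ h[-1])
            h.append(relu(a[-1]) if l < len(W) - 1 else a[-1])  # linear output layer

        # Top-layer error signal. Softmax cross-entropy is used here as a stand-in:
        # its gradient with respect to the last layer's weights is the outer product
        # (p - y) x h, i.e., itself a local Hebbian-style update.
        p = np.exp(h[-1] - h[-1].max())
        p /= p.sum()
        delta = p - y_onehot

        # Backward pass: the error is carried down by the feedback matrices B,
        # never by W.T. Both W and B receive the same local update, the product
        # of the activities of the two units each synapse connects.
        for l in reversed(range(len(W))):
            dW = np.outer(delta, h[l])  # post-synaptic error x pre-synaptic activity
            dB = dW.T                   # identical product, seen from the feedback synapse
            if l > 0:
                delta = (B[l] @ delta) * relu_grad(a[l - 1])
            W[l] -= lr * dW
            B[l] -= lr * dB

    # One illustrative step on random data (stand-ins for a real image/label pair).
    x_demo = rng.random(sizes[0])
    y_demo = np.eye(sizes[-1])[3]
    train_step(x_demo, y_demo)

With this rule, initializing each B[l] as W[l].T keeps the two as transposes of each other throughout training and so recovers exact back-propagation, the limiting case attributed above to Zipser and Rumelhart (1990); with independent random initialization the algorithm is no longer back-propagation, which is the regime the paper evaluates.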

Bibliographic Details
Main Author: Amit, Yali
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2019
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6458299/
https://www.ncbi.nlm.nih.gov/pubmed/31019458
http://dx.doi.org/10.3389/fncom.2019.00018
_version_ 1783409981872144384
author Amit, Yali
author_facet Amit, Yali
author_sort Amit, Yali
collection PubMed
description We show that deep networks can be trained using Hebbian updates yielding similar performance to ordinary back-propagation on challenging image datasets. To overcome the unrealistic symmetry in connections between layers, implicit in back-propagation, the feedback weights are separate from the feedforward weights. The feedback weights are also updated with a local rule, the same as the feedforward weights—a weight is updated solely based on the product of activity of the units it connects. With fixed feedback weights as proposed in Lillicrap et al. (2016) performance degrades quickly as the depth of the network increases. If the feedforward and feedback weights are initialized with the same values, as proposed in Zipser and Rumelhart (1990), they remain the same throughout training thus precisely implementing back-propagation. We show that even when the weights are initialized differently and at random, and the algorithm is no longer performing back-propagation, performance is comparable on challenging datasets. We also propose a cost function whose derivative can be represented as a local Hebbian update on the last layer. Convolutional layers are updated with tied weights across space, which is not biologically plausible. We show that similar performance is achieved with untied layers, also known as locally connected layers, corresponding to the connectivity implied by the convolutional layers, but where weights are untied and updated separately. In the linear case we show theoretically that the convergence of the error to zero is accelerated by the update of the feedback weights.
format Online
Article
Text
id pubmed-6458299
institution National Center for Biotechnology Information
language English
publishDate 2019
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-6458299 2019-04-24 Deep Learning With Asymmetric Connections and Hebbian Updates Amit, Yali Front Comput Neurosci Neuroscience We show that deep networks can be trained using Hebbian updates yielding similar performance to ordinary back-propagation on challenging image datasets. To overcome the unrealistic symmetry in connections between layers, implicit in back-propagation, the feedback weights are separate from the feedforward weights. The feedback weights are also updated with a local rule, the same as the feedforward weights—a weight is updated solely based on the product of activity of the units it connects. With fixed feedback weights as proposed in Lillicrap et al. (2016) performance degrades quickly as the depth of the network increases. If the feedforward and feedback weights are initialized with the same values, as proposed in Zipser and Rumelhart (1990), they remain the same throughout training thus precisely implementing back-propagation. We show that even when the weights are initialized differently and at random, and the algorithm is no longer performing back-propagation, performance is comparable on challenging datasets. We also propose a cost function whose derivative can be represented as a local Hebbian update on the last layer. Convolutional layers are updated with tied weights across space, which is not biologically plausible. We show that similar performance is achieved with untied layers, also known as locally connected layers, corresponding to the connectivity implied by the convolutional layers, but where weights are untied and updated separately. In the linear case we show theoretically that the convergence of the error to zero is accelerated by the update of the feedback weights. Frontiers Media S.A. 2019-04-04 /pmc/articles/PMC6458299/ /pubmed/31019458 http://dx.doi.org/10.3389/fncom.2019.00018 Text en Copyright © 2019 Amit. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Neuroscience
Amit, Yali
Deep Learning With Asymmetric Connections and Hebbian Updates
title Deep Learning With Asymmetric Connections and Hebbian Updates
title_full Deep Learning With Asymmetric Connections and Hebbian Updates
title_fullStr Deep Learning With Asymmetric Connections and Hebbian Updates
title_full_unstemmed Deep Learning With Asymmetric Connections and Hebbian Updates
title_short Deep Learning With Asymmetric Connections and Hebbian Updates
title_sort deep learning with asymmetric connections and hebbian updates
topic Neuroscience
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6458299/
https://www.ncbi.nlm.nih.gov/pubmed/31019458
http://dx.doi.org/10.3389/fncom.2019.00018
work_keys_str_mv AT amityali deeplearningwithasymmetricconnectionsandhebbianupdates