Learning Without Feedback: Fixed Random Learning Signals Allow for Feedforward Training of Deep Neural Networks

Bibliographic Details
Main Authors: Frenkel, Charlotte; Lefebvre, Martin; Bol, David
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2021
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7902857/
https://www.ncbi.nlm.nih.gov/pubmed/33642986
http://dx.doi.org/10.3389/fnins.2021.629892
author Frenkel, Charlotte
Lefebvre, Martin
Bol, David
collection PubMed
description While the backpropagation of error algorithm enables deep neural network training, it implies (i) bidirectional synaptic weight transport and (ii) update locking until the forward and backward passes are completed. Not only do these constraints preclude biological plausibility, but they also hinder the development of low-cost adaptive smart sensors at the edge, as they severely constrain memory accesses and entail buffering overhead. In this work, we show that the one-hot-encoded labels provided in supervised classification problems, denoted as targets, can be viewed as a proxy for the error sign. Therefore, their fixed random projections enable a layerwise feedforward training of the hidden layers, thus solving the weight transport and update locking problems while relaxing the computational and memory requirements. Based on these observations, we propose the direct random target projection (DRTP) algorithm and demonstrate that it provides a tradeoff between accuracy and computational cost that is suitable for adaptive edge computing devices.
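The abstract describes the core mechanism of DRTP: a fixed random projection of the one-hot target replaces the backpropagated error as each hidden layer's teaching signal, so a layer can update its weights as soon as its own forward pass completes. The following is a minimal NumPy sketch of that idea, based only on the abstract; the toy layer sizes, the tanh/softmax choices, the initialization scale, the learning rate, and the sign convention are illustrative assumptions, not the paper's exact formulation.

import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 784, 256, 10  # hypothetical toy dimensions

W1 = rng.normal(0.0, 0.05, (n_hid, n_in))   # trainable hidden-layer weights
W2 = rng.normal(0.0, 0.05, (n_out, n_hid))  # trainable output-layer weights
B1 = rng.normal(0.0, 0.05, (n_hid, n_out))  # fixed random projection, never updated

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def drtp_step(x, target, lr=1e-3):
    """One feedforward training step on a single sample.

    `target` is the one-hot label; its fixed random projection B1 @ target
    serves as the hidden layer's teaching signal, standing in for the
    backpropagated error sign.
    """
    global W1, W2
    # Hidden layer: forward pass, then an immediate local update driven by
    # the fixed random projection of the target. No backward pass, no
    # transport of W2, no waiting on downstream layers.
    h = np.tanh(W1 @ x)
    delta1 = (B1 @ target) * (1.0 - h**2)  # teaching signal x tanh derivative
    W1 -= lr * np.outer(delta1, x)
    # Output layer: the ordinary delta rule, which is already local.
    y = softmax(W2 @ h)
    W2 -= lr * np.outer(y - target, h)
    return y

# Usage on one dummy sample:
x = rng.random(n_in)
t = np.eye(n_out)[3]  # one-hot target for class 3
y = drtp_step(x, t)

Because B1 is fixed and never updated, the hidden-layer update requires neither the output weights W2 (no weight transport) nor the completed output computation (no update locking), which is what relaxes the memory-access and buffering requirements mentioned in the abstract.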
format Online
Article
Text
id pubmed-7902857
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-7902857 2021-02-25 Front Neurosci (Neuroscience) Frontiers Media S.A. 2021-02-10 /pmc/articles/PMC7902857/ /pubmed/33642986 http://dx.doi.org/10.3389/fnins.2021.629892 Text en Copyright © 2021 Frenkel, Lefebvre and Bol. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
title Learning Without Feedback: Fixed Random Learning Signals Allow for Feedforward Training of Deep Neural Networks
topic Neuroscience
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7902857/
https://www.ncbi.nlm.nih.gov/pubmed/33642986
http://dx.doi.org/10.3389/fnins.2021.629892