Random synaptic feedback weights support error backpropagation for deep learning

The brain processes information through multiple layers of neurons. This deep architecture is representationally powerful, but complicates learning because it is difficult to identify the responsible neurons when a mistake is made. In machine learning, the backpropagation algorithm assigns blame by multiplying error signals with all the synaptic weights on each neuron's axon and further downstream. However, this involves a precise, symmetric backward connectivity pattern, which is thought to be impossible in the brain. Here we demonstrate that this strong architectural constraint is not required for effective error propagation. We present a surprisingly simple mechanism that assigns blame by multiplying errors by even random synaptic weights. This mechanism can transmit teaching signals across multiple layers of neurons and performs as effectively as backpropagation on a variety of tasks. Our results help reopen questions about how the brain could use error signals and dispel long-held assumptions about algorithmic constraints on learning.


Bibliographic Details
Main Authors: Lillicrap, Timothy P., Cownden, Daniel, Tweed, Douglas B., Akerman, Colin J.
Format: Online Article Text
Language: English
Published: Nature Publishing Group 2016
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5105169/
https://www.ncbi.nlm.nih.gov/pubmed/27824044
http://dx.doi.org/10.1038/ncomms13276
author Lillicrap, Timothy P.
Cownden, Daniel
Tweed, Douglas B.
Akerman, Colin J.
collection PubMed
description The brain processes information through multiple layers of neurons. This deep architecture is representationally powerful, but complicates learning because it is difficult to identify the responsible neurons when a mistake is made. In machine learning, the backpropagation algorithm assigns blame by multiplying error signals with all the synaptic weights on each neuron's axon and further downstream. However, this involves a precise, symmetric backward connectivity pattern, which is thought to be impossible in the brain. Here we demonstrate that this strong architectural constraint is not required for effective error propagation. We present a surprisingly simple mechanism that assigns blame by multiplying errors by even random synaptic weights. This mechanism can transmit teaching signals across multiple layers of neurons and performs as effectively as backpropagation on a variety of tasks. Our results help reopen questions about how the brain could use error signals and dispel long-held assumptions about algorithmic constraints on learning.
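To make the mechanism in the description concrete, here is a minimal sketch of the random-feedback idea in Python/NumPy. It is not the authors' code: the network size, the tanh hidden layer, the random linear teacher task, and all variable names are assumptions for illustration. The key step multiplies the output error by a fixed random matrix B where exact backpropagation would use the transpose of the forward weights.

import numpy as np

# Minimal sketch (illustrative assumptions, not the authors' code):
# a small two-layer network learns a random linear teacher task.
rng = np.random.default_rng(0)
n_in, n_hid, n_out = 30, 20, 10

W1 = rng.normal(0.0, 0.1, (n_hid, n_in))   # input -> hidden (learned)
W2 = rng.normal(0.0, 0.1, (n_out, n_hid))  # hidden -> output (learned)
B = rng.normal(0.0, 0.1, (n_hid, n_out))   # fixed random feedback weights

T = rng.normal(0.0, 1.0, (n_out, n_in))    # random linear teacher (assumed task)

lr = 0.01
for step in range(5000):
    x = rng.normal(0.0, 1.0, n_in)
    y = T @ x                              # teacher's target output
    h = np.tanh(W1 @ x)                    # hidden-layer activity
    y_hat = W2 @ h                         # linear readout
    e = y_hat - y                          # output error
    # Blame assignment with random weights: backprop would use W2.T @ e here.
    delta_h = (B @ e) * (1.0 - h ** 2)     # tanh derivative
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(delta_h, x)

print("final squared error:", float(e @ e))

On the paper's account, this works because the forward weights come to align with the fixed feedback matrix during training, so the random backward pathway ends up carrying useful error information; the hyperparameters above are arbitrary choices for the sketch.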
format Online
Article
Text
id pubmed-5105169
institution National Center for Biotechnology Information
language English
publishDate 2016
publisher Nature Publishing Group
record_format MEDLINE/PubMed
spelling pubmed-5105169 2016-11-18 Nat Commun Article Nature Publishing Group 2016-11-08 /pmc/articles/PMC5105169/ /pubmed/27824044 http://dx.doi.org/10.1038/ncomms13276 Text en Copyright © 2016, The Author(s). This work is licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
title Random synaptic feedback weights support error backpropagation for deep learning
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5105169/
https://www.ncbi.nlm.nih.gov/pubmed/27824044
http://dx.doi.org/10.1038/ncomms13276