Mathematical properties of neuronal TD-rules and differential Hebbian learning: a comparison

A confusingly wide variety of temporally asymmetric learning rules exists, related to reinforcement learning and/or to spike-timing-dependent plasticity; many of them look exceedingly similar while displaying strongly different behavior. These rules often find their use in control tasks, for exampl...

Full description

Bibliographic Details
Main Authors: Kolodziejski, Christoph, Porr, Bernd, Wörgötter, Florentin
Format: Text
Language: English
Published: Springer-Verlag 2008
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2798052/
https://www.ncbi.nlm.nih.gov/pubmed/18196266
http://dx.doi.org/10.1007/s00422-007-0209-6
_version_ 1782175714743681024
author Kolodziejski, Christoph
Porr, Bernd
Wörgötter, Florentin
author_facet Kolodziejski, Christoph
Porr, Bernd
Wörgötter, Florentin
author_sort Kolodziejski, Christoph
collection PubMed
description A confusingly wide variety of temporally asymmetric learning rules exists, related to reinforcement learning and/or to spike-timing-dependent plasticity; many of them look exceedingly similar while displaying strongly different behavior. These rules often find their use in control tasks, for example in robotics, and for this rigorous convergence and numerical stability are required. The goal of this article is to review these rules and compare them, to provide a better overview of their different properties. Two main classes will be discussed: temporal difference (TD) rules and correlation-based (differential Hebbian) rules, along with some transition cases. In general we will focus on neuronal implementations with changeable synaptic weights and a time-continuous representation of activity. In a machine learning (non-neuronal) context, a solid mathematical theory for TD-learning has existed for several years. This can partly be transferred to a neuronal framework, too. For differential Hebb rules, on the other hand, a more complete theory has only now emerged. In general, rules differ in their convergence conditions and their numerical stability, which can lead to very undesirable behavior when applying them. For TD, convergence can be enforced with a certain output condition ensuring that the δ-error drops on average to zero (output control). Correlation-based rules, on the other hand, converge when one input drops to zero (input control). Temporally asymmetric learning rules treat situations where incoming stimuli follow each other in time. Thus, it is necessary to remember the first stimulus in order to relate it to the later-occurring second one. To this end, different types of so-called eligibility traces are used by these two types of rules. This aspect again leads to different properties of TD and differential Hebbian learning, as discussed here.
Thus, this paper, while also presenting several novel mathematical results, is mainly meant to provide a road map through the different neuronally emulated temporally asymmetric learning rules and their behavior, so as to provide some guidance for possible applications.
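The abstract contrasts output-controlled TD rules, where learning stops when the δ-error drops to zero on average, with input-controlled differential Hebbian rules, where learning stops when the early input drops to zero, and notes that both rely on eligibility traces to bridge the delay between stimuli. A minimal discrete-time sketch of the two update schemes follows; the article itself treats time-continuous neuronal dynamics, and all function names and parameter values here are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def td_update(w, x_prev, x_curr, r, e, alpha=0.1, gamma=0.9, lam=0.8):
    """One TD step with an accumulating eligibility trace (TD(lambda)-style)."""
    v_prev = np.dot(w, x_prev)           # value estimate before the transition
    v_curr = np.dot(w, x_curr)           # value estimate after the transition
    delta = r + gamma * v_curr - v_prev  # the delta-error
    e = gamma * lam * e + x_prev         # trace remembers the earlier stimulus
    w = w + alpha * delta * e            # output control: change stops when delta -> 0 on average
    return w, e

def diff_hebb_update(w, u_early, v_out_prev, v_out_curr, mu=0.01, dt=1.0):
    """Differential Hebbian step: early input times the output derivative."""
    dv = (v_out_curr - v_out_prev) / dt  # finite-difference output derivative
    return w + mu * u_early * dv         # input control: change stops when u_early -> 0
```

The shared structure (a remembered early signal multiplying a later error or derivative) is what makes the rules look so similar, while the different stopping conditions give them the different convergence behavior the abstract describes.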
format Text
id pubmed-2798052
institution National Center for Biotechnology Information
language English
publishDate 2008
publisher Springer-Verlag
record_format MEDLINE/PubMed
spelling pubmed-2798052 2010-01-13 Mathematical properties of neuronal TD-rules and differential Hebbian learning: a comparison Kolodziejski, Christoph Porr, Bernd Wörgötter, Florentin Biol Cybern Original Paper [abstract identical to the description field above] Springer-Verlag 2008-01-15 2008 /pmc/articles/PMC2798052/ /pubmed/18196266 http://dx.doi.org/10.1007/s00422-007-0209-6 Text en © Springer-Verlag 2008 https://creativecommons.org/licenses/by-nc/4.0/ This article is distributed under the terms of the Creative Commons Attribution Noncommercial License, which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.
spellingShingle Original Paper
Kolodziejski, Christoph
Porr, Bernd
Wörgötter, Florentin
Mathematical properties of neuronal TD-rules and differential Hebbian learning: a comparison
title Mathematical properties of neuronal TD-rules and differential Hebbian learning: a comparison
title_full Mathematical properties of neuronal TD-rules and differential Hebbian learning: a comparison
title_fullStr Mathematical properties of neuronal TD-rules and differential Hebbian learning: a comparison
title_full_unstemmed Mathematical properties of neuronal TD-rules and differential Hebbian learning: a comparison
title_short Mathematical properties of neuronal TD-rules and differential Hebbian learning: a comparison
title_sort mathematical properties of neuronal td-rules and differential hebbian learning: a comparison
topic Original Paper
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2798052/
https://www.ncbi.nlm.nih.gov/pubmed/18196266
http://dx.doi.org/10.1007/s00422-007-0209-6
work_keys_str_mv AT kolodziejskichristoph mathematicalpropertiesofneuronaltdrulesanddifferentialhebbianlearningacomparison
AT porrbernd mathematicalpropertiesofneuronaltdrulesanddifferentialhebbianlearningacomparison
AT worgotterflorentin mathematicalpropertiesofneuronaltdrulesanddifferentialhebbianlearningacomparison