Multi-Timescale Memory Dynamics Extend Task Repertoire in a Reinforcement Learning Network With Attention-Gated Memory

Bibliographic Details
Main Authors: Martinolli, Marco; Gerstner, Wulfram; Gilra, Aditya
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2018
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6055065/
https://www.ncbi.nlm.nih.gov/pubmed/30061819
http://dx.doi.org/10.3389/fncom.2018.00050
collection PubMed
description The interplay of reinforcement learning and memory is at the core of several recent neural network models, such as the Attention-Gated MEmory Tagging (AuGMEnT) model. While successful at various animal learning tasks, we find that the AuGMEnT network is unable to cope with some hierarchical tasks, where higher-level stimuli have to be maintained over a long time while lower-level stimuli need to be remembered and forgotten over a shorter timescale. To overcome this limitation, we introduce a hybrid AuGMEnT, with leaky (or short-timescale) and non-leaky (or long-timescale) memory units, which allows the exchange of low-level information while maintaining high-level information. We test the performance of the hybrid AuGMEnT network on two cognitive reference tasks, sequence prediction and 12AX.
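The leaky vs. non-leaky distinction in the abstract can be illustrated with a minimal sketch (this is not the authors' implementation; the update rule, function name, and the decay factor 0.7 are assumptions for illustration): a memory trace updated as m ← λ·m + x decays toward zero when λ < 1 (short timescale) but holds its input indefinitely when λ = 1 (long timescale).

```python
def update_memory(m, x, leak):
    """One time step of a memory unit: m <- leak * m + x.
    leak < 1 gives a leaky (short-timescale) unit that forgets;
    leak = 1 gives a non-leaky (long-timescale) unit that
    integrates its input and retains it indefinitely."""
    return leak * m + x

# Present a single stimulus at t = 0, then silence for 20 steps.
leaky, nonleaky = 0.0, 0.0
trace_leaky, trace_nonleaky = [], []
for t in range(21):
    x = 1.0 if t == 0 else 0.0
    leaky = update_memory(leaky, x, leak=0.7)       # short timescale
    nonleaky = update_memory(nonleaky, x, leak=1.0)  # long timescale
    trace_leaky.append(leaky)
    trace_nonleaky.append(nonleaky)

print(trace_leaky[-1])     # has decayed close to zero
print(trace_nonleaky[-1])  # still holds the stimulus
```

In a hierarchical task such as 12AX, a unit of the second kind could hold the outer rule (the last digit seen) across many trials, while units of the first kind would carry the inner letters only briefly before they fade.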
id pubmed-6055065
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
journal Front Comput Neurosci
published 2018-07-12
license Copyright © 2018 Martinolli, Gerstner and Gilra. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
topic Neuroscience