
Lifelong Learning of Spatiotemporal Representations With Dual-Memory Recurrent Self-Organization

Bibliographic Details
Main Authors: Parisi, German I., Tani, Jun, Weber, Cornelius, Wermter, Stefan
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2018
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6279894/
https://www.ncbi.nlm.nih.gov/pubmed/30546302
http://dx.doi.org/10.3389/fnbot.2018.00078
collection PubMed
description Artificial autonomous agents and robots interacting in complex environments are required to continually acquire and fine-tune knowledge over sustained periods of time. The ability to learn from continuous streams of information is referred to as lifelong learning and represents a long-standing challenge for neural network models due to catastrophic forgetting, in which novel sensory experience interferes with existing representations and leads to abrupt decreases in performance on previously acquired knowledge. Computational models of lifelong learning typically alleviate catastrophic forgetting in experimental scenarios with given datasets of static images and limited complexity, thereby differing significantly from the conditions artificial agents are exposed to. In more natural settings, sequential information may become progressively available over time and access to previous experience may be restricted. Therefore, specialized neural network mechanisms are required that adapt to novel sequential experience while preventing disruptive interference with existing representations. In this paper, we propose a dual-memory self-organizing architecture for lifelong learning scenarios. The architecture comprises two growing recurrent networks with the complementary tasks of learning object instances (episodic memory) and categories (semantic memory). Both growing networks can expand in response to novel sensory experience: the episodic memory learns fine-grained spatiotemporal representations of object instances in an unsupervised fashion, while the semantic memory uses task-relevant signals to regulate structural plasticity levels and develop more compact representations from episodic experience. For the consolidation of knowledge in the absence of external sensory input, the episodic memory periodically replays trajectories of neural reactivations. We evaluate the proposed model on the CORe50 benchmark dataset for continuous object recognition, showing that we significantly outperform current methods of lifelong learning in three different incremental learning scenarios.
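The episodic and semantic memories described above are growing self-organizing networks: nodes are inserted when input is novel, and existing nodes adapt when input is familiar. As a rough illustration of that grow-on-novelty dynamic, here is a minimal sketch of a growing network in the Growing When Required style; all class names, thresholds, and learning rates below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class GrowingNetwork:
    """Toy growing self-organizing network: insert a node when the
    best-matching unit's activity falls below a threshold, otherwise
    adapt the best and second-best units toward the input."""

    def __init__(self, dim, insert_threshold=0.85, lr_bmu=0.1, lr_neighbor=0.01):
        rng = np.random.default_rng(0)
        # Start with two random prototype nodes.
        self.weights = [rng.standard_normal(dim), rng.standard_normal(dim)]
        self.insert_threshold = insert_threshold
        self.lr_bmu = lr_bmu
        self.lr_neighbor = lr_neighbor

    def _best_two(self, x):
        # Indices of the best- and second-best-matching units plus BMU distance.
        d = np.array([np.linalg.norm(x - w) for w in self.weights])
        order = np.argsort(d)
        return order[0], order[1], d[order[0]]

    def step(self, x):
        b, s, dist = self._best_two(x)
        activity = np.exp(-dist)  # close match -> activity near 1
        if activity < self.insert_threshold:
            # Novel input: grow a node between the input and the BMU.
            self.weights.append((x + self.weights[b]) / 2.0)
        else:
            # Familiar input: move BMU (and, weakly, the runner-up) toward it.
            self.weights[b] += self.lr_bmu * (x - self.weights[b])
            self.weights[s] += self.lr_neighbor * (x - self.weights[s])
        return activity

net = GrowingNetwork(dim=3)
data = np.random.default_rng(1).standard_normal((50, 3))
for x in data:
    net.step(x)
print(len(net.weights))  # the network has grown beyond its initial two nodes
```

The paper's model extends this idea with recurrence (temporal context in the distance computation), habituation counters, and memory replay, none of which are shown in this sketch.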
id pubmed-6279894
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
spelling pubmed-6279894 2018-12-13
Journal: Front Neurorobot (Neuroscience). Frontiers Media S.A., published online 2018-11-28.
Copyright © 2018 Parisi, Tani, Weber and Wermter. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
topic Neuroscience