
Interactive reservoir computing for chunking information streams


Bibliographic Details
Main Authors: Asabuki, Toshitake; Hiratani, Naoki; Fukai, Tomoki
Format: Online Article Text
Language: English
Published: Public Library of Science, 2018
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6193738/
https://www.ncbi.nlm.nih.gov/pubmed/30296262
http://dx.doi.org/10.1371/journal.pcbi.1006400
Collection: PubMed
Description:
Chunking is the process by which frequently repeated segments of temporal inputs are concatenated into single units that are easy to process. Such a process is fundamental to time-series analysis in biological and artificial information processing systems. The brain efficiently acquires chunks from various information streams in an unsupervised manner; however, the underlying mechanisms of this process remain elusive. A widely adopted statistical method for chunking consists of predicting frequently repeated contiguous elements in an input sequence based on unequal transition probabilities over sequence elements. However, recent experimental findings suggest that the brain is unlikely to adopt this method, as human subjects can chunk sequences with uniform transition probabilities. In this study, we propose a novel conceptual framework to overcome this limitation: neural networks learn to predict dynamical response patterns to sequence input rather than learning transition patterns directly. Using a mutually supervising pair of reservoir computing modules, we demonstrate how this mechanism works in chunking sequences of letters or visual images with variable regularity and complexity. We also demonstrate that background noise plays a crucial role in correctly learning chunks in this model. In particular, the model can successfully chunk sequences that conventional statistical approaches fail to chunk because of their uniform transition probabilities. Finally, the neural responses of the model exhibit an interesting similarity to those of the basal ganglia observed after motor habit formation.
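The mechanism summarized above — two reservoir modules whose readouts are trained to imitate each other's noisy responses to a shared input stream — can be sketched roughly as follows. This is a minimal illustrative toy, not the authors' published model: the network sizes, the `Reservoir` class, the learning rule, the noise level, and the toy "abc" chunk stream are all assumptions made for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

class Reservoir:
    """A small echo-state-style reservoir with a linear readout (illustrative toy)."""
    def __init__(self, n_in, n_res=100, spectral_radius=0.9):
        self.W_in = rng.uniform(-1.0, 1.0, (n_res, n_in))
        W = rng.normal(0.0, 1.0, (n_res, n_res))
        # Rescale recurrent weights so the largest eigenvalue magnitude equals
        # spectral_radius (a common echo-state heuristic for stable dynamics).
        W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
        self.W = W
        self.w_out = rng.normal(0.0, 0.1, n_res)  # random readout to break symmetry
        self.x = np.zeros(n_res)

    def step(self, u, noise_std=0.05):
        # Background noise enters the state update; the article reports that such
        # noise is crucial for the pair to settle on correct chunks.
        self.x = np.tanh(self.W @ self.x + self.W_in @ u
                         + noise_std * rng.normal(size=self.x.size))
        return float(self.w_out @ self.x)

letters = "abcdefg"
def one_hot(c):
    v = np.zeros(len(letters))
    v[letters.index(c)] = 1.0
    return v

# Input stream: the chunk "abc" interleaved with random distractor letters.
stream = "".join("abc" if rng.random() < 0.5 else rng.choice(list("defg"))
                 for _ in range(300))

A, B = Reservoir(len(letters)), Reservoir(len(letters))
eta = 0.005  # online readout learning rate (an arbitrary choice)

outputs = []
for c in stream:
    u = one_hot(c)
    ya, xa = A.step(u), A.x.copy()
    yb, xb = B.step(u), B.x.copy()
    # Mutual supervision: each module's readout is nudged toward the *other*
    # module's current output, so only structure that both modules can predict
    # (the repeated chunk) survives the injected noise.
    A.w_out += eta * (yb - ya) * xa
    B.w_out += eta * (ya - yb) * xb
    outputs.append((ya, yb))
```

In this sketch the two readouts serve as each other's teaching signals, standing in for the paper's idea of predicting dynamical response patterns rather than element-to-element transition probabilities; a faithful reproduction would follow the learning rule and architecture given in the article itself.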
Record ID: pubmed-6193738
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Published: Public Library of Science, 2018-10-08
© 2018 Asabuki et al. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Topic: Research Article