
Real-time learning of predictive recognition categories that chunk sequences of items stored in working memory


Bibliographic Details
Main Authors: Kazerounian, Sohrob, Grossberg, Stephen
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2014
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4186345/
https://www.ncbi.nlm.nih.gov/pubmed/25339918
http://dx.doi.org/10.3389/fpsyg.2014.01053
author Kazerounian, Sohrob
Grossberg, Stephen
collection PubMed
description How are sequences of events that are temporarily stored in a cognitive working memory unitized, or chunked, through learning? Such sequential learning is needed by the brain in order to enable language, spatial understanding, and motor skills to develop. In particular, how does the brain learn categories, or list chunks, that become selectively tuned to different temporal sequences of items in lists of variable length as they are stored in working memory, and how does this learning process occur in real time? The present article introduces a neural model that simulates learning of such list chunks. In this model, sequences of items are temporarily stored in an Item-and-Order, or competitive queuing, working memory before learning categorizes them using a categorization network, called a Masking Field, which is a self-similar, multiple-scale, recurrent on-center off-surround network that can weigh the evidence for variable-length sequences of items as they are stored in the working memory through time. A Masking Field hereby activates the learned list chunks that represent the most predictive item groupings at any time, while suppressing less predictive chunks. In a network with a given number of input items, all possible ordered sets of these item sequences, up to a fixed length, can be learned with unsupervised or supervised learning. The self-similar multiple-scale properties of Masking Fields interacting with an Item-and-Order working memory provide a natural explanation of George Miller's Magical Number Seven and Nelson Cowan's Magical Number Four. The article explains why linguistic, spatial, and action event sequences may all be stored by Item-and-Order working memories that obey similar design principles, and thus how the current results may apply across modalities. Item-and-Order properties may readily be extended to Item-Order-Rank working memories in which the same item can be stored in multiple list positions, or ranks, as in the list ABADBD. 
Comparisons with other models, including TRACE, MERGE, and TISK, are made.
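A minimal illustrative sketch (not from the article; all function names are hypothetical) of the competitive-queuing recall scheme the abstract describes: a sequence is stored in working memory as a primacy gradient of activations, with earlier items more active, and recall repeatedly selects the most active item and then suppresses it, recovering the stored order.

```python
def store_sequence(items, gradient=0.9):
    """Store distinct items with a primacy gradient:
    earlier items receive higher activation."""
    return {item: gradient ** i for i, item in enumerate(items)}

def recall(memory):
    """Recall by repeatedly selecting the most active item,
    then suppressing it (competitive queuing)."""
    order = []
    active = dict(memory)
    while active:
        winner = max(active, key=active.get)  # strongest item wins the competition
        order.append(winner)
        del active[winner]  # suppress the winner after selection
    return order

print(recall(store_sequence(["A", "B", "C", "D"])))  # ['A', 'B', 'C', 'D']
```

This toy version assumes all items are distinct; handling repeated items in different list positions (as in ABADBD) would require the Item-Order-Rank extension mentioned in the abstract.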
format Online
Article
Text
id pubmed-4186345
institution National Center for Biotechnology Information
language English
publishDate 2014
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-4186345 2014-10-22 Kazerounian, Sohrob; Grossberg, Stephen. Front Psychol (Psychology). Frontiers Media S.A. 2014-10-06 /pmc/articles/PMC4186345/ /pubmed/25339918 http://dx.doi.org/10.3389/fpsyg.2014.01053 Text en Copyright © 2014 Kazerounian and Grossberg. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
title Real-time learning of predictive recognition categories that chunk sequences of items stored in working memory
topic Psychology