A Dual Simple Recurrent Network Model for Chunking and Abstract Processes in Sequence Learning
Although many studies have provided evidence that abstract knowledge can be acquired in artificial grammar learning, it remains unclear how abstract knowledge can be attained in sequence learning. To address this issue, we proposed a dual simple recurrent network (DSRN) model that includes a surface SRN encoding and predicting the surface properties of stimuli and an abstract SRN encoding and predicting the abstract properties of stimuli. The results of Simulations 1 and 2 showed that the DSRN model can account for learning effects in the serial reaction time (SRT) task under different conditions, and the manipulation of the contribution weight of each SRN accounted for the contribution of conscious and unconscious processes in inclusion and exclusion tests in previous studies. The results of human performance in Simulation 3 provided further evidence that people can implicitly learn both chunking and abstract knowledge in sequence learning, and the results of Simulation 3 confirmed that the DSRN model can account for how people implicitly acquire the two types of knowledge in sequence learning. These findings extend the learning ability of the SRN model and help understand how different types of knowledge can be acquired implicitly in sequence learning.
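The abstract describes the architecture only at a high level: two Elman-style simple recurrent networks whose predictions are combined via a contribution weight. As a rough illustration of that dual-pathway idea, the minimal sketch below pairs two SRNs and blends their next-element predictions; all layer sizes, names, and the blending rule are assumptions drawn from the abstract, not the authors' implementation (see the full paper at the DOI below).

```python
import numpy as np

class SRN:
    """Elman-style simple recurrent network: the hidden state feeds back
    as context on the next step. Sizes and initialization are illustrative."""
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0, 0.1, (n_hidden, n_in))
        self.W_ctx = rng.normal(0, 0.1, (n_hidden, n_hidden))
        self.W_out = rng.normal(0, 0.1, (n_out, n_hidden))
        self.context = np.zeros(n_hidden)

    def step(self, x):
        # New hidden state from the current input plus the previous context.
        self.context = np.tanh(self.W_in @ x + self.W_ctx @ self.context)
        logits = self.W_out @ self.context
        e = np.exp(logits - logits.max())
        return e / e.sum()  # predicted distribution over next elements

def dsrn_predict(surface_net, abstract_net, surface_x, abstract_x, w=0.5):
    """Blend the two pathways' next-element predictions. `w` is the
    contribution weight of the surface SRN -- an assumed formalization of
    the 'contribution weight' manipulation mentioned in the abstract."""
    return w * surface_net.step(surface_x) + (1 - w) * abstract_net.step(abstract_x)

# Example: four possible stimulus locations, pathways blended 50/50.
surface = SRN(n_in=4, n_hidden=8, n_out=4, seed=1)
abstract = SRN(n_in=4, n_hidden=8, n_out=4, seed=2)
x = np.eye(4)[0]  # one-hot encoding of the current stimulus
print(dsrn_predict(surface, abstract, x, x, w=0.5))
```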
Main Authors: | Wang, Lituan; Feng, Yangqin; Fu, Qiufang; Wang, Jianyong; Sun, Xunwei; Fu, Xiaolan; Zhang, Lei; Yi, Zhang |
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A., 2021 |
Subjects: | Psychology |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8129006/ https://www.ncbi.nlm.nih.gov/pubmed/34017276 http://dx.doi.org/10.3389/fpsyg.2021.587405 |
collection | PubMed |
id | pubmed-8129006 |
institution | National Center for Biotechnology Information |
record_format | MEDLINE/PubMed |
journal | Front Psychol
published online | 2021-05-04
rights | Copyright © 2021 Wang, Feng, Fu, Wang, Sun, Fu, Zhang and Yi. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY): https://creativecommons.org/licenses/by/4.0/. The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.