Does complexity matter? Meta-analysis of learner performance in artificial grammar tasks
Complexity has been shown to affect performance on artificial grammar learning (AGL) tasks (categorization of test items as grammatical/ungrammatical according to the implicitly trained grammar rules). However, previously published AGL experiments did not utilize consistent measures to investigate the comprehensive effect of grammar complexity on task performance. The present study focused on computerizing Bollt and Jones's (2000) technique of calculating topological entropy (TE), a quantitative measure of AGL charts' complexity, with the aim of examining associations between grammar systems' TE and learners' AGL task performance. We surveyed the literature and identified 56 previous AGL experiments based on 10 different grammars that met the sampling criteria. Using the automated matrix-lift-action method, we assigned a TE value for each of these 10 previously used AGL systems and examined its correlation with learners' task performance. The meta-regression analysis showed a significant correlation, demonstrating that the complexity effect transcended the different settings and conditions in which the categorization task was performed. The results reinforced the importance of using this new automated tool to uniformly measure grammar systems' complexity when experimenting with and evaluating the findings of AGL studies.
| Main Authors: | Schiff, Rachel; Katan, Pesia |
|---|---|
| Format: | Online Article Text |
| Language: | English |
| Published: | Frontiers Media S.A., 2014 |
| Subjects: | Psychology |
| Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4174743/ https://www.ncbi.nlm.nih.gov/pubmed/25309495 http://dx.doi.org/10.3389/fpsyg.2014.01084 |
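The abstract above describes computerizing Bollt and Jones's (2000) topological entropy (TE) measure of grammar complexity via an automated matrix-lift-action method. The sketch below is an illustration only, not the authors' implementation: it skips the lift from the grammar's node graph to its letter-labeled arc graph and shows just the core quantity, TE as the natural logarithm of the spectral radius of a transition matrix. The 4-state matrix is hypothetical and is not one of the 10 grammars surveyed.

```python
# Minimal sketch, assuming the standard symbolic-dynamics definition of
# topological entropy: TE = log of the largest eigenvalue (spectral radius)
# of the grammar's 0/1 transition matrix. Not the authors' code.
import numpy as np

def topological_entropy(transition_matrix: np.ndarray) -> float:
    """Return log of the spectral radius of a 0/1 transition matrix."""
    eigenvalues = np.linalg.eigvals(transition_matrix)
    spectral_radius = max(abs(eigenvalues))
    return float(np.log(spectral_radius))

# Hypothetical 4-state grammar: entry (i, j) = 1 if a transition from
# state i to state j is allowed, else 0.
A = np.array([
    [0, 1, 1, 0],
    [0, 0, 1, 1],
    [1, 0, 0, 1],
    [1, 0, 0, 0],
])

print(f"TE = {topological_entropy(A):.3f} nats")
```

In Bollt and Jones's formulation the eigenvalue is taken of the lifted, letter-labeled matrix, so the resulting value tracks the growth rate of distinct strings the grammar generates; the helper above only illustrates the final eigenvalue step.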
_version_ | 1782336386145189888 |
author | Schiff, Rachel Katan, Pesia |
author_facet | Schiff, Rachel Katan, Pesia |
author_sort | Schiff, Rachel |
collection | PubMed |
description | Complexity has been shown to affect performance on artificial grammar learning (AGL) tasks (categorization of test items as grammatical/ungrammatical according to the implicitly trained grammar rules). However, previously published AGL experiments did not utilize consistent measures to investigate the comprehensive effect of grammar complexity on task performance. The present study focused on computerizing Bollt and Jones's (2000) technique of calculating topological entropy (TE), a quantitative measure of AGL charts' complexity, with the aim of examining associations between grammar systems' TE and learners' AGL task performance. We surveyed the literature and identified 56 previous AGL experiments based on 10 different grammars that met the sampling criteria. Using the automated matrix-lift-action method, we assigned a TE value for each of these 10 previously used AGL systems and examined its correlation with learners' task performance. The meta-regression analysis showed a significant correlation, demonstrating that the complexity effect transcended the different settings and conditions in which the categorization task was performed. The results reinforced the importance of using this new automated tool to uniformly measure grammar systems' complexity when experimenting with and evaluating the findings of AGL studies. |
format | Online Article Text |
id | pubmed-4174743 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2014 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-41747432014-10-10 Does complexity matter? Meta-analysis of learner performance in artificial grammar tasks Schiff, Rachel Katan, Pesia Front Psychol Psychology Complexity has been shown to affect performance on artificial grammar learning (AGL) tasks (categorization of test items as grammatical/ungrammatical according to the implicitly trained grammar rules). However, previously published AGL experiments did not utilize consistent measures to investigate the comprehensive effect of grammar complexity on task performance. The present study focused on computerizing Bollt and Jones's (2000) technique of calculating topological entropy (TE), a quantitative measure of AGL charts' complexity, with the aim of examining associations between grammar systems' TE and learners' AGL task performance. We surveyed the literature and identified 56 previous AGL experiments based on 10 different grammars that met the sampling criteria. Using the automated matrix-lift-action method, we assigned a TE value for each of these 10 previously used AGL systems and examined its correlation with learners' task performance. The meta-regression analysis showed a significant correlation, demonstrating that the complexity effect transcended the different settings and conditions in which the categorization task was performed. The results reinforced the importance of using this new automated tool to uniformly measure grammar systems' complexity when experimenting with and evaluating the findings of AGL studies. Frontiers Media S.A. 2014-09-25 /pmc/articles/PMC4174743/ /pubmed/25309495 http://dx.doi.org/10.3389/fpsyg.2014.01084 Text en Copyright © 2014 Schiff and Katan. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Psychology Schiff, Rachel Katan, Pesia Does complexity matter? Meta-analysis of learner performance in artificial grammar tasks |
title | Does complexity matter? Meta-analysis of learner performance in artificial grammar tasks |
title_full | Does complexity matter? Meta-analysis of learner performance in artificial grammar tasks |
title_fullStr | Does complexity matter? Meta-analysis of learner performance in artificial grammar tasks |
title_full_unstemmed | Does complexity matter? Meta-analysis of learner performance in artificial grammar tasks |
title_short | Does complexity matter? Meta-analysis of learner performance in artificial grammar tasks |
title_sort | does complexity matter? meta-analysis of learner performance in artificial grammar tasks |
topic | Psychology |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4174743/ https://www.ncbi.nlm.nih.gov/pubmed/25309495 http://dx.doi.org/10.3389/fpsyg.2014.01084 |
work_keys_str_mv | AT schiffrachel doescomplexitymattermetaanalysisoflearnerperformanceinartificialgrammartasks AT katanpesia doescomplexitymattermetaanalysisoflearnerperformanceinartificialgrammartasks |
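The description field above also reports a meta-regression linking grammar TE to learners' categorization performance across 56 experiments. The snippet below is an illustrative weighted-regression sketch only: the experiment-level TE values, accuracies, and weights are invented, and the paper's actual meta-regression may have used different effect-size metrics, weighting, and software.

```python
# Illustrative weighted meta-regression sketch: regress experiment-level
# AGL accuracy on grammar TE, weighting each experiment (e.g., by sample
# size). All numbers below are hypothetical, not data from the paper.
import numpy as np

te       = np.array([0.45, 0.52, 0.60, 0.68, 0.74, 0.81])  # grammar TE per experiment
accuracy = np.array([0.71, 0.69, 0.66, 0.63, 0.62, 0.58])  # mean categorization accuracy
weights  = np.array([30, 25, 40, 35, 20, 28], dtype=float)  # hypothetical weights

# Weighted least squares: solve (X' W X) b = X' W y for intercept and slope.
X = np.column_stack([np.ones_like(te), te])
W = np.diag(weights)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ accuracy)
print(f"intercept = {beta[0]:.3f}, slope per unit TE = {beta[1]:.3f}")
```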