
Artificial Grammar Learning Capabilities in an Abstract Visual Task Match Requirements for Linguistic Syntax

Whether pattern-parsing mechanisms are specific to language or apply across multiple cognitive domains remains unresolved. Formal language theory provides a mathematical framework for classifying pattern-generating rule sets (or “grammars”) according to complexity. This framework applies to patterns at any level of complexity, stretching from simple sequences, to highly complex tree-like or net-like structures, to any Turing-computable set of strings. Here, we explored human pattern-processing capabilities in the visual domain by generating abstract visual sequences made up of abstract tiles differing in form and color. We constructed different sets of sequences, using artificial “grammars” (rule sets) at three key complexity levels. Because human linguistic syntax is classed as “mildly context-sensitive,” we specifically included a visual grammar at this complexity level. Acquisition of these three grammars was tested in an artificial grammar-learning paradigm: after exposure to a set of well-formed strings, participants were asked to discriminate novel grammatical patterns from non-grammatical patterns. Participants successfully acquired all three grammars after only minutes of exposure, correctly generalizing to novel stimuli and to novel stimulus lengths. A Bayesian analysis excluded multiple alternative hypotheses and showed that the success in rule acquisition applies both at the group level and for most participants analyzed individually. These experimental results demonstrate rapid pattern learning for abstract visual patterns, extending to the mildly context-sensitive level characterizing language. We suggest that a formal equivalence of processing at the mildly context-sensitive level in the visual and linguistic domains implies that cognitive mechanisms with the computational power to process linguistic syntax are not specific to the domain of language, but extend to abstract visual patterns with no meaning.
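
As a rough illustration of the three grammar-complexity levels mentioned in the abstract, the short Python sketch below generates canonical toy string languages at each level: a regular pattern (AB)^n, a context-free pattern A^n B^n, and A^n B^n C^n, a standard textbook example of a pattern that exceeds context-free power and sits at the mildly context-sensitive level. The symbols and generator functions are illustrative assumptions only; they are not the visual tile grammars or stimuli used in the study.

# Toy string languages at three complexity levels (illustration only;
# NOT the visual tile grammars or stimuli used in the study).

def regular(n: int) -> str:
    """(AB)^n -- a finite-state (regular) pattern."""
    return "AB" * n

def context_free(n: int) -> str:
    """A^n B^n -- nested dependencies requiring context-free power."""
    return "A" * n + "B" * n

def mildly_context_sensitive(n: int) -> str:
    """A^n B^n C^n -- counting dependencies beyond context-free power,
    a standard example at the mildly context-sensitive level."""
    return "A" * n + "B" * n + "C" * n

if __name__ == "__main__":
    for n in (1, 2, 3):
        print(n, regular(n), context_free(n), mildly_context_sensitive(n))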


Bibliographic Details
Main authors: Westphal-Fitch, Gesche; Giustolisi, Beatrice; Cecchetto, Carlo; Martin, Jordan S.; Fitch, W. Tecumseh
Format: Online Article (Text)
Language: English
Published: Frontiers Media S.A., 2018-07-24
Journal: Front Psychol
Subjects: Psychology
Online access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6066649/
https://www.ncbi.nlm.nih.gov/pubmed/30087630
http://dx.doi.org/10.3389/fpsyg.2018.01210

Copyright © 2018 Westphal-Fitch, Giustolisi, Cecchetto, Martin and Fitch. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY): http://creativecommons.org/licenses/by/4.0/. The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.