
An Entropy Model for Artificial Grammar Learning

A model is proposed to characterize the type of knowledge acquired in artificial grammar learning (AGL). In particular, Shannon entropy is employed to compute the complexity of different test items in an AGL task, relative to the training items. According to this model, the more predictable a test item is from the training items, the more likely it is that this item should be selected as compatible with the training items. The predictions of the entropy model are explored in relation to the results from several previous AGL datasets and compared to other AGL measures. This particular approach in AGL resonates well with similar models in categorization and reasoning which also postulate that cognitive processing is geared towards the reduction of entropy.
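The abstract's central idea can be illustrated with a minimal sketch, not the paper's exact formulation: estimate a bigram distribution from the training strings, compute its Shannon entropy, and score a test item by how predictable (low-surprisal) its bigrams are under that distribution. The training strings, the bigram encoding with start/end padding, and the smoothing floor below are all illustrative assumptions.

```python
import math
from collections import Counter

def bigrams(s):
    """All adjacent symbol pairs in a string, padded with ^ (start) and $ (end)."""
    padded = "^" + s + "$"
    return [padded[i:i + 2] for i in range(len(padded) - 1)]

def bigram_distribution(training_items):
    """Relative frequency of each bigram across the training set."""
    counts = Counter(bg for item in training_items for bg in bigrams(item))
    total = sum(counts.values())
    return {bg: c / total for bg, c in counts.items()}

def shannon_entropy(dist):
    """H = -sum p * log2(p) over the distribution's probabilities."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def item_surprisal(test_item, dist, floor=1e-6):
    """Mean -log2 probability of the test item's bigrams; lower means more
    predictable from training, hence (on this sketch) more 'grammatical'."""
    probs = [dist.get(bg, floor) for bg in bigrams(test_item)]
    return -sum(math.log2(p) for p in probs) / len(probs)

# Toy training set (illustrative, not from the paper)
training = ["MVT", "MVVT", "MTV"]
dist = bigram_distribution(training)
print(shannon_entropy(dist))
print(item_surprisal("MVT", dist) < item_surprisal("XYZ", dist))  # True: familiar item is less surprising
```

This is only a surprisal-based toy; the paper's actual model defines entropy over test items relative to training items in its own terms.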


Bibliographic Details
Main Author: Pothos, Emmanuel M.
Format: Text
Language: English
Published: Frontiers Research Foundation, 2010
Subjects: Psychology
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3095384/
https://www.ncbi.nlm.nih.gov/pubmed/21607072
http://dx.doi.org/10.3389/fpsyg.2010.00016
Record Details
ID: pubmed-3095384
Collection: PubMed (National Center for Biotechnology Information)
Record Format: MEDLINE/PubMed
Published Online: 2010-06-17
Topic: Psychology
Copyright © 2010 Pothos. This is an open-access article subject to an exclusive license agreement between the authors and the Frontiers Research Foundation (http://www.frontiersin.org/licenseagreement), which permits unrestricted use, distribution, and reproduction in any medium, provided the original authors and source are credited.