Remembering Words in Context as Predicted by an Associative Read-Out Model
Interactive activation models (IAMs) simulate orthographic and phonological processes in implicit memory tasks, but they neither account for associative relations between words nor explicit memory performance. To overcome both limitations, we introduce the associative read-out model (AROM), an IAM extended by an associative layer implementing long-term associations between words. According to Hebbian learning, two words were defined as "associated" if they co-occurred significantly often in the sentences of a large corpus. In a study-test task, a greater number of associated items in the stimulus set increased the "yes" response rates for non-learned and learned words. To model test-phase performance, the associative layer is initialized with greater activation for learned than for non-learned items. Because IAMs scale inhibitory activation changes by the initial activation, learned items gain greater signal variability than non-learned items, irrespective of the choice of the free parameters. This explains why the slope of the z-transformed receiver-operating characteristics (z-ROCs) is lower than one during recognition memory. When fitted to the empirical z-ROCs, the model likewise predicted which word is recognized with which probability at the item level. Since many of the strongest associates reflect semantic relations to the presented word (e.g., synonymy), the AROM merges form-based aspects of meaning representation with meaning relations between words.
Main Authors: | Hofmann, Markus J.; Kuchinke, Lars; Biemann, Chris; Tamm, Sascha; Jacobs, Arthur M. |
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Research Foundation, 2011 |
Subjects: | Psychology |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3185299/ https://www.ncbi.nlm.nih.gov/pubmed/22007183 http://dx.doi.org/10.3389/fpsyg.2011.00252 |
_version_ | 1782213203840729088 |
author | Hofmann, Markus J.; Kuchinke, Lars; Biemann, Chris; Tamm, Sascha; Jacobs, Arthur M. |
author_facet | Hofmann, Markus J.; Kuchinke, Lars; Biemann, Chris; Tamm, Sascha; Jacobs, Arthur M. |
author_sort | Hofmann, Markus J. |
collection | PubMed |
description | Interactive activation models (IAMs) simulate orthographic and phonological processes in implicit memory tasks, but they neither account for associative relations between words nor explicit memory performance. To overcome both limitations, we introduce the associative read-out model (AROM), an IAM extended by an associative layer implementing long-term associations between words. According to Hebbian learning, two words were defined as “associated” if they co-occurred significantly often in the sentences of a large corpus. In a study-test task, a greater number of associated items in the stimulus set increased the “yes” response rates for non-learned and learned words. To model test-phase performance, the associative layer is initialized with greater activation for learned than for non-learned items. Because IAMs scale inhibitory activation changes by the initial activation, learned items gain greater signal variability than non-learned items, irrespective of the choice of the free parameters. This explains why the slope of the z-transformed receiver-operating characteristics (z-ROCs) is lower than one during recognition memory. When fitted to the empirical z-ROCs, the model likewise predicted which word is recognized with which probability at the item level. Since many of the strongest associates reflect semantic relations to the presented word (e.g., synonymy), the AROM merges form-based aspects of meaning representation with meaning relations between words. |
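The abstract's key claim — that greater signal variability for learned items pulls the z-ROC slope below one — follows directly from a Gaussian signal-detection analysis. A minimal sketch, assuming illustrative distribution parameters that are not taken from the paper:

```python
from statistics import NormalDist

# Gaussian signal-detection sketch: learned ("old") items have larger
# signal variability than non-learned ("new") items, as the AROM predicts.
# Parameter values below are illustrative assumptions, not from the paper.
mu_old, sd_old = 1.0, 1.25   # learned items: higher mean, larger variance
mu_new, sd_new = 0.0, 1.0    # non-learned items: standard normal

std_normal = NormalDist()

def z_roc_point(criterion):
    """Return (z(false-alarm rate), z(hit rate)) at one decision criterion."""
    far = 1.0 - NormalDist(mu_new, sd_new).cdf(criterion)
    hit = 1.0 - NormalDist(mu_old, sd_old).cdf(criterion)
    return std_normal.inv_cdf(far), std_normal.inv_cdf(hit)

# Sweep several criteria and estimate the z-ROC slope by least squares.
points = [z_roc_point(c) for c in (-0.5, 0.0, 0.5, 1.0, 1.5)]
xs = [p[0] for p in points]
ys = [p[1] for p in points]
n = len(points)
slope = (n * sum(x * y for x, y in points) - sum(xs) * sum(ys)) / \
        (n * sum(x * x for x in xs) - sum(xs) ** 2)
print(round(slope, 3))  # 0.8 = sd_new / sd_old: a z-ROC slope below one
```

Because z(hit rate) is linear in z(false-alarm rate) with slope sd_new/sd_old, any model that assigns learned items a larger signal variance than non-learned items produces a z-ROC slope below one, whatever the decision criteria.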
format | Online Article Text |
id | pubmed-3185299 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2011 |
publisher | Frontiers Research Foundation |
record_format | MEDLINE/PubMed |
spelling | pubmed-31852992011-10-17 Remembering Words in Context as Predicted by an Associative Read-Out Model Hofmann, Markus J. Kuchinke, Lars Biemann, Chris Tamm, Sascha Jacobs, Arthur M. Front Psychol Psychology Interactive activation models (IAMs) simulate orthographic and phonological processes in implicit memory tasks, but they neither account for associative relations between words nor explicit memory performance. To overcome both limitations, we introduce the associative read-out model (AROM), an IAM extended by an associative layer implementing long-term associations between words. According to Hebbian learning, two words were defined as “associated” if they co-occurred significantly often in the sentences of a large corpus. In a study-test task, a greater number of associated items in the stimulus set increased the “yes” response rates for non-learned and learned words. To model test-phase performance, the associative layer is initialized with greater activation for learned than for non-learned items. Because IAMs scale inhibitory activation changes by the initial activation, learned items gain greater signal variability than non-learned items, irrespective of the choice of the free parameters. This explains why the slope of the z-transformed receiver-operating characteristics (z-ROCs) is lower than one during recognition memory. When fitted to the empirical z-ROCs, the model likewise predicted which word is recognized with which probability at the item level. Since many of the strongest associates reflect semantic relations to the presented word (e.g., synonymy), the AROM merges form-based aspects of meaning representation with meaning relations between words. Frontiers Research Foundation 2011-10-04 /pmc/articles/PMC3185299/ /pubmed/22007183 http://dx.doi.org/10.3389/fpsyg.2011.00252 Text en Copyright © 2011 Hofmann, Kuchinke, Biemann, Tamm and Jacobs.
http://www.frontiersin.org/licenseagreement This is an open-access article subject to a non-exclusive license between the authors and Frontiers Media SA, which permits use, distribution and reproduction in other forums, provided the original authors and source are credited and other Frontiers conditions are complied with. |
spellingShingle | Psychology Hofmann, Markus J. Kuchinke, Lars Biemann, Chris Tamm, Sascha Jacobs, Arthur M. Remembering Words in Context as Predicted by an Associative Read-Out Model |
title | Remembering Words in Context as Predicted by an Associative Read-Out Model |
title_full | Remembering Words in Context as Predicted by an Associative Read-Out Model |
title_fullStr | Remembering Words in Context as Predicted by an Associative Read-Out Model |
title_full_unstemmed | Remembering Words in Context as Predicted by an Associative Read-Out Model |
title_short | Remembering Words in Context as Predicted by an Associative Read-Out Model |
title_sort | remembering words in context as predicted by an associative read-out model |
topic | Psychology |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3185299/ https://www.ncbi.nlm.nih.gov/pubmed/22007183 http://dx.doi.org/10.3389/fpsyg.2011.00252 |
work_keys_str_mv | AT hofmannmarkusj rememberingwordsincontextaspredictedbyanassociativereadoutmodel AT kuchinkelars rememberingwordsincontextaspredictedbyanassociativereadoutmodel AT biemannchris rememberingwordsincontextaspredictedbyanassociativereadoutmodel AT tammsascha rememberingwordsincontextaspredictedbyanassociativereadoutmodel AT jacobsarthurm rememberingwordsincontextaspredictedbyanassociativereadoutmodel |