An amodal shared resource model of language-mediated visual attention
Language-mediated visual attention describes the interaction of two fundamental components of the human cognitive system, language and vision. Within this paper we present an amodal shared resource model of language-mediated visual attention that offers a description of the information and processes...
Main Authors: | Smith, Alastair C.; Monaghan, Padraic; Huettig, Falk |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A., 2013 |
Subjects: | Psychology |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3744873/ https://www.ncbi.nlm.nih.gov/pubmed/23966967 http://dx.doi.org/10.3389/fpsyg.2013.00528 |
_version_ | 1782280657572986880 |
---|---|
author | Smith, Alastair C.; Monaghan, Padraic; Huettig, Falk
author_facet | Smith, Alastair C.; Monaghan, Padraic; Huettig, Falk
author_sort | Smith, Alastair C. |
collection | PubMed |
description | Language-mediated visual attention describes the interaction of two fundamental components of the human cognitive system, language and vision. Within this paper we present an amodal shared resource model of language-mediated visual attention that offers a description of the information and processes involved in this complex multimodal behavior and a potential explanation for how this ability is acquired. We demonstrate that the model is not only sufficient to account for the experimental effects of Visual World Paradigm studies but also that these effects are emergent properties of the architecture of the model itself, rather than requiring separate information processing channels or modular processing systems. The model provides an explicit description of the connection between the modality-specific input from language and vision and the distribution of eye gaze in language-mediated visual attention. The paper concludes by discussing future applications for the model, specifically its potential for investigating the factors driving observed individual differences in language-mediated eye gaze. |
format | Online Article Text |
id | pubmed-3744873 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2013 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-3744873 2013-08-21 An amodal shared resource model of language-mediated visual attention Smith, Alastair C. Monaghan, Padraic Huettig, Falk Front Psychol Psychology Language-mediated visual attention describes the interaction of two fundamental components of the human cognitive system, language and vision. Within this paper we present an amodal shared resource model of language-mediated visual attention that offers a description of the information and processes involved in this complex multimodal behavior and a potential explanation for how this ability is acquired. We demonstrate that the model is not only sufficient to account for the experimental effects of Visual World Paradigm studies but also that these effects are emergent properties of the architecture of the model itself, rather than requiring separate information processing channels or modular processing systems. The model provides an explicit description of the connection between the modality-specific input from language and vision and the distribution of eye gaze in language-mediated visual attention. The paper concludes by discussing future applications for the model, specifically its potential for investigating the factors driving observed individual differences in language-mediated eye gaze. Frontiers Media S.A. 2013-08-16 /pmc/articles/PMC3744873/ /pubmed/23966967 http://dx.doi.org/10.3389/fpsyg.2013.00528 Text en Copyright © 2013 Smith, Monaghan and Huettig. http://creativecommons.org/licenses/by/3.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Psychology Smith, Alastair C. Monaghan, Padraic Huettig, Falk An amodal shared resource model of language-mediated visual attention |
title | An amodal shared resource model of language-mediated visual attention |
title_full | An amodal shared resource model of language-mediated visual attention |
title_fullStr | An amodal shared resource model of language-mediated visual attention |
title_full_unstemmed | An amodal shared resource model of language-mediated visual attention |
title_short | An amodal shared resource model of language-mediated visual attention |
title_sort | amodal shared resource model of language-mediated visual attention |
topic | Psychology |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3744873/ https://www.ncbi.nlm.nih.gov/pubmed/23966967 http://dx.doi.org/10.3389/fpsyg.2013.00528 |
work_keys_str_mv | AT smithalastairc anamodalsharedresourcemodeloflanguagemediatedvisualattention AT monaghanpadraic anamodalsharedresourcemodeloflanguagemediatedvisualattention AT huettigfalk anamodalsharedresourcemodeloflanguagemediatedvisualattention AT smithalastairc amodalsharedresourcemodeloflanguagemediatedvisualattention AT monaghanpadraic amodalsharedresourcemodeloflanguagemediatedvisualattention AT huettigfalk amodalsharedresourcemodeloflanguagemediatedvisualattention |