
Crossmodal learning of target-context associations: When would tactile context predict visual search?

It is well established that statistical learning of visual target locations in relation to constantly positioned visual distractors facilitates visual search. In the present study, we investigated whether such a contextual-cueing effect would also work crossmodally, from touch onto vision. Participants responded to the orientation of a visual target singleton presented among seven homogeneous visual distractors. Four tactile stimuli, two to different fingers of each hand, were presented either simultaneously with or prior to the visual stimuli. The identity of the stimulated fingers provided the crossmodal context cue: in half of the trials, a given visual target location was consistently paired with a given tactile configuration. The visual stimuli were presented above the unseen fingers, ensuring spatial correspondence between vision and touch. We found no evidence of crossmodal contextual cueing when the two sets of items (tactile, visual) were presented simultaneously (Experiment 1). However, a reliable crossmodal effect emerged when the tactile distractors preceded the onset of the visual stimuli by 700 ms (Experiment 2). But crossmodal cueing disappeared again when, after an initial learning phase, participants flipped their hands, making the tactile distractors appear at different positions in external space while their somatotopic positions remained unchanged (Experiment 3). In all experiments, participants were unable to explicitly discriminate learned from novel multisensory arrays. These findings indicate that search-facilitating context memory can be established across vision and touch. However, in order to guide visual search, the (predictive) tactile configurations must be remapped from their initial somatotopic into a common external representational format.

Bibliographic Details
Main Authors: Chen, Siyi, Shi, Zhuanghua, Zang, Xuelian, Zhu, Xiuna, Assumpção, Leonardo, Müller, Hermann J., Geyer, Thomas
Format: Online Article Text
Language: English
Published: Springer US 2019
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7297845/
https://www.ncbi.nlm.nih.gov/pubmed/31845105
http://dx.doi.org/10.3758/s13414-019-01907-0
_version_ 1783547094503522304
author Chen, Siyi
Shi, Zhuanghua
Zang, Xuelian
Zhu, Xiuna
Assumpção, Leonardo
Müller, Hermann J.
Geyer, Thomas
author_facet Chen, Siyi
Shi, Zhuanghua
Zang, Xuelian
Zhu, Xiuna
Assumpção, Leonardo
Müller, Hermann J.
Geyer, Thomas
author_sort Chen, Siyi
collection PubMed
description It is well established that statistical learning of visual target locations in relation to constantly positioned visual distractors facilitates visual search. In the present study, we investigated whether such a contextual-cueing effect would also work crossmodally, from touch onto vision. Participants responded to the orientation of a visual target singleton presented among seven homogeneous visual distractors. Four tactile stimuli, two to different fingers of each hand, were presented either simultaneously with or prior to the visual stimuli. The identity of the stimulated fingers provided the crossmodal context cue: in half of the trials, a given visual target location was consistently paired with a given tactile configuration. The visual stimuli were presented above the unseen fingers, ensuring spatial correspondence between vision and touch. We found no evidence of crossmodal contextual cueing when the two sets of items (tactile, visual) were presented simultaneously (Experiment 1). However, a reliable crossmodal effect emerged when the tactile distractors preceded the onset of the visual stimuli by 700 ms (Experiment 2). But crossmodal cueing disappeared again when, after an initial learning phase, participants flipped their hands, making the tactile distractors appear at different positions in external space while their somatotopic positions remained unchanged (Experiment 3). In all experiments, participants were unable to explicitly discriminate learned from novel multisensory arrays. These findings indicate that search-facilitating context memory can be established across vision and touch. However, in order to guide visual search, the (predictive) tactile configurations must be remapped from their initial somatotopic into a common external representational format.
format Online
Article
Text
id pubmed-7297845
institution National Center for Biotechnology Information
language English
publishDate 2019
publisher Springer US
record_format MEDLINE/PubMed
spelling pubmed-7297845 2020-06-19 Crossmodal learning of target-context associations: When would tactile context predict visual search? Chen, Siyi Shi, Zhuanghua Zang, Xuelian Zhu, Xiuna Assumpção, Leonardo Müller, Hermann J. Geyer, Thomas Atten Percept Psychophys Article It is well established that statistical learning of visual target locations in relation to constantly positioned visual distractors facilitates visual search. In the present study, we investigated whether such a contextual-cueing effect would also work crossmodally, from touch onto vision. Participants responded to the orientation of a visual target singleton presented among seven homogeneous visual distractors. Four tactile stimuli, two to different fingers of each hand, were presented either simultaneously with or prior to the visual stimuli. The identity of the stimulated fingers provided the crossmodal context cue: in half of the trials, a given visual target location was consistently paired with a given tactile configuration. The visual stimuli were presented above the unseen fingers, ensuring spatial correspondence between vision and touch. We found no evidence of crossmodal contextual cueing when the two sets of items (tactile, visual) were presented simultaneously (Experiment 1). However, a reliable crossmodal effect emerged when the tactile distractors preceded the onset of the visual stimuli by 700 ms (Experiment 2). But crossmodal cueing disappeared again when, after an initial learning phase, participants flipped their hands, making the tactile distractors appear at different positions in external space while their somatotopic positions remained unchanged (Experiment 3). In all experiments, participants were unable to explicitly discriminate learned from novel multisensory arrays. These findings indicate that search-facilitating context memory can be established across vision and touch. However, in order to guide visual search, the (predictive) tactile configurations must be remapped from their initial somatotopic into a common external representational format. Springer US 2019-12-16 2020 /pmc/articles/PMC7297845/ /pubmed/31845105 http://dx.doi.org/10.3758/s13414-019-01907-0 Text en © The Author(s) 2019 Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
spellingShingle Article
Chen, Siyi
Shi, Zhuanghua
Zang, Xuelian
Zhu, Xiuna
Assumpção, Leonardo
Müller, Hermann J.
Geyer, Thomas
Crossmodal learning of target-context associations: When would tactile context predict visual search?
title Crossmodal learning of target-context associations: When would tactile context predict visual search?
title_full Crossmodal learning of target-context associations: When would tactile context predict visual search?
title_fullStr Crossmodal learning of target-context associations: When would tactile context predict visual search?
title_full_unstemmed Crossmodal learning of target-context associations: When would tactile context predict visual search?
title_short Crossmodal learning of target-context associations: When would tactile context predict visual search?
title_sort crossmodal learning of target-context associations: when would tactile context predict visual search?
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7297845/
https://www.ncbi.nlm.nih.gov/pubmed/31845105
http://dx.doi.org/10.3758/s13414-019-01907-0
work_keys_str_mv AT chensiyi crossmodallearningoftargetcontextassociationswhenwouldtactilecontextpredictvisualsearch
AT shizhuanghua crossmodallearningoftargetcontextassociationswhenwouldtactilecontextpredictvisualsearch
AT zangxuelian crossmodallearningoftargetcontextassociationswhenwouldtactilecontextpredictvisualsearch
AT zhuxiuna crossmodallearningoftargetcontextassociationswhenwouldtactilecontextpredictvisualsearch
AT assumpcaoleonardo crossmodallearningoftargetcontextassociationswhenwouldtactilecontextpredictvisualsearch
AT mullerhermannj crossmodallearningoftargetcontextassociationswhenwouldtactilecontextpredictvisualsearch
AT geyerthomas crossmodallearningoftargetcontextassociationswhenwouldtactilecontextpredictvisualsearch