
Temporal Reference, Attentional Modulation, and Crossmodal Assimilation


Bibliographic Details
Main Authors: Wan, Yingqi, Chen, Lihan
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2018
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5996128/
https://www.ncbi.nlm.nih.gov/pubmed/29922143
http://dx.doi.org/10.3389/fncom.2018.00039
author Wan, Yingqi
Chen, Lihan
author_facet Wan, Yingqi
Chen, Lihan
author_sort Wan, Yingqi
collection PubMed
description The crossmodal assimilation effect refers to the prominent phenomenon by which an ensemble mean extracted from a sequence of task-irrelevant distractor events, such as auditory intervals, assimilates/biases the perception of subsequent task-relevant target events (such as visual intervals) in another sensory modality. In the current experiments, using a visual Ternus display, we examined the roles of temporal reference, materialized as the time information accumulated before the onset of the target event, as well as attentional modulation, in crossmodal temporal interaction. Specifically, we examined how the global time interval, the mean auditory inter-interval, and the last interval in the auditory sequence assimilate and bias the subsequent percept of visual Ternus motion (element motion vs. group motion). We demonstrated that both the ensemble (geometric) mean and the last interval in the auditory sequence contribute to biasing the percept of visual motion. A longer mean (or last) interval elicited more reports of group motion, whereas a shorter mean (or last) auditory interval gave rise to a more dominant percept of element motion. Importantly, observers showed dynamic adaptation to the temporal reference of crossmodal assimilation: when the target visual Ternus stimuli were separated from the preceding sound sequence by a long gap interval, the assimilation effect of the ensemble mean was reduced. Our findings suggest that crossmodal assimilation relies on a suitable temporal reference at the adaptation level, and reveal a general temporal perceptual grouping principle underlying complex audio-visual interactions in everyday dynamic situations.
format Online
Article
Text
id pubmed-5996128
institution National Center for Biotechnology Information
language English
publishDate 2018
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-5996128 2018-06-19 Temporal Reference, Attentional Modulation, and Crossmodal Assimilation Wan, Yingqi Chen, Lihan Front Comput Neurosci Neuroscience Frontiers Media S.A. 2018-06-05 /pmc/articles/PMC5996128/ /pubmed/29922143 http://dx.doi.org/10.3389/fncom.2018.00039 Text en Copyright © 2018 Wan and Chen.
http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Neuroscience
Wan, Yingqi
Chen, Lihan
Temporal Reference, Attentional Modulation, and Crossmodal Assimilation
title Temporal Reference, Attentional Modulation, and Crossmodal Assimilation
title_full Temporal Reference, Attentional Modulation, and Crossmodal Assimilation
title_fullStr Temporal Reference, Attentional Modulation, and Crossmodal Assimilation
title_full_unstemmed Temporal Reference, Attentional Modulation, and Crossmodal Assimilation
title_short Temporal Reference, Attentional Modulation, and Crossmodal Assimilation
title_sort temporal reference, attentional modulation, and crossmodal assimilation
topic Neuroscience
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5996128/
https://www.ncbi.nlm.nih.gov/pubmed/29922143
http://dx.doi.org/10.3389/fncom.2018.00039
work_keys_str_mv AT wanyingqi temporalreferenceattentionalmodulationandcrossmodalassimilation
AT chenlihan temporalreferenceattentionalmodulationandcrossmodalassimilation