Experience with crossmodal statistics reduces the sensitivity for audio-visual temporal asynchrony

Bayesian models propose that multisensory integration depends on both sensory evidence (the likelihood) and priors indicating whether two inputs belong to the same event. The present study manipulated the prior for dynamic auditory and visual stimuli to co-occur and tested the predicted enhancement of multisensory binding, as assessed with a simultaneity judgment task. In an initial learning phase, participants were exposed to a subset of auditory-visual combinations. In the test phase, the previously encountered audio-visual stimuli were presented together with new combinations of the auditory and visual stimuli from the learning phase, audio-visual stimuli containing one learned and one new sensory component, and audio-visual stimuli containing completely new auditory and visual material. Auditory-visual asynchrony was manipulated. A higher proportion of simultaneity judgments was observed for the learned cross-modal combinations than for new combinations of the same auditory and visual elements, and than for all other conditions. This result suggests that prior exposure to certain auditory-visual combinations changed the expectation (i.e., the prior) that their elements belonged to the same event. As a result, multisensory binding became more likely despite unchanged sensory evidence for the auditory and visual elements.
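
The Bayesian framing in the abstract can be made concrete with the standard causal-inference posterior. The following is a minimal LaTeX sketch, not taken from the paper itself; the symbols (C = 1 for a common cause, C = 2 for independent causes, x_A and x_V for the auditory and visual measurements) are illustrative assumptions:

% Hedged sketch of the common-cause posterior assumed by this framing.
% p(C=1) is the prior that the auditory and visual signals share a cause;
% the p(x_A, x_V | C) terms are the sensory likelihoods. Symbols are
% illustrative, not taken from the paper.
\[
p(C = 1 \mid x_A, x_V) =
  \frac{p(x_A, x_V \mid C = 1)\, p(C = 1)}
       {p(x_A, x_V \mid C = 1)\, p(C = 1) + p(x_A, x_V \mid C = 2)\, \bigl(1 - p(C = 1)\bigr)}
\]

On this reading, exposure in the learning phase raises the common-cause prior p(C = 1) for the learned combinations, which increases the posterior, and hence the proportion of simultaneity judgments, even though the likelihood terms for the individual auditory and visual elements are unchanged.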

Bibliographic Details
Main Authors: Habets, Boukje; Bruns, Patrick; Röder, Brigitte
Format: Online Article (Text)
Language: English
Published: Nature Publishing Group UK, 2017-05-03
Journal: Sci Rep
Subjects: Article
Collection: PubMed (record pubmed-5431144)
Online Access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5431144/
https://www.ncbi.nlm.nih.gov/pubmed/28469137
http://dx.doi.org/10.1038/s41598-017-01252-y
License: © The Author(s) 2017. This article is licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).