How actions shape perception: learning action-outcome relations and predicting sensory outcomes promote audio-visual temporal binding
Main Authors: | Desantis, Andrea; Haggard, Patrick |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Nature Publishing Group, 2016 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5159801/ https://www.ncbi.nlm.nih.gov/pubmed/27982063 http://dx.doi.org/10.1038/srep39086 |
_version_ | 1782481821594812416 |
---|---|
author | Desantis, Andrea; Haggard, Patrick |
author_facet | Desantis, Andrea; Haggard, Patrick |
author_sort | Desantis, Andrea |
collection | PubMed |
description | To maintain a temporally-unified representation of audio and visual features of objects in our environment, the brain recalibrates audio-visual simultaneity. This process allows adjustment for both differences in time of transmission and time for processing of audio and visual signals. In four experiments, we show that the cognitive processes for controlling instrumental actions also have strong influence on audio-visual recalibration. Participants learned that right and left hand button-presses each produced a specific audio-visual stimulus. Following one action the audio preceded the visual stimulus, while for the other action audio lagged vision. In a subsequent test phase, left and right button-press generated either the same audio-visual stimulus as learned initially, or the pair associated with the other action. We observed recalibration of simultaneity only for previously-learned audio-visual outcomes. Thus, learning an action-outcome relation promotes temporal grouping of the audio and visual events within the outcome pair, contributing to the creation of a temporally unified multisensory object. This suggests that learning action-outcome relations and the prediction of perceptual outcomes can provide an integrative temporal structure for our experiences of external events. |
format | Online Article Text |
id | pubmed-5159801 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2016 |
publisher | Nature Publishing Group |
record_format | MEDLINE/PubMed |
spelling | pubmed-51598012016-12-21 How actions shape perception: learning action-outcome relations and predicting sensory outcomes promote audio-visual temporal binding Desantis, Andrea Haggard, Patrick Sci Rep Article To maintain a temporally-unified representation of audio and visual features of objects in our environment, the brain recalibrates audio-visual simultaneity. This process allows adjustment for both differences in time of transmission and time for processing of audio and visual signals. In four experiments, we show that the cognitive processes for controlling instrumental actions also have strong influence on audio-visual recalibration. Participants learned that right and left hand button-presses each produced a specific audio-visual stimulus. Following one action the audio preceded the visual stimulus, while for the other action audio lagged vision. In a subsequent test phase, left and right button-press generated either the same audio-visual stimulus as learned initially, or the pair associated with the other action. We observed recalibration of simultaneity only for previously-learned audio-visual outcomes. Thus, learning an action-outcome relation promotes temporal grouping of the audio and visual events within the outcome pair, contributing to the creation of a temporally unified multisensory object. This suggests that learning action-outcome relations and the prediction of perceptual outcomes can provide an integrative temporal structure for our experiences of external events. Nature Publishing Group 2016-12-16 /pmc/articles/PMC5159801/ /pubmed/27982063 http://dx.doi.org/10.1038/srep39086 Text en Copyright © 2016, The Author(s) http://creativecommons.org/licenses/by/4.0/ This work is licensed under a Creative Commons Attribution 4.0 International License. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in the credit line; if the material is not included under the Creative Commons license, users will need to obtain permission from the license holder to reproduce the material. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ |
spellingShingle | Article Desantis, Andrea Haggard, Patrick How actions shape perception: learning action-outcome relations and predicting sensory outcomes promote audio-visual temporal binding |
title | How actions shape perception: learning action-outcome relations and predicting sensory outcomes promote audio-visual temporal binding |
title_full | How actions shape perception: learning action-outcome relations and predicting sensory outcomes promote audio-visual temporal binding |
title_fullStr | How actions shape perception: learning action-outcome relations and predicting sensory outcomes promote audio-visual temporal binding |
title_full_unstemmed | How actions shape perception: learning action-outcome relations and predicting sensory outcomes promote audio-visual temporal binding |
title_short | How actions shape perception: learning action-outcome relations and predicting sensory outcomes promote audio-visual temporal binding |
title_sort | how actions shape perception: learning action-outcome relations and predicting sensory outcomes promote audio-visual temporal binding |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5159801/ https://www.ncbi.nlm.nih.gov/pubmed/27982063 http://dx.doi.org/10.1038/srep39086 |
work_keys_str_mv | AT desantisandrea howactionsshapeperceptionlearningactionoutcomerelationsandpredictingsensoryoutcomespromoteaudiovisualtemporalbinding AT haggardpatrick howactionsshapeperceptionlearningactionoutcomerelationsandpredictingsensoryoutcomespromoteaudiovisualtemporalbinding |