Suboptimal human multisensory cue combination
Information from different sensory modalities can interact, shaping what we think we have seen, heard, or otherwise perceived. Such interactions can enhance the precision of perceptual decisions, relative to those based on information from a single sensory modality. Several computational processes could account for such improvements. Slight improvements could arise if decisions are based on multiple independent sensory estimates, as opposed to just one. Still greater improvements could arise if initially independent estimates are summed to form a single integrated code. This hypothetical process has often been described as optimal when it results in bimodal performance consistent with a summation of unimodal estimates weighted in proportion to the precision of each initially independent sensory code. Here we examine cross-modal cue combination for audio-visual temporal rate and spatial location cues. While suggestive of a cross-modal encoding advantage, the degree of facilitation falls short of that predicted by a precision weighted summation process. These data accord with other published observations, and suggest that precision weighted combination is not a general property of human cross-modal perception.
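For readers unfamiliar with the model named in the abstract, the "precision weighted summation" account is conventionally formalised as maximum-likelihood cue integration. The sketch below gives that standard textbook formulation; it is background context, not an excerpt from this article's methods.

```latex
% Standard precision-weighted (maximum-likelihood) cue combination.
% Each unimodal estimate is weighted by its reliability (inverse variance):
\hat{S}_{AV} = w_A \hat{S}_A + w_V \hat{S}_V,
\qquad w_A = \frac{1/\sigma_A^2}{1/\sigma_A^2 + 1/\sigma_V^2},
\qquad w_V = \frac{1/\sigma_V^2}{1/\sigma_A^2 + 1/\sigma_V^2}

% Predicted bimodal variance, never larger than the smaller of the
% two unimodal variances:
\sigma_{AV}^2 = \frac{\sigma_A^2\,\sigma_V^2}{\sigma_A^2 + \sigma_V^2}
```

The "suboptimal" in the title refers to measured audio-visual precision that, while better than unimodal performance, fell short of the \(\sigma_{AV}\) prediction above.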
Main authors: | Arnold, Derek H.; Petrie, Kirstie; Murray, Cailem; Johnston, Alan |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Nature Publishing Group UK, 2019 |
Subjects: | Article |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6435731/ https://www.ncbi.nlm.nih.gov/pubmed/30914673 http://dx.doi.org/10.1038/s41598-018-37888-7 |
_version_ | 1783406698284711936 |
---|---|
author | Arnold, Derek H.; Petrie, Kirstie; Murray, Cailem; Johnston, Alan |
author_facet | Arnold, Derek H.; Petrie, Kirstie; Murray, Cailem; Johnston, Alan |
author_sort | Arnold, Derek H. |
collection | PubMed |
description | Information from different sensory modalities can interact, shaping what we think we have seen, heard, or otherwise perceived. Such interactions can enhance the precision of perceptual decisions, relative to those based on information from a single sensory modality. Several computational processes could account for such improvements. Slight improvements could arise if decisions are based on multiple independent sensory estimates, as opposed to just one. Still greater improvements could arise if initially independent estimates are summed to form a single integrated code. This hypothetical process has often been described as optimal when it results in bimodal performance consistent with a summation of unimodal estimates weighted in proportion to the precision of each initially independent sensory code. Here we examine cross-modal cue combination for audio-visual temporal rate and spatial location cues. While suggestive of a cross-modal encoding advantage, the degree of facilitation falls short of that predicted by a precision weighted summation process. These data accord with other published observations, and suggest that precision weighted combination is not a general property of human cross-modal perception. |
format | Online Article Text |
id | pubmed-6435731 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2019 |
publisher | Nature Publishing Group UK |
record_format | MEDLINE/PubMed |
spelling | pubmed-64357312019-04-03 Suboptimal human multisensory cue combination Arnold, Derek H. Petrie, Kirstie Murray, Cailem Johnston, Alan Sci Rep Article Information from different sensory modalities can interact, shaping what we think we have seen, heard, or otherwise perceived. Such interactions can enhance the precision of perceptual decisions, relative to those based on information from a single sensory modality. Several computational processes could account for such improvements. Slight improvements could arise if decisions are based on multiple independent sensory estimates, as opposed to just one. Still greater improvements could arise if initially independent estimates are summed to form a single integrated code. This hypothetical process has often been described as optimal when it results in bimodal performance consistent with a summation of unimodal estimates weighted in proportion to the precision of each initially independent sensory code. Here we examine cross-modal cue combination for audio-visual temporal rate and spatial location cues. While suggestive of a cross-modal encoding advantage, the degree of facilitation falls short of that predicted by a precision weighted summation process. These data accord with other published observations, and suggest that precision weighted combination is not a general property of human cross-modal perception. Nature Publishing Group UK 2019-03-26 /pmc/articles/PMC6435731/ /pubmed/30914673 http://dx.doi.org/10.1038/s41598-018-37888-7 Text en © The Author(s) 2019 Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/. |
spellingShingle | Article Arnold, Derek H. Petrie, Kirstie Murray, Cailem Johnston, Alan Suboptimal human multisensory cue combination |
title | Suboptimal human multisensory cue combination |
title_full | Suboptimal human multisensory cue combination |
title_fullStr | Suboptimal human multisensory cue combination |
title_full_unstemmed | Suboptimal human multisensory cue combination |
title_short | Suboptimal human multisensory cue combination |
title_sort | suboptimal human multisensory cue combination |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6435731/ https://www.ncbi.nlm.nih.gov/pubmed/30914673 http://dx.doi.org/10.1038/s41598-018-37888-7 |
work_keys_str_mv | AT arnoldderekh suboptimalhumanmultisensorycuecombination AT petriekirstie suboptimalhumanmultisensorycuecombination AT murraycailem suboptimalhumanmultisensorycuecombination AT johnstonalan suboptimalhumanmultisensorycuecombination |