Lip-Reading Aids Word Recognition Most in Moderate Noise: A Bayesian Explanation Using High-Dimensional Feature Space

Bibliographic Details
Main Authors: Ma, Wei Ji, Zhou, Xiang, Ross, Lars A., Foxe, John J., Parra, Lucas C.
Format: Text
Language: English
Published: Public Library of Science 2009
Subjects: Research Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2645675/
https://www.ncbi.nlm.nih.gov/pubmed/19259259
http://dx.doi.org/10.1371/journal.pone.0004638
author Ma, Wei Ji
Zhou, Xiang
Ross, Lars A.
Foxe, John J.
Parra, Lucas C.
author_facet Ma, Wei Ji
Zhou, Xiang
Ross, Lars A.
Foxe, John J.
Parra, Lucas C.
author_sort Ma, Wei Ji
collection PubMed
description Watching a speaker's facial movements can dramatically enhance our ability to comprehend words, especially in noisy environments. From a general doctrine of combining information from different sensory modalities (the principle of inverse effectiveness), one would expect that the visual signals would be most effective at the highest levels of auditory noise. In contrast, we find, in accord with a recent paper, that visual information improves performance more at intermediate levels of auditory noise than at the highest levels, and we show that a novel visual stimulus containing only temporal information does the same. We present a Bayesian model of optimal cue integration that can explain these conflicts. In this model, words are regarded as points in a multidimensional space and word recognition is a probabilistic inference process. When the dimensionality of the feature space is low, the Bayesian model predicts inverse effectiveness; when the dimensionality is high, the enhancement is maximal at intermediate auditory noise levels. When the auditory and visual stimuli differ slightly in high noise, the model makes a counterintuitive prediction: as sound quality increases, the proportion of reported words corresponding to the visual stimulus should first increase and then decrease. We confirm this prediction in a behavioral experiment. We conclude that auditory-visual speech perception obeys the same notion of optimality previously observed only for simple multisensory stimuli.
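The description above specifies the model concretely enough to simulate: words are points in a D-dimensional feature space, each cue is a noisy Gaussian observation of the true word, and the optimal observer sums log-likelihoods across cues before reporting the most probable word. The Python sketch below is illustrative only (vocabulary size, dimensionality, and noise levels are assumptions of this sketch, not values from the paper), but it lets one compare the predicted audiovisual gain across auditory noise levels for low- versus high-dimensional feature spaces, which is the contrast the abstract describes.

# Minimal simulation sketch of the Bayesian cue-integration model described
# above. All parameter values here are illustrative assumptions, not the
# paper's fitted values.
import numpy as np

rng = np.random.default_rng(0)

def recognition_accuracy(dim, n_words=100, sigma_a=1.0, sigma_v=None, n_trials=2000):
    """MAP word recognition in a dim-dimensional feature space.

    Words are random points; the observer receives a noisy auditory sample
    (std sigma_a) and, if sigma_v is given, a noisy visual sample (std
    sigma_v). With a uniform prior and isotropic Gaussian noise, optimal
    inference reduces to precision-weighted nearest-neighbor decoding.
    """
    words = rng.standard_normal((n_words, dim))
    correct = 0
    for _ in range(n_trials):
        true = rng.integers(n_words)
        x_a = words[true] + sigma_a * rng.standard_normal(dim)
        # Log-likelihood of each candidate word under the auditory cue.
        logp = -np.sum((x_a - words) ** 2, axis=1) / (2 * sigma_a ** 2)
        if sigma_v is not None:
            x_v = words[true] + sigma_v * rng.standard_normal(dim)
            # Optimal integration: log-likelihoods simply add across cues.
            logp -= np.sum((x_v - words) ** 2, axis=1) / (2 * sigma_v ** 2)
        correct += int(np.argmax(logp) == true)
    return correct / n_trials

for dim in (1, 20):  # low- vs. high-dimensional feature space
    print(f"dim={dim}:  sigma_a  A-only  AV    gain")
    for sigma_a in (0.25, 0.5, 1.0, 2.0, 4.0):
        acc_a = recognition_accuracy(dim, sigma_a=sigma_a)
        acc_av = recognition_accuracy(dim, sigma_a=sigma_a, sigma_v=2.0)
        print(f"          {sigma_a:5.2f}   {acc_a:.2f}    {acc_av:.2f}  {acc_av - acc_a:+.2f}")

The mechanism at work: combining cues adds precisions (1/sigma_a^2 + 1/sigma_v^2), which sharpens the posterior most where accuracy falls steepest with noise. In a high-dimensional space that steep region sits at intermediate auditory noise, so that is where the visual cue is predicted to help most.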
format Text
id pubmed-2645675
institution National Center for Biotechnology Information
language English
publishDate 2009
publisher Public Library of Science
record_format MEDLINE/PubMed
spelling pubmed-2645675 2009-03-04 Lip-Reading Aids Word Recognition Most in Moderate Noise: A Bayesian Explanation Using High-Dimensional Feature Space Ma, Wei Ji Zhou, Xiang Ross, Lars A. Foxe, John J. Parra, Lucas C. PLoS One Research Article Watching a speaker's facial movements can dramatically enhance our ability to comprehend words, especially in noisy environments. From a general doctrine of combining information from different sensory modalities (the principle of inverse effectiveness), one would expect that the visual signals would be most effective at the highest levels of auditory noise. In contrast, we find, in accord with a recent paper, that visual information improves performance more at intermediate levels of auditory noise than at the highest levels, and we show that a novel visual stimulus containing only temporal information does the same. We present a Bayesian model of optimal cue integration that can explain these conflicts. In this model, words are regarded as points in a multidimensional space and word recognition is a probabilistic inference process. When the dimensionality of the feature space is low, the Bayesian model predicts inverse effectiveness; when the dimensionality is high, the enhancement is maximal at intermediate auditory noise levels. When the auditory and visual stimuli differ slightly in high noise, the model makes a counterintuitive prediction: as sound quality increases, the proportion of reported words corresponding to the visual stimulus should first increase and then decrease. We confirm this prediction in a behavioral experiment. We conclude that auditory-visual speech perception obeys the same notion of optimality previously observed only for simple multisensory stimuli. Public Library of Science 2009-03-04 /pmc/articles/PMC2645675/ /pubmed/19259259 http://dx.doi.org/10.1371/journal.pone.0004638 Text en Ma et al. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are properly credited.
spellingShingle Research Article
Ma, Wei Ji
Zhou, Xiang
Ross, Lars A.
Foxe, John J.
Parra, Lucas C.
Lip-Reading Aids Word Recognition Most in Moderate Noise: A Bayesian Explanation Using High-Dimensional Feature Space
title Lip-Reading Aids Word Recognition Most in Moderate Noise: A Bayesian Explanation Using High-Dimensional Feature Space
title_full Lip-Reading Aids Word Recognition Most in Moderate Noise: A Bayesian Explanation Using High-Dimensional Feature Space
title_fullStr Lip-Reading Aids Word Recognition Most in Moderate Noise: A Bayesian Explanation Using High-Dimensional Feature Space
title_full_unstemmed Lip-Reading Aids Word Recognition Most in Moderate Noise: A Bayesian Explanation Using High-Dimensional Feature Space
title_short Lip-Reading Aids Word Recognition Most in Moderate Noise: A Bayesian Explanation Using High-Dimensional Feature Space
title_sort lip-reading aids word recognition most in moderate noise: a bayesian explanation using high-dimensional feature space
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2645675/
https://www.ncbi.nlm.nih.gov/pubmed/19259259
http://dx.doi.org/10.1371/journal.pone.0004638
work_keys_str_mv AT maweiji lipreadingaidswordrecognitionmostinmoderatenoiseabayesianexplanationusinghighdimensionalfeaturespace
AT zhouxiang lipreadingaidswordrecognitionmostinmoderatenoiseabayesianexplanationusinghighdimensionalfeaturespace
AT rosslarsa lipreadingaidswordrecognitionmostinmoderatenoiseabayesianexplanationusinghighdimensionalfeaturespace
AT foxejohnj lipreadingaidswordrecognitionmostinmoderatenoiseabayesianexplanationusinghighdimensionalfeaturespace
AT parralucasc lipreadingaidswordrecognitionmostinmoderatenoiseabayesianexplanationusinghighdimensionalfeaturespace