Causal inference of asynchronous audiovisual speech
During speech perception, humans integrate auditory information from the voice with visual information from the face. This multisensory integration increases perceptual precision, but only if the two cues come from the same talker; this requirement has been largely ignored by current models of speec...
Main Authors: | Magnotti, John F.; Ma, Wei Ji; Beauchamp, Michael S. |
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A., 2013 |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3826594/ https://www.ncbi.nlm.nih.gov/pubmed/24294207 http://dx.doi.org/10.3389/fpsyg.2013.00798 |
Similar Items
- A Causal Inference Model Explains Perception of the McGurk Effect and Other Incongruent Audiovisual Speech
  by: Magnotti, John F., et al. Published: (2017)
- Visual and Auditory Components in the Perception of Asynchronous Audiovisual Speech
  by: García-Pérez, Miguel A., et al. Published: (2015)
- No “Self” Advantage for Audiovisual Speech Aftereffects
  by: Modelska, Maria, et al. Published: (2019)
- Temporal causal inference with stochastic audiovisual sequences
  by: Locke, Shannon M., et al. Published: (2017)
- A causal inference explanation for enhancement of multisensory integration by co-articulation
  by: Magnotti, John F., et al. Published: (2018)