
Language/Culture Modulates Brain and Gaze Processes in Audiovisual Speech Perception

Several behavioural studies have shown that the interplay between voice and face information in audiovisual speech perception is not universal. Native English speakers (ESs) are influenced by visual mouth movement to a greater degree than native Japanese speakers (JSs) when listening to speech. However, the biological basis of these group differences is unknown. Here, we demonstrate the time-varying processes of group differences in terms of event-related brain potentials (ERP) and eye gaze for audiovisual and audio-only speech perception. On a behavioural level, while congruent mouth movement shortened the ESs’ response time for speech perception, the opposite effect was observed in JSs. Eye-tracking data revealed a gaze bias to the mouth for the ESs but not the JSs, especially before the audio onset. Additionally, the ERP P2 amplitude indicated that ESs processed multisensory speech more efficiently than auditory-only speech; however, the JSs exhibited the opposite pattern. Taken together, the ESs’ early visual attention to the mouth was likely to promote phonetic anticipation, which was not the case for the JSs. These results clearly indicate the impact of language and/or culture on multisensory speech processing, suggesting that linguistic/cultural experiences lead to the development of unique neural systems for audiovisual speech perception.

Bibliographic Details
Main Authors: Hisanaga, Satoko, Sekiyama, Kaoru, Igasaki, Tomohiko, Murayama, Nobuki
Format: Online Article Text
Language: English
Published: Nature Publishing Group 2016
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5062344/
https://www.ncbi.nlm.nih.gov/pubmed/27734953
http://dx.doi.org/10.1038/srep35265
_version_ 1782459759980445696
author Hisanaga, Satoko
Sekiyama, Kaoru
Igasaki, Tomohiko
Murayama, Nobuki
author_facet Hisanaga, Satoko
Sekiyama, Kaoru
Igasaki, Tomohiko
Murayama, Nobuki
author_sort Hisanaga, Satoko
collection PubMed
description Several behavioural studies have shown that the interplay between voice and face information in audiovisual speech perception is not universal. Native English speakers (ESs) are influenced by visual mouth movement to a greater degree than native Japanese speakers (JSs) when listening to speech. However, the biological basis of these group differences is unknown. Here, we demonstrate the time-varying processes of group differences in terms of event-related brain potentials (ERP) and eye gaze for audiovisual and audio-only speech perception. On a behavioural level, while congruent mouth movement shortened the ESs’ response time for speech perception, the opposite effect was observed in JSs. Eye-tracking data revealed a gaze bias to the mouth for the ESs but not the JSs, especially before the audio onset. Additionally, the ERP P2 amplitude indicated that ESs processed multisensory speech more efficiently than auditory-only speech; however, the JSs exhibited the opposite pattern. Taken together, the ESs’ early visual attention to the mouth was likely to promote phonetic anticipation, which was not the case for the JSs. These results clearly indicate the impact of language and/or culture on multisensory speech processing, suggesting that linguistic/cultural experiences lead to the development of unique neural systems for audiovisual speech perception.
format Online
Article
Text
id pubmed-5062344
institution National Center for Biotechnology Information
language English
publishDate 2016
publisher Nature Publishing Group
record_format MEDLINE/PubMed
spelling pubmed-5062344 2016-10-24 Language/Culture Modulates Brain and Gaze Processes in Audiovisual Speech Perception Hisanaga, Satoko Sekiyama, Kaoru Igasaki, Tomohiko Murayama, Nobuki Sci Rep Article Several behavioural studies have shown that the interplay between voice and face information in audiovisual speech perception is not universal. Native English speakers (ESs) are influenced by visual mouth movement to a greater degree than native Japanese speakers (JSs) when listening to speech. However, the biological basis of these group differences is unknown. Here, we demonstrate the time-varying processes of group differences in terms of event-related brain potentials (ERP) and eye gaze for audiovisual and audio-only speech perception. On a behavioural level, while congruent mouth movement shortened the ESs’ response time for speech perception, the opposite effect was observed in JSs. Eye-tracking data revealed a gaze bias to the mouth for the ESs but not the JSs, especially before the audio onset. Additionally, the ERP P2 amplitude indicated that ESs processed multisensory speech more efficiently than auditory-only speech; however, the JSs exhibited the opposite pattern. Taken together, the ESs’ early visual attention to the mouth was likely to promote phonetic anticipation, which was not the case for the JSs. These results clearly indicate the impact of language and/or culture on multisensory speech processing, suggesting that linguistic/cultural experiences lead to the development of unique neural systems for audiovisual speech perception. Nature Publishing Group 2016-10-13 /pmc/articles/PMC5062344/ /pubmed/27734953 http://dx.doi.org/10.1038/srep35265 Text en Copyright © 2016, The Author(s) http://creativecommons.org/licenses/by/4.0/ This work is licensed under a Creative Commons Attribution 4.0 International License. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in the credit line; if the material is not included under the Creative Commons license, users will need to obtain permission from the license holder to reproduce the material. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/
spellingShingle Article
Hisanaga, Satoko
Sekiyama, Kaoru
Igasaki, Tomohiko
Murayama, Nobuki
Language/Culture Modulates Brain and Gaze Processes in Audiovisual Speech Perception
title Language/Culture Modulates Brain and Gaze Processes in Audiovisual Speech Perception
title_full Language/Culture Modulates Brain and Gaze Processes in Audiovisual Speech Perception
title_fullStr Language/Culture Modulates Brain and Gaze Processes in Audiovisual Speech Perception
title_full_unstemmed Language/Culture Modulates Brain and Gaze Processes in Audiovisual Speech Perception
title_short Language/Culture Modulates Brain and Gaze Processes in Audiovisual Speech Perception
title_sort language/culture modulates brain and gaze processes in audiovisual speech perception
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5062344/
https://www.ncbi.nlm.nih.gov/pubmed/27734953
http://dx.doi.org/10.1038/srep35265
work_keys_str_mv AT hisanagasatoko languageculturemodulatesbrainandgazeprocessesinaudiovisualspeechperception
AT sekiyamakaoru languageculturemodulatesbrainandgazeprocessesinaudiovisualspeechperception
AT igasakitomohiko languageculturemodulatesbrainandgazeprocessesinaudiovisualspeechperception
AT murayamanobuki languageculturemodulatesbrainandgazeprocessesinaudiovisualspeechperception