The Cross-Modal Suppressive Role of Visual Context on Speech Intelligibility: An ERP Study
The efficacy of audiovisual (AV) integration is reflected in the degree of cross-modal suppression of the auditory event-related potentials (ERPs, P1-N1-P2), while stronger semantic encoding is reflected in enhanced late ERP negativities (e.g., N450). We hypothesized that increasing visual stimulus reliability should lead to more robust AV integration and enhanced semantic prediction, reflected in suppression of auditory ERPs and an enhanced N450, respectively. EEG was acquired while individuals watched and listened to clear and blurred videos of a speaker uttering intact or highly intelligible degraded (vocoded) words and made binary judgments about word meaning (animate or inanimate). We found that intact speech evoked larger negativity between 280–527 ms than vocoded speech, suggestive of more robust semantic prediction for the intact signal. For visual reliability, we found that greater cross-modal ERP suppression occurred for clear than blurred videos prior to sound onset and for the P2 ERP. Additionally, the later semantic-related negativity tended to be larger for clear than for blurred videos. These results suggest that the cross-modal effect is largely confined to suppression of early auditory networks, with a weak effect on networks associated with semantic prediction. However, the semantic-related visual effect on the late negativity may have been tempered by the vocoded signal's high reliability.
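The abstract quantifies its effects as condition differences in ERP amplitude within fixed time windows (e.g., the 280–527 ms semantic-related negativity). The sketch below is a minimal illustration of that kind of window-mean comparison using plain NumPy; it is not the authors' analysis pipeline, and the array shapes, sampling rate, and baseline period are assumptions chosen only for the example. The analysis window itself is taken from the abstract.

```python
import numpy as np

# Illustrative sketch only, not the published pipeline. The sampling rate,
# epoch length, baseline period, and trial counts are assumptions; the
# 280-527 ms window comes from the abstract.
rng = np.random.default_rng(0)

fs = 500                            # sampling rate in Hz (assumed)
t = np.arange(-0.2, 0.8, 1 / fs)    # epoch time axis: -200 ms to 800 ms

# Placeholder epoched data, (n_trials, n_samples) per condition, in microvolts.
intact = rng.standard_normal((100, t.size))
vocoded = rng.standard_normal((100, t.size))

def window_mean(epochs, t, t0, t1):
    """Baseline-correct each trial (pre-stimulus mean), average trials to
    get the ERP, then return the mean amplitude in the [t0, t1] window."""
    baseline = epochs[:, t < 0].mean(axis=1, keepdims=True)
    erp = (epochs - baseline).mean(axis=0)      # trial-averaged ERP
    return erp[(t >= t0) & (t <= t1)].mean()    # window mean (microvolts)

# Semantic-related negativity window from the abstract: 280-527 ms.
diff = window_mean(intact, t, 0.280, 0.527) - window_mean(vocoded, t, 0.280, 0.527)
print(f"intact minus vocoded window mean: {diff:.3f} uV "
      "(more negative = larger negativity for intact speech)")
```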
| Main Authors: | Shen, Stanley; Kerlin, Jess R.; Bortfeld, Heather; Shahin, Antoine J. |
|---|---|
| Format: | Online Article Text |
| Language: | English |
| Published: | MDPI, 2020 |
| Subjects: | Article |
| Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7692090/ https://www.ncbi.nlm.nih.gov/pubmed/33147691 http://dx.doi.org/10.3390/brainsci10110810 |
| Field | Value |
|---|---|
| _version_ | 1783614430576115712 |
| author | Shen, Stanley; Kerlin, Jess R.; Bortfeld, Heather; Shahin, Antoine J. |
| author_facet | Shen, Stanley; Kerlin, Jess R.; Bortfeld, Heather; Shahin, Antoine J. |
| author_sort | Shen, Stanley |
| collection | PubMed |
| description | The efficacy of audiovisual (AV) integration is reflected in the degree of cross-modal suppression of the auditory event-related potentials (ERPs, P1-N1-P2), while stronger semantic encoding is reflected in enhanced late ERP negativities (e.g., N450). We hypothesized that increasing visual stimulus reliability should lead to more robust AV integration and enhanced semantic prediction, reflected in suppression of auditory ERPs and an enhanced N450, respectively. EEG was acquired while individuals watched and listened to clear and blurred videos of a speaker uttering intact or highly intelligible degraded (vocoded) words and made binary judgments about word meaning (animate or inanimate). We found that intact speech evoked larger negativity between 280–527 ms than vocoded speech, suggestive of more robust semantic prediction for the intact signal. For visual reliability, we found that greater cross-modal ERP suppression occurred for clear than blurred videos prior to sound onset and for the P2 ERP. Additionally, the later semantic-related negativity tended to be larger for clear than for blurred videos. These results suggest that the cross-modal effect is largely confined to suppression of early auditory networks, with a weak effect on networks associated with semantic prediction. However, the semantic-related visual effect on the late negativity may have been tempered by the vocoded signal's high reliability. |
| format | Online Article Text |
| id | pubmed-7692090 |
| institution | National Center for Biotechnology Information |
| language | English |
| publishDate | 2020 |
| publisher | MDPI |
| record_format | MEDLINE/PubMed |
| spelling | pubmed-7692090 2020-11-28. The Cross-Modal Suppressive Role of Visual Context on Speech Intelligibility: An ERP Study. Shen, Stanley; Kerlin, Jess R.; Bortfeld, Heather; Shahin, Antoine J. Brain Sci, Article. [Abstract as in the description field above.] MDPI 2020-11-02. /pmc/articles/PMC7692090/ /pubmed/33147691 http://dx.doi.org/10.3390/brainsci10110810 Text en © 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). |
| spellingShingle | Article; Shen, Stanley; Kerlin, Jess R.; Bortfeld, Heather; Shahin, Antoine J.; The Cross-Modal Suppressive Role of Visual Context on Speech Intelligibility: An ERP Study |
| title | The Cross-Modal Suppressive Role of Visual Context on Speech Intelligibility: An ERP Study |
| title_full | The Cross-Modal Suppressive Role of Visual Context on Speech Intelligibility: An ERP Study |
| title_fullStr | The Cross-Modal Suppressive Role of Visual Context on Speech Intelligibility: An ERP Study |
| title_full_unstemmed | The Cross-Modal Suppressive Role of Visual Context on Speech Intelligibility: An ERP Study |
| title_short | The Cross-Modal Suppressive Role of Visual Context on Speech Intelligibility: An ERP Study |
| title_sort | cross-modal suppressive role of visual context on speech intelligibility: an erp study |
| topic | Article |
| url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7692090/ https://www.ncbi.nlm.nih.gov/pubmed/33147691 http://dx.doi.org/10.3390/brainsci10110810 |