Neural dynamics of audiovisual speech integration under variable listening conditions: an individual participant analysis
Main Authors: | Altieri, Nicholas; Wenger, Michael J. |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A., 2013 |
Subjects: | Psychology |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3767908/ https://www.ncbi.nlm.nih.gov/pubmed/24058358 http://dx.doi.org/10.3389/fpsyg.2013.00615 |
Field | Value
---|---
_version_ | 1782283724873793536 |
author | Altieri, Nicholas; Wenger, Michael J. |
author_facet | Altieri, Nicholas; Wenger, Michael J. |
author_sort | Altieri, Nicholas |
collection | PubMed |
description | Speech perception engages both auditory and visual modalities. Limitations of traditional accuracy-only approaches in the investigation of audiovisual speech perception have motivated the use of new methodologies. In an audiovisual speech identification task, we utilized capacity (Townsend and Nozawa, 1995), a dynamic measure of efficiency, to quantify audiovisual integration. Capacity was used to compare RT distributions from audiovisual trials to RT distributions from auditory-only and visual-only trials across three listening conditions: clear auditory signal, S/N ratio of −12 dB, and S/N ratio of −18 dB. The purpose was to obtain EEG recordings in conjunction with capacity to investigate how a late ERP co-varies with integration efficiency. Results showed efficient audiovisual integration for low auditory S/N ratios, but inefficient audiovisual integration when the auditory signal was clear. The ERP analyses showed evidence for greater audiovisual amplitude compared to the unisensory signals for lower auditory S/N ratios (higher capacity/efficiency) compared to the high S/N ratio (low capacity/inefficient integration). The data are consistent with an interactive framework of integration, where auditory recognition is influenced by speech-reading as a function of signal clarity. |
format | Online Article Text |
id | pubmed-3767908 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2013 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-3767908 2013-09-20 Neural dynamics of audiovisual speech integration under variable listening conditions: an individual participant analysis Altieri, Nicholas; Wenger, Michael J. Front Psychol Psychology Frontiers Media S.A. 2013-09-10 /pmc/articles/PMC3767908/ /pubmed/24058358 http://dx.doi.org/10.3389/fpsyg.2013.00615 Text en Copyright © 2013 Altieri and Wenger. http://creativecommons.org/licenses/by/3.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Psychology Altieri, Nicholas Wenger, Michael J. Neural dynamics of audiovisual speech integration under variable listening conditions: an individual participant analysis |
title | Neural dynamics of audiovisual speech integration under variable listening conditions: an individual participant analysis |
title_full | Neural dynamics of audiovisual speech integration under variable listening conditions: an individual participant analysis |
title_fullStr | Neural dynamics of audiovisual speech integration under variable listening conditions: an individual participant analysis |
title_full_unstemmed | Neural dynamics of audiovisual speech integration under variable listening conditions: an individual participant analysis |
title_short | Neural dynamics of audiovisual speech integration under variable listening conditions: an individual participant analysis |
title_sort | neural dynamics of audiovisual speech integration under variable listening conditions: an individual participant analysis |
topic | Psychology |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3767908/ https://www.ncbi.nlm.nih.gov/pubmed/24058358 http://dx.doi.org/10.3389/fpsyg.2013.00615 |
work_keys_str_mv | AT altierinicholas neuraldynamicsofaudiovisualspeechintegrationundervariablelisteningconditionsanindividualparticipantanalysis AT wengermichaelj neuraldynamicsofaudiovisualspeechintegrationundervariablelisteningconditionsanindividualparticipantanalysis |