The Effect of Sound Localization on Auditory-Only and Audiovisual Speech Recognition in a Simulated Multitalker Environment

Information regarding sound-source spatial location provides several speech-perception benefits, including auditory spatial cues for perceptual talker separation and localization cues to face the talker to obtain visual speech information. These benefits have typically been examined separately. A real-time processing algorithm for sound-localization degradation (LocDeg) was used to investigate how spatial-hearing benefits interact in a multitalker environment. Normal-hearing adults performed auditory-only and auditory-visual sentence recognition with target speech and maskers presented from loudspeakers at −90°, −36°, 36°, or 90° azimuths. For auditory-visual conditions, one target and three masking talker videos (always spatially separated) were rendered virtually in rectangular windows at these locations on a head-mounted display. Auditory-only conditions presented blank windows at these locations. Auditory target speech (always spatially aligned with the target video) was presented in co-located speech-shaped noise (experiment 1) or with three co-located or spatially separated auditory interfering talkers corresponding to the masker videos (experiment 2). In the co-located conditions, the LocDeg algorithm did not affect auditory-only performance but reduced target orientation accuracy, reducing auditory-visual benefit. In the multitalker environment, two spatial-hearing benefits were observed: perceptually separating competing speech based on auditory spatial differences and orienting to the target talker to obtain visual speech cues. These two benefits were additive, and both were diminished by the LocDeg algorithm. Although visual cues always improved performance when the target was accurately localized, there was no strong evidence that they provided additional assistance in perceptually separating co-located competing speech. These results highlight the importance of sound localization in everyday communication.


Bibliographic Details
Main Authors: Sheffield, Sterling W., Wheeler, Harley J., Brungart, Douglas S., Bernstein, Joshua G. W.
Format: Online Article Text
Language: English
Published: SAGE Publications 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10331332/
https://www.ncbi.nlm.nih.gov/pubmed/37415497
http://dx.doi.org/10.1177/23312165231186040
_version_ 1785070235003387904
author Sheffield, Sterling W.
Wheeler, Harley J.
Brungart, Douglas S.
Bernstein, Joshua G. W.
author_facet Sheffield, Sterling W.
Wheeler, Harley J.
Brungart, Douglas S.
Bernstein, Joshua G. W.
author_sort Sheffield, Sterling W.
collection PubMed
description Information regarding sound-source spatial location provides several speech-perception benefits, including auditory spatial cues for perceptual talker separation and localization cues to face the talker to obtain visual speech information. These benefits have typically been examined separately. A real-time processing algorithm for sound-localization degradation (LocDeg) was used to investigate how spatial-hearing benefits interact in a multitalker environment. Normal-hearing adults performed auditory-only and auditory-visual sentence recognition with target speech and maskers presented from loudspeakers at −90°, −36°, 36°, or 90° azimuths. For auditory-visual conditions, one target and three masking talker videos (always spatially separated) were rendered virtually in rectangular windows at these locations on a head-mounted display. Auditory-only conditions presented blank windows at these locations. Auditory target speech (always spatially aligned with the target video) was presented in co-located speech-shaped noise (experiment 1) or with three co-located or spatially separated auditory interfering talkers corresponding to the masker videos (experiment 2). In the co-located conditions, the LocDeg algorithm did not affect auditory-only performance but reduced target orientation accuracy, reducing auditory-visual benefit. In the multitalker environment, two spatial-hearing benefits were observed: perceptually separating competing speech based on auditory spatial differences and orienting to the target talker to obtain visual speech cues. These two benefits were additive, and both were diminished by the LocDeg algorithm. Although visual cues always improved performance when the target was accurately localized, there was no strong evidence that they provided additional assistance in perceptually separating co-located competing speech. These results highlight the importance of sound localization in everyday communication.
format Online
Article
Text
id pubmed-10331332
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher SAGE Publications
record_format MEDLINE/PubMed
spelling pubmed-103313322023-07-11 The Effect of Sound Localization on Auditory-Only and Audiovisual Speech Recognition in a Simulated Multitalker Environment Sheffield, Sterling W. Wheeler, Harley J. Brungart, Douglas S. Bernstein, Joshua G. W. Trends Hear Original Article Information regarding sound-source spatial location provides several speech-perception benefits, including auditory spatial cues for perceptual talker separation and localization cues to face the talker to obtain visual speech information. These benefits have typically been examined separately. A real-time processing algorithm for sound-localization degradation (LocDeg) was used to investigate how spatial-hearing benefits interact in a multitalker environment. Normal-hearing adults performed auditory-only and auditory-visual sentence recognition with target speech and maskers presented from loudspeakers at −90°, −36°, 36°, or 90° azimuths. For auditory-visual conditions, one target and three masking talker videos (always spatially separated) were rendered virtually in rectangular windows at these locations on a head-mounted display. Auditory-only conditions presented blank windows at these locations. Auditory target speech (always spatially aligned with the target video) was presented in co-located speech-shaped noise (experiment 1) or with three co-located or spatially separated auditory interfering talkers corresponding to the masker videos (experiment 2). In the co-located conditions, the LocDeg algorithm did not affect auditory-only performance but reduced target orientation accuracy, reducing auditory-visual benefit. In the multitalker environment, two spatial-hearing benefits were observed: perceptually separating competing speech based on auditory spatial differences and orienting to the target talker to obtain visual speech cues. These two benefits were additive, and both were diminished by the LocDeg algorithm. Although visual cues always improved performance when the target was accurately localized, there was no strong evidence that they provided additional assistance in perceptually separating co-located competing speech. These results highlight the importance of sound localization in everyday communication. SAGE Publications 2023-07-07 /pmc/articles/PMC10331332/ /pubmed/37415497 http://dx.doi.org/10.1177/23312165231186040 Text en © The Author(s) 2023 https://creativecommons.org/licenses/by-nc/4.0/ This article is distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 License (https://creativecommons.org/licenses/by-nc/4.0/) which permits non-commercial use, reproduction and distribution of the work without further permission provided the original work is attributed as specified on the SAGE and Open Access page (https://us.sagepub.com/en-us/nam/open-access-at-sage).
spellingShingle Original Article
Sheffield, Sterling W.
Wheeler, Harley J.
Brungart, Douglas S.
Bernstein, Joshua G. W.
The Effect of Sound Localization on Auditory-Only and Audiovisual Speech Recognition in a Simulated Multitalker Environment
title The Effect of Sound Localization on Auditory-Only and Audiovisual Speech Recognition in a Simulated Multitalker Environment
title_full The Effect of Sound Localization on Auditory-Only and Audiovisual Speech Recognition in a Simulated Multitalker Environment
title_fullStr The Effect of Sound Localization on Auditory-Only and Audiovisual Speech Recognition in a Simulated Multitalker Environment
title_full_unstemmed The Effect of Sound Localization on Auditory-Only and Audiovisual Speech Recognition in a Simulated Multitalker Environment
title_short The Effect of Sound Localization on Auditory-Only and Audiovisual Speech Recognition in a Simulated Multitalker Environment
title_sort effect of sound localization on auditory-only and audiovisual speech recognition in a simulated multitalker environment
topic Original Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10331332/
https://www.ncbi.nlm.nih.gov/pubmed/37415497
http://dx.doi.org/10.1177/23312165231186040
work_keys_str_mv AT sheffieldsterlingw theeffectofsoundlocalizationonauditoryonlyandaudiovisualspeechrecognitioninasimulatedmultitalkerenvironment
AT wheelerharleyj theeffectofsoundlocalizationonauditoryonlyandaudiovisualspeechrecognitioninasimulatedmultitalkerenvironment
AT brungartdouglass theeffectofsoundlocalizationonauditoryonlyandaudiovisualspeechrecognitioninasimulatedmultitalkerenvironment
AT bernsteinjoshuagw theeffectofsoundlocalizationonauditoryonlyandaudiovisualspeechrecognitioninasimulatedmultitalkerenvironment
AT sheffieldsterlingw effectofsoundlocalizationonauditoryonlyandaudiovisualspeechrecognitioninasimulatedmultitalkerenvironment
AT wheelerharleyj effectofsoundlocalizationonauditoryonlyandaudiovisualspeechrecognitioninasimulatedmultitalkerenvironment
AT brungartdouglass effectofsoundlocalizationonauditoryonlyandaudiovisualspeechrecognitioninasimulatedmultitalkerenvironment
AT bernsteinjoshuagw effectofsoundlocalizationonauditoryonlyandaudiovisualspeechrecognitioninasimulatedmultitalkerenvironment