
Perceptual Learning of Noise-Vocoded Speech Under Divided Attention

Bibliographic Details
Main Authors: Wang, Han; Chen, Rongru; Yan, Yu; McGettigan, Carolyn; Rosen, Stuart; Adank, Patti
Format: Online Article Text
Language: English
Published: SAGE Publications, 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10408355/
https://www.ncbi.nlm.nih.gov/pubmed/37547940
http://dx.doi.org/10.1177/23312165231192297
author Wang, Han
Chen, Rongru
Yan, Yu
McGettigan, Carolyn
Rosen, Stuart
Adank, Patti
collection PubMed
description Speech perception performance for degraded speech can improve with practice or exposure. Such perceptual learning is thought to rely on attention, and theoretical accounts such as the predictive coding framework suggest a key role for attention in supporting learning. However, it is unclear whether speech perceptual learning requires undivided attention. We evaluated the role of divided attention in speech perceptual learning in two online experiments (N = 336). Experiment 1 tested the reliance of perceptual learning on undivided attention. Participants completed a speech recognition task in which they repeated forty noise-vocoded sentences in a between-group design. Participants performed the speech task alone or concurrently with a domain-general visual task (dual task) at one of three difficulty levels. We observed perceptual learning under divided attention for all four groups, moderated by dual-task difficulty. Listeners in the easy and intermediate visual conditions improved as much as the single-task group. Those who completed the most challenging visual task showed faster learning and achieved ending performance similar to that of the single-task group. Experiment 2 tested whether learning relies on domain-specific or domain-general processes. Participants completed a single speech task or performed this task together with a dual task designed to recruit domain-specific (lexical or phonological) or domain-general (visual) processes. All secondary task conditions produced patterns and amounts of learning comparable to those of the single speech task. Our results demonstrate that the impact of divided attention on perceptual learning is not strictly dependent on domain-general or domain-specific processes, and that speech perceptual learning persists under divided attention.
format Online
Article
Text
id pubmed-10408355
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher SAGE Publications
record_format MEDLINE/PubMed
spelling pubmed-10408355 2023-08-09 Trends Hear, Original Article. SAGE Publications, 2023-08-07. © The Author(s) 2023. This article is distributed under the terms of the Creative Commons Attribution 4.0 License (https://creativecommons.org/licenses/by/4.0/), which permits any use, reproduction, and distribution of the work without further permission, provided the original work is attributed as specified on the SAGE and Open Access page (https://us.sagepub.com/en-us/nam/open-access-at-sage).
title Perceptual Learning of Noise-Vocoded Speech Under Divided Attention
topic Original Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10408355/
https://www.ncbi.nlm.nih.gov/pubmed/37547940
http://dx.doi.org/10.1177/23312165231192297