Neural Correlates of Speech Segregation Based on Formant Frequencies of Adjacent Vowels

The neural substrates by which speech sounds are perceptually segregated into distinct streams are poorly understood. Here, we recorded high-density scalp event-related potentials (ERPs) while participants were presented with a cyclic pattern of three vowel sounds (/ee/-/ae/-/ee/). Each trial consis...

Bibliographic Details
Main Authors: Alain, Claude, Arsenault, Jessica S., Garami, Linda, Bidelman, Gavin M., Snyder, Joel S.
Format: Online Article Text
Language: English
Published: Nature Publishing Group 2017
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5244401/
https://www.ncbi.nlm.nih.gov/pubmed/28102300
http://dx.doi.org/10.1038/srep40790
_version_ 1782496693278736384
author Alain, Claude
Arsenault, Jessica S.
Garami, Linda
Bidelman, Gavin M.
Snyder, Joel S.
author_facet Alain, Claude
Arsenault, Jessica S.
Garami, Linda
Bidelman, Gavin M.
Snyder, Joel S.
author_sort Alain, Claude
collection PubMed
description The neural substrates by which speech sounds are perceptually segregated into distinct streams are poorly understood. Here, we recorded high-density scalp event-related potentials (ERPs) while participants were presented with a cyclic pattern of three vowel sounds (/ee/-/ae/-/ee/). Each trial consisted of an adaptation sequence, which could have either a small, intermediate, or large difference in first formant (Δf(1)), as well as a test sequence, in which Δf(1) was always intermediate. For the adaptation sequence, participants tended to hear two streams (“streaming”) when Δf(1) was intermediate or large compared to when it was small. For the test sequence, in which Δf(1) was always intermediate, the pattern was usually reversed: participants more often heard a single stream as Δf(1) in the preceding adaptation sequence increased. During the adaptation sequence, Δf(1)-related brain activity was found between 100–250 ms after the /ae/ vowel over fronto-central and left temporal areas, consistent with generation in auditory cortex. For the test sequence, the prior stimulus modulated ERP amplitude between 20–150 ms over the left fronto-central scalp region. Our results demonstrate that the proximity of formants between adjacent vowels is an important factor in the perceptual organization of speech, and reveal a widely distributed neural network supporting perceptual grouping of speech sounds.
format Online
Article
Text
id pubmed-5244401
institution National Center for Biotechnology Information
language English
publishDate 2017
publisher Nature Publishing Group
record_format MEDLINE/PubMed
spelling pubmed-52444012017-01-23 Neural Correlates of Speech Segregation Based on Formant Frequencies of Adjacent Vowels Alain, Claude Arsenault, Jessica S. Garami, Linda Bidelman, Gavin M. Snyder, Joel S. Sci Rep Article The neural substrates by which speech sounds are perceptually segregated into distinct streams are poorly understood. Here, we recorded high-density scalp event-related potentials (ERPs) while participants were presented with a cyclic pattern of three vowel sounds (/ee/-/ae/-/ee/). Each trial consisted of an adaptation sequence, which could have either a small, intermediate, or large difference in first formant (Δf(1)) as well as a test sequence, in which Δf(1) was always intermediate. For the adaptation sequence, participants tended to hear two streams (“streaming”) when Δf(1) was intermediate or large compared to when it was small. For the test sequence, in which Δf(1) was always intermediate, the pattern was usually reversed, with participants hearing a single stream with increasing Δf(1) in the adaptation sequences. During the adaptation sequence, Δf(1)-related brain activity was found between 100–250 ms after the /ae/ vowel over fronto-central and left temporal areas, consistent with generation in auditory cortex. For the test sequence, prior stimulus modulated ERP amplitude between 20–150 ms over left fronto-central scalp region. Our results demonstrate that the proximity of formants between adjacent vowels is an important factor in the perceptual organization of speech, and reveal a widely distributed neural network supporting perceptual grouping of speech sounds. Nature Publishing Group 2017-01-19 /pmc/articles/PMC5244401/ /pubmed/28102300 http://dx.doi.org/10.1038/srep40790 Text en Copyright © 2017, The Author(s) http://creativecommons.org/licenses/by/4.0/ This work is licensed under a Creative Commons Attribution 4.0 International License. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in the credit line; if the material is not included under the Creative Commons license, users will need to obtain permission from the license holder to reproduce the material. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/
spellingShingle Article
Alain, Claude
Arsenault, Jessica S.
Garami, Linda
Bidelman, Gavin M.
Snyder, Joel S.
Neural Correlates of Speech Segregation Based on Formant Frequencies of Adjacent Vowels
title Neural Correlates of Speech Segregation Based on Formant Frequencies of Adjacent Vowels
title_full Neural Correlates of Speech Segregation Based on Formant Frequencies of Adjacent Vowels
title_fullStr Neural Correlates of Speech Segregation Based on Formant Frequencies of Adjacent Vowels
title_full_unstemmed Neural Correlates of Speech Segregation Based on Formant Frequencies of Adjacent Vowels
title_short Neural Correlates of Speech Segregation Based on Formant Frequencies of Adjacent Vowels
title_sort neural correlates of speech segregation based on formant frequencies of adjacent vowels
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5244401/
https://www.ncbi.nlm.nih.gov/pubmed/28102300
http://dx.doi.org/10.1038/srep40790
work_keys_str_mv AT alainclaude neuralcorrelatesofspeechsegregationbasedonformantfrequenciesofadjacentvowels
AT arsenaultjessicas neuralcorrelatesofspeechsegregationbasedonformantfrequenciesofadjacentvowels
AT garamilinda neuralcorrelatesofspeechsegregationbasedonformantfrequenciesofadjacentvowels
AT bidelmangavinm neuralcorrelatesofspeechsegregationbasedonformantfrequenciesofadjacentvowels
AT snyderjoels neuralcorrelatesofspeechsegregationbasedonformantfrequenciesofadjacentvowels