Neural Decoding Reveals Concurrent Phonemic and Subphonemic Representations of Speech Across Tasks
Robust and efficient speech perception relies on the interpretation of acoustically variable phoneme realizations, yet prior neuroimaging studies are inconclusive regarding the degree to which subphonemic detail is maintained over time as categorical representations arise. It is also unknown whether this depends on the demands of the listening task.
Main Authors: | Beach, Sara D.; Ozernov-Palchik, Ola; May, Sidney C.; Centanni, Tracy M.; Gabrieli, John D. E.; Pantazis, Dimitrios |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MIT Press, 2021 |
Subjects: | |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8360503/ https://www.ncbi.nlm.nih.gov/pubmed/34396148 http://dx.doi.org/10.1162/nol_a_00034 |
_version_ | 1783737755295023104 |
---|---|
author | Beach, Sara D.; Ozernov-Palchik, Ola; May, Sidney C.; Centanni, Tracy M.; Gabrieli, John D. E.; Pantazis, Dimitrios |
author_facet | Beach, Sara D.; Ozernov-Palchik, Ola; May, Sidney C.; Centanni, Tracy M.; Gabrieli, John D. E.; Pantazis, Dimitrios |
author_sort | Beach, Sara D. |
collection | PubMed |
description | Robust and efficient speech perception relies on the interpretation of acoustically variable phoneme realizations, yet prior neuroimaging studies are inconclusive regarding the degree to which subphonemic detail is maintained over time as categorical representations arise. It is also unknown whether this depends on the demands of the listening task. We addressed these questions by using neural decoding to quantify the (dis)similarity of brain response patterns evoked during two different tasks. We recorded magnetoencephalography (MEG) as adult participants heard isolated, randomized tokens from a /ba/-/da/ speech continuum. In the passive task, their attention was diverted. In the active task, they categorized each token as ba or da. We found that linear classifiers successfully decoded ba vs. da perception from the MEG data. Data from the left hemisphere were sufficient to decode the percept early in the trial, while the right hemisphere was necessary but not sufficient for decoding at later time points. We also decoded stimulus representations and found that they were maintained longer in the active task than in the passive task; however, these representations did not pattern more like discrete phonemes when an active categorical response was required. Instead, in both tasks, early phonemic patterns gave way to a representation of stimulus ambiguity that coincided in time with reliable percept decoding. Our results suggest that the categorization process does not require the loss of subphonemic detail, and that the neural representation of isolated speech sounds includes concurrent phonemic and subphonemic information. |
format | Online Article Text |
id | pubmed-8360503 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | MIT Press |
record_format | MEDLINE/PubMed |
spelling | pubmed-8360503 2021-08-13 Neural Decoding Reveals Concurrent Phonemic and Subphonemic Representations of Speech Across Tasks Beach, Sara D.; Ozernov-Palchik, Ola; May, Sidney C.; Centanni, Tracy M.; Gabrieli, John D. E.; Pantazis, Dimitrios Neurobiol Lang (Camb) Research Article Robust and efficient speech perception relies on the interpretation of acoustically variable phoneme realizations, yet prior neuroimaging studies are inconclusive regarding the degree to which subphonemic detail is maintained over time as categorical representations arise. It is also unknown whether this depends on the demands of the listening task. We addressed these questions by using neural decoding to quantify the (dis)similarity of brain response patterns evoked during two different tasks. We recorded magnetoencephalography (MEG) as adult participants heard isolated, randomized tokens from a /ba/-/da/ speech continuum. In the passive task, their attention was diverted. In the active task, they categorized each token as ba or da. We found that linear classifiers successfully decoded ba vs. da perception from the MEG data. Data from the left hemisphere were sufficient to decode the percept early in the trial, while the right hemisphere was necessary but not sufficient for decoding at later time points. We also decoded stimulus representations and found that they were maintained longer in the active task than in the passive task; however, these representations did not pattern more like discrete phonemes when an active categorical response was required. Instead, in both tasks, early phonemic patterns gave way to a representation of stimulus ambiguity that coincided in time with reliable percept decoding. Our results suggest that the categorization process does not require the loss of subphonemic detail, and that the neural representation of isolated speech sounds includes concurrent phonemic and subphonemic information. MIT Press 2021-05-07 /pmc/articles/PMC8360503/ /pubmed/34396148 http://dx.doi.org/10.1162/nol_a_00034 Text en © 2021 Massachusetts Institute of Technology. This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. For a full description of the license, please visit https://creativecommons.org/licenses/by/4.0/. |
spellingShingle | Research Article; Beach, Sara D.; Ozernov-Palchik, Ola; May, Sidney C.; Centanni, Tracy M.; Gabrieli, John D. E.; Pantazis, Dimitrios; Neural Decoding Reveals Concurrent Phonemic and Subphonemic Representations of Speech Across Tasks |
title | Neural Decoding Reveals Concurrent Phonemic and Subphonemic Representations of Speech Across Tasks |
title_full | Neural Decoding Reveals Concurrent Phonemic and Subphonemic Representations of Speech Across Tasks |
title_fullStr | Neural Decoding Reveals Concurrent Phonemic and Subphonemic Representations of Speech Across Tasks |
title_full_unstemmed | Neural Decoding Reveals Concurrent Phonemic and Subphonemic Representations of Speech Across Tasks |
title_short | Neural Decoding Reveals Concurrent Phonemic and Subphonemic Representations of Speech Across Tasks |
title_sort | neural decoding reveals concurrent phonemic and subphonemic representations of speech across tasks |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8360503/ https://www.ncbi.nlm.nih.gov/pubmed/34396148 http://dx.doi.org/10.1162/nol_a_00034 |
work_keys_str_mv | AT beachsarad neuraldecodingrevealsconcurrentphonemicandsubphonemicrepresentationsofspeechacrosstasks AT ozernovpalchikola neuraldecodingrevealsconcurrentphonemicandsubphonemicrepresentationsofspeechacrosstasks AT maysidneyc neuraldecodingrevealsconcurrentphonemicandsubphonemicrepresentationsofspeechacrosstasks AT centannitracym neuraldecodingrevealsconcurrentphonemicandsubphonemicrepresentationsofspeechacrosstasks AT gabrielijohnde neuraldecodingrevealsconcurrentphonemicandsubphonemicrepresentationsofspeechacrosstasks AT pantazisdimitrios neuraldecodingrevealsconcurrentphonemicandsubphonemicrepresentationsofspeechacrosstasks |
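
The abstract above describes time-resolved neural decoding: linear classifiers trained on MEG response patterns to read out which percept (ba or da) a listener reported at each moment in the trial. As a rough, hypothetical illustration of that general kind of analysis (not the authors' code, stimuli, or data), the minimal sketch below runs a cross-validated linear classifier separately at every time point of simulated sensor-by-time epochs; all array shapes and labels are placeholders.

```python
# Illustrative sketch only: time-resolved decoding of a binary percept ("ba" vs. "da")
# from multichannel MEG-like data with a linear classifier. The data are simulated
# placeholders, not the recordings from the study.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Simulated epochs: trials x sensors x time points (stand-in for real MEG epochs)
n_trials, n_sensors, n_times = 200, 306, 120
X = rng.standard_normal((n_trials, n_sensors, n_times))
y = rng.integers(0, 2, size=n_trials)  # 0 = "ba" report, 1 = "da" report

# Linear classifier with per-feature standardization
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

# Train and test a separate classifier at every time point (time-resolved decoding)
accuracy = np.array([
    cross_val_score(clf, X[:, :, t], y, cv=cv, scoring="accuracy").mean()
    for t in range(n_times)
])

# With random labels the curve hovers around chance (~0.5); with real data,
# stretches of above-chance accuracy mark when the percept is decodable.
print(f"peak cross-validated accuracy: {accuracy.max():.2f}")
```

For real recordings, MNE-Python's mne.decoding.SlidingEstimator wraps essentially this per-time-point loop around a scikit-learn estimator.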