Decoding Hearing-Related Changes in Older Adults’ Spatiotemporal Neural Processing of Speech Using Machine Learning
Main Authors: Mahmud, Md Sultan; Ahmed, Faruk; Al-Fahad, Rakib; Moinuddin, Kazi Ashraf; Yeasin, Mohammed; Alain, Claude; Bidelman, Gavin M.
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2020
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7378401/ · https://www.ncbi.nlm.nih.gov/pubmed/32765215 · http://dx.doi.org/10.3389/fnins.2020.00748
_version_ | 1783562411990581248 |
author | Mahmud, Md Sultan; Ahmed, Faruk; Al-Fahad, Rakib; Moinuddin, Kazi Ashraf; Yeasin, Mohammed; Alain, Claude; Bidelman, Gavin M. |
author_facet | Mahmud, Md Sultan; Ahmed, Faruk; Al-Fahad, Rakib; Moinuddin, Kazi Ashraf; Yeasin, Mohammed; Alain, Claude; Bidelman, Gavin M. |
author_sort | Mahmud, Md Sultan |
collection | PubMed |
description | Speech perception in noisy environments depends on complex interactions between sensory and cognitive systems. In older adults, such interactions may be affected, especially in those individuals who have more severe age-related hearing loss. Using a data-driven approach, we assessed the temporal (when in time) and spatial (where in the brain) characteristics of cortical speech-evoked responses that distinguish older adults with or without mild hearing loss. We performed source analyses to estimate cortical surface signals from the EEG recordings during a phoneme discrimination task conducted under clear and noise-degraded conditions. We computed source-level ERPs (i.e., mean activation within each ROI) for each of the 68 ROIs of the Desikan-Killiany (DK) atlas, averaged over 100 randomly chosen trials (sampled without replacement) to form feature vectors. We adopted a multivariate feature selection method, stability selection and control, to choose features that are consistent over a range of model parameters. We used a parameter-optimized support vector machine (SVM) classifier to investigate the time course and brain regions that segregate groups and speech clarity. For clear speech perception, whole-brain data revealed a classification accuracy of 81.50% [area under the curve (AUC) 80.73%; F1-score 82.00%], distinguishing groups within ∼60 ms after speech onset (i.e., as early as the P1 wave). We observed lower accuracy of 78.12% [AUC 77.64%; F1-score 78.00%] and delayed classification performance when speech was embedded in noise, with group segregation at 80 ms. Separate analyses using left-hemisphere (LH) and right-hemisphere (RH) regions showed that LH speech activity was better at distinguishing hearing groups than activity measured in the RH.
Moreover, stability selection analysis identified 12 brain regions (among 1428 total spatiotemporal features from 68 regions) where source activity segregated groups with >80% accuracy (clear speech), whereas 16 regions were critical for noise-degraded speech to achieve a comparable level of group segregation (78.7% accuracy). Our results identify critical time courses and brain regions that distinguish mild hearing loss from normal hearing in older adults and confirm a larger number of active areas, particularly in the RH, when processing noise-degraded speech information. |
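The pipeline the abstract describes (trial-averaged ROI feature vectors, stability selection to find consistently informative features, then an SVM classifier) can be sketched as follows. This is a minimal illustration under assumed inputs, not the authors' code: the data are synthetic stand-ins with the same nominal shape (1428 spatiotemporal features, binary group labels), the stability-selection step is approximated with repeated L1-penalized fits on random subsamples, and all parameter values (subsample fraction, regularization strength, selection threshold) are placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical stand-in data: one feature vector per averaged-trial sample
# (68 ROIs x time windows flattened to 1428 features), with binary labels
# for the normal-hearing vs. mild-hearing-loss groups.
n_samples, n_features = 120, 1428
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, 2, size=n_samples)

# Stability selection (Meinshausen & Buhlmann style): repeatedly fit a
# sparse (L1-penalized) model on random half-subsamples and count how
# often each feature receives a nonzero weight.
n_rounds, subsample_frac = 50, 0.5
selection_counts = np.zeros(n_features)
for _ in range(n_rounds):
    idx = rng.choice(n_samples, size=int(subsample_frac * n_samples), replace=False)
    sparse_model = LogisticRegression(penalty="l1", solver="liblinear", C=0.05)
    sparse_model.fit(X[idx], y[idx])
    selection_counts += np.abs(sparse_model.coef_.ravel()) > 1e-8

# Keep "stable" features chosen in at least 60% of rounds (placeholder
# threshold); fall back to all features if none pass on this toy data.
stable = (selection_counts / n_rounds) >= 0.6
X_stable = X[:, stable] if stable.any() else X

# SVM on the stable features; in practice C and gamma would be tuned,
# e.g., with GridSearchCV, to get the "parameter-optimized" classifier.
X_tr, X_te, y_tr, y_te = train_test_split(
    X_stable, y, test_size=0.25, random_state=0, stratify=y
)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

On random labels the held-out accuracy hovers near chance; on real ERP features the same procedure yields both a classification score and, via the selection counts, a ranking of which ROI/time-window features most consistently separate the groups.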
format | Online Article Text |
id | pubmed-7378401 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2020 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-7378401 2020-08-05 Decoding Hearing-Related Changes in Older Adults’ Spatiotemporal Neural Processing of Speech Using Machine Learning. Mahmud, Md Sultan; Ahmed, Faruk; Al-Fahad, Rakib; Moinuddin, Kazi Ashraf; Yeasin, Mohammed; Alain, Claude; Bidelman, Gavin M. Front Neurosci (Neuroscience). Frontiers Media S.A. 2020-07-16 /pmc/articles/PMC7378401/ /pubmed/32765215 http://dx.doi.org/10.3389/fnins.2020.00748 Text en Copyright © 2020 Mahmud, Ahmed, Al-Fahad, Moinuddin, Yeasin, Alain and Bidelman. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). |
spellingShingle | Neuroscience; Mahmud, Md Sultan; Ahmed, Faruk; Al-Fahad, Rakib; Moinuddin, Kazi Ashraf; Yeasin, Mohammed; Alain, Claude; Bidelman, Gavin M.; Decoding Hearing-Related Changes in Older Adults’ Spatiotemporal Neural Processing of Speech Using Machine Learning |
title | Decoding Hearing-Related Changes in Older Adults’ Spatiotemporal Neural Processing of Speech Using Machine Learning |
title_full | Decoding Hearing-Related Changes in Older Adults’ Spatiotemporal Neural Processing of Speech Using Machine Learning |
title_fullStr | Decoding Hearing-Related Changes in Older Adults’ Spatiotemporal Neural Processing of Speech Using Machine Learning |
title_full_unstemmed | Decoding Hearing-Related Changes in Older Adults’ Spatiotemporal Neural Processing of Speech Using Machine Learning |
title_short | Decoding Hearing-Related Changes in Older Adults’ Spatiotemporal Neural Processing of Speech Using Machine Learning |
title_sort | decoding hearing-related changes in older adults’ spatiotemporal neural processing of speech using machine learning |
topic | Neuroscience |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7378401/ https://www.ncbi.nlm.nih.gov/pubmed/32765215 http://dx.doi.org/10.3389/fnins.2020.00748 |
work_keys_str_mv | AT mahmudmdsultan decodinghearingrelatedchangesinolderadultsspatiotemporalneuralprocessingofspeechusingmachinelearning AT ahmedfaruk decodinghearingrelatedchangesinolderadultsspatiotemporalneuralprocessingofspeechusingmachinelearning AT alfahadrakib decodinghearingrelatedchangesinolderadultsspatiotemporalneuralprocessingofspeechusingmachinelearning AT moinuddinkaziashraf decodinghearingrelatedchangesinolderadultsspatiotemporalneuralprocessingofspeechusingmachinelearning AT yeasinmohammed decodinghearingrelatedchangesinolderadultsspatiotemporalneuralprocessingofspeechusingmachinelearning AT alainclaude decodinghearingrelatedchangesinolderadultsspatiotemporalneuralprocessingofspeechusingmachinelearning AT bidelmangavinm decodinghearingrelatedchangesinolderadultsspatiotemporalneuralprocessingofspeechusingmachinelearning |