
Speech Recognition via fNIRS Based Brain Signals

Bibliographic Details
Main Authors: Liu, Yichuan; Ayaz, Hasan
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2018
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6189799/
https://www.ncbi.nlm.nih.gov/pubmed/30356771
http://dx.doi.org/10.3389/fnins.2018.00695
author Liu, Yichuan
Ayaz, Hasan
author_facet Liu, Yichuan
Ayaz, Hasan
author_sort Liu, Yichuan
collection PubMed
description In this paper, we present the first evidence that perceived speech can be identified from listeners' brain signals measured via functional near-infrared spectroscopy (fNIRS)—a non-invasive, portable, and wearable neuroimaging technique suitable for ecologically valid settings. In this study, participants listened to audio clips containing English stories while their prefrontal and parietal cortices were monitored with fNIRS. Machine learning was applied to train predictive models on fNIRS data from a subject pool and to predict which part of a story a new subject, not in the pool, was listening to, based on the brain's hemodynamic response as measured by fNIRS. fNIRS signals can vary considerably from subject to subject due to differences in head size, head shape, and the spatial locations of functional brain regions. To overcome this difficulty, generalized canonical correlation analysis (GCCA) was adopted to extract latent variables shared among the listeners, before applying principal component analysis (PCA) for dimensionality reduction and logistic regression for classification. An average accuracy of 74.7% was achieved for differentiating between two 50-s-long story segments, and an average accuracy of 43.6% for differentiating among four 25-s-long story segments. These results suggest the potential of an fNIRS-based approach for building a speech-decoding brain-computer interface and developing a new type of neural prosthetic system.
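The alignment step described in the abstract—extracting latent variables shared among listeners before PCA and classification—can be sketched with a minimal MAXVAR-style GCCA on synthetic data. This is an illustrative sketch only, not the authors' implementation: the function name `gcca_shared`, the ridge regularization, and the synthetic signals are all assumptions introduced here for demonstration.

```python
import numpy as np

def gcca_shared(views, k=1, reg=1e-6):
    """MAXVAR-style generalized CCA (sketch): recover a latent time course
    shared across several views (subjects), each of shape (T, channels).

    Returns the shared signal G (T, k) and per-view mappings W_i such that
    X_i @ W_i approximates G.
    """
    T = views[0].shape[0]
    M = np.zeros((T, T))
    centered = [X - X.mean(axis=0) for X in views]
    for Xc in centered:
        # Regularized projection onto this subject's channel subspace.
        C = Xc.T @ Xc + reg * np.eye(Xc.shape[1])
        M += Xc @ np.linalg.solve(C, Xc.T)
    # Directions present in every view dominate the summed projections,
    # so the shared signal is given by the top-k eigenvectors of M.
    _, vecs = np.linalg.eigh(M)          # eigenvalues in ascending order
    G = vecs[:, -k:][:, ::-1]            # top-k eigenvectors, shape (T, k)
    # Per-subject mappings from channel space to the shared space.
    Ws = []
    for Xc in centered:
        C = Xc.T @ Xc + reg * np.eye(Xc.shape[1])
        Ws.append(np.linalg.solve(C, Xc.T @ G))
    return G, Ws

# Demo on synthetic data: 4 "subjects", 8 channels each, one shared signal
# mixed differently into each subject's channels plus independent noise.
rng = np.random.default_rng(0)
z = np.sin(np.linspace(0, 6 * np.pi, 200))               # shared latent
views = [np.outer(z, rng.normal(size=8)) + 0.3 * rng.normal(size=(200, 8))
         for _ in range(4)]
G, Ws = gcca_shared(views, k=1)
print(abs(np.corrcoef(G[:, 0], z)[0, 1]))  # high if recovery worked
```

In the paper's full pipeline, the GCCA-aligned features would then be passed through PCA for dimensionality reduction and a logistic regression classifier; here only the shared-variable extraction is sketched.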
format Online
Article
Text
id pubmed-6189799
institution National Center for Biotechnology Information
language English
publishDate 2018
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-6189799 2018-10-23 Speech Recognition via fNIRS Based Brain Signals Liu, Yichuan Ayaz, Hasan Front Neurosci Neuroscience Frontiers Media S.A. 2018-10-09 /pmc/articles/PMC6189799/ /pubmed/30356771 http://dx.doi.org/10.3389/fnins.2018.00695 Text en Copyright © 2018 Liu and Ayaz. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY).
The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Neuroscience
Liu, Yichuan
Ayaz, Hasan
Speech Recognition via fNIRS Based Brain Signals
title Speech Recognition via fNIRS Based Brain Signals
title_full Speech Recognition via fNIRS Based Brain Signals
title_fullStr Speech Recognition via fNIRS Based Brain Signals
title_full_unstemmed Speech Recognition via fNIRS Based Brain Signals
title_short Speech Recognition via fNIRS Based Brain Signals
title_sort speech recognition via fnirs based brain signals
topic Neuroscience
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6189799/
https://www.ncbi.nlm.nih.gov/pubmed/30356771
http://dx.doi.org/10.3389/fnins.2018.00695
work_keys_str_mv AT liuyichuan speechrecognitionviafnirsbasedbrainsignals
AT ayazhasan speechrecognitionviafnirsbasedbrainsignals