
Decoding Speech With Integrated Hybrid Signals Recorded From the Human Ventral Motor Cortex

Restoration of speech communication for locked-in patients by means of brain-computer interfaces (BCIs) is currently an important area of active research. Among the neural signals obtained from intracranial recordings, single/multi-unit activity (SUA/MUA), local field potentials (LFP), and electrocorticography (ECoG) are good candidates for BCI input signals. However, which of these three signal modalities, or which combination of them, is best suited for decoding speech production remains unverified. To record SUA, LFP, and ECoG simultaneously from a highly localized area of the human ventral sensorimotor cortex (vSMC), we fabricated a 7 × 13 mm hybrid electrode containing sparsely arranged microneedle contacts and conventional macro contacts. We determined which signal modality is most capable of decoding speech production and tested whether combining these signals could improve the decoding accuracy of spoken phonemes. Feature vectors were constructed from the spike frequency of SUAs and from event-related spectral perturbations derived from the ECoG and LFP signals, and were then input to the decoder. Decoding accuracy for five spoken vowels was highest when features from multiple signals were combined and optimized for each subject, reaching 59% when averaged across all six subjects. This result suggests that multi-scale signals convey complementary information for speech articulation. The study demonstrates that simultaneous recording of multi-scale neuronal activity can raise decoding accuracy even when the recording area is limited to a small portion of cortex, which is advantageous for future implementation of speech-assisting BCIs.
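The decoding pipeline summarized in the abstract (per-modality features concatenated into a single hybrid vector and passed to a classifier) can be illustrated with a minimal sketch. This is not the authors' implementation: the arrays spike_rates, ersp_ecog, and ersp_lfp are hypothetical pre-computed per-trial features filled with synthetic data, and the linear SVM with leave-one-out cross-validation is an assumed decoder, since the abstract does not specify one.

```python
# Minimal sketch of hybrid-feature vowel decoding (illustrative only, not the authors' code).
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-ins for per-trial features extracted around speech onset.
rng = np.random.default_rng(0)
n_trials = 50
spike_rates = rng.poisson(5.0, size=(n_trials, 8)).astype(float)  # SUA firing rates, 8 hypothetical units
ersp_ecog = rng.normal(size=(n_trials, 3 * 6))  # ERSP features: e.g., 3 macro contacts x 6 frequency bands
ersp_lfp = rng.normal(size=(n_trials, 4 * 6))   # ERSP features: e.g., 4 micro contacts x 6 frequency bands
vowels = rng.integers(0, 5, size=n_trials)      # labels for the five spoken vowels

def decode(feature_blocks, labels):
    """Concatenate feature blocks into one hybrid vector per trial and return cross-validated accuracy."""
    X = np.hstack(feature_blocks)
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    return cross_val_score(clf, X, labels, cv=LeaveOneOut()).mean()

# Compare each single modality with the combined (hybrid) feature set,
# mirroring the modality comparison described in the abstract.
for name, blocks in {
    "SUA only": [spike_rates],
    "ECoG only": [ersp_ecog],
    "LFP only": [ersp_lfp],
    "combined": [spike_rates, ersp_ecog, ersp_lfp],
}.items():
    print(f"{name:9s} accuracy: {decode(blocks, vowels):.2f}")
```

With real features, the blocks entering the combined vector would be selected per subject, corresponding to the per-subject optimization described in the abstract.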


Bibliographic Details
Main Authors: Ibayashi, Kenji; Kunii, Naoto; Matsuo, Takeshi; Ishishita, Yohei; Shimada, Seijiro; Kawai, Kensuke; Saito, Nobuhito
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2018
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5895763/
https://www.ncbi.nlm.nih.gov/pubmed/29674950
http://dx.doi.org/10.3389/fnins.2018.00221
collection PubMed
id pubmed-5895763
institution National Center for Biotechnology Information
language English
publishDate 2018
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-5895763 2018-04-19
Front Neurosci (Neuroscience)
Frontiers Media S.A. 2018-04-05
/pmc/articles/PMC5895763/ /pubmed/29674950 http://dx.doi.org/10.3389/fnins.2018.00221
Copyright © 2018 Ibayashi, Kunii, Matsuo, Ishishita, Shimada, Kawai and Saito. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
topic Neuroscience