
Multiclass classification of imagined speech EEG using noise-assisted multivariate empirical mode decomposition and multireceptive field convolutional neural network

Bibliographic Details
Main Authors: Park, Hyeong-jun, Lee, Boreom
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10461632/
https://www.ncbi.nlm.nih.gov/pubmed/37645689
http://dx.doi.org/10.3389/fnhum.2023.1186594
_version_ 1785097877345796096
author Park, Hyeong-jun
Lee, Boreom
author_facet Park, Hyeong-jun
Lee, Boreom
author_sort Park, Hyeong-jun
collection PubMed
description INTRODUCTION: In this study, we classified electroencephalography (EEG) data of imagined speech using signal decomposition and a multireceptive field convolutional neural network. Imagined speech EEG for five vowels, /a/, /e/, /i/, /o/, and /u/, and a mute (rest) condition was obtained from ten study participants. MATERIALS AND METHODS: First, two different signal decomposition methods were applied for comparison: noise-assisted multivariate empirical mode decomposition and wavelet packet decomposition. Six statistical features were calculated from each of the eight decomposed EEG sub-frequency bands. Next, the features obtained from every channel of a trial were vectorized and used as the input vector for the classifiers. Lastly, the EEG was classified using a multireceptive field convolutional neural network and several other classifiers for comparison. RESULTS: We achieved an average classification rate of 73.09% and up to 80.41% in a multiclass (six-class) setup (chance level: 16.67%). Compared with various other classifiers, significant improvements were achieved (p-value < 0.05). The frequency sub-band analysis showed that the high-frequency band regions and the lowest-frequency band region contain more information about the imagined vowel EEG data. The classification and misclassification rates of each imagined vowel EEG were analyzed through a confusion matrix. DISCUSSION: Imagined speech EEG can be classified successfully using the proposed signal decomposition method and a convolutional neural network. The proposed classification method for imagined speech EEG can contribute to developing a practical imagined speech-based brain-computer interface system.
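
Illustrative sketch (not from the article): the abstract describes a pipeline of signal decomposition into eight sub-bands, six statistical features per sub-band, and per-channel vectorization, but does not give implementation details. The sketch below assumes the wavelet packet decomposition comparison path (level 3 yields eight sub-bands) via PyWavelets, since noise-assisted multivariate empirical mode decomposition has no single standard library implementation. The choice of wavelet (db4), the six features (mean, variance, skewness, kurtosis, RMS, peak-to-peak), and the channel count and trial length in the usage example are all assumptions, not the authors' settings.

# Hedged sketch of per-channel feature extraction for imagined-speech EEG,
# using wavelet packet decomposition (the comparison method in the abstract).
# The six statistical features and all parameters below are assumptions.
import numpy as np
import pywt
from scipy import stats

def subband_features(signal, wavelet="db4", level=3):
    """Decompose one EEG channel into 2**level sub-bands (8 at level 3)
    and return six statistical features per sub-band as a flat vector."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet,
                            mode="symmetric", maxlevel=level)
    feats = []
    for node in wp.get_level(level, order="freq"):  # eight sub-bands
        c = np.asarray(node.data)
        feats.extend([
            c.mean(),                  # mean
            c.var(),                   # variance
            stats.skew(c),             # skewness
            stats.kurtosis(c),         # kurtosis
            np.sqrt(np.mean(c ** 2)),  # root mean square
            np.ptp(c),                 # peak-to-peak amplitude
        ])
    return np.array(feats)

def trial_feature_vector(trial):
    """trial: array of shape (n_channels, n_samples). Features from every
    channel are concatenated into one input vector for the classifier,
    as described in the abstract."""
    return np.concatenate([subband_features(ch) for ch in trial])

# Usage example with hypothetical sizes: a 64-channel, 2-second trial at 256 Hz.
rng = np.random.default_rng(0)
x = rng.standard_normal((64, 512))
v = trial_feature_vector(x)
print(v.shape)  # (64 channels * 8 sub-bands * 6 features,) = (3072,)
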
format Online
Article
Text
id pubmed-10461632
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-10461632 2023-08-29 Multiclass classification of imagined speech EEG using noise-assisted multivariate empirical mode decomposition and multireceptive field convolutional neural network Park, Hyeong-jun Lee, Boreom Front Hum Neurosci Neuroscience INTRODUCTION: In this study, we classified electroencephalography (EEG) data of imagined speech using signal decomposition and a multireceptive field convolutional neural network. Imagined speech EEG for five vowels, /a/, /e/, /i/, /o/, and /u/, and a mute (rest) condition was obtained from ten study participants. MATERIALS AND METHODS: First, two different signal decomposition methods were applied for comparison: noise-assisted multivariate empirical mode decomposition and wavelet packet decomposition. Six statistical features were calculated from each of the eight decomposed EEG sub-frequency bands. Next, the features obtained from every channel of a trial were vectorized and used as the input vector for the classifiers. Lastly, the EEG was classified using a multireceptive field convolutional neural network and several other classifiers for comparison. RESULTS: We achieved an average classification rate of 73.09% and up to 80.41% in a multiclass (six-class) setup (chance level: 16.67%). Compared with various other classifiers, significant improvements were achieved (p-value < 0.05). The frequency sub-band analysis showed that the high-frequency band regions and the lowest-frequency band region contain more information about the imagined vowel EEG data. The classification and misclassification rates of each imagined vowel EEG were analyzed through a confusion matrix. DISCUSSION: Imagined speech EEG can be classified successfully using the proposed signal decomposition method and a convolutional neural network. The proposed classification method for imagined speech EEG can contribute to developing a practical imagined speech-based brain-computer interface system. Frontiers Media S.A. 2023-08-10 /pmc/articles/PMC10461632/ /pubmed/37645689 http://dx.doi.org/10.3389/fnhum.2023.1186594 Text en Copyright © 2023 Park and Lee. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Neuroscience
Park, Hyeong-jun
Lee, Boreom
Multiclass classification of imagined speech EEG using noise-assisted multivariate empirical mode decomposition and multireceptive field convolutional neural network
title Multiclass classification of imagined speech EEG using noise-assisted multivariate empirical mode decomposition and multireceptive field convolutional neural network
title_full Multiclass classification of imagined speech EEG using noise-assisted multivariate empirical mode decomposition and multireceptive field convolutional neural network
title_fullStr Multiclass classification of imagined speech EEG using noise-assisted multivariate empirical mode decomposition and multireceptive field convolutional neural network
title_full_unstemmed Multiclass classification of imagined speech EEG using noise-assisted multivariate empirical mode decomposition and multireceptive field convolutional neural network
title_short Multiclass classification of imagined speech EEG using noise-assisted multivariate empirical mode decomposition and multireceptive field convolutional neural network
title_sort multiclass classification of imagined speech eeg using noise-assisted multivariate empirical mode decomposition and multireceptive field convolutional neural network
topic Neuroscience
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10461632/
https://www.ncbi.nlm.nih.gov/pubmed/37645689
http://dx.doi.org/10.3389/fnhum.2023.1186594
work_keys_str_mv AT parkhyeongjun multiclassclassificationofimaginedspeecheegusingnoiseassistedmultivariateempiricalmodedecompositionandmultireceptivefieldconvolutionalneuralnetwork
AT leeboreom multiclassclassificationofimaginedspeecheegusingnoiseassistedmultivariateempiricalmodedecompositionandmultireceptivefieldconvolutionalneuralnetwork