
CNN Architectures and Feature Extraction Methods for EEG Imaginary Speech Recognition


Bibliographic Details
Main Authors: Rusnac, Ana-Luiza, Grigore, Ovidiu
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9268757/
https://www.ncbi.nlm.nih.gov/pubmed/35808173
http://dx.doi.org/10.3390/s22134679
_version_ 1784744063642107904
author Rusnac, Ana-Luiza
Grigore, Ovidiu
author_facet Rusnac, Ana-Luiza
Grigore, Ovidiu
author_sort Rusnac, Ana-Luiza
collection PubMed
description Speech is a complex mechanism that allows us to communicate our needs, desires and thoughts. In some cases of neural dysfunction, this ability is severely affected, which makes everyday activities that require communication a challenge. This paper studies different parameters of an intelligent imaginary speech recognition system in order to obtain the best performance with a method that can be applied in a low-cost system with limited resources. In developing the system, we used signals from the Kara One database, which contains recordings acquired for seven phonemes and four words. In the feature extraction stage, we used a method based on covariance in the frequency domain, which performed better than the other, time-domain methods. Further, we evaluated the system performance for different input-signal window lengths (0.25 s, 0.5 s and 1 s) to highlight the importance of short-term analysis of imaginary speech signals. Since the final goal is the development of a low-cost system, we studied several convolutional neural network (CNN) architectures and showed that a more complex architecture does not necessarily lead to better results. Our study was conducted on eight different subjects, and the system is intended to be shared across subjects. The best performance reported in this paper is up to 37% accuracy over all 11 phonemes and words, obtained using the cross-covariance computed over the signal spectrum of a 0.25 s window and a CNN containing two convolutional layers with 64 and 128 filters connected to a dense layer with 64 neurons. The final system qualifies as a low-cost system, using limited resources for decision-making and having a running time of 1.8 ms, tested on an AMD Ryzen 7 4800HS CPU.
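The description above pins down the main numerical choices of the best-performing configuration: 0.25 s analysis windows, a cross-covariance feature computed over the signal spectrum, and a CNN with two convolutional layers (64 and 128 filters) feeding a 64-neuron dense layer for the 11 classes. The Python sketch below is only an illustration of how such a pipeline might be assembled; the sampling rate, channel count, spectral-covariance details, kernel sizes, pooling and activations are assumptions and are not specified in this record.

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Assumed acquisition parameters; the actual values come from the Kara One
# recordings used in the paper and may differ.
FS = 1000                  # sampling rate in Hz (assumption)
N_CHANNELS = 62            # number of EEG channels (assumption)
WIN = int(0.25 * FS)       # 0.25 s analysis window, the best-performing length reported
N_CLASSES = 11             # 7 phonemes + 4 words from the Kara One database

def spectral_cross_covariance(window):
    """window: (N_CHANNELS, WIN) EEG segment -> (N_CHANNELS, N_CHANNELS) feature map.
    Cross-covariance between the channels' magnitude spectra (one plausible reading
    of 'covariance in the frequency domain'; the exact definition is an assumption)."""
    spectra = np.abs(np.fft.rfft(window, axis=1))       # per-channel magnitude spectrum
    spectra -= spectra.mean(axis=1, keepdims=True)      # zero-mean each channel's spectrum
    return spectra @ spectra.T / spectra.shape[1]       # channel-by-channel covariance

def build_cnn(input_shape=(N_CHANNELS, N_CHANNELS, 1)):
    """Two convolutional layers with 64 and 128 filters feeding a 64-neuron dense
    layer, matching the layer sizes given in the abstract; kernel sizes, pooling
    and activations are guesses, not taken from the paper."""
    return keras.Sequential([
        keras.Input(shape=input_shape),
        layers.Conv2D(64, (3, 3), padding="same", activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (3, 3), padding="same", activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(N_CLASSES, activation="softmax"),
    ])

if __name__ == "__main__":
    eeg_window = np.random.randn(N_CHANNELS, WIN)        # stand-in for a real EEG segment
    features = spectral_cross_covariance(eeg_window)     # (62, 62) feature matrix
    model = build_cnn()
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    probs = model.predict(features[None, ..., None])     # add batch and channel axes
    print(probs.shape)                                    # (1, 11) class probabilities

One reason this kind of input is attractive for a low-cost system is that a channel-by-channel covariance map has a fixed size regardless of the window length, so the network and its inference cost stay small, which is consistent with the 1.8 ms decision time reported above.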
format Online
Article
Text
id pubmed-9268757
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-92687572022-07-09 CNN Architectures and Feature Extraction Methods for EEG Imaginary Speech Recognition Rusnac, Ana-Luiza Grigore, Ovidiu Sensors (Basel) Article Speech is a complex mechanism that allows us to communicate our needs, desires and thoughts. In some cases of neural dysfunction, this ability is severely affected, which makes everyday activities that require communication a challenge. This paper studies different parameters of an intelligent imaginary speech recognition system in order to obtain the best performance with a method that can be applied in a low-cost system with limited resources. In developing the system, we used signals from the Kara One database, which contains recordings acquired for seven phonemes and four words. In the feature extraction stage, we used a method based on covariance in the frequency domain, which performed better than the other, time-domain methods. Further, we evaluated the system performance for different input-signal window lengths (0.25 s, 0.5 s and 1 s) to highlight the importance of short-term analysis of imaginary speech signals. Since the final goal is the development of a low-cost system, we studied several convolutional neural network (CNN) architectures and showed that a more complex architecture does not necessarily lead to better results. Our study was conducted on eight different subjects, and the system is intended to be shared across subjects. The best performance reported in this paper is up to 37% accuracy over all 11 phonemes and words, obtained using the cross-covariance computed over the signal spectrum of a 0.25 s window and a CNN containing two convolutional layers with 64 and 128 filters connected to a dense layer with 64 neurons. The final system qualifies as a low-cost system, using limited resources for decision-making and having a running time of 1.8 ms, tested on an AMD Ryzen 7 4800HS CPU. MDPI 2022-06-21 /pmc/articles/PMC9268757/ /pubmed/35808173 http://dx.doi.org/10.3390/s22134679 Text en © 2022 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Rusnac, Ana-Luiza
Grigore, Ovidiu
CNN Architectures and Feature Extraction Methods for EEG Imaginary Speech Recognition
title CNN Architectures and Feature Extraction Methods for EEG Imaginary Speech Recognition
title_full CNN Architectures and Feature Extraction Methods for EEG Imaginary Speech Recognition
title_fullStr CNN Architectures and Feature Extraction Methods for EEG Imaginary Speech Recognition
title_full_unstemmed CNN Architectures and Feature Extraction Methods for EEG Imaginary Speech Recognition
title_short CNN Architectures and Feature Extraction Methods for EEG Imaginary Speech Recognition
title_sort cnn architectures and feature extraction methods for eeg imaginary speech recognition
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9268757/
https://www.ncbi.nlm.nih.gov/pubmed/35808173
http://dx.doi.org/10.3390/s22134679
work_keys_str_mv AT rusnacanaluiza cnnarchitecturesandfeatureextractionmethodsforeegimaginaryspeechrecognition
AT grigoreovidiu cnnarchitecturesandfeatureextractionmethodsforeegimaginaryspeechrecognition