
Multi-Stream Convolutional Neural Network-Based Wearable, Flexible Bionic Gesture Surface Muscle Feature Extraction and Recognition

Bibliographic Details
Main Authors: Liu, Wansu; Lu, Biao
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2022
Subjects: Bioengineering and Biotechnology
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8927293/
https://www.ncbi.nlm.nih.gov/pubmed/35310001
http://dx.doi.org/10.3389/fbioe.2022.833793
collection PubMed
description Surface electromyographic (sEMG) signals are weak physiological electrical signals that readily couple with external noise, which makes signal acquisition and processing difficult. Using sEMG signals to analyze human motion intention mainly involves data preprocessing, feature extraction, and model classification. Feature extraction is a critical step, but it typically relies on many manually designed features that require specialized domain knowledge, so experimenters spend considerable time and effort on it. To address this problem, deep learning methods that extract features automatically are applied to the sEMG-based gesture recognition problem, drawing on the success of deep learning in image classification. In this paper, sEMG is captured with a wearable, flexible bionic device that is simple to operate and safe to use. A multi-stream convolutional neural network algorithm is proposed to enhance the ability of sEMG to characterize hand actions in gesture recognition. The algorithm virtually augments the signal channels by reconstructing the sample structure of the sEMG, providing richer input information for gesture recognition. Methods for noise processing, active segment detection, and feature extraction are investigated, and a gesture recognition method based on combining multichannel sEMG signals with inertial signals is proposed. Suitable filters are designed for the common noise sources in the signal, and an improved threshold-based moving average method is used to reduce the segmentation error rate caused by the short resting intervals in continuous gesture signals.
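The threshold-based moving-average detection described above can be illustrated with a minimal NumPy sketch. The window length, threshold, gap size, and synthetic signal below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def moving_average_segments(emg, window=64, threshold=0.1, min_gap=32):
    """Detect active segments in a single-channel sEMG trace.

    A moving average of the rectified signal is compared against a
    threshold; above-threshold runs separated by fewer than `min_gap`
    samples are merged, which reduces segmentation errors when the
    resting interval between consecutive gestures is short.
    """
    envelope = np.convolve(np.abs(emg), np.ones(window) / window, mode="same")
    idx = np.flatnonzero(envelope > threshold)
    if idx.size == 0:
        return []
    segments, start, prev = [], idx[0], idx[0]
    for i in idx[1:]:
        if i - prev > min_gap:          # resting gap long enough to split
            segments.append((start, prev))
            start = i
        prev = i
    segments.append((start, prev))
    return segments

# Synthetic trace: rest, burst, short rest, burst -> merged into one segment
rng = np.random.default_rng(0)
sig = np.concatenate([
    0.01 * rng.standard_normal(200),   # rest
    0.5 * rng.standard_normal(150),    # gesture burst
    0.01 * rng.standard_normal(20),    # short rest (shorter than min_gap)
    0.5 * rng.standard_normal(150),    # gesture burst
    0.01 * rng.standard_normal(200),   # rest
])
print(moving_average_segments(sig))
```

Because the 20-sample rest between the two bursts is shorter than `min_gap`, the detector reports a single active segment rather than splitting the gesture in two.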
In this paper, three machine learning algorithms, K-nearest neighbor, linear discriminant analysis, and the multi-stream convolutional neural network, are used in hand action classification experiments, and the effectiveness of the multi-stream convolutional neural network is demonstrated by comparing their results. The final classification accuracy over 10 gestures reaches 93.69%. A separability analysis showed significant differences between the signals of the two cognitive-behavioral tasks when the optimal electrode combination was used, and a cross-subject analysis of the test set subjects showed that the average correct classification rate with the pervasive electrode combination reaches 93.18%.
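The abstract says the network "virtually augments the signal channels by reconstructing the sample structure of the sEMG" into multiple input streams. The paper's exact reconstruction is not given here; one plausible sketch, assuming a windowed multi-channel recording sliced into overlapping sub-windows that feed separate network branches, is:

```python
import numpy as np

def build_streams(window, n_streams=4, overlap=0.5):
    """Reshape one multi-channel sEMG window into several overlapping
    sub-windows ('streams'), each intended as the input to a separate
    convolutional branch.  This slicing scheme is an illustrative
    assumption, not the paper's exact reconstruction.
    """
    channels, length = window.shape
    # Sub-window length such that n_streams windows with the given
    # fractional overlap exactly tile the recording.
    sub_len = int(length / (1 + (n_streams - 1) * (1 - overlap)))
    step = int(sub_len * (1 - overlap))
    return np.stack([window[:, i * step: i * step + sub_len]
                     for i in range(n_streams)])

win = np.random.default_rng(1).standard_normal((8, 400))  # 8 electrodes, 400 samples
streams = build_streams(win)
print(streams.shape)  # (n_streams, channels, sub_len)
```

Each of the four streams sees the same electrodes over a different, half-overlapping time span, so the effective number of input "channels" the network receives is multiplied without recording any extra electrodes.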
id pubmed-8927293
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
spelling pubmed-8927293 2022-03-18
Journal: Front Bioeng Biotechnol (Bioengineering and Biotechnology)
Published online: Frontiers Media S.A., 2022-03-03; /pmc/articles/PMC8927293/; /pubmed/35310001; http://dx.doi.org/10.3389/fbioe.2022.833793
Text (English). Copyright © 2022 Liu and Lu. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
topic Bioengineering and Biotechnology