Enhanced Recognition of Amputated Wrist and Hand Movements by Deep Learning Method Using Multimodal Fusion of Electromyography and Electroencephalography

Bibliographic Details
Main Authors: Kim, Sehyeon, Shin, Dae Youp, Kim, Taekyung, Lee, Sangsook, Hyun, Jung Keun, Park, Sung-Min
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8778369/
https://www.ncbi.nlm.nih.gov/pubmed/35062641
http://dx.doi.org/10.3390/s22020680
_version_ 1784637304605769728
author Kim, Sehyeon
Shin, Dae Youp
Kim, Taekyung
Lee, Sangsook
Hyun, Jung Keun
Park, Sung-Min
author_facet Kim, Sehyeon
Shin, Dae Youp
Kim, Taekyung
Lee, Sangsook
Hyun, Jung Keun
Park, Sung-Min
author_sort Kim, Sehyeon
collection PubMed
description Motion classification can be performed using biometric signals recorded by electroencephalography (EEG) or electromyography (EMG) with noninvasive surface electrodes for the control of prosthetic arms. However, current single-modal EEG- and EMG-based motion classification techniques are limited owing to the complexity and noise of EEG signals, and to the electrode-placement bias and low resolution of EMG signals. We herein propose a novel system of two-dimensional (2D) input-image feature multimodal fusion based on an EEG/EMG-signal transfer learning (TL) paradigm for the detection of hand movements in transforearm amputees. A feature extraction method in the frequency domain of the EEG and EMG signals was adopted to establish a 2D image. The input images were used to train a model based on a convolutional neural network (CNN) algorithm and TL, which requires 2D images as input data. For data acquisition, five transforearm amputees and nine healthy controls were recruited. Compared with the conventional single-modal EEG-signal-trained models, the proposed multimodal fusion method significantly improved classification accuracy in both the control and patient groups. When the two signals were combined and used in the pretrained model for EEG TL, the classification accuracy increased by 4.18–4.35% in the control group and by 2.51–3.00% in the patient group.
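The pipeline described above (frequency-domain feature extraction of each signal into a 2D time-frequency map, then fusion of the EEG and EMG maps into a single image for a CNN) can be sketched roughly as follows. The windowing parameters, number of frequency bins, and the vertical-stacking fusion scheme here are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

def signal_to_feature_image(signal, win=256, hop=128, n_bins=32):
    """Convert a 1-D biosignal (one EEG or EMG channel) into a 2-D
    time-frequency feature map via a short-time FFT. This is a generic
    stand-in for the paper's frequency-domain feature extraction."""
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        seg = signal[start:start + win] * np.hanning(win)   # windowed segment
        power = np.abs(np.fft.rfft(seg))[:n_bins] ** 2      # low-frequency power bins
        frames.append(np.log1p(power))                      # log-compress dynamic range
    return np.stack(frames, axis=1)                         # shape: (n_bins, n_frames)

def fuse_modalities(eeg_img, emg_img):
    """Early fusion: stack the EEG and EMG feature maps vertically into one
    2-D input image for a CNN (one plausible fusion scheme)."""
    return np.concatenate([eeg_img, emg_img], axis=0)

# Toy example: 1 s of synthetic single-channel data at 1 kHz.
rng = np.random.default_rng(0)
t = np.arange(1000) / 1000.0
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(1000)  # 10 Hz rhythm
emg = 0.5 * rng.standard_normal(1000)                               # broadband activity

img = fuse_modalities(signal_to_feature_image(eeg), signal_to_feature_image(emg))
print(img.shape)  # (64, 6): 32 EEG bins + 32 EMG bins, over 6 time frames
```

In a multi-channel setting, per-channel maps would typically be stacked the same way (or placed in separate image channels) before being fed to the pretrained CNN for transfer learning.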
format Online
Article
Text
id pubmed-8778369
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-8778369 2022-01-22 Kim, Sehyeon; Shin, Dae Youp; Kim, Taekyung; Lee, Sangsook; Hyun, Jung Keun; Park, Sung-Min. Sensors (Basel), Article. MDPI 2022-01-16 /pmc/articles/PMC8778369/ /pubmed/35062641 http://dx.doi.org/10.3390/s22020680 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland.
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title Enhanced Recognition of Amputated Wrist and Hand Movements by Deep Learning Method Using Multimodal Fusion of Electromyography and Electroencephalography
topic Article