Multimodal Human-Exoskeleton Interface for Lower Limb Movement Prediction Through a Dense Co-Attention Symmetric Mechanism
| Field | Value |
|---|---|
| Main Authors | |
| Format | Online Article, Text |
| Language | English |
| Published | Frontiers Media S.A., 2022 |
| Subjects | |
| Online Access | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9082753/ https://www.ncbi.nlm.nih.gov/pubmed/35546887 http://dx.doi.org/10.3389/fnins.2022.796290 |
| Summary | A challenging task for the biological neural signal-based human-exoskeleton interface is to achieve accurate lower limb movement prediction for patients with hemiplegia in rehabilitation training scenarios. Human-exoskeleton interfaces based on a single biological signal modality, such as the electroencephalogram (EEG), are not yet mature enough for movement prediction because of their unreliability. A multimodal human-exoskeleton interface is a promising solution to this problem; such an interface typically combines the EEG signal with the surface electromyography (sEMG) signal. However, their use for lower limb movement prediction is still limited: the connection between sEMG and EEG signals and the deep feature fusion between them are ignored. In this article, a Dense co-attention mechanism-based Multimodal Enhance Fusion Network (DMEFNet) is proposed for predicting the lower limb movement of patients with hemiplegia. The DMEFNet introduces a co-attention structure to extract the common attention between sEMG and EEG signal features. To verify the effectiveness of DMEFNet, an sEMG and EEG data acquisition experiment and an incomplete asynchronous data collection paradigm are designed. The experimental results show that DMEFNet achieves good movement prediction performance in both within-subject and cross-subject situations, reaching accuracies of 82.96% and 88.44%, respectively. |
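
The summary describes a symmetric co-attention step in which each modality (sEMG and EEG) attends over the other's features before fusion. The PyTorch sketch below illustrates that general idea only; the bilinear affinity form, the layer sizes, the pooling, and all names (`CoAttentionFusion`, `dim`) are assumptions for illustration and are not the paper's actual DMEFNet implementation.

```python
# Minimal sketch of a symmetric co-attention fusion block between sEMG and
# EEG feature sequences. Hypothetical architecture choices throughout; this
# is not the DMEFNet described in the article.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CoAttentionFusion(nn.Module):
    """Symmetric co-attention: each modality attends over the other."""

    def __init__(self, dim: int = 64):
        super().__init__()
        # Bilinear affinity between EEG and sEMG features (assumed form).
        self.affinity = nn.Parameter(torch.randn(dim, dim) * dim ** -0.5)
        # Fuse the two attended summaries into one joint feature vector.
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, eeg: torch.Tensor, semg: torch.Tensor) -> torch.Tensor:
        # eeg: (B, T_eeg, dim); semg: (B, T_semg, dim)
        # Affinity score between every EEG step and every sEMG step.
        scores = eeg @ self.affinity @ semg.transpose(1, 2)  # (B, T_eeg, T_semg)

        # Each modality is summarized under the other's attention weights.
        eeg_ctx = F.softmax(scores, dim=1).transpose(1, 2) @ eeg   # (B, T_semg, dim)
        semg_ctx = F.softmax(scores, dim=2) @ semg                 # (B, T_eeg, dim)

        # Pool over time and fuse the two attended representations.
        fused = torch.cat([eeg_ctx.mean(dim=1), semg_ctx.mean(dim=1)], dim=-1)
        return self.fuse(fused)  # (B, dim) joint feature for movement prediction


if __name__ == "__main__":
    block = CoAttentionFusion(dim=64)
    eeg = torch.randn(8, 100, 64)   # e.g. 8 trials, 100 EEG time steps
    semg = torch.randn(8, 50, 64)   # e.g. 50 sEMG time steps
    print(block(eeg, semg).shape)   # torch.Size([8, 64])
```

The key property this sketch shares with a dense co-attention mechanism is symmetry: one affinity matrix is normalized along each axis in turn, so sEMG features weight the EEG sequence and vice versa, before the two attended summaries are concatenated and projected for the downstream movement classifier.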