MSATNet: multi-scale adaptive transformer network for motor imagery classification
Motor imagery brain-computer interface (MI-BCI) can parse user motor imagery to achieve wheelchair control or motion control for smart prostheses. However, problems of poor feature extraction and low cross-subject performance exist in the model for motor imagery classification tasks. To address thes...
Main Authors: | Hu, Lingyan, Hong, Weijie, Liu, Lingyu |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A. 2023 |
Subjects: | Neuroscience |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10303110/ https://www.ncbi.nlm.nih.gov/pubmed/37389361 http://dx.doi.org/10.3389/fnins.2023.1173778 |
_version_ | 1785065201124507648 |
---|---|
author | Hu, Lingyan Hong, Weijie Liu, Lingyu |
author_facet | Hu, Lingyan Hong, Weijie Liu, Lingyu |
author_sort | Hu, Lingyan |
collection | PubMed |
description | Motor imagery brain-computer interface (MI-BCI) can parse user motor imagery to achieve wheelchair control or motion control for smart prostheses. However, problems of poor feature extraction and low cross-subject performance exist in the model for motor imagery classification tasks. To address these problems, we propose a multi-scale adaptive transformer network (MSATNet) for motor imagery classification. Therein, we design a multi-scale feature extraction (MSFE) module to extract multi-band highly-discriminative features. Through the adaptive temporal transformer (ATT) module, the temporal decoder and multi-head attention unit are used to adaptively extract temporal dependencies. Efficient transfer learning is achieved by fine-tuning target subject data through the subject adapter (SA) module. Within-subject and cross-subject experiments are performed to evaluate the classification performance of the model on the BCI Competition IV 2a and 2b datasets. The MSATNet outperforms benchmark models in classification performance, reaching 81.75 and 89.34% accuracies for the within-subject experiments and 81.33 and 86.23% accuracies for the cross-subject experiments. The experimental results demonstrate that the proposed method can help build a more accurate MI-BCI system. |
format | Online Article Text |
id | pubmed-10303110 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-10303110 2023-06-29 MSATNet: multi-scale adaptive transformer network for motor imagery classification Hu, Lingyan Hong, Weijie Liu, Lingyu Front Neurosci Neuroscience Motor imagery brain-computer interface (MI-BCI) can parse user motor imagery to achieve wheelchair control or motion control for smart prostheses. However, problems of poor feature extraction and low cross-subject performance exist in the model for motor imagery classification tasks. To address these problems, we propose a multi-scale adaptive transformer network (MSATNet) for motor imagery classification. Therein, we design a multi-scale feature extraction (MSFE) module to extract multi-band highly-discriminative features. Through the adaptive temporal transformer (ATT) module, the temporal decoder and multi-head attention unit are used to adaptively extract temporal dependencies. Efficient transfer learning is achieved by fine-tuning target subject data through the subject adapter (SA) module. Within-subject and cross-subject experiments are performed to evaluate the classification performance of the model on the BCI Competition IV 2a and 2b datasets. The MSATNet outperforms benchmark models in classification performance, reaching 81.75 and 89.34% accuracies for the within-subject experiments and 81.33 and 86.23% accuracies for the cross-subject experiments. The experimental results demonstrate that the proposed method can help build a more accurate MI-BCI system. Frontiers Media S.A. 2023-06-14 /pmc/articles/PMC10303110/ /pubmed/37389361 http://dx.doi.org/10.3389/fnins.2023.1173778 Text en Copyright © 2023 Hu and Hong. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Neuroscience Hu, Lingyan Hong, Weijie Liu, Lingyu MSATNet: multi-scale adaptive transformer network for motor imagery classification |
title | MSATNet: multi-scale adaptive transformer network for motor imagery classification |
title_full | MSATNet: multi-scale adaptive transformer network for motor imagery classification |
title_fullStr | MSATNet: multi-scale adaptive transformer network for motor imagery classification |
title_full_unstemmed | MSATNet: multi-scale adaptive transformer network for motor imagery classification |
title_short | MSATNet: multi-scale adaptive transformer network for motor imagery classification |
title_sort | msatnet: multi-scale adaptive transformer network for motor imagery classification |
topic | Neuroscience |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10303110/ https://www.ncbi.nlm.nih.gov/pubmed/37389361 http://dx.doi.org/10.3389/fnins.2023.1173778 |
work_keys_str_mv | AT hulingyan msatnetmultiscaleadaptivetransformernetworkformotorimageryclassification AT hongweijie msatnetmultiscaleadaptivetransformernetworkformotorimageryclassification AT liulingyu msatnetmultiscaleadaptivetransformernetworkformotorimageryclassification |
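
The abstract above describes a three-stage architecture: multi-scale feature extraction (MSFE), an adaptive temporal transformer (ATT) with multi-head attention, and a subject adapter (SA) used for cross-subject fine-tuning. The following is a minimal PyTorch sketch of that kind of pipeline, based only on the abstract. The module names come from the paper, but every internal detail (kernel sizes, filter counts, the use of a standard TransformerEncoder in place of the paper's temporal decoder, the bottleneck form of the subject adapter, and the input shape) is an assumption for illustration, not the authors' implementation.

```python
# Hedged sketch of an MSATNet-style pipeline, based only on the abstract above.
# Module names (MSFE, ATT, subject adapter) follow the abstract; kernel sizes,
# channel counts, and the use of nn.TransformerEncoder are assumptions.
import torch
import torch.nn as nn


class MSFE(nn.Module):
    """Multi-scale feature extraction: parallel temporal convolutions with
    different kernel lengths, loosely mimicking multi-band filtering."""

    def __init__(self, n_channels: int, n_filters: int = 16,
                 kernel_sizes=(15, 31, 63)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(1, n_filters, (1, k), padding=(0, k // 2)),
                nn.Conv2d(n_filters, n_filters, (n_channels, 1), groups=n_filters),
                nn.BatchNorm2d(n_filters),
                nn.ELU(),
                nn.AvgPool2d((1, 8)),
            )
            for k in kernel_sizes
        ])

    def forward(self, x):                      # x: (batch, 1, channels, time)
        feats = [b(x) for b in self.branches]  # each: (batch, F, 1, T')
        return torch.cat(feats, dim=1).squeeze(2).transpose(1, 2)  # (batch, T', 3F)


class ATT(nn.Module):
    """Adaptive temporal transformer: multi-head self-attention over the
    time dimension of the multi-scale features."""

    def __init__(self, d_model: int, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=2 * d_model,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, x):                      # x: (batch, T', d_model)
        return self.encoder(x).mean(dim=1)     # temporal average pooling


class MSATNetSketch(nn.Module):
    def __init__(self, n_channels: int = 22, n_classes: int = 4):
        super().__init__()
        d_model = 3 * 16                       # 3 branches x 16 filters
        self.msfe = MSFE(n_channels)
        self.att = ATT(d_model)
        # Subject adapter: a small bottleneck kept trainable while the rest of
        # the network is frozen when fine-tuning on a new subject.
        self.subject_adapter = nn.Sequential(
            nn.Linear(d_model, d_model // 2), nn.ELU(),
            nn.Linear(d_model // 2, d_model),
        )
        self.classifier = nn.Linear(d_model, n_classes)

    def forward(self, x):
        h = self.att(self.msfe(x))
        return self.classifier(h + self.subject_adapter(h))


if __name__ == "__main__":
    # BCI Competition IV 2a-like shape: 22 EEG channels, 1,000 time samples.
    eeg = torch.randn(8, 1, 22, 1000)
    print(MSATNetSketch()(eeg).shape)          # torch.Size([8, 4])
```

In a cross-subject transfer setting, one plausible reading of the SA module's role is to freeze `msfe` and `att` after source-subject training and fine-tune only `subject_adapter` and `classifier` on the target subject's data; the paper's actual fine-tuning scheme may differ.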