
A novel strategy for driving car brain–computer interfaces: Discrimination of EEG-based visual-motor imagery

Bibliographic Details
Main Authors: Zhou, Zhouzhou, Gong, Anmin, Qian, Qian, Su, Lei, Zhao, Lei, Fu, Yunfa
Format: Online Article Text
Language: English
Published: De Gruyter 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8633586/
https://www.ncbi.nlm.nih.gov/pubmed/34900346
http://dx.doi.org/10.1515/tnsci-2020-0199
_version_ 1784607961416466432
author Zhou, Zhouzhou
Gong, Anmin
Qian, Qian
Su, Lei
Zhao, Lei
Fu, Yunfa
author_facet Zhou, Zhouzhou
Gong, Anmin
Qian, Qian
Su, Lei
Zhao, Lei
Fu, Yunfa
author_sort Zhou, Zhouzhou
collection PubMed
description A brain–computer interface (BCI) based on kinesthetic motor imagery has the potential to become a groundbreaking technology in clinical settings. However, few studies have focused on a visual-motor imagery (VMI) paradigm for driving a BCI, and VMI-BCI feature extraction methods are yet to be explored in depth. In this study, a novel VMI-BCI paradigm is proposed to execute four VMI tasks: imagining a car moving forward, reversing, turning left, and turning right. These mental strategies can naturally control a car or robot to move forward, backward, left, and right. Electroencephalogram (EEG) data were collected from 25 subjects. After baseline correction of the raw EEG signal, the alpha band was extracted using bandpass filtering, and artifacts were removed by independent component analysis. The average instantaneous energy of the VMI-induced EEG (VMI-EEG) was then calculated using the Hilbert–Huang transform (HHT). Autoregressive model coefficients were extracted to construct a 12-dimensional feature vector, which was fed to a support vector machine suitable for small-sample classification. Six two-class tasks were classified: visual imagination of driving the car forward versus reversing, forward versus turning left, forward versus turning right, reversing versus turning left, reversing versus turning right, and turning left versus turning right. The results showed that the average classification accuracy across these two-class tasks was 62.68 ± 5.08%, and the highest classification accuracy was 73.66 ± 6.80%. The study showed that EEG features from the O1 and O2 electrodes in the occipital region, extracted by HHT, were separable for these VMI tasks.
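The pipeline in the abstract (alpha-band filtering, instantaneous energy, AR-model features, SVM) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the sampling rate, alpha band edges (8–13 Hz), and AR order are assumptions, and a plain Hilbert envelope stands in for the full HHT (which would apply empirical mode decomposition before the Hilbert step). The order-6 AR model on two occipital channels (O1, O2) is one way to reach the stated 12 feature dimensions.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.signal import butter, filtfilt, hilbert
from sklearn.svm import SVC

FS = 250.0  # sampling rate in Hz (assumed; not stated in the abstract)

def alpha_band(x, fs=FS, lo=8.0, hi=13.0):
    """Zero-phase Butterworth bandpass restricted to the alpha band."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def instantaneous_energy(x):
    """Squared envelope of the analytic signal.

    The paper computes average instantaneous energy via the full HHT
    (EMD + Hilbert spectrum); a plain Hilbert envelope is a simplified
    stand-in here.
    """
    return np.abs(hilbert(x)) ** 2

def ar_coeffs(x, order=6):
    """AR(order) coefficients estimated from the Yule-Walker equations."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    r = np.correlate(x, x, mode="full")[n - 1 : n + order] / n  # lags 0..order
    return np.linalg.solve(toeplitz(r[:-1]), r[1:])

def vmi_features(o1, o2, fs=FS):
    """12-dim feature vector: AR(6) coefficients of the alpha-band
    instantaneous energy for each occipital channel (O1, O2)."""
    return np.concatenate(
        [ar_coeffs(instantaneous_energy(alpha_band(ch, fs)), order=6)
         for ch in (o1, o2)]
    )

# Two-class decoding (e.g. forward vs reversing), given per-trial (O1, O2) arrays:
# X = np.stack([vmi_features(o1, o2) for o1, o2 in trials])
# clf = SVC(kernel="rbf").fit(X, y)
```

An RBF-kernel SVM is a common default for small-sample EEG classification; the abstract does not specify the kernel, so that choice is also an assumption.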
format Online
Article
Text
id pubmed-8633586
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher De Gruyter
record_format MEDLINE/PubMed
spelling pubmed-86335862021-12-09 A novel strategy for driving car brain–computer interfaces: Discrimination of EEG-based visual-motor imagery Zhou, Zhouzhou Gong, Anmin Qian, Qian Su, Lei Zhao, Lei Fu, Yunfa Transl Neurosci Research Article A brain–computer interface (BCI) based on kinesthetic motor imagery has the potential to become a groundbreaking technology in clinical settings. However, few studies have focused on a visual-motor imagery (VMI) paradigm for driving a BCI, and VMI-BCI feature extraction methods are yet to be explored in depth. In this study, a novel VMI-BCI paradigm is proposed to execute four VMI tasks: imagining a car moving forward, reversing, turning left, and turning right. These mental strategies can naturally control a car or robot to move forward, backward, left, and right. Electroencephalogram (EEG) data were collected from 25 subjects. After baseline correction of the raw EEG signal, the alpha band was extracted using bandpass filtering, and artifacts were removed by independent component analysis. The average instantaneous energy of the VMI-induced EEG (VMI-EEG) was then calculated using the Hilbert–Huang transform (HHT). Autoregressive model coefficients were extracted to construct a 12-dimensional feature vector, which was fed to a support vector machine suitable for small-sample classification. Six two-class tasks were classified: visual imagination of driving the car forward versus reversing, forward versus turning left, forward versus turning right, reversing versus turning left, reversing versus turning right, and turning left versus turning right. The results showed that the average classification accuracy across these two-class tasks was 62.68 ± 5.08%, and the highest classification accuracy was 73.66 ± 6.80%. The study showed that EEG features from the O1 and O2 electrodes in the occipital region, extracted by HHT, were separable for these VMI tasks.
De Gruyter 2021-11-30 /pmc/articles/PMC8633586/ /pubmed/34900346 http://dx.doi.org/10.1515/tnsci-2020-0199 Text en © 2021 Zhou Zhouzhou et al., published by De Gruyter https://creativecommons.org/licenses/by/4.0/ This work is licensed under the Creative Commons Attribution 4.0 International License.
spellingShingle Research Article
Zhou, Zhouzhou
Gong, Anmin
Qian, Qian
Su, Lei
Zhao, Lei
Fu, Yunfa
A novel strategy for driving car brain–computer interfaces: Discrimination of EEG-based visual-motor imagery
title A novel strategy for driving car brain–computer interfaces: Discrimination of EEG-based visual-motor imagery
title_full A novel strategy for driving car brain–computer interfaces: Discrimination of EEG-based visual-motor imagery
title_fullStr A novel strategy for driving car brain–computer interfaces: Discrimination of EEG-based visual-motor imagery
title_full_unstemmed A novel strategy for driving car brain–computer interfaces: Discrimination of EEG-based visual-motor imagery
title_short A novel strategy for driving car brain–computer interfaces: Discrimination of EEG-based visual-motor imagery
title_sort novel strategy for driving car brain–computer interfaces: discrimination of eeg-based visual-motor imagery
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8633586/
https://www.ncbi.nlm.nih.gov/pubmed/34900346
http://dx.doi.org/10.1515/tnsci-2020-0199
work_keys_str_mv AT zhouzhouzhou anovelstrategyfordrivingcarbraincomputerinterfacesdiscriminationofeegbasedvisualmotorimagery
AT gonganmin anovelstrategyfordrivingcarbraincomputerinterfacesdiscriminationofeegbasedvisualmotorimagery
AT qianqian anovelstrategyfordrivingcarbraincomputerinterfacesdiscriminationofeegbasedvisualmotorimagery
AT sulei anovelstrategyfordrivingcarbraincomputerinterfacesdiscriminationofeegbasedvisualmotorimagery
AT zhaolei anovelstrategyfordrivingcarbraincomputerinterfacesdiscriminationofeegbasedvisualmotorimagery
AT fuyunfa anovelstrategyfordrivingcarbraincomputerinterfacesdiscriminationofeegbasedvisualmotorimagery
AT zhouzhouzhou novelstrategyfordrivingcarbraincomputerinterfacesdiscriminationofeegbasedvisualmotorimagery
AT gonganmin novelstrategyfordrivingcarbraincomputerinterfacesdiscriminationofeegbasedvisualmotorimagery
AT qianqian novelstrategyfordrivingcarbraincomputerinterfacesdiscriminationofeegbasedvisualmotorimagery
AT sulei novelstrategyfordrivingcarbraincomputerinterfacesdiscriminationofeegbasedvisualmotorimagery
AT zhaolei novelstrategyfordrivingcarbraincomputerinterfacesdiscriminationofeegbasedvisualmotorimagery
AT fuyunfa novelstrategyfordrivingcarbraincomputerinterfacesdiscriminationofeegbasedvisualmotorimagery