A novel brain-computer interface based on audio-assisted visual evoked EEG and spatial-temporal attention CNN


Bibliographic Details
Main Authors: Chen, Guijun, Zhang, Xueying, Zhang, Jing, Li, Fenglian, Duan, Shufei
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9561921/
https://www.ncbi.nlm.nih.gov/pubmed/36247357
http://dx.doi.org/10.3389/fnbot.2022.995552
_version_ 1784808054985850880
author Chen, Guijun
Zhang, Xueying
Zhang, Jing
Li, Fenglian
Duan, Shufei
author_facet Chen, Guijun
Zhang, Xueying
Zhang, Jing
Li, Fenglian
Duan, Shufei
author_sort Chen, Guijun
collection PubMed
description OBJECTIVE: Brain-computer interfaces (BCIs) can translate intentions directly into instructions and greatly improve the interaction experience for disabled people or for specific interactive applications. To improve the efficiency of BCIs, the objective of this study was to explore the feasibility of an audio-assisted visual BCI speller and a deep learning-based single-trial event-related potential (ERP) decoding strategy. APPROACH: In this study, a two-stage BCI speller combining the motion-onset visual evoked potential (mVEP) and semantically congruent audio-evoked ERPs was designed to output the target characters. In the first stage, different groups of characters were presented simultaneously at different locations of the visual field, and the stimuli were coded to the mVEP based on a new space-division multiple-access scheme. The target character could then be output based on the audio-assisted mVEP in the second stage. Meanwhile, a spatial-temporal attention-based convolutional neural network (STA-CNN) was proposed to recognize the single-trial ERP components. The CNN can learn 2-dimensional features including the spatial information of different activated channels and the time dependence among ERP components. In addition, the STA mechanism can enhance the discriminative event-related features by adaptively learning probability weights. MAIN RESULTS: The performance of the proposed two-stage audio-assisted visual BCI paradigm and the STA-CNN model was evaluated using electroencephalogram (EEG) recordings from 10 subjects. The average classification accuracy of the proposed STA-CNN reached 59.6% and 77.7% for the first and second stages, respectively, which was significantly higher than that of the comparison methods (p < 0.05). SIGNIFICANCE: The proposed two-stage audio-assisted visual paradigm shows great potential for use in BCI spellers. Moreover, analysis of the attention weights over time sequences and spatial topographies demonstrated that STA-CNN can effectively extract interpretable spatiotemporal EEG features.
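The abstract's core idea, attention as adaptively learned probability weights over EEG channels (spatial) and time points (temporal), can be illustrated with a minimal sketch. This is not the authors' STA-CNN: in the paper the weights are learned by backpropagation inside a CNN, whereas here, purely for illustration, they are derived from mean absolute activation; all function names and the toy data below are invented for this example.

```python
import math

def softmax(scores):
    # Numerically stable softmax: turns raw scores into probability weights.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def spatial_temporal_attention(trial):
    """Reweight an EEG trial (list of channels, each a list of time samples).

    STA-CNN learns its attention weights during training; this toy version
    scores each channel and each time point by mean absolute activation,
    then converts the scores to probability weights with a softmax.
    """
    n_ch, n_t = len(trial), len(trial[0])
    ch_scores = [sum(abs(v) for v in row) / n_t for row in trial]
    t_scores = [sum(abs(trial[c][t]) for c in range(n_ch)) / n_ch
                for t in range(n_t)]
    w_ch = softmax(ch_scores)   # spatial attention weights (sum to 1)
    w_t = softmax(t_scores)     # temporal attention weights (sum to 1)
    weighted = [[trial[c][t] * w_ch[c] * w_t[t] for t in range(n_t)]
                for c in range(n_ch)]
    return weighted, w_ch, w_t

# Toy 3-channel, 4-sample "trial": a burst around the 3rd sample, strongest
# on the 1st channel, should attract the largest weights.
trial = [[0.10, 0.50, 2.00, 0.30],
         [0.00, 0.20, 1.50, 0.10],
         [0.05, 0.10, 0.40, 0.00]]
weighted, w_ch, w_t = spatial_temporal_attention(trial)
```

The softmax makes the weights a probability distribution, so the reweighting amplifies discriminative channels and latencies relative to the rest, which is the intuition the abstract describes for enhancing event-related features.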
format Online
Article
Text
id pubmed-9561921
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-95619212022-10-15 A novel brain-computer interface based on audio-assisted visual evoked EEG and spatial-temporal attention CNN Chen, Guijun Zhang, Xueying Zhang, Jing Li, Fenglian Duan, Shufei Front Neurorobot Neuroscience OBJECTIVE: Brain-computer interfaces (BCIs) can translate intentions directly into instructions and greatly improve the interaction experience for disabled people or for specific interactive applications. To improve the efficiency of BCIs, the objective of this study was to explore the feasibility of an audio-assisted visual BCI speller and a deep learning-based single-trial event-related potential (ERP) decoding strategy. APPROACH: In this study, a two-stage BCI speller combining the motion-onset visual evoked potential (mVEP) and semantically congruent audio-evoked ERPs was designed to output the target characters. In the first stage, different groups of characters were presented simultaneously at different locations of the visual field, and the stimuli were coded to the mVEP based on a new space-division multiple-access scheme. The target character could then be output based on the audio-assisted mVEP in the second stage. Meanwhile, a spatial-temporal attention-based convolutional neural network (STA-CNN) was proposed to recognize the single-trial ERP components. The CNN can learn 2-dimensional features including the spatial information of different activated channels and the time dependence among ERP components. In addition, the STA mechanism can enhance the discriminative event-related features by adaptively learning probability weights. MAIN RESULTS: The performance of the proposed two-stage audio-assisted visual BCI paradigm and the STA-CNN model was evaluated using electroencephalogram (EEG) recordings from 10 subjects. The average classification accuracy of the proposed STA-CNN reached 59.6% and 77.7% for the first and second stages, respectively, which was significantly higher than that of the comparison methods (p < 0.05).
SIGNIFICANCE: The proposed two-stage audio-assisted visual paradigm shows great potential for use in BCI spellers. Moreover, analysis of the attention weights over time sequences and spatial topographies demonstrated that STA-CNN can effectively extract interpretable spatiotemporal EEG features. Frontiers Media S.A. 2022-09-30 /pmc/articles/PMC9561921/ /pubmed/36247357 http://dx.doi.org/10.3389/fnbot.2022.995552 Text en Copyright © 2022 Chen, Zhang, Zhang, Li and Duan. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Neuroscience
Chen, Guijun
Zhang, Xueying
Zhang, Jing
Li, Fenglian
Duan, Shufei
A novel brain-computer interface based on audio-assisted visual evoked EEG and spatial-temporal attention CNN
title A novel brain-computer interface based on audio-assisted visual evoked EEG and spatial-temporal attention CNN
title_full A novel brain-computer interface based on audio-assisted visual evoked EEG and spatial-temporal attention CNN
title_fullStr A novel brain-computer interface based on audio-assisted visual evoked EEG and spatial-temporal attention CNN
title_full_unstemmed A novel brain-computer interface based on audio-assisted visual evoked EEG and spatial-temporal attention CNN
title_short A novel brain-computer interface based on audio-assisted visual evoked EEG and spatial-temporal attention CNN
title_sort novel brain-computer interface based on audio-assisted visual evoked eeg and spatial-temporal attention cnn
topic Neuroscience
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9561921/
https://www.ncbi.nlm.nih.gov/pubmed/36247357
http://dx.doi.org/10.3389/fnbot.2022.995552
work_keys_str_mv AT chenguijun anovelbraincomputerinterfacebasedonaudioassistedvisualevokedeegandspatialtemporalattentioncnn
AT zhangxueying anovelbraincomputerinterfacebasedonaudioassistedvisualevokedeegandspatialtemporalattentioncnn
AT zhangjing anovelbraincomputerinterfacebasedonaudioassistedvisualevokedeegandspatialtemporalattentioncnn
AT lifenglian anovelbraincomputerinterfacebasedonaudioassistedvisualevokedeegandspatialtemporalattentioncnn
AT duanshufei anovelbraincomputerinterfacebasedonaudioassistedvisualevokedeegandspatialtemporalattentioncnn
AT chenguijun novelbraincomputerinterfacebasedonaudioassistedvisualevokedeegandspatialtemporalattentioncnn
AT zhangxueying novelbraincomputerinterfacebasedonaudioassistedvisualevokedeegandspatialtemporalattentioncnn
AT zhangjing novelbraincomputerinterfacebasedonaudioassistedvisualevokedeegandspatialtemporalattentioncnn
AT lifenglian novelbraincomputerinterfacebasedonaudioassistedvisualevokedeegandspatialtemporalattentioncnn
AT duanshufei novelbraincomputerinterfacebasedonaudioassistedvisualevokedeegandspatialtemporalattentioncnn