Identification of Visual Imagery by Electroencephalography Based on Empirical Mode Decomposition and an Autoregressive Model
Main Authors: | Fu, Yunfa; Li, Zhaoyang; Gong, Anmin; Qian, Qian; Su, Lei; Zhao, Lei |
Format: | Online Article Text |
Language: | English |
Published: | Hindawi, 2022 |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8818430/ https://www.ncbi.nlm.nih.gov/pubmed/35140763 http://dx.doi.org/10.1155/2022/1038901 |
_version_ | 1784645827572006912 |
author | Fu, Yunfa Li, Zhaoyang Gong, Anmin Qian, Qian Su, Lei Zhao, Lei |
author_facet | Fu, Yunfa Li, Zhaoyang Gong, Anmin Qian, Qian Su, Lei Zhao, Lei |
author_sort | Fu, Yunfa |
collection | PubMed |
description | The traditional imagery task for brain–computer interfaces (BCIs) consists of motor imagery (MI), in which subjects are instructed to imagine moving certain parts of their body. This kind of imagery task is difficult for subjects. In this study, we used a less studied yet more easily performed type of mental imagery—visual imagery (VI)—in which subjects are instructed to visualize a picture in their brain to implement a BCI. In this study, 18 subjects were recruited and instructed to observe one of two visual-cued pictures (one was static, while the other was moving) and then imagine the cued picture in each trial. Simultaneously, electroencephalography (EEG) signals were collected. The Hilbert–Huang Transform (HHT), autoregressive (AR) models, and a combination of empirical mode decomposition (EMD) and AR were each used to extract features. A support vector machine (SVM) was used to classify the two kinds of VI tasks. The average, highest, and lowest classification accuracies of HHT were 68.14 ± 3.06%, 78.33%, and 53.3%, respectively. The values of the AR model were 56.29 ± 2.73%, 71.67%, and 30%, respectively. The values obtained by the combination of EMD and the AR model were 78.40 ± 2.07%, 87%, and 48.33%, respectively. The results indicate that multiple VI tasks were separable based on EEG and that the combination of EMD and an AR model for VI feature extraction was better than an HHT or AR model alone. Our work may provide ideas for the construction of a new online VI-BCI. |
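The core of the feature-extraction step described in the abstract — fitting an autoregressive model to a signal component and using the estimated coefficients as the feature vector for the SVM — can be sketched as below. This is an illustrative reconstruction, not the authors' code: in the paper's actual pipeline, EMD would first decompose each EEG channel into intrinsic mode functions (for example via a package such as PyEMD), and an AR model would be fitted to each IMF; here a synthetic AR(2) process stands in for one such component.

```python
import numpy as np

def ar_coefficients(x, p):
    """Estimate AR(p) coefficients of a 1-D signal by least squares.
    Model: x[t] ~= a[0]*x[t-1] + ... + a[p-1]*x[t-p]."""
    n = len(x)
    # Design matrix: column k holds the signal delayed by k+1 samples.
    X = np.column_stack([x[p - k - 1 : n - k - 1] for k in range(p)])
    y = x[p:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

# Synthetic AR(2) process standing in for one EEG component (IMF):
# x[t] = 0.6*x[t-1] - 0.2*x[t-2] + noise.
rng = np.random.default_rng(0)
x = np.zeros(2000)
e = rng.standard_normal(2000)
for t in range(2, 2000):
    x[t] = 0.6 * x[t - 1] - 0.2 * x[t - 2] + e[t]

# The estimated coefficients form the feature vector; in the EMD+AR
# pipeline, coefficients from all IMFs of all channels would be
# concatenated and passed to an SVM classifier.
features = ar_coefficients(x, p=2)
```

With enough samples, the least-squares estimates recover the generating coefficients (roughly 0.6 and -0.2 here), which is what makes AR coefficients a compact, discriminative summary of each signal component.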
format | Online Article Text |
id | pubmed-8818430 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Hindawi |
record_format | MEDLINE/PubMed |
spelling | pubmed-88184302022-02-08 Identification of Visual Imagery by Electroencephalography Based on Empirical Mode Decomposition and an Autoregressive Model Fu, Yunfa Li, Zhaoyang Gong, Anmin Qian, Qian Su, Lei Zhao, Lei Comput Intell Neurosci Research Article The traditional imagery task for brain–computer interfaces (BCIs) consists of motor imagery (MI) in which subjects are instructed to imagine moving certain parts of their body. This kind of imagery task is difficult for subjects. In this study, we used a less studied yet more easily performed type of mental imagery—visual imagery (VI)—in which subjects are instructed to visualize a picture in their brain to implement a BCI. In this study, 18 subjects were recruited and instructed to observe one of two visual-cued pictures (one was static, while the other was moving) and then imagine the cued picture in each trial. Simultaneously, electroencephalography (EEG) signals were collected. Hilbert–Huang Transform (HHT), autoregressive (AR) models, and a combination of empirical mode decomposition (EMD) and AR were used to extract features, respectively. A support vector machine (SVM) was used to classify the two kinds of VI tasks. The average, highest, and lowest classification accuracies of HHT were 68.14 ± 3.06%, 78.33%, and 53.3%, respectively. The values of the AR model were 56.29 ± 2.73%, 71.67%, and 30%, respectively. The values obtained by the combination of the EMD and the AR model were 78.40 ± 2.07%, 87%, and 48.33%, respectively. The results indicate that multiple VI tasks were separable based on EEG and that the combination of EMD and an AR model used in VI feature extraction was better than an HHT or AR model alone. Our work may provide ideas for the construction of a new online VI-BCI. Hindawi 2022-01-30 /pmc/articles/PMC8818430/ /pubmed/35140763 http://dx.doi.org/10.1155/2022/1038901 Text en Copyright © 2022 Yunfa Fu et al. 
https://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. |
spellingShingle | Research Article Fu, Yunfa Li, Zhaoyang Gong, Anmin Qian, Qian Su, Lei Zhao, Lei Identification of Visual Imagery by Electroencephalography Based on Empirical Mode Decomposition and an Autoregressive Model |
title | Identification of Visual Imagery by Electroencephalography Based on Empirical Mode Decomposition and an Autoregressive Model |
title_full | Identification of Visual Imagery by Electroencephalography Based on Empirical Mode Decomposition and an Autoregressive Model |
title_fullStr | Identification of Visual Imagery by Electroencephalography Based on Empirical Mode Decomposition and an Autoregressive Model |
title_full_unstemmed | Identification of Visual Imagery by Electroencephalography Based on Empirical Mode Decomposition and an Autoregressive Model |
title_short | Identification of Visual Imagery by Electroencephalography Based on Empirical Mode Decomposition and an Autoregressive Model |
title_sort | identification of visual imagery by electroencephalography based on empirical mode decomposition and an autoregressive model |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8818430/ https://www.ncbi.nlm.nih.gov/pubmed/35140763 http://dx.doi.org/10.1155/2022/1038901 |
work_keys_str_mv | AT fuyunfa identificationofvisualimagerybyelectroencephalographybasedonempiricalmodedecompositionandanautoregressivemodel AT lizhaoyang identificationofvisualimagerybyelectroencephalographybasedonempiricalmodedecompositionandanautoregressivemodel AT gonganmin identificationofvisualimagerybyelectroencephalographybasedonempiricalmodedecompositionandanautoregressivemodel AT qianqian identificationofvisualimagerybyelectroencephalographybasedonempiricalmodedecompositionandanautoregressivemodel AT sulei identificationofvisualimagerybyelectroencephalographybasedonempiricalmodedecompositionandanautoregressivemodel AT zhaolei identificationofvisualimagerybyelectroencephalographybasedonempiricalmodedecompositionandanautoregressivemodel |