Convolutional Neural Networks with 3D Input for P300 Identification in Auditory Brain-Computer Interfaces
From allowing basic communication to moving through an environment, several attempts are being made in the field of brain-computer interfaces (BCI) to assist people who find it difficult or impossible to perform certain activities. Focusing on these people as potential users of BCI, we obtain...
Main Authors: Carabez, Eduardo; Sugi, Miho; Nambu, Isao; Wada, Yasuhiro
Format: Online Article Text
Language: English
Published: Hindawi, 2017
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5698603/ https://www.ncbi.nlm.nih.gov/pubmed/29250108 http://dx.doi.org/10.1155/2017/8163949
_version_ | 1783280794252345344 |
author | Carabez, Eduardo; Sugi, Miho; Nambu, Isao; Wada, Yasuhiro |
author_sort | Carabez, Eduardo |
collection | PubMed |
description | From allowing basic communication to moving through an environment, several attempts are being made in the field of brain-computer interfaces (BCI) to assist people who find it difficult or impossible to perform certain activities. Focusing on these people as potential users of BCI, we obtained electroencephalogram (EEG) readings from nine healthy subjects who were presented with auditory stimuli via earphones from six different virtual directions. We presented the stimuli following the oddball paradigm to elicit P300 waves within the subject's brain activity for later identification and classification using convolutional neural networks (CNN). The CNN models are given a novel single-trial three-dimensional (3D) representation of the EEG data as input, maintaining temporal and spatial information as close to the experimental setup as possible; this is relevant because eliciting P300 has been shown to cause stronger activity in certain brain regions. Here, we present the results of CNN models using the proposed 3D input for three different stimulus presentation time intervals (500, 400, and 300 ms) and compare them to previous studies and other common classifiers. Our results show >80% accuracy for all the CNN models using the proposed 3D input in single-trial P300 classification. |
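The 3D input described in the abstract arranges the EEG electrodes in a 2D spatial grid and keeps sampled time as the third axis, so the CNN sees both where and when P300 activity occurs. A minimal sketch of that idea, assuming a hypothetical 8×8 grid of 64 channels, a 256 Hz sampling rate, and a 500 ms epoch (the actual montage and rates used in the paper are not given in this record):

```python
import numpy as np

# Hypothetical parameters; the paper's actual electrode montage,
# sampling rate, and epoch length are not specified in this record.
N_CHANNELS = 64
GRID = (8, 8)                # assumed spatial layout of the electrodes
SAMPLE_RATE = 256            # Hz (assumed)
WINDOW_MS = 500              # one stimulus presentation interval
n_samples = SAMPLE_RATE * WINDOW_MS // 1000   # 128 time samples

def to_3d_input(trial):
    """Reshape a single-trial (channels x time) EEG epoch into a
    (rows x cols x time) tensor that preserves the spatial layout
    of the electrodes alongside the temporal dimension."""
    assert trial.shape == (N_CHANNELS, n_samples)
    return trial.reshape(GRID[0], GRID[1], n_samples)

trial = np.random.randn(N_CHANNELS, n_samples)   # simulated epoch
x = to_3d_input(trial)
print(x.shape)   # (8, 8, 128)
```

A CNN can then convolve over this tensor directly, rather than over a flattened channel-by-time matrix, which is how the spatial structure of the recording is retained.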
format | Online Article Text |
id | pubmed-5698603 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2017 |
publisher | Hindawi |
record_format | MEDLINE/PubMed |
spelling | pubmed-5698603 2017-12-17 Convolutional Neural Networks with 3D Input for P300 Identification in Auditory Brain-Computer Interfaces. Carabez, Eduardo; Sugi, Miho; Nambu, Isao; Wada, Yasuhiro. Comput Intell Neurosci, Research Article. Hindawi 2017, published 2017-11-07. /pmc/articles/PMC5698603/ /pubmed/29250108 http://dx.doi.org/10.1155/2017/8163949 Text en Copyright © 2017 Eduardo Carabez et al. https://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. |
title | Convolutional Neural Networks with 3D Input for P300 Identification in Auditory Brain-Computer Interfaces |
topic | Research Article |