Exploring Combinations of Auditory and Visual Stimuli for Gaze-Independent Brain-Computer Interfaces
For Brain-Computer Interface (BCI) systems that are designed for users with severe impairments of the oculomotor system, an appropriate mode of presenting stimuli to the user is crucial. To investigate whether multi-sensory integration can be exploited in the gaze-independent event-related potential...
Main Authors: | An, Xingwei; Höhne, Johannes; Ming, Dong; Blankertz, Benjamin |
Format: | Online Article Text |
Language: | English |
Published: | Public Library of Science, 2014 |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4211702/ https://www.ncbi.nlm.nih.gov/pubmed/25350547 http://dx.doi.org/10.1371/journal.pone.0111070 |
_version_ | 1782341615505899520 |
author | An, Xingwei; Höhne, Johannes; Ming, Dong; Blankertz, Benjamin |
author_facet | An, Xingwei; Höhne, Johannes; Ming, Dong; Blankertz, Benjamin |
author_sort | An, Xingwei |
collection | PubMed |
description | For Brain-Computer Interface (BCI) systems designed for users with severe impairments of the oculomotor system, an appropriate mode of presenting stimuli to the user is crucial. To investigate whether multi-sensory integration can be exploited in gaze-independent event-related potential (ERP) spellers and enhance BCI performance, we designed a visual-auditory speller. We investigated the possibility of enhancing stimulus presentation by combining visual and auditory stimuli within gaze-independent spellers. In this study with N = 15 healthy users, two different ways of combining the two sensory modalities are proposed: simultaneous redundant streams (Combined-Speller) and interleaved independent streams (Parallel-Speller). Unimodal stimuli were applied as control conditions. Workload, ERP components, classification accuracy and resulting spelling speed were analyzed for each condition. The Combined-Speller showed a lower workload than unimodal paradigms without sacrificing spelling performance. In addition, shorter latencies, lower amplitudes, and a shift in the temporal and spatial distribution of discriminative information were observed for the Combined-Speller; investigating the causes of these differences is a promising direction for future studies. For the more innovative and demanding Parallel-Speller, in which the auditory and visual domains are independent of each other, a proof of concept was obtained: fifteen users spelled online with a mean accuracy of 87.7% (chance level <3%) at a competitive average speed of 1.65 symbols per minute. Because it requires only one selection period per symbol, the Parallel-Speller is a good candidate for a fast communication channel, and it offers new insight into truly multisensory stimulus paradigms. The novel approaches for combining two sensory modalities designed here are valuable for the development of ERP-based BCI paradigms. |
format | Online Article Text |
id | pubmed-4211702 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2014 |
publisher | Public Library of Science |
record_format | MEDLINE/PubMed |
spelling | pubmed-4211702 2014-11-05 Exploring Combinations of Auditory and Visual Stimuli for Gaze-Independent Brain-Computer Interfaces An, Xingwei; Höhne, Johannes; Ming, Dong; Blankertz, Benjamin. PLoS One. Research Article. (Abstract as in the description field above.) Public Library of Science 2014-10-28 /pmc/articles/PMC4211702/ /pubmed/25350547 http://dx.doi.org/10.1371/journal.pone.0111070 Text en © 2014 An et al http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are properly credited. |
spellingShingle | Research Article An, Xingwei Höhne, Johannes Ming, Dong Blankertz, Benjamin Exploring Combinations of Auditory and Visual Stimuli for Gaze-Independent Brain-Computer Interfaces |
title | Exploring Combinations of Auditory and Visual Stimuli for Gaze-Independent Brain-Computer Interfaces |
title_full | Exploring Combinations of Auditory and Visual Stimuli for Gaze-Independent Brain-Computer Interfaces |
title_fullStr | Exploring Combinations of Auditory and Visual Stimuli for Gaze-Independent Brain-Computer Interfaces |
title_full_unstemmed | Exploring Combinations of Auditory and Visual Stimuli for Gaze-Independent Brain-Computer Interfaces |
title_short | Exploring Combinations of Auditory and Visual Stimuli for Gaze-Independent Brain-Computer Interfaces |
title_sort | exploring combinations of auditory and visual stimuli for gaze-independent brain-computer interfaces |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4211702/ https://www.ncbi.nlm.nih.gov/pubmed/25350547 http://dx.doi.org/10.1371/journal.pone.0111070 |
work_keys_str_mv | AT anxingwei exploringcombinationsofauditoryandvisualstimuliforgazeindependentbraincomputerinterfaces AT hohnejohannes exploringcombinationsofauditoryandvisualstimuliforgazeindependentbraincomputerinterfaces AT mingdong exploringcombinationsofauditoryandvisualstimuliforgazeindependentbraincomputerinterfaces AT blankertzbenjamin exploringcombinationsofauditoryandvisualstimuliforgazeindependentbraincomputerinterfaces |