Bimodal electroencephalography-functional magnetic resonance imaging dataset for inner-speech recognition
The recognition of inner speech, which could give a ‘voice’ to patients who have no ability to speak or move, is a challenge for brain-computer interfaces (BCIs). A shortcoming of the available datasets is that they do not combine modalities to increase the performance of inner speech recognition…
Main Authors: | Simistira Liwicki, Foteini; Gupta, Vibha; Saini, Rajkumar; De, Kanjar; Abid, Nosheen; Rakesh, Sumit; Wellington, Scott; Wilson, Holly; Liwicki, Marcus; Eriksson, Johan |
Format: | Online Article Text |
Language: | English |
Published: | Nature Publishing Group UK, 2023 |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10264396/ https://www.ncbi.nlm.nih.gov/pubmed/37311807 http://dx.doi.org/10.1038/s41597-023-02286-w |
_version_ | 1785058313865527296 |
author | Simistira Liwicki, Foteini; Gupta, Vibha; Saini, Rajkumar; De, Kanjar; Abid, Nosheen; Rakesh, Sumit; Wellington, Scott; Wilson, Holly; Liwicki, Marcus; Eriksson, Johan |
author_facet | Simistira Liwicki, Foteini; Gupta, Vibha; Saini, Rajkumar; De, Kanjar; Abid, Nosheen; Rakesh, Sumit; Wellington, Scott; Wilson, Holly; Liwicki, Marcus; Eriksson, Johan |
author_sort | Simistira Liwicki, Foteini |
collection | PubMed |
description | The recognition of inner speech, which could give a ‘voice’ to patients who have no ability to speak or move, is a challenge for brain-computer interfaces (BCIs). A shortcoming of the available datasets is that they do not combine modalities to increase the performance of inner speech recognition. Multimodal datasets of brain data enable the fusion of neuroimaging modalities with complementary properties, such as the high spatial resolution of functional magnetic resonance imaging (fMRI) and the temporal resolution of electroencephalography (EEG), and therefore are promising for decoding inner speech. This paper presents the first publicly available bimodal dataset containing EEG and fMRI data acquired nonsimultaneously during inner-speech production. Data were obtained from four healthy, right-handed participants during an inner-speech task with words in either a social or numerical category. Each of the eight word stimuli was assessed over 40 trials, resulting in 320 trials in each modality for each participant. The aim of this work is to provide a publicly available bimodal dataset on inner speech, contributing towards speech prostheses. |
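The description's trial counts can be checked with a short sketch. This is illustrative only: the even 4/4 split of the eight stimuli between the social and numerical categories is an assumption, not stated in this record.

```python
# Sketch of the dataset's trial-count arithmetic (assumed 4/4 category split).
participants = 4
stimuli_per_category = {"social": 4, "numerical": 4}  # 8 word stimuli in total (assumed split)
trials_per_stimulus = 40

words_total = sum(stimuli_per_category.values())
trials_per_modality = words_total * trials_per_stimulus      # per participant, per modality
trials_per_modality_all = trials_per_modality * participants  # across all participants

print(words_total)             # 8 word stimuli
print(trials_per_modality)     # 320 trials per participant per modality
print(trials_per_modality_all)
```

This matches the record: 8 stimuli × 40 trials = 320 trials in each modality for each participant.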
format | Online Article Text |
id | pubmed-10264396 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Nature Publishing Group UK |
record_format | MEDLINE/PubMed |
spelling | pubmed-10264396 2023-06-15 Bimodal electroencephalography-functional magnetic resonance imaging dataset for inner-speech recognition Simistira Liwicki, Foteini Gupta, Vibha Saini, Rajkumar De, Kanjar Abid, Nosheen Rakesh, Sumit Wellington, Scott Wilson, Holly Liwicki, Marcus Eriksson, Johan Sci Data Data Descriptor The recognition of inner speech, which could give a ‘voice’ to patients who have no ability to speak or move, is a challenge for brain-computer interfaces (BCIs). A shortcoming of the available datasets is that they do not combine modalities to increase the performance of inner speech recognition. Multimodal datasets of brain data enable the fusion of neuroimaging modalities with complementary properties, such as the high spatial resolution of functional magnetic resonance imaging (fMRI) and the temporal resolution of electroencephalography (EEG), and therefore are promising for decoding inner speech. This paper presents the first publicly available bimodal dataset containing EEG and fMRI data acquired nonsimultaneously during inner-speech production. Data were obtained from four healthy, right-handed participants during an inner-speech task with words in either a social or numerical category. Each of the eight word stimuli was assessed over 40 trials, resulting in 320 trials in each modality for each participant. The aim of this work is to provide a publicly available bimodal dataset on inner speech, contributing towards speech prostheses.
Nature Publishing Group UK 2023-06-13 /pmc/articles/PMC10264396/ /pubmed/37311807 http://dx.doi.org/10.1038/s41597-023-02286-w Text en © The Author(s) 2023 https://creativecommons.org/licenses/by/4.0/ Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ (https://creativecommons.org/licenses/by/4.0/) . |
spellingShingle | Data Descriptor Simistira Liwicki, Foteini Gupta, Vibha Saini, Rajkumar De, Kanjar Abid, Nosheen Rakesh, Sumit Wellington, Scott Wilson, Holly Liwicki, Marcus Eriksson, Johan Bimodal electroencephalography-functional magnetic resonance imaging dataset for inner-speech recognition |
title | Bimodal electroencephalography-functional magnetic resonance imaging dataset for inner-speech recognition |
title_full | Bimodal electroencephalography-functional magnetic resonance imaging dataset for inner-speech recognition |
title_fullStr | Bimodal electroencephalography-functional magnetic resonance imaging dataset for inner-speech recognition |
title_full_unstemmed | Bimodal electroencephalography-functional magnetic resonance imaging dataset for inner-speech recognition |
title_short | Bimodal electroencephalography-functional magnetic resonance imaging dataset for inner-speech recognition |
title_sort | bimodal electroencephalography-functional magnetic resonance imaging dataset for inner-speech recognition |
topic | Data Descriptor |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10264396/ https://www.ncbi.nlm.nih.gov/pubmed/37311807 http://dx.doi.org/10.1038/s41597-023-02286-w |
work_keys_str_mv | AT simistiraliwickifoteini bimodalelectroencephalographyfunctionalmagneticresonanceimagingdatasetforinnerspeechrecognition AT guptavibha bimodalelectroencephalographyfunctionalmagneticresonanceimagingdatasetforinnerspeechrecognition AT sainirajkumar bimodalelectroencephalographyfunctionalmagneticresonanceimagingdatasetforinnerspeechrecognition AT dekanjar bimodalelectroencephalographyfunctionalmagneticresonanceimagingdatasetforinnerspeechrecognition AT abidnosheen bimodalelectroencephalographyfunctionalmagneticresonanceimagingdatasetforinnerspeechrecognition AT rakeshsumit bimodalelectroencephalographyfunctionalmagneticresonanceimagingdatasetforinnerspeechrecognition AT wellingtonscott bimodalelectroencephalographyfunctionalmagneticresonanceimagingdatasetforinnerspeechrecognition AT wilsonholly bimodalelectroencephalographyfunctionalmagneticresonanceimagingdatasetforinnerspeechrecognition AT liwickimarcus bimodalelectroencephalographyfunctionalmagneticresonanceimagingdatasetforinnerspeechrecognition AT erikssonjohan bimodalelectroencephalographyfunctionalmagneticresonanceimagingdatasetforinnerspeechrecognition |