Neural Activity During Audiovisual Speech Processing: Protocol For a Functional Neuroimaging Study
| Main Authors: | Bálint, András; Wimmer, Wilhelm; Caversaccio, Marco; Weder, Stefan |
|---|---|
| Format: | Online Article Text |
| Language: | English |
| Published: | JMIR Publications, 2022 |
| Subjects: | Protocol |
| Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9239541/ https://www.ncbi.nlm.nih.gov/pubmed/35727624 http://dx.doi.org/10.2196/38407 |
_version_ | 1784737320510947328 |
---|---|
author | Bálint, András; Wimmer, Wilhelm; Caversaccio, Marco; Weder, Stefan |
author_facet | Bálint, András; Wimmer, Wilhelm; Caversaccio, Marco; Weder, Stefan |
author_sort | Bálint, András |
collection | PubMed |
description | BACKGROUND: Functional near-infrared spectroscopy (fNIRS) studies have demonstrated associations between hearing outcomes after cochlear implantation and plastic brain changes. However, inconsistent results make it difficult to draw conclusions. A major problem is that many variables need to be controlled. To gain further understanding, careful preparation and planning of such a functional neuroimaging task is key. OBJECTIVE: Using fNIRS, our main objective is to develop a well-controlled audiovisual speech comprehension task to study brain activation in individuals with normal hearing and hearing impairment (including cochlear implant users). The task should be derivable from clinically established tests, induce maximal cortical activation, use optimal coverage of relevant brain regions, and be reproducible by other research groups. METHODS: The protocol will consist of a 5-minute resting state and 2 stimulation periods of 12 minutes each. During the stimulation periods, 13-second video recordings of the clinically established Oldenburg Sentence Test (OLSA) will be presented. Stimuli will be presented in 4 different modalities: (1) speech in quiet, (2) speech in noise, (3) visual only (ie, lipreading), and (4) audiovisual speech. Each stimulus type will be repeated 10 times in a counterbalanced block design. Interactive question windows will monitor speech comprehension during the task. After the measurement, we will perform a 3D scan to digitize optode positions and verify the covered anatomical locations. RESULTS: This paper reports the study protocol. Enrollment for the study started in August 2021. We expect to publish our first results by the end of 2022. CONCLUSIONS: The proposed audiovisual speech comprehension task will help elucidate neural correlates of speech understanding. The comprehensive study will have the potential to provide additional information, beyond conventional clinical standards, about the underlying plastic brain changes of a hearing-impaired person. It will facilitate more precise indication criteria for cochlear implantation and better planning of rehabilitation. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): DERR1-10.2196/38407 |
format | Online Article Text |
id | pubmed-9239541 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | JMIR Publications |
record_format | MEDLINE/PubMed |
spelling | pubmed-92395412022-06-29 Neural Activity During Audiovisual Speech Processing: Protocol For a Functional Neuroimaging Study Bálint, András; Wimmer, Wilhelm; Caversaccio, Marco; Weder, Stefan JMIR Res Protoc Protocol JMIR Publications 2022-06-21 /pmc/articles/PMC9239541/ /pubmed/35727624 http://dx.doi.org/10.2196/38407 Text en ©András Bálint, Wilhelm Wimmer, Marco Caversaccio, Stefan Weder. Originally published in JMIR Research Protocols (https://www.researchprotocols.org), 21.06.2022. This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Research Protocols, is properly cited. The complete bibliographic information, a link to the original publication on https://www.researchprotocols.org, as well as this copyright and license information must be included. |
spellingShingle | Protocol; Bálint, András; Wimmer, Wilhelm; Caversaccio, Marco; Weder, Stefan; Neural Activity During Audiovisual Speech Processing: Protocol For a Functional Neuroimaging Study |
title | Neural Activity During Audiovisual Speech Processing: Protocol For a Functional Neuroimaging Study |
title_full | Neural Activity During Audiovisual Speech Processing: Protocol For a Functional Neuroimaging Study |
title_fullStr | Neural Activity During Audiovisual Speech Processing: Protocol For a Functional Neuroimaging Study |
title_full_unstemmed | Neural Activity During Audiovisual Speech Processing: Protocol For a Functional Neuroimaging Study |
title_short | Neural Activity During Audiovisual Speech Processing: Protocol For a Functional Neuroimaging Study |
title_sort | neural activity during audiovisual speech processing: protocol for a functional neuroimaging study |
topic | Protocol |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9239541/ https://www.ncbi.nlm.nih.gov/pubmed/35727624 http://dx.doi.org/10.2196/38407 |
work_keys_str_mv | AT balintandras neuralactivityduringaudiovisualspeechprocessingprotocolforafunctionalneuroimagingstudy AT wimmerwilhelm neuralactivityduringaudiovisualspeechprocessingprotocolforafunctionalneuroimagingstudy AT caversacciomarco neuralactivityduringaudiovisualspeechprocessingprotocolforafunctionalneuroimagingstudy AT wederstefan neuralactivityduringaudiovisualspeechprocessingprotocolforafunctionalneuroimagingstudy |
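The METHODS section of the abstract describes a counterbalanced block design: 4 stimulus modalities, each repeated 10 times, presented as 13-second trials. As an illustration only (this is not the authors' published code, and all names here are hypothetical), one common way to realize such a design is to shuffle each block of four trials so that every block contains each modality exactly once:

```python
import random

# Hypothetical sketch of the counterbalanced block design described in
# the abstract: 4 modalities x 10 repetitions, 13-second OLSA video trials.
MODALITIES = ["speech_in_quiet", "speech_in_noise", "visual_only", "audiovisual"]
REPETITIONS = 10      # each stimulus type repeated 10 times
TRIAL_SECONDS = 13    # duration of each OLSA video recording

def counterbalanced_order(seed=None):
    """Return a trial list in which every consecutive block of 4 trials
    contains each modality exactly once, in randomized within-block order."""
    rng = random.Random(seed)
    trials = []
    for _ in range(REPETITIONS):
        block = MODALITIES[:]   # copy, so the template list stays intact
        rng.shuffle(block)      # randomize order within this block
        trials.extend(block)
    return trials

order = counterbalanced_order(seed=42)
total_stimulus_seconds = len(order) * TRIAL_SECONDS  # 40 trials x 13 s = 520 s
```

This scheme guarantees that each modality appears once per block of four and 10 times overall, while the within-block shuffling prevents a fixed presentation order; how the actual study assigns trials to its two 12-minute stimulation periods is not specified at this level of detail.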