Objectification of intracochlear electrocochleography using machine learning

Bibliographic Details
Main Authors: Schuerch, Klaus; Wimmer, Wilhelm; Dalbert, Adrian; Rummel, Christian; Caversaccio, Marco; Mantokoudis, Georgios; Weder, Stefan
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2022
Subjects: Neurology
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9465334/
https://www.ncbi.nlm.nih.gov/pubmed/36105773
http://dx.doi.org/10.3389/fneur.2022.943816
_version_ 1784787773557833728
author Schuerch, Klaus
Wimmer, Wilhelm
Dalbert, Adrian
Rummel, Christian
Caversaccio, Marco
Mantokoudis, Georgios
Weder, Stefan
author_facet Schuerch, Klaus
Wimmer, Wilhelm
Dalbert, Adrian
Rummel, Christian
Caversaccio, Marco
Mantokoudis, Georgios
Weder, Stefan
author_sort Schuerch, Klaus
collection PubMed
description INTRODUCTION: Electrocochleography (ECochG) measures inner ear potentials in response to acoustic stimulation. In patients with a cochlear implant (CI), the technique is increasingly used to monitor residual inner ear function. To date, visual assessment has been the gold standard for analyzing ECochG potentials. However, visual assessment requires a high level of experience to interpret the signals, and expert-dependent assessment leads to inconsistency and a lack of reproducibility. The aim of this study was to automate and objectify the analysis of cochlear microphonic (CM) signals in ECochG recordings. METHODS: We conducted a prospective cohort study including 41 implanted ears with residual hearing. We measured ECochG potentials at four different electrodes, and only at stable electrode positions (after full insertion or postoperatively). For acoustic stimulation, we used pure tones (250–2,000 Hz) at three intensity levels chosen relative to the individual residual hearing (i.e., supra-, near-, and sub-threshold stimulation), with the aim of obtaining ECochG potentials with differing signal-to-noise ratios (SNRs). To objectify the detection of CM signals, we compared three methods: correlation analysis, Hotelling's T² test, and deep learning. We benchmarked these methods against the visual analysis of three ECochG experts. RESULTS: For the visual analysis of the ECochG recordings, Fleiss' kappa demonstrated substantial to almost perfect agreement among the three examiners. We used these labels as the ground truth to train our objectification methods. The deep learning algorithm performed best (area under the curve = 0.97, accuracy = 0.92), closely followed by Hotelling's T² test. The correlation method slightly underperformed due to its susceptibility to noise interference. CONCLUSIONS: Objectification of ECochG signals is possible with the presented methods. The deep learning and Hotelling's T² methods achieved excellent discrimination performance. Objective, automatic analysis of CM signals enables standardized, fast, accurate, and examiner-independent evaluation of ECochG measurements.
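The abstract names correlation analysis and Hotelling's T² as two of the objectification methods but does not detail them. Below is a minimal Python sketch of how such CM detectors could look; the sine/cosine feature projection, the split-half correlation variant, the synthetic data, and all parameter values (sampling rate, epoch length, threshold) are illustrative assumptions, not the authors' actual pipeline.

import numpy as np
from scipy.stats import f as f_dist

def stim_frequency_features(epochs, f_stim, fs):
    # Project every epoch onto a cosine and a sine at the stimulus frequency.
    # This gives one 2-D feature vector per epoch describing the phase-locked
    # (CM-like) component; noise-only epochs scatter around the origin.
    n = epochs.shape[1]
    t = np.arange(n) / fs
    c = np.cos(2 * np.pi * f_stim * t)
    s = np.sin(2 * np.pi * f_stim * t)
    return np.stack([epochs @ c, epochs @ s], axis=1) * (2.0 / n)

def hotelling_t2_detect(epochs, f_stim, fs, alpha=0.05):
    # One-sample Hotelling's T^2 test: is the mean feature vector across
    # repetitions different from zero, i.e., is a stimulus-locked response present?
    feats = stim_frequency_features(epochs, f_stim, fs)   # shape (n_epochs, 2)
    n, p = feats.shape
    xbar = feats.mean(axis=0)
    cov = np.cov(feats, rowvar=False)
    t2 = n * xbar @ np.linalg.solve(cov, xbar)
    f_stat = (n - p) / (p * (n - 1)) * t2                 # T^2 -> F(p, n - p)
    p_value = f_dist.sf(f_stat, p, n - p)
    return p_value < alpha, p_value

def correlation_detect(epochs, threshold=0.5):
    # Simple correlation-based detector: average the odd and even epochs
    # separately and correlate the two sub-averages. A repeatable CM yields a
    # high correlation, pure noise does not. The threshold is an assumption.
    r = np.corrcoef(epochs[0::2].mean(axis=0), epochs[1::2].mean(axis=0))[0, 1]
    return r > threshold, r

# Toy usage on synthetic data (sampling rate, epoch count, and SNR are made up).
rng = np.random.default_rng(0)
fs, f_stim = 20_000, 500
t = np.arange(1_000) / fs                                 # 50 ms epochs
epochs = 0.5 * np.sin(2 * np.pi * f_stim * t) + rng.normal(0.0, 1.0, (40, t.size))
print(hotelling_t2_detect(epochs, f_stim, fs))            # strong signal -> (True, small p-value)
print(correlation_detect(epochs))

The deep learning classifier and the study's exact feature extraction are not reproduced here; likewise, inter-rater agreement of the expert labels could, for example, be quantified with statsmodels.stats.inter_rater.fleiss_kappa, but the published analysis may differ.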
format Online
Article
Text
id pubmed-9465334
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-9465334 2022-09-13 Objectification of intracochlear electrocochleography using machine learning Schuerch, Klaus Wimmer, Wilhelm Dalbert, Adrian Rummel, Christian Caversaccio, Marco Mantokoudis, Georgios Weder, Stefan Front Neurol Neurology INTRODUCTION: Electrocochleography (ECochG) measures inner ear potentials in response to acoustic stimulation. In patients with a cochlear implant (CI), the technique is increasingly used to monitor residual inner ear function. To date, visual assessment has been the gold standard for analyzing ECochG potentials. However, visual assessment requires a high level of experience to interpret the signals, and expert-dependent assessment leads to inconsistency and a lack of reproducibility. The aim of this study was to automate and objectify the analysis of cochlear microphonic (CM) signals in ECochG recordings. METHODS: We conducted a prospective cohort study including 41 implanted ears with residual hearing. We measured ECochG potentials at four different electrodes, and only at stable electrode positions (after full insertion or postoperatively). For acoustic stimulation, we used pure tones (250–2,000 Hz) at three intensity levels chosen relative to the individual residual hearing (i.e., supra-, near-, and sub-threshold stimulation), with the aim of obtaining ECochG potentials with differing signal-to-noise ratios (SNRs). To objectify the detection of CM signals, we compared three methods: correlation analysis, Hotelling's T² test, and deep learning. We benchmarked these methods against the visual analysis of three ECochG experts. RESULTS: For the visual analysis of the ECochG recordings, Fleiss' kappa demonstrated substantial to almost perfect agreement among the three examiners. We used these labels as the ground truth to train our objectification methods. The deep learning algorithm performed best (area under the curve = 0.97, accuracy = 0.92), closely followed by Hotelling's T² test. The correlation method slightly underperformed due to its susceptibility to noise interference. CONCLUSIONS: Objectification of ECochG signals is possible with the presented methods. The deep learning and Hotelling's T² methods achieved excellent discrimination performance. Objective, automatic analysis of CM signals enables standardized, fast, accurate, and examiner-independent evaluation of ECochG measurements. Frontiers Media S.A. 2022-08-29 /pmc/articles/PMC9465334/ /pubmed/36105773 http://dx.doi.org/10.3389/fneur.2022.943816 Text en Copyright © 2022 Schuerch, Wimmer, Dalbert, Rummel, Caversaccio, Mantokoudis and Weder. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Neurology
Schuerch, Klaus
Wimmer, Wilhelm
Dalbert, Adrian
Rummel, Christian
Caversaccio, Marco
Mantokoudis, Georgios
Weder, Stefan
Objectification of intracochlear electrocochleography using machine learning
title Objectification of intracochlear electrocochleography using machine learning
title_full Objectification of intracochlear electrocochleography using machine learning
title_fullStr Objectification of intracochlear electrocochleography using machine learning
title_full_unstemmed Objectification of intracochlear electrocochleography using machine learning
title_short Objectification of intracochlear electrocochleography using machine learning
title_sort objectification of intracochlear electrocochleography using machine learning
topic Neurology
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9465334/
https://www.ncbi.nlm.nih.gov/pubmed/36105773
http://dx.doi.org/10.3389/fneur.2022.943816
work_keys_str_mv AT schuerchklaus objectificationofintracochlearelectrocochleographyusingmachinelearning
AT wimmerwilhelm objectificationofintracochlearelectrocochleographyusingmachinelearning
AT dalbertadrian objectificationofintracochlearelectrocochleographyusingmachinelearning
AT rummelchristian objectificationofintracochlearelectrocochleographyusingmachinelearning
AT caversacciomarco objectificationofintracochlearelectrocochleographyusingmachinelearning
AT mantokoudisgeorgios objectificationofintracochlearelectrocochleographyusingmachinelearning
AT wederstefan objectificationofintracochlearelectrocochleographyusingmachinelearning