
Smartphone video nystagmography using convolutional neural networks: ConVNG

BACKGROUND: Eye movement abnormalities are commonplace in neurological disorders. However, unaided eye movement assessments lack granularity. Although video-oculography (VOG) improves diagnostic accuracy, resource intensiveness precludes its broad use. To bridge this care gap, we here validate a framework for smartphone video-based nystagmography capitalizing on recent computer vision advances. METHODS: A convolutional neural network was fine-tuned for pupil tracking using > 550 annotated frames: ConVNG. In a cross-sectional approach, slow-phase velocity (SPV) of optokinetic nystagmus was calculated in 10 subjects using ConVNG and VOG. Equivalence of accuracy and precision was assessed using the "two one-sided t-tests" (TOST) and Bayesian interval-null approaches. ConVNG was systematically compared to OpenFace and MediaPipe as computer vision (CV) benchmarks for gaze estimation. RESULTS: ConVNG tracking accuracy reached 9–15% of an average pupil diameter. In a fully independent clinical video dataset, ConVNG robustly detected pupil keypoints (median prediction confidence 0.85). SPV measurement accuracy was equivalent to VOG (TOST p < 0.017; Bayes factors (BF) > 24). ConVNG, but not MediaPipe, achieved equivalence to VOG in all SPV calculations. Median precision was 0.30°/s for ConVNG, 0.7°/s for MediaPipe and 0.12°/s for VOG. ConVNG precision was significantly higher than MediaPipe's in vertical planes, but both algorithms' precision was inferior to VOG's. CONCLUSIONS: ConVNG enables offline smartphone video nystagmography with accuracy comparable to VOG and significantly higher precision than MediaPipe, a benchmark computer vision application for gaze estimation. This serves as a blueprint for highly accessible tools with the potential to accelerate progress toward precise and personalized medicine. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s00415-022-11493-1.
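The abstract reports slow-phase velocity (SPV) without detailing its computation. A minimal illustrative sketch of a common approach — differentiate the eye-position trace, discard fast phases with a simple velocity threshold, and summarize the remaining slow-phase samples — is shown below; the threshold value and the median summary are assumptions for illustration, not the paper's exact pipeline:

```python
import numpy as np

def slow_phase_velocity(pos_deg, fs, fast_thresh=40.0):
    """Estimate nystagmus slow-phase velocity (deg/s) from an eye-position trace.

    Illustrative approach (not necessarily ConVNG's pipeline): differentiate
    the position signal, drop fast-phase samples whose absolute velocity
    exceeds `fast_thresh` deg/s (a typical saccade cutoff), and return the
    median absolute velocity of the remaining slow-phase samples.
    """
    vel = np.gradient(np.asarray(pos_deg, dtype=float)) * fs  # deg/s
    slow = vel[np.abs(vel) < fast_thresh]                     # desaccaded trace
    return float(np.median(np.abs(slow)))
```

For a synthetic sawtooth trace (10°/s slow drift with quick resets, as in optokinetic nystagmus), this recovers roughly 10°/s.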

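The equivalence claim rests on TOST ("two one-sided tests"): two measurement methods are declared equivalent when the mean paired difference is shown to lie within a pre-specified margin, i.e. both one-sided tests reject at the chosen alpha. A sketch using `scipy.stats.ttest_1samp`; the ±0.5°/s margin and the sample differences are illustrative assumptions, not values from the study:

```python
import numpy as np
from scipy import stats

def tost_equivalence(diff, margin, alpha=0.05):
    """Two one-sided t-tests (TOST) for equivalence of paired differences.

    Equivalence is claimed when mean(diff) demonstrably lies within
    (-margin, +margin): both one-sided tests must reject at `alpha`.
    The margin is a study-specific choice, assumed here for illustration.
    """
    diff = np.asarray(diff, dtype=float)
    # H1: mean(diff) > -margin
    p_lower = stats.ttest_1samp(diff, -margin, alternative="greater").pvalue
    # H1: mean(diff) < +margin
    p_upper = stats.ttest_1samp(diff, +margin, alternative="less").pvalue
    p = max(p_lower, p_upper)  # overall TOST p value
    return p, p < alpha

# Hypothetical paired SPV differences (method A minus method B), deg/s:
diff = [0.10, -0.08, 0.05, -0.04, 0.02, -0.01, 0.07, -0.06, 0.03, 0.00]
p, equivalent = tost_equivalence(diff, margin=0.5)
```

Note the logic is inverted relative to an ordinary t-test: a small TOST p value supports equivalence, whereas failing to reject merely leaves equivalence undemonstrated.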

Bibliographic Details
Main Authors: Friedrich, Maximilian U., Schneider, Erich, Buerklein, Miriam, Taeger, Johannes, Hartig, Johannes, Volkmann, Jens, Peach, Robert, Zeller, Daniel
Format: Online Article Text
Language: English
Published: Springer Berlin Heidelberg 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10129923/
https://www.ncbi.nlm.nih.gov/pubmed/36422668
http://dx.doi.org/10.1007/s00415-022-11493-1
Record ID: pubmed-10129923 (collection: PubMed; record format: MEDLINE/PubMed)
Institution: National Center for Biotechnology Information
Journal: J Neurol (Original Communication)
Published online: 2022-11-23 by Springer Berlin Heidelberg (issue year 2023); added to PMC 2023-04-27
License: © The Author(s) 2022. Open Access under the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/)