
Automatic classification of informative laryngoscopic images using deep learning

OBJECTIVE: This study aims to develop and validate a convolutional neural network (CNN)‐based algorithm for automatic selection of informative frames in flexible laryngoscopic videos. The classifier has the potential to aid in the development of computer‐aided diagnosis systems and reduce data processing time for clinician‐computer scientist teams. METHODS: A dataset of 22,132 laryngoscopic frames was extracted from 137 flexible laryngostroboscopic videos from 115 patients. Fifty-five videos were from healthy patients with no laryngeal pathology and 82 videos were from patients with vocal fold polyps. The extracted frames were manually labeled as informative or uninformative by two independent reviewers based on vocal fold visibility, lighting, focus, and camera distance, resulting in 18,114 informative frames and 4018 uninformative frames. The dataset was split into training and test sets. A pre‐trained ResNet‐18 model was trained using transfer learning to classify frames as informative or uninformative. Hyperparameters were set using cross‐validation. The primary outcome was precision for the informative class; secondary outcomes were precision, recall, and F1‐score for all classes. The frame-processing rates of the model and a human annotator were compared. RESULTS: The automated classifier achieved an informative-frame precision, recall, and F1‐score of 94.4%, 90.2%, and 92.3%, respectively, when evaluated on a hold‐out test set of 4438 frames. The model processed frames 16 times faster than a human annotator. CONCLUSION: The CNN‐based classifier demonstrates high precision for classifying informative frames in flexible laryngostroboscopic videos. This model has the potential to aid researchers with dataset creation for computer‐aided diagnosis systems by automatically extracting relevant frames from laryngoscopic videos.
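The METHODS passage above describes a standard transfer-learning recipe: an ImageNet-pretrained ResNet-18 with its final fully connected layer replaced by a two-class head (informative vs. uninformative). The following is a minimal PyTorch sketch of that setup, not the authors' released code; the optimizer, learning rate, batch size, and 224x224 input size are assumed placeholders, since the abstract states only that hyperparameters were set via cross-validation.

    # Minimal transfer-learning sketch (assumed setup; not the authors' code).
    import torch
    import torch.nn as nn
    from torchvision import models

    # Load an ImageNet-pretrained ResNet-18 and replace its 1000-class head
    # with a 2-class head: informative (1) vs. uninformative (0).
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, 2)

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # placeholder LR

    # One training step on a dummy batch of 224x224 RGB frames.
    frames = torch.randn(8, 3, 224, 224)   # stand-in for extracted video frames
    labels = torch.randint(0, 2, (8,))     # stand-in for reviewer labels
    optimizer.zero_grad()
    loss = criterion(model(frames), labels)
    loss.backward()
    optimizer.step()

As an arithmetic check on the RESULTS figures, F1 is the harmonic mean of precision and recall: 2 × 0.944 × 0.902 / (0.944 + 0.902) ≈ 0.923, consistent with the reported 92.3%.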


Bibliographic Details
Main Authors: Yao, Peter, Witte, Dan, Gimonet, Hortense, German, Alexander, Andreadis, Katerina, Cheng, Michael, Sulica, Lucian, Elemento, Olivier, Barnes, Josue, Rameau, Anaïs
Format: Online Article Text
Language: English
Published: John Wiley & Sons, Inc., 2022
Subjects: Laryngology, Speech and Language Science
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9008155/
https://www.ncbi.nlm.nih.gov/pubmed/35434326
http://dx.doi.org/10.1002/lio2.754
_version_ 1784686986223681536
author Yao, Peter
Witte, Dan
Gimonet, Hortense
German, Alexander
Andreadis, Katerina
Cheng, Michael
Sulica, Lucian
Elemento, Olivier
Barnes, Josue
Rameau, Anaïs
author_sort Yao, Peter
collection PubMed
format Online
Article
Text
id pubmed-9008155
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher John Wiley & Sons, Inc.
record_format MEDLINE/PubMed
spelling pubmed-9008155 2022-04-15 Laryngoscope Investig Otolaryngol (Laryngology, Speech and Language Science). John Wiley & Sons, Inc. 2022-02-08 /pmc/articles/PMC9008155/ /pubmed/35434326 http://dx.doi.org/10.1002/lio2.754 Text en © 2022 The Authors. Laryngoscope Investigative Otolaryngology published by Wiley Periodicals LLC on behalf of The Triological Society. This is an open access article under the terms of the Creative Commons Attribution‐NonCommercial‐NoDerivatives 4.0 License (https://creativecommons.org/licenses/by-nc-nd/4.0/), which permits use and distribution in any medium, provided the original work is properly cited, the use is non‐commercial and no modifications or adaptations are made.
title Automatic classification of informative laryngoscopic images using deep learning
topic Laryngology, Speech and Language Science