Automatic classification of informative laryngoscopic images using deep learning
Main Authors:
Format: Online Article, Text
Language: English
Published: John Wiley & Sons, Inc., 2022
Subjects:
Online Access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9008155/
https://www.ncbi.nlm.nih.gov/pubmed/35434326
http://dx.doi.org/10.1002/lio2.754
Summary:
OBJECTIVE: This study aims to develop and validate a convolutional neural network (CNN)-based algorithm for automatic selection of informative frames in flexible laryngoscopic videos. The classifier has the potential to aid in the development of computer-aided diagnosis systems and to reduce data processing time for clinician-computer scientist teams.
METHODS: A dataset of 22,132 laryngoscopic frames was extracted from 137 flexible laryngostroboscopic videos from 115 patients; 55 videos were from healthy patients with no laryngeal pathology and 82 were from patients with vocal fold polyps. The extracted frames were manually labeled as informative or uninformative by two independent reviewers based on vocal fold visibility, lighting, focus, and camera distance, yielding 18,114 informative and 4,018 uninformative frames. The dataset was split into training and test sets. A pre-trained ResNet-18 model was trained using transfer learning to classify frames as informative or uninformative, with hyperparameters set by cross-validation. The primary outcome was precision for the informative class; secondary outcomes were precision, recall, and F1-score for all classes. The frame processing rates of the model and a human annotator were also compared.
RESULTS: On a hold-out test set of 4,438 frames, the automated classifier achieved an informative-frame precision, recall, and F1-score of 94.4%, 90.2%, and 92.3%, respectively. The model processed frames 16 times faster than the human annotator.
CONCLUSION: The CNN-based classifier demonstrates high precision for classifying informative frames in flexible laryngostroboscopic videos. It has the potential to aid researchers in dataset creation for computer-aided diagnosis systems by automatically extracting relevant frames from laryngoscopic videos.
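The abstract specifies only that a pre-trained ResNet-18 was fine-tuned via transfer learning for binary frame classification. Below is a minimal PyTorch sketch of that general setup; the directory layout, image transforms, learning rate, batch size, and epoch count are illustrative assumptions, not the authors' reported values.

```python
# Sketch: transfer learning with a pre-trained ResNet-18 for binary
# classification of laryngoscopic frames (informative vs. uninformative).
# Hyperparameters and data paths below are assumptions for illustration.
import torch
import torch.nn as nn
from torchvision import models, transforms, datasets
from torch.utils.data import DataLoader

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Standard ImageNet preprocessing; the paper does not state its transforms.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical layout: frames/train/{informative,uninformative}/*.png
train_set = datasets.ImageFolder("frames/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Load ImageNet weights, then replace the final layer with a 2-class head.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed value

model.train()
for epoch in range(5):  # assumed epoch count
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Swapping in a new fully connected head while reusing pre-trained convolutional features is the standard transfer-learning recipe for small medical-imaging datasets; the abstract does not state which layers were fine-tuned or frozen.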
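For reference, the F1-score is the harmonic mean of precision and recall, and the reported informative-class results are internally consistent:

```latex
F_1 = \frac{2PR}{P + R}
    = \frac{2 \times 0.944 \times 0.902}{0.944 + 0.902}
    \approx 0.923
```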