PhacoTrainer: Deep Learning for Cataract Surgical Videos to Track Surgical Tools


Bibliographic Details
Main Authors: Yeh, Hsu-Hang, Jain, Anjal M., Fox, Olivia, Sebov, Kostya, Wang, Sophia Y.
Format: Online Article Text
Language: English
Published: The Association for Research in Vision and Ophthalmology, 2023
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10050900/
https://www.ncbi.nlm.nih.gov/pubmed/36947046
http://dx.doi.org/10.1167/tvst.12.3.23
Description
Summary:

PURPOSE: The purpose of this study was to build a deep-learning model that automatically analyzes cataract surgical videos for the locations of surgical landmarks, and to derive skill-related motion metrics.

METHODS: The locations of the pupil, limbus, and 8 classes of surgical instruments were identified by a 2-step algorithm: (1) mask segmentation and (2) landmark identification from the masks. To perform mask segmentation, we trained the YOLACT model on 1156 frames sampled from 268 videos and the public Cataract Dataset for Image Segmentation (CaDIS) dataset. Landmark identification was performed by fitting ellipses or lines to the contours of the masks and deriving locations of interest, including surgical tooltips and the pupil center. Landmark identification was evaluated by the distance between the predicted and true positions in 5853 frames of 10 phacoemulsification video clips. We derived the total path length, maximal speed, and covered area from the tip positions and examined their correlation with human-rated surgical performance.

RESULTS: The mean average precision score and intersection-over-union for mask detection were 0.78 and 0.82, respectively. The average distances between the predicted and true positions of the pupil center, phaco tip, and second instrument tip were 5.8, 9.1, and 17.1 pixels, respectively. The total path length and covered areas of these landmarks were negatively correlated with surgical performance.

CONCLUSIONS: We developed a deep-learning method to localize key anatomical structures of the eye and cataract surgical tools, which can be used to automatically derive metrics correlated with surgical skill.

TRANSLATIONAL RELEVANCE: Our system could form the basis of an automated feedback system that helps cataract surgeons evaluate their performance.
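The motion metrics named in the Methods (total path length, maximal speed, covered area) can be sketched from a sequence of predicted tip positions. The sketch below is illustrative only: the paper does not specify its exact formulas, so the use of a convex hull for "covered area", per-frame displacement scaled by frame rate for "maximal speed", and the example frame rate of 30 fps are all assumptions.

```python
import math

def path_length(points):
    """Total Euclidean path length of a tooltip trajectory (pixels)."""
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

def max_speed(points, fps):
    """Maximal per-frame displacement, scaled to pixels/second."""
    return max(math.dist(p, q) for p, q in zip(points, points[1:])) * fps

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def covered_area(points):
    """Area of the convex hull of visited positions (shoelace formula)."""
    hull = convex_hull(points)
    if len(hull) < 3:
        return 0.0
    n = len(hull)
    return 0.5 * abs(sum(hull[i][0] * hull[(i + 1) % n][1]
                         - hull[(i + 1) % n][0] * hull[i][1]
                         for i in range(n)))

# Example: a square tooltip trajectory, one position per frame
tip = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0), (0.0, 10.0), (0.0, 0.0)]
print(path_length(tip))        # 40.0
print(max_speed(tip, fps=30))  # 300.0
print(covered_area(tip))       # 100.0
```

Under this reading, a lower path length and a smaller covered area indicate more economical instrument motion, consistent with the negative correlation with rated performance reported in the Results.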