
Ready for OR or not? Human reader supplements Eyesi scoring in cataract surgical skills assessment


Bibliographic Details
Main Authors: Selvander, Madeleine; Åsman, Peter
Format: Online Article Text
Language: English
Published: Dove Medical Press 2013
Subjects: Case Series
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3794851/
https://www.ncbi.nlm.nih.gov/pubmed/24124350
http://dx.doi.org/10.2147/OPTH.S48374
_version_ 1782287275275583488
author Selvander, Madeleine
Åsman, Peter
author_facet Selvander, Madeleine
Åsman, Peter
author_sort Selvander, Madeleine
collection PubMed
description PURPOSE: To compare the internal computer-based scoring with human-based video scoring of cataract modules in the Eyesi virtual reality intraocular surgical simulator, a comparative case series was conducted at the Department of Clinical Sciences – Ophthalmology, Lund University, Skåne University Hospital, Malmö, Sweden.
METHODS: Seven cataract surgeons and 17 medical students performed one video-recorded trial with each of the capsulorhexis, hydromaneuvers, and phacoemulsification divide-and-conquer modules. For each module, the simulator calculated an overall performance score ranging from 0 to 100. Two experienced, masked cataract surgeons analyzed each video using the Objective Structured Assessment of Cataract Surgical Skill (OSACSS) for the individual modules and a modified Objective Structured Assessment of Technical Skills (OSATS) for all three modules together. The average of the two assessors’ scores for each tool was used as the video-based performance score. The ability to discriminate surgeons from naïve individuals using the simulator score and the video score, respectively, was compared using receiver operating characteristic (ROC) curves.
RESULTS: The ROC areas for the simulator score did not differ from 0.5 (random) for the hydromaneuvers and phacoemulsification modules, yielding unacceptably poor discrimination. All OSACSS video scores showed good ROC areas significantly different from 0.5. The OSACSS video score was also superior to the simulator score for the phacoemulsification procedure: ROC area 0.945 vs 0.664 (P = 0.010). Corresponding values were 0.887 vs 0.761 (P = 0.056) for capsulorhexis and 0.817 vs 0.571 (P = 0.052) for hydromaneuvers, for the video and simulator scores, respectively. The ROC area for the combined procedure was 0.938 for the OSATS video score and 0.799 for the simulator score (P = 0.072).
CONCLUSION: Video-based scoring of the phacoemulsification procedure was superior to the innate simulator scoring system in distinguishing cataract surgical skills. Simulator scoring rendered unacceptably poor discrimination for both the hydromaneuvers and the phacoemulsification divide-and-conquer modules. Our results indicate a potential for improvement in Eyesi internal computer-based scoring.
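The RESULTS above hinge on the ROC area (AUC): the probability that a randomly chosen surgeon outscores a randomly chosen novice on a given metric, where 0.5 means chance-level discrimination and 1.0 means perfect separation. A minimal sketch of how such an area can be computed from two groups of scores, using illustrative made-up numbers (not the study's data):

```python
def roc_auc(expert_scores, novice_scores):
    """ROC area for a two-group comparison: the fraction of
    (expert, novice) pairs in which the expert scores higher,
    counting ties as half. 0.5 = chance, 1.0 = perfect."""
    wins = 0.0
    for e in expert_scores:
        for n in novice_scores:
            if e > n:
                wins += 1.0
            elif e == n:
                wins += 0.5
    return wins / (len(expert_scores) * len(novice_scores))

# Hypothetical 0-100 module scores, NOT values from the study:
surgeons = [82, 75, 90, 68, 85, 79, 88]            # 7 surgeons
students = [55, 40, 72, 60, 35, 80, 50, 45, 65,    # 17 students
            58, 30, 70, 48, 62, 38, 52, 66]

print(round(roc_auc(surgeons, students), 3))
```

This pairwise-comparison form is equivalent to the Mann–Whitney U statistic divided by the product of the group sizes, which is how statistical packages typically obtain the ROC area for a single continuous score.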
format Online
Article
Text
id pubmed-3794851
institution National Center for Biotechnology Information
language English
publishDate 2013
publisher Dove Medical Press
record_format MEDLINE/PubMed
spelling pubmed-3794851 2013-10-11 Ready for OR or not? Human reader supplements Eyesi scoring in cataract surgical skills assessment Selvander, Madeleine; Åsman, Peter. Clin Ophthalmol, Case Series. Dove Medical Press 2013, published online 2013-10-03. /pmc/articles/PMC3794851/ /pubmed/24124350 http://dx.doi.org/10.2147/OPTH.S48374 Text en © 2013 Selvander and Åsman. This work is published by Dove Medical Press Ltd and licensed under the Creative Commons Attribution – Non Commercial (unported, v3.0) License. The full terms of the License are available at http://creativecommons.org/licenses/by-nc/3.0/. Non-commercial uses of the work are permitted without any further permission from Dove Medical Press Ltd, provided the work is properly attributed.
spellingShingle Case Series
Selvander, Madeleine
Åsman, Peter
Ready for OR or not? Human reader supplements Eyesi scoring in cataract surgical skills assessment
title Ready for OR or not? Human reader supplements Eyesi scoring in cataract surgical skills assessment
title_full Ready for OR or not? Human reader supplements Eyesi scoring in cataract surgical skills assessment
title_fullStr Ready for OR or not? Human reader supplements Eyesi scoring in cataract surgical skills assessment
title_full_unstemmed Ready for OR or not? Human reader supplements Eyesi scoring in cataract surgical skills assessment
title_short Ready for OR or not? Human reader supplements Eyesi scoring in cataract surgical skills assessment
title_sort ready for or or not? human reader supplements eyesi scoring in cataract surgical skills assessment
topic Case Series
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3794851/
https://www.ncbi.nlm.nih.gov/pubmed/24124350
http://dx.doi.org/10.2147/OPTH.S48374
work_keys_str_mv AT selvandermadeleine readyfororornothumanreadersupplementseyesiscoringincataractsurgicalskillsassessment
AT asmanpeter readyfororornothumanreadersupplementseyesiscoringincataractsurgicalskillsassessment