
Crowdsourced Assessment of Surgical Skill Proficiency in Cataract Surgery

Bibliographic Details
Main Authors: Paley, Grace L., Grove, Rebecca, Sekhar, Tejas C., Pruett, Jack, Stock, Michael V., Pira, Tony N., Shields, Steven M., Waxman, Evan L., Wilson, Bradley S., Gordon, Mae O., Culican, Susan M.
Format: Online Article Text
Language: English
Published: 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8217126/
https://www.ncbi.nlm.nih.gov/pubmed/33640326
http://dx.doi.org/10.1016/j.jsurg.2021.02.004
author Paley, Grace L.
Grove, Rebecca
Sekhar, Tejas C.
Pruett, Jack
Stock, Michael V.
Pira, Tony N.
Shields, Steven M.
Waxman, Evan L.
Wilson, Bradley S.
Gordon, Mae O.
Culican, Susan M.
author_facet Paley, Grace L.
Grove, Rebecca
Sekhar, Tejas C.
Pruett, Jack
Stock, Michael V.
Pira, Tony N.
Shields, Steven M.
Waxman, Evan L.
Wilson, Bradley S.
Gordon, Mae O.
Culican, Susan M.
author_sort Paley, Grace L.
collection PubMed
description OBJECTIVE: To test whether crowdsourced lay raters can accurately assess cataract surgical skills. DESIGN: Two-armed study with independent cross-sectional and longitudinal cohorts. SETTING: Washington University Department of Ophthalmology. PARTICIPANTS AND METHODS: Sixteen cataract surgeons with varying experience levels submitted cataract surgery videos to be graded by 5 experts and 300+ crowdworkers masked to surgeon experience. Cross-sectional study: 50 videos from surgeons ranging from first-year resident to attending physician, pooled by years of training. Longitudinal study: 28 videos obtained at regular intervals as residents progressed through 180 cases. Surgical skill was graded using the modified Objective Structured Assessment of Technical Skill (mOSATS). Main outcome measures were overall technical performance, reliability indices, and the correlation between expert and crowd mean scores. RESULTS: Experts demonstrated high interrater reliability and accurately predicted training level, establishing construct validity for the mOSATS. Crowd scores correlated with expert scores (r = 0.865, p < 0.0001) but were consistently higher for first-, second-, and third-year residents (p < 0.0001, paired t-test). Longer surgery duration negatively correlated with training level (r = −0.855, p < 0.0001) and expert score (r = −0.927, p < 0.0001). The longitudinal dataset reproduced the cross-sectional findings for crowd and expert comparisons. A regression equation transforming crowd score plus video length into expert score was derived from the cross-sectional dataset (r² = 0.92) and demonstrated excellent predictive performance when applied to the independent longitudinal dataset (r² = 0.80). A group of student raters who had edited the cataract videos also graded them, producing scores that more closely approximated the experts' than the crowd's.
CONCLUSIONS: Crowdsourced rankings correlated with expert scores but were not equivalent; crowd scores overestimated technical competency, especially for novice surgeons. A novel approach of adjusting crowd scores by surgery duration generated a more accurate predictive model of surgical skill. More studies are needed before crowdsourcing can reliably be used to assess surgical proficiency.
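The regression approach the abstract describes — fitting a linear model that maps crowd score plus surgery duration to expert score on one dataset, then checking its predictive fit on an independent dataset — can be sketched as follows. All data and coefficients below are synthetic placeholders; the record does not report the paper's actual model parameters, so this is only an illustration of the technique, not the study's model.

```python
import numpy as np

# Hypothetical sketch of the abstract's adjustment idea:
# expert_score ≈ a * crowd_score + b * surgery_duration + c
# The numbers below are made up for demonstration.
rng = np.random.default_rng(0)

# Synthetic "cross-sectional" training data
crowd = rng.uniform(2.5, 4.5, size=50)      # crowd mean mOSATS scores
duration = rng.uniform(10.0, 45.0, size=50)  # surgery length in minutes
expert = 1.2 * crowd - 0.04 * duration + 0.5 + rng.normal(0, 0.1, 50)

# Ordinary least-squares fit via the design matrix [crowd, duration, 1]
X = np.column_stack([crowd, duration, np.ones_like(crowd)])
coef, *_ = np.linalg.lstsq(X, expert, rcond=None)

def r_squared(y, y_hat):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

print("fitted coefficients (a, b, c):", np.round(coef, 3))
print("training R^2:", round(r_squared(expert, X @ coef), 3))
```

In the study, the same fitted equation was then applied to the independent longitudinal cohort and its R² recomputed there, which is the out-of-sample check that distinguishes predictive modeling from in-sample fit.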
format Online
Article
Text
id pubmed-8217126
institution National Center for Biotechnology Information
language English
publishDate 2021
record_format MEDLINE/PubMed
spelling pubmed-8217126 2021-07-01 Crowdsourced Assessment of Surgical Skill Proficiency in Cataract Surgery Paley, Grace L. Grove, Rebecca Sekhar, Tejas C. Pruett, Jack Stock, Michael V. Pira, Tony N. Shields, Steven M. Waxman, Evan L. Wilson, Bradley S. Gordon, Mae O. Culican, Susan M. J Surg Educ Article 2021-02-25 2021 /pmc/articles/PMC8217126/ /pubmed/33640326 http://dx.doi.org/10.1016/j.jsurg.2021.02.004 Text en This is an open access article under the CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0/).
spellingShingle Article
Paley, Grace L.
Grove, Rebecca
Sekhar, Tejas C.
Pruett, Jack
Stock, Michael V.
Pira, Tony N.
Shields, Steven M.
Waxman, Evan L.
Wilson, Bradley S.
Gordon, Mae O.
Culican, Susan M.
Crowdsourced Assessment of Surgical Skill Proficiency in Cataract Surgery
title Crowdsourced Assessment of Surgical Skill Proficiency in Cataract Surgery
title_full Crowdsourced Assessment of Surgical Skill Proficiency in Cataract Surgery
title_fullStr Crowdsourced Assessment of Surgical Skill Proficiency in Cataract Surgery
title_full_unstemmed Crowdsourced Assessment of Surgical Skill Proficiency in Cataract Surgery
title_short Crowdsourced Assessment of Surgical Skill Proficiency in Cataract Surgery
title_sort crowdsourced assessment of surgical skill proficiency in cataract surgery
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8217126/
https://www.ncbi.nlm.nih.gov/pubmed/33640326
http://dx.doi.org/10.1016/j.jsurg.2021.02.004
work_keys_str_mv AT paleygracel crowdsourcedassessmentofsurgicalskillproficiencyincataractsurgery
AT groverebecca crowdsourcedassessmentofsurgicalskillproficiencyincataractsurgery
AT sekhartejasc crowdsourcedassessmentofsurgicalskillproficiencyincataractsurgery
AT pruettjack crowdsourcedassessmentofsurgicalskillproficiencyincataractsurgery
AT stockmichaelv crowdsourcedassessmentofsurgicalskillproficiencyincataractsurgery
AT piratonyn crowdsourcedassessmentofsurgicalskillproficiencyincataractsurgery
AT shieldsstevenm crowdsourcedassessmentofsurgicalskillproficiencyincataractsurgery
AT waxmanevanl crowdsourcedassessmentofsurgicalskillproficiencyincataractsurgery
AT wilsonbradleys crowdsourcedassessmentofsurgicalskillproficiencyincataractsurgery
AT gordonmaeo crowdsourcedassessmentofsurgicalskillproficiencyincataractsurgery
AT culicansusanm crowdsourcedassessmentofsurgicalskillproficiencyincataractsurgery