
Computational Metrics Can Provide Quantitative Values to Characterize Arthroscopic Field of View

Bibliographic Details
Main Authors: Barnes, Ryan H., Golden, M. Leslie, Borland, David, Heckert, Reed, Richardson, Meghan, Creighton, R. Alexander, Spang, Jeffrey T., Kamath, Ganesh V.
Format: Online Article Text
Language: English
Published: Elsevier 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9042744/
https://www.ncbi.nlm.nih.gov/pubmed/35494292
http://dx.doi.org/10.1016/j.asmr.2021.10.017
author Barnes, Ryan H.
Golden, M. Leslie
Borland, David
Heckert, Reed
Richardson, Meghan
Creighton, R. Alexander
Spang, Jeffrey T.
Kamath, Ganesh V.
collection PubMed
description PURPOSE: The purpose of this study was to determine the inter-rater reliability of arthroscopic video quality ratings, determine the correlation between surgeon ratings and computational image metrics, and facilitate a quantitative methodology for assessing video quality.
METHODS: Five orthopaedic surgeons reviewed 60 clips from deidentified arthroscopic shoulder videos and rated each on a four-point Likert scale from poor to excellent view. The videos were randomized, and the process was completed a total of three times. The ratings were averaged to provide a single user rating per clip. Each video frame was processed to calculate brightness, local contrast, redness (used to represent bleeding), and image entropy. Each metric was then averaged over all frames of a video clip, providing four image quality metrics per clip.
RESULTS: Inter-rater reliability for grading video quality had an intraclass correlation of .974. Improved image quality rating was positively correlated with increased entropy (.8142; P < .001), contrast (.8013; P < .001), and brightness (.6120; P < .001), and negatively correlated with redness (−.8626; P < .001). A multiple linear regression model was calculated with the image metrics used as predictors of the image quality rating, with an R-squared value of .775 and a root mean square error of .42.
CONCLUSIONS: Our study demonstrates strong inter-rater reliability between surgeons when describing image quality and strong correlations between image quality and the computed image metrics. A model based on these metrics enables automatic quantification of image quality.
CLINICAL RELEVANCE: Video quality during arthroscopic cases can impact the ease and duration of the case, which could contribute to swelling and complication risk. This pilot study provides a quantitative method to assess video quality. Future work can objectively determine factors that affect visualization during arthroscopy and identify options for improvement.
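The description above specifies how four per-frame image metrics (brightness, local contrast, redness, and image entropy) are computed and then averaged over every frame of a clip. The record does not include the authors' implementation; the sketch below is a minimal, illustrative Python version using OpenCV and NumPy, and the specific formulations (the local-contrast window size, the redness definition, and 256-bin histogram entropy) are assumptions rather than the paper's exact method.

```python
# Illustrative sketch only; metric definitions (window size, redness
# formulation, histogram bins) are assumptions, not the authors' code.
import cv2
import numpy as np

def frame_metrics(frame_bgr, window=15):
    """Brightness, local contrast, redness, and entropy for one BGR frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0

    # Brightness: mean gray-level intensity in [0, 1].
    brightness = float(gray.mean())

    # Local contrast: mean local standard deviation over a sliding window
    # (assumed definition; the abstract only says "local contrast").
    mean = cv2.blur(gray, (window, window))
    mean_sq = cv2.blur(gray * gray, (window, window))
    contrast = float(np.sqrt(np.clip(mean_sq - mean * mean, 0, None)).mean())

    # Redness (proxy for bleeding): mean excess of the red channel over the
    # average of the green and blue channels (assumed formulation).
    b, g, r = cv2.split(frame_bgr.astype(np.float32) / 255.0)
    redness = float(np.clip(r - (g + b) / 2.0, 0, None).mean())

    # Entropy: Shannon entropy of the 256-bin gray-level histogram.
    hist = cv2.calcHist([(gray * 255).astype(np.uint8)], [0], None, [256], [0, 256]).ravel()
    p = hist / hist.sum()
    entropy = float(-(p[p > 0] * np.log2(p[p > 0])).sum())

    return brightness, contrast, redness, entropy

def clip_metrics(video_path):
    """Average the four per-frame metrics over all frames of one clip."""
    cap = cv2.VideoCapture(video_path)
    per_frame = []
    ok, frame = cap.read()
    while ok:
        per_frame.append(frame_metrics(frame))
        ok, frame = cap.read()
    cap.release()
    # Returns [brightness, contrast, redness, entropy] for the clip.
    return np.mean(per_frame, axis=0)
```

The reported multiple linear regression (R-squared .775, RMSE .42) uses these clip-level metrics as predictors of the averaged surgeon rating. A plausible way to reproduce that kind of model, again an assumption about tooling rather than the authors' pipeline, is an ordinary least-squares fit such as scikit-learn's LinearRegression:

```python
# Illustrative sketch: X holds one row per clip
# ([brightness, contrast, redness, entropy]); y holds the averaged
# surgeon rating per clip (four-point Likert scale).
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_quality_model(X, y):
    model = LinearRegression().fit(X, y)
    pred = model.predict(X)
    r2 = model.score(X, y)  # coefficient of determination
    rmse = float(np.sqrt(np.mean((np.asarray(y) - pred) ** 2)))  # root mean square error
    return model, r2, rmse
```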
format Online
Article
Text
id pubmed-9042744
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher Elsevier
record_format MEDLINE/PubMed
spelling pubmed-9042744 2022-04-28 Computational Metrics Can Provide Quantitative Values to Characterize Arthroscopic Field of View. Arthrosc Sports Med Rehabil, Original Article. Elsevier, 2021-12-07. /pmc/articles/PMC9042744/ /pubmed/35494292 http://dx.doi.org/10.1016/j.asmr.2021.10.017 Text en © 2021 The Authors. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
title Computational Metrics Can Provide Quantitative Values to Characterize Arthroscopic Field of View
topic Original Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9042744/
https://www.ncbi.nlm.nih.gov/pubmed/35494292
http://dx.doi.org/10.1016/j.asmr.2021.10.017