
A quantitative definition of scaphoid union: determining the inter-rater reliability of two techniques

BACKGROUND: Despite extensive literature supporting the use of computerized tomography (CT) scans in evaluating scaphoid fractures, there has been no consensus on the methodology for defining and quantifying union. The purpose of this study was to test the inter-observer reliability of two methods of quantifying scaphoid union.

METHODS: The CT scans of 50 non-operatively treated scaphoid fractures were reviewed by four blinded observers. Each was asked to classify union into one of three categories (united, partially united, or tenuously united) based on their general impression. Each reviewer then carefully analyzed each CT slice and quantified union using two methods: the mean percentage union and the weighted mean percentage union. The estimated percentage of scaphoid union for each scan was recorded, and inter-observer reliability for both methods was assessed using a Bland-Altman plot to calculate the 95% limits of agreement. The kappa statistic was used to measure the degree of agreement for the categorical assessment of union.

RESULTS: There was very little difference between the percentages of union calculated by the two methods (mean difference 1.2 ± 4.1%), with each reviewer demonstrating excellent agreement between the two methods on the Bland-Altman plot. The kappa score indicated very good agreement (κ = 0.80) between the consultant hand surgeon and the musculoskeletal radiologist, and good agreement (κ = 0.62) between the consultant hand surgeon and the hand fellow for the categorical assessment of union.

CONCLUSIONS: This study describes two methods of quantifying and defining scaphoid union, both with high inter-rater reliability. This indicates that either method can be used reliably, making it an important tool both for clinical use and for research in future studies of scaphoid fractures, particularly those using union or time to union as their endpoint.

LEVEL OF EVIDENCE: Diagnostic, level III
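
The abstract does not spell out the arithmetic behind the two quantification methods, so the sketch below is only an illustration, not the authors' implementation. It assumes each CT slice crossing the fracture is scored as a percentage of bony bridging, that the mean percentage union is the simple average of those per-slice scores, and that the weighted mean percentage union weights each slice by a hypothetical factor (here, the share of the fracture surface the slice represents); the weighting scheme actually used by the authors is not given in this record.

```python
def mean_percentage_union(slice_scores):
    """Simple average of per-slice union percentages (0-100)."""
    return sum(slice_scores) / len(slice_scores)


def weighted_mean_percentage_union(slice_scores, slice_weights):
    """Weighted average of per-slice union percentages.

    The weights here are hypothetical (e.g. the share of the fracture
    surface each slice represents); this record does not specify the
    weighting the authors used.
    """
    weighted_sum = sum(s * w for s, w in zip(slice_scores, slice_weights))
    return weighted_sum / sum(slice_weights)


# Illustrative values for five CT slices through the fracture plane
scores = [80, 70, 60, 90, 100]            # percent bridging per slice
weights = [0.10, 0.25, 0.30, 0.25, 0.10]  # hypothetical per-slice weights
print(mean_percentage_union(scores))                    # 80.0
print(weighted_mean_percentage_union(scores, weights))  # 76.0
```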

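For the reliability statistics named in the abstract, the Bland-Altman 95% limits of agreement and Cohen's kappa, the following is a minimal sketch of the standard textbook formulas in plain Python. It is not the authors' analysis code, and the example data are invented purely for illustration.

```python
import math


def bland_altman_limits(a, b):
    """95% limits of agreement between paired measurements from two methods
    (or two raters): mean difference +/- 1.96 * SD of the differences."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    mean_diff = sum(diffs) / n
    sd = math.sqrt(sum((d - mean_diff) ** 2 for d in diffs) / (n - 1))
    return mean_diff - 1.96 * sd, mean_diff + 1.96 * sd


def cohens_kappa(rater1, rater2, categories):
    """Cohen's kappa for two raters' categorical calls
    (e.g. united / partially united / tenuously united)."""
    n = len(rater1)
    p_observed = sum(1 for x, y in zip(rater1, rater2) if x == y) / n
    p_expected = sum(
        (rater1.count(c) / n) * (rater2.count(c) / n) for c in categories
    )
    return (p_observed - p_expected) / (1 - p_expected)


# Invented example: per-scan union percentages from the two quantification
# methods, and categorical union calls from two raters
method_a = [75, 80, 90, 100, 60]
method_b = [70, 85, 88, 100, 65]
print(bland_altman_limits(method_a, method_b))

cats = ["united", "partially united", "tenuously united"]
r1 = ["united", "united", "partially united", "tenuously united", "united"]
r2 = ["united", "partially united", "partially united", "united", "united"]
print(cohens_kappa(r1, r2, cats))
```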

Bibliographic Details
Main Authors: Grewal, Ruby; Frakash, Uri; Osman, Said; McMurtry, Robert Y
Format: Online Article (Text)
Language: English
Published: BioMed Central, 2013
Subjects: Research Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3765287/
https://www.ncbi.nlm.nih.gov/pubmed/23961919
http://dx.doi.org/10.1186/1749-799X-8-28

Collection: PubMed
Record ID: pubmed-3765287 (MEDLINE/PubMed record format)
Institution: National Center for Biotechnology Information
Journal: J Orthop Surg Res
Published online: 2013-08-21 (BioMed Central)
License: Copyright © 2013 Grewal et al.; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.