Poor reliability and reproducibility of 3 different radiographical classification systems for distal ulna fractures
BACKGROUND AND PURPOSE: Classification of fractures can be valuable for research purposes but also in clinical work. Especially with rare fractures, such as distal ulna fractures, a treatment algorithm based on a classification can be helpful. We compared 3 different classification systems of distal...
Main Authors: | MOLONEY, Maria; KÅREDAL, Jan; PERSSON, Tomas; FARNEBO, Simon; ADOLFSSON, Lars |
Format: | Online Article Text |
Language: | English |
Published: | Medical Journals Sweden, on behalf of the Nordic Orthopedic Federation, 2022 |
Subjects: | |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9016747/ https://www.ncbi.nlm.nih.gov/pubmed/35438183 http://dx.doi.org/10.2340/17453674.2022.2509 |
_version_ | 1784688593406525440 |
author | MOLONEY, Maria KÅREDAL, Jan PERSSON, Tomas FARNEBO, Simon ADOLFSSON, Lars |
author_sort | MOLONEY, Maria |
collection | PubMed |
description | BACKGROUND AND PURPOSE: Classification of fractures can be valuable for research purposes but also in clinical work. Especially with rare fractures, such as distal ulna fractures, a treatment algorithm based on a classification can be helpful. We compared 3 different classification systems of distal ulna fractures and investigated their reliability and reproducibility. PATIENTS AND METHODS: Patients with 97 fractures of the distal ulna, excluding the ulnar styloid, were included. All fractures were independently classified by 3 observers according to the classification by Biyani, AO/OTA 2007, and AO/OTA 2018. The classification process was repeated after a minimum of 3 weeks. We used kappa analysis to determine inter- and intra-rater agreement. RESULTS: The inter-rater agreement of the AO/OTA 2007 classification was judged as fair, κ 0.40, whereas the agreement of AO/OTA 2018 and Biyani was moderate, at κ 0.42 and 0.43, respectively. The intra-rater agreement was judged as moderate for all classifications. INTERPRETATION: The differences between the classifications were small, and the overall impression was that none of them was good enough to be of substantial clinical value. The Biyani classification, developed specifically for distal ulna fractures, was the easiest and most fitting for the fracture patterns seen in our material, but lacked options for fractures of the distal diaphysis. Standard radiographs were considered insufficient for an accurate classification. A better radiographic method combined with a revised classification might improve accuracy, reliability, and reproducibility. |
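The abstract above reports agreement as kappa values mapped to verbal bands ("fair", "moderate"). As an illustration only, the sketch below computes an unweighted Cohen's kappa for two raters and maps it to the commonly used Landis-Koch bands; the function names, labels, and data are invented for this example, and the study's own statistic for its 3 observers may have been computed differently (e.g. Fleiss' kappa or averaged pairwise kappas).

```python
# Minimal sketch: unweighted Cohen's kappa for two raters plus the
# Landis-Koch verbal bands. All names and data here are hypothetical,
# not taken from the paper.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa for two equal-length label sequences."""
    n = len(rater_a)
    assert n == len(rater_b) and n > 0
    # Observed agreement: share of cases both raters labelled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n)
              for c in set(rater_a) | set(rater_b))
    return (p_o - p_e) / (1 - p_e)

def landis_koch(kappa):
    """Common verbal interpretation of kappa (Landis & Koch 1977)."""
    bands = [(0.00, "poor"), (0.20, "slight"), (0.40, "fair"),
             (0.60, "moderate"), (0.80, "substantial"), (1.00, "almost perfect")]
    return next(label for upper, label in bands if kappa <= upper)

# Hypothetical example: two observers classifying 6 fractures.
obs1 = ["I", "II", "II", "III", "IV", "I"]
obs2 = ["I", "II", "III", "III", "IV", "II"]
k = cohens_kappa(obs1, obs2)
print(f"kappa = {k:.2f} ({landis_koch(k)})")   # kappa = 0.56 (moderate)
```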
format | Online Article Text |
id | pubmed-9016747 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Medical Journals Sweden, on behalf of the Nordic Orthopedic Federation |
record_format | MEDLINE/PubMed |
spelling | pubmed-9016747 2022-04-20 Poor reliability and reproducibility of 3 different radiographical classification systems for distal ulna fractures MOLONEY, Maria KÅREDAL, Jan PERSSON, Tomas FARNEBO, Simon ADOLFSSON, Lars Acta Orthop Article BACKGROUND AND PURPOSE: Classification of fractures can be valuable for research purposes but also in clinical work. Especially with rare fractures, such as distal ulna fractures, a treatment algorithm based on a classification can be helpful. We compared 3 different classification systems of distal ulna fractures and investigated their reliability and reproducibility. PATIENTS AND METHODS: Patients with 97 fractures of the distal ulna, excluding the ulnar styloid, were included. All fractures were independently classified by 3 observers according to the classification by Biyani, AO/OTA 2007, and AO/OTA 2018. The classification process was repeated after a minimum of 3 weeks. We used kappa analysis to determine inter- and intra-rater agreement. RESULTS: The inter-rater agreement of the AO/OTA 2007 classification was judged as fair, κ 0.40, whereas the agreement of AO/OTA 2018 and Biyani was moderate, at κ 0.42 and 0.43, respectively. The intra-rater agreement was judged as moderate for all classifications. INTERPRETATION: The differences between the classifications were small, and the overall impression was that none of them was good enough to be of substantial clinical value. The Biyani classification, developed specifically for distal ulna fractures, was the easiest and most fitting for the fracture patterns seen in our material, but lacked options for fractures of the distal diaphysis. Standard radiographs were considered insufficient for an accurate classification. A better radiographic method combined with a revised classification might improve accuracy, reliability, and reproducibility. Medical Journals Sweden, on behalf of the Nordic Orthopedic Federation 2022-04-18 /pmc/articles/PMC9016747/ /pubmed/35438183 http://dx.doi.org/10.2340/17453674.2022.2509 Text en © 2022 The Author(s) https://creativecommons.org/licenses/by-nc/4.0/ This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (https://creativecommons.org/licenses/by-nc/4.0/), allowing third parties to copy and redistribute the material in any medium or format and to remix, transform, and build upon the material for non-commercial purposes, provided proper attribution to the original work. |
title | Poor reliability and reproducibility of 3 different radiographical classification systems for distal ulna fractures |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9016747/ https://www.ncbi.nlm.nih.gov/pubmed/35438183 http://dx.doi.org/10.2340/17453674.2022.2509 |