
Reproducibility of the Lauge-Hansen, Danis-Weber, and AO classifications for ankle fractures


Bibliographic Details
Main Authors: Fonseca, Lucas Lopes da, Nunes, Icaro Gusmão, Nogueira, Rodrigo Reis, Martins, Gustavo Eduardo Vieira, Mesencio, Antônio Cesar, Kobata, Sílvia Iovine
Format: Online Article Text
Language: English
Published: Elsevier 2017
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5771788/
https://www.ncbi.nlm.nih.gov/pubmed/29367914
http://dx.doi.org/10.1016/j.rboe.2017.11.013
Description
Summary: OBJECTIVE: This study evaluated the reproducibility of the three main classifications of ankle fractures most commonly used in emergency clinical practice: Lauge-Hansen, Danis-Weber, and AO-OTA. The secondary objective was to assess whether the level of professional experience influenced the interobserver agreement for the classification of this pathology.

METHODS: The study included 83 digitized preoperative radiographic images of ankle fractures, in anteroposterior and lateral views, of different adults, all sustained between January and December 2013. For the sample calculation, the estimated accuracy was approximately 15%, with a sampling error of 5% and a sampling power of 80%. The images were analyzed and classified by six different observers: two foot and ankle surgeons, two general orthopedic surgeons, and two second-year residents in orthopedics and traumatology. The kappa statistic for multiple observers was used to assess the variations.

RESULTS: With the Danis-Weber classification, 40% of the agreements among all observers were good or excellent, whereas only 20% of good or excellent agreements were obtained with the AO and Lauge-Hansen classifications. The kappa index was 0.49 for the Danis-Weber classification, 0.32 for Lauge-Hansen, and 0.38 for AO.

CONCLUSION: The Lauge-Hansen classification presented the poorest interobserver agreement among the three systems. The AO classification demonstrated moderate agreement, and the Danis-Weber classification presented the best interobserver agreement index, regardless of professional experience.
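For context, the agreement categories cited above (fair, moderate, good, excellent) are conventionally derived from the kappa coefficient. The abstract does not state which kappa variant or interpretation scale the authors applied, so the multi-rater form and the Landis-Koch thresholds sketched below are assumptions offered only as a reading aid, not details taken from the study:

\[
\kappa = \frac{\bar{P} - \bar{P}_e}{1 - \bar{P}_e}
\]

where \(\bar{P}\) is the mean observed proportion of agreement between pairs of observers over all radiographs and \(\bar{P}_e\) is the proportion of agreement expected by chance. Under the Landis-Koch scale (0.21-0.40 fair, 0.41-0.60 moderate, 0.61-0.80 substantial, 0.81-1.00 almost perfect), the reported values of 0.49 (Danis-Weber), 0.38 (AO), and 0.32 (Lauge-Hansen) would all fall in the fair-to-moderate range, with Danis-Weber the highest of the three.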