Evaluation of the Reliability and Reproducibility of the Roussouly Classification for Lumbar Lordosis Types

Bibliographic Details
Main Authors: Yamazato, Camila Oda; Ribeiro, Gustavo; Paula, Fabio Chaud de; Soares, Ramon Oliveira; Cruz, Paulo Santa; Kanas, Michel
Format: Online Article Text
Language: English
Published: Thieme Revinter Publicações Ltda., 2021
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9142261/
https://www.ncbi.nlm.nih.gov/pubmed/35652032
http://dx.doi.org/10.1055/s-0041-1729581
Description
Summary:
Objective: The present study aims to determine the intra- and inter-rater reliability and reproducibility of the Roussouly classification for lumbar lordosis types.
Methods: A database of 104 panoramic lateral radiographs of the spine of male individuals aged between 18 and 40 years was used. Six examiners with different expertise levels measured spinopelvic angles and classified lordosis types according to the Roussouly classification using the Surgimap software (Nemaris Inc., New York, NY, USA). After a 1-month interval, the measurements were repeated, and intra- and inter-rater agreement were calculated using the Fleiss kappa test.
Results: The study revealed positive evidence regarding the reproducibility of the Roussouly classification, with reasonable to virtually perfect (0.307–0.827) intra-rater agreement and reasonable (0.369) to moderate (0.43) inter-rater agreement according to the Fleiss kappa test. The most experienced examiners showed greater inter-rater agreement, ranging from moderate (0.439) to substantial (0.619).
Conclusion: The Roussouly classification demonstrated good reliability and reproducibility, with intra- and inter-rater agreement at least reasonable, reaching substantial to virtually perfect levels in some situations. Evaluators with the highest expertise levels showed greater intra- and inter-rater agreement.
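
For readers who want to reproduce this kind of agreement analysis, the sketch below shows how a Fleiss kappa can be computed in Python with the statsmodels library; the ratings matrix and variable names are hypothetical placeholders, not the study's measurements. The verbal labels in the abstract (reasonable/fair, moderate, substantial, virtually perfect) appear to follow the Landis and Koch interpretation bands for kappa values.

```python
# A minimal sketch of the inter-rater agreement analysis described in the
# abstract, assuming ratings are stored as one row per radiograph and one
# column per examiner. The ratings matrix below is randomly generated for
# illustration only; it is not the study's data.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

rng = np.random.default_rng(0)
# Hypothetical ratings: 104 radiographs rated by 6 examiners,
# each cell a Roussouly lordosis type (1-4).
ratings = rng.integers(1, 5, size=(104, 6))

# aggregate_raters converts the subject-by-rater label matrix into the
# subject-by-category count table that fleiss_kappa expects.
table, categories = aggregate_raters(ratings)
kappa = fleiss_kappa(table, method="fleiss")
print(f"Fleiss kappa: {kappa:.3f}")
```

For the intra-rater comparison, the same computation can be applied per examiner, treating that examiner's two measurement rounds as the two "raters".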