Inter- and Intra-Observer Agreement When Using a Diagnostic Labeling Scheme for Annotating Findings on Chest X-rays—An Early Step in the Development of a Deep Learning-Based Decision Support System

Bibliographic Details
Main Authors: Li, Dana, Pehrson, Lea Marie, Tøttrup, Lea, Fraccaro, Marco, Bonnevie, Rasmus, Thrane, Jakob, Sørensen, Peter Jagd, Rykkje, Alexander, Andersen, Tobias Thostrup, Steglich-Arnholm, Henrik, Stærk, Dorte Marianne Rohde, Borgwardt, Lotte, Hansen, Kristoffer Lindskov, Darkner, Sune, Carlsen, Jonathan Frederik, Nielsen, Michael Bachmann
Format: Online Article Text
Language: English
Published: MDPI 2022
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9776917/
https://www.ncbi.nlm.nih.gov/pubmed/36553118
http://dx.doi.org/10.3390/diagnostics12123112
Summary: Consistent annotation of data is a prerequisite for the successful training and testing of artificial intelligence-based decision support systems in radiology. This can be obtained by standardizing terminology when annotating diagnostic images. The purpose of this study was to evaluate the annotation consistency among radiologists when using a novel diagnostic labeling scheme for chest X-rays. Six radiologists, with experience ranging from one to sixteen years, annotated a set of 100 fully anonymized chest X-rays. The blinded radiologists annotated on two separate occasions. Statistical analyses were performed using Randolph's kappa and the prevalence-adjusted bias-adjusted kappa (PABAK), and the proportions of specific agreement were calculated. Fair-to-excellent agreement was found for all labels among the annotators (Randolph's kappa, 0.40–0.99). PABAK ranged from 0.12 to 1 for two-reader inter-rater agreement and from 0.26 to 1 for intra-rater agreement. Descriptive and broad labels achieved the highest proportion of positive agreement in both the inter- and intra-reader analyses. Annotating findings with specific, interpretive labels was found to be difficult for less experienced radiologists. Annotating images with descriptive labels may increase agreement between radiologists with different experience levels compared with annotation using interpretive labels.
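
For readers unfamiliar with the statistics named in the summary, the following is a minimal Python sketch of how they are computed for a single binary (present/absent) label. It assumes the standard definitions (PABAK = 2*p_o - 1 for two readers; Randolph's free-marginal multirater kappa; proportion of specific positive agreement = 2a / (2a + b + c)); the function names and toy data are illustrative and not taken from the paper.

def observed_agreement(pairs):
    """Proportion of images on which two readers give the same binary label."""
    return sum(a == b for a, b in pairs) / len(pairs)

def pabak(pairs):
    """Prevalence- and bias-adjusted kappa for two readers: 2 * p_o - 1."""
    return 2 * observed_agreement(pairs) - 1

def positive_agreement(pairs):
    """Proportion of specific positive agreement: 2a / (2a + b + c)."""
    a = sum(x == 1 and y == 1 for x, y in pairs)  # both readers mark the finding
    b = sum(x == 1 and y == 0 for x, y in pairs)  # reader 1 only
    c = sum(x == 0 and y == 1 for x, y in pairs)  # reader 2 only
    return 2 * a / (2 * a + b + c)

def randolph_kappa(ratings, q=2):
    """Randolph's free-marginal multirater kappa.

    ratings: one list per image holding every reader's category for that image.
    q: number of categories (2 for a present/absent label).
    """
    n = len(ratings[0])  # readers per image
    # Mean pairwise agreement across images
    p_bar = sum(
        sum(row.count(cat) * (row.count(cat) - 1) for cat in set(row))
        / (n * (n - 1))
        for row in ratings
    ) / len(ratings)
    return (p_bar - 1 / q) / (1 - 1 / q)

# Toy example: 6 readers labeling 3 images for one finding
ratings = [[1, 1, 1, 0, 1, 1], [0, 0, 0, 0, 0, 0], [1, 0, 1, 1, 0, 1]]
print(randolph_kappa(ratings))            # ~0.42, in the "fair" range

pairs = [(1, 1), (0, 0), (1, 0), (1, 1)]  # one reader pair across 4 images
print(pabak(pairs), positive_agreement(pairs))  # 0.5, 0.8

Unlike Cohen's kappa, both PABAK and Randolph's kappa avoid the prevalence paradox for rare findings, which is presumably why the study reports them alongside proportions of specific agreement.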