Variability of Grading DR Screening Images among Non-Trained Retina Specialists
Main Authors: | Grzybowski, Andrzej; Brona, Piotr; Krzywicki, Tomasz; Gaca-Wysocka, Magdalena; Berlińska, Arleta; Święch, Anna |
Format: | Online Article Text |
Language: | English |
Published: | MDPI, 2022 |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9180965/ https://www.ncbi.nlm.nih.gov/pubmed/35683522 http://dx.doi.org/10.3390/jcm11113125 |
_version_ | 1784723650240315392 |
author | Grzybowski, Andrzej Brona, Piotr Krzywicki, Tomasz Gaca-Wysocka, Magdalena Berlińska, Arleta Święch, Anna |
author_facet | Grzybowski, Andrzej Brona, Piotr Krzywicki, Tomasz Gaca-Wysocka, Magdalena Berlińska, Arleta Święch, Anna |
author_sort | Grzybowski, Andrzej |
collection | PubMed |
description | Poland has never had a widespread diabetic retinopathy (DR) screening program and consequently has no purpose-trained graders and no established grader training scheme. Herein, we compare the performance and variability of three retinal specialists with no additional DR grading training in assessing images from 335 real-life screening encounters and contrast their performance against IDx-DR, a US Food and Drug Administration (FDA)-approved DR screening suite. A total of 1501 fundus images from 670 eyes were assessed by each grader, with a final grade assigned on a per-eye level. Unanimous agreement between all graders was achieved for 385 eyes and 110 patients, of which 98% had a final grade of no DR. Thirty-six patients had final grades higher than mild DR, of which only two had no grader disagreements regarding severity. A total of 28 eyes underwent adjudication due to complete grader disagreement. Four patients had discordant grades ranging from no DR to severe DR between the human graders and IDx-DR. Retina specialists achieved kappa scores of 0.52, 0.78, and 0.61. Retina specialists had relatively high grader variability and only modest concordance with IDx-DR results. Focused training and verification are recommended for any potential DR graders before assessing DR screening images. |
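The kappa scores reported above are Cohen's kappa, a chance-corrected agreement statistic between two raters. As an illustrative sketch only (the `cohens_kappa` helper and the grade lists below are hypothetical examples, not the study's data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items given identical grades.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal grade frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical DR grades (0 = no DR ... 4 = proliferative) from two graders.
grader_1 = [0, 0, 1, 2, 0, 3, 0, 1, 0, 0]
grader_2 = [0, 0, 1, 1, 0, 3, 0, 2, 0, 1]
print(round(cohens_kappa(grader_1, grader_2), 2))  # → 0.52
```

A kappa of 1.0 means perfect agreement, 0 means agreement no better than chance; the study's scores of 0.52 to 0.78 correspond to moderate-to-substantial agreement on common interpretation scales.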
format | Online Article Text |
id | pubmed-9180965 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-9180965 2022-06-10 Variability of Grading DR Screening Images among Non-Trained Retina Specialists Grzybowski, Andrzej Brona, Piotr Krzywicki, Tomasz Gaca-Wysocka, Magdalena Berlińska, Arleta Święch, Anna J Clin Med Article Poland has never had a widespread diabetic retinopathy (DR) screening program and consequently has no purpose-trained graders and no established grader training scheme. Herein, we compare the performance and variability of three retinal specialists with no additional DR grading training in assessing images from 335 real-life screening encounters and contrast their performance against IDx-DR, a US Food and Drug Administration (FDA)-approved DR screening suite. A total of 1501 fundus images from 670 eyes were assessed by each grader, with a final grade assigned on a per-eye level. Unanimous agreement between all graders was achieved for 385 eyes and 110 patients, of which 98% had a final grade of no DR. Thirty-six patients had final grades higher than mild DR, of which only two had no grader disagreements regarding severity. A total of 28 eyes underwent adjudication due to complete grader disagreement. Four patients had discordant grades ranging from no DR to severe DR between the human graders and IDx-DR. Retina specialists achieved kappa scores of 0.52, 0.78, and 0.61. Retina specialists had relatively high grader variability and only modest concordance with IDx-DR results. Focused training and verification are recommended for any potential DR graders before assessing DR screening images. MDPI 2022-05-31 /pmc/articles/PMC9180965/ /pubmed/35683522 http://dx.doi.org/10.3390/jcm11113125 Text en © 2022 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Grzybowski, Andrzej Brona, Piotr Krzywicki, Tomasz Gaca-Wysocka, Magdalena Berlińska, Arleta Święch, Anna Variability of Grading DR Screening Images among Non-Trained Retina Specialists |
title | Variability of Grading DR Screening Images among Non-Trained Retina Specialists |
title_full | Variability of Grading DR Screening Images among Non-Trained Retina Specialists |
title_fullStr | Variability of Grading DR Screening Images among Non-Trained Retina Specialists |
title_full_unstemmed | Variability of Grading DR Screening Images among Non-Trained Retina Specialists |
title_short | Variability of Grading DR Screening Images among Non-Trained Retina Specialists |
title_sort | variability of grading dr screening images among non-trained retina specialists |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9180965/ https://www.ncbi.nlm.nih.gov/pubmed/35683522 http://dx.doi.org/10.3390/jcm11113125 |
work_keys_str_mv | AT grzybowskiandrzej variabilityofgradingdrscreeningimagesamongnontrainedretinaspecialists AT bronapiotr variabilityofgradingdrscreeningimagesamongnontrainedretinaspecialists AT krzywickitomasz variabilityofgradingdrscreeningimagesamongnontrainedretinaspecialists AT gacawysockamagdalena variabilityofgradingdrscreeningimagesamongnontrainedretinaspecialists AT berlinskaarleta variabilityofgradingdrscreeningimagesamongnontrainedretinaspecialists AT swiechanna variabilityofgradingdrscreeningimagesamongnontrainedretinaspecialists |