
Addressing Artificial Intelligence Bias in Retinal Diagnostics


Bibliographic Details
Main authors: Burlina, Philippe; Joshi, Neil; Paul, William; Pacheco, Katia D.; Bressler, Neil M.
Format: Online Article Text
Language: English
Published: The Association for Research in Vision and Ophthalmology, 2021
Subjects:
Online access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7884292/
https://www.ncbi.nlm.nih.gov/pubmed/34003898
http://dx.doi.org/10.1167/tvst.10.2.13
collection PubMed
description PURPOSE: This study evaluated generative methods to potentially mitigate artificial intelligence (AI) bias in diagnosing diabetic retinopathy (DR) that results from training data imbalance or domain generalization, which occurs when deep learning systems (DLSs) face concepts at test/inference time they were not initially trained on.
METHODS: The public domain Kaggle EyePACS dataset (88,692 fundi and 44,346 individuals, originally diverse for ethnicity) was modified by adding clinician-annotated labels and by constructing an artificial scenario of data imbalance and domain generalization: training (but not testing) exemplars were disallowed for images of retinas with DR warranting referral (DR-referable) from darker-skin individuals, who presumably have a greater concentration of melanin within uveal melanocytes, on average, contributing to retinal image pigmentation. A traditional/baseline diagnostic DLS was compared against new DLSs that use training data augmented via generative models for debiasing.
RESULTS: Accuracy (95% confidence intervals [CIs]) of the baseline diagnostic DLS was 73.0% (66.9% to 79.2%) for fundus images of lighter-skin individuals versus 60.5% (53.5% to 67.3%) for darker-skin individuals, demonstrating bias/disparity (delta = 12.5%; Welch t-test t = 2.670, P = 0.008) in AI performance across protected subpopulations. Novel generative methods addressing the missing subpopulation training data (DR-referable darker-skin) instead achieved accuracy of 72.0% (65.8% to 78.2%) for lighter-skin and 71.5% (65.2% to 77.8%) for darker-skin individuals, demonstrating closer parity (delta = 0.5%) in accuracy across subpopulations (Welch t-test t = 0.111, P = 0.912).
CONCLUSIONS: The findings illustrate how data imbalance and domain generalization can lead to disparity of accuracy across subpopulations, and show that novel generative methods of synthetic fundus images may play a role in debiasing AI.
TRANSLATIONAL RELEVANCE: These new AI methods have possible applications for addressing potential AI bias in DR diagnostics arising from fundus pigmentation, and potentially in other ophthalmic DLSs as well.
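The disparity reported in the abstract rests on a Welch two-sample t-test comparing per-image classification correctness (a Bernoulli outcome) between subpopulations. A minimal sketch of that computation, assuming hypothetical equal group sizes of 200 images per subpopulation (the record does not state the paper's actual test counts):

```python
import math

def welch_t(mean1, var1, n1, mean2, var2, n2):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom
    for two samples with unequal variances."""
    se1, se2 = var1 / n1, var2 / n2
    t = (mean1 - mean2) / math.sqrt(se1 + se2)
    df = (se1 + se2) ** 2 / (se1 ** 2 / (n1 - 1) + se2 ** 2 / (n2 - 1))
    return t, df

def bernoulli_sample_var(p, n):
    # Unbiased sample variance of n Bernoulli(p) outcomes
    # (per-image correct/incorrect indicators).
    return p * (1 - p) * n / (n - 1)

n = 200  # hypothetical images per subpopulation, for illustration only
t, df = welch_t(0.730, bernoulli_sample_var(0.730, n), n,
                0.605, bernoulli_sample_var(0.605, n), n)
```

With the baseline accuracies of 73.0% versus 60.5%, the statistic quantifies how unlikely a 12.5-point accuracy gap would be under equal underlying performance; the actual t and P values depend on the real group sizes, which this sketch only assumes.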
format Online Article Text
id pubmed-7884292
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher The Association for Research in Vision and Ophthalmology
record_format MEDLINE/PubMed
spelling pubmed-7884292 2021-02-22 Addressing Artificial Intelligence Bias in Retinal Diagnostics Burlina, Philippe; Joshi, Neil; Paul, William; Pacheco, Katia D.; Bressler, Neil M. Transl Vis Sci Technol Article The Association for Research in Vision and Ophthalmology 2021-02-11 /pmc/articles/PMC7884292/ /pubmed/34003898 http://dx.doi.org/10.1167/tvst.10.2.13 Text en Copyright 2021 The Authors http://creativecommons.org/licenses/by-nc-nd/4.0/ This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
title Addressing Artificial Intelligence Bias in Retinal Diagnostics
topic Article