Racial Equity in Healthcare Machine Learning: Illustrating Bias in Models With Minimal Bias Mitigation
Main Authors: | Barton, Michael; Hamza, Mahmoud; Guevel, Borna |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Cureus, 2023 |
Subjects: | Public Health |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10023594/ https://www.ncbi.nlm.nih.gov/pubmed/36942183 http://dx.doi.org/10.7759/cureus.35037 |
_version_ | 1784908916495220736 |
---|---|
author | Barton, Michael Hamza, Mahmoud Guevel, Borna |
author_facet | Barton, Michael Hamza, Mahmoud Guevel, Borna |
author_sort | Barton, Michael |
collection | PubMed |
description | Background and objective While the potential of machine learning (ML) in healthcare to positively impact human health continues to grow, the potential for inequity in these methods must be assessed. In this study, we aimed to evaluate the presence of racial bias when five of the most common ML algorithms are used to create models with minimal processing to reduce racial bias. Methods By utilizing a CDC public database, we constructed models for the prediction of healthcare access (binary variable). Using area under the curve (AUC) as our performance metric, we calculated race-specific performance comparisons for each ML algorithm. We bootstrapped our entire analysis 20 times to produce confidence intervals for our AUC performance metrics. Results With the exception of only a few cases, we found that the performance for the White group was, in general, significantly higher than that of the other racial groups across all ML algorithms. Additionally, we found that the most accurate algorithm in our modeling was Extreme Gradient Boosting (XGBoost) followed by random forest, naive Bayes, support vector machine (SVM), and k-nearest neighbors (KNN). Conclusion Our study illustrates the predictive perils of incorporating minimal racial bias mitigation in ML models, resulting in predictive disparities by race. This is particularly concerning in the setting of evidence for limited bias mitigation in healthcare-related ML. There needs to be more conversation, research, and guidelines surrounding methods for racial bias assessment and mitigation in healthcare-related ML models, both those currently used and those in development. |
format | Online Article Text |
id | pubmed-10023594 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Cureus |
record_format | MEDLINE/PubMed |
spelling | pubmed-100235942023-03-19 Racial Equity in Healthcare Machine Learning: Illustrating Bias in Models With Minimal Bias Mitigation Barton, Michael Hamza, Mahmoud Guevel, Borna Cureus Public Health Background and objective While the potential of machine learning (ML) in healthcare to positively impact human health continues to grow, the potential for inequity in these methods must be assessed. In this study, we aimed to evaluate the presence of racial bias when five of the most common ML algorithms are used to create models with minimal processing to reduce racial bias. Methods By utilizing a CDC public database, we constructed models for the prediction of healthcare access (binary variable). Using area under the curve (AUC) as our performance metric, we calculated race-specific performance comparisons for each ML algorithm. We bootstrapped our entire analysis 20 times to produce confidence intervals for our AUC performance metrics. Results With the exception of only a few cases, we found that the performance for the White group was, in general, significantly higher than that of the other racial groups across all ML algorithms. Additionally, we found that the most accurate algorithm in our modeling was Extreme Gradient Boosting (XGBoost) followed by random forest, naive Bayes, support vector machine (SVM), and k-nearest neighbors (KNN). Conclusion Our study illustrates the predictive perils of incorporating minimal racial bias mitigation in ML models, resulting in predictive disparities by race. This is particularly concerning in the setting of evidence for limited bias mitigation in healthcare-related ML. There needs to be more conversation, research, and guidelines surrounding methods for racial bias assessment and mitigation in healthcare-related ML models, both those currently used and those in development. Cureus 2023-02-15 /pmc/articles/PMC10023594/ /pubmed/36942183 http://dx.doi.org/10.7759/cureus.35037 Text en Copyright © 2023, Barton et al. https://creativecommons.org/licenses/by/3.0/This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. |
spellingShingle | Public Health Barton, Michael Hamza, Mahmoud Guevel, Borna Racial Equity in Healthcare Machine Learning: Illustrating Bias in Models With Minimal Bias Mitigation |
title | Racial Equity in Healthcare Machine Learning: Illustrating Bias in Models With Minimal Bias Mitigation |
title_full | Racial Equity in Healthcare Machine Learning: Illustrating Bias in Models With Minimal Bias Mitigation |
title_fullStr | Racial Equity in Healthcare Machine Learning: Illustrating Bias in Models With Minimal Bias Mitigation |
title_full_unstemmed | Racial Equity in Healthcare Machine Learning: Illustrating Bias in Models With Minimal Bias Mitigation |
title_short | Racial Equity in Healthcare Machine Learning: Illustrating Bias in Models With Minimal Bias Mitigation |
title_sort | racial equity in healthcare machine learning: illustrating bias in models with minimal bias mitigation |
topic | Public Health |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10023594/ https://www.ncbi.nlm.nih.gov/pubmed/36942183 http://dx.doi.org/10.7759/cureus.35037 |
work_keys_str_mv | AT bartonmichael racialequityinhealthcaremachinelearningillustratingbiasinmodelswithminimalbiasmitigation AT hamzamahmoud racialequityinhealthcaremachinelearningillustratingbiasinmodelswithminimalbiasmitigation AT guevelborna racialequityinhealthcaremachinelearningillustratingbiasinmodelswithminimalbiasmitigation |
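The analysis summarized in the description field above (five classifiers, race-specific AUC comparisons, and 20 bootstrap iterations for confidence intervals) can be illustrated with a minimal Python sketch. The article does not publish its code, so the dataset layout, column names (`features`, `outcome`, `race_col`), resampling scheme, train/test split, and model settings below are assumptions made purely for illustration, not the authors' implementation.

```python
# Hypothetical sketch of the race-stratified AUC comparison described in the
# abstract. The CDC dataset, column names, preprocessing, and hyperparameters
# are not specified there, so everything below is an illustrative assumption.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier  # assumes the xgboost package is installed


def bootstrap_race_auc(df, features, outcome, race_col, n_boot=20, seed=0):
    """Refit each model on a bootstrap resample and record per-race test AUC."""
    models = {
        "XGBoost": lambda: XGBClassifier(eval_metric="logloss"),
        "Random forest": lambda: RandomForestClassifier(),
        "Naive Bayes": lambda: GaussianNB(),
        "SVM": lambda: SVC(probability=True),
        "KNN": lambda: KNeighborsClassifier(),
    }
    rng = np.random.default_rng(seed)
    records = []
    for b in range(n_boot):
        # Resample the full dataset with replacement, then split for evaluation.
        boot_seed = int(rng.integers(2**31 - 1))
        sample = df.sample(n=len(df), replace=True, random_state=boot_seed)
        train, test = train_test_split(sample, test_size=0.3, random_state=b)
        for name, make_model in models.items():
            model = make_model()
            # Features are assumed to be numeric / already encoded.
            model.fit(train[features], train[outcome])
            # Score each racial group separately on the held-out data.
            for race, grp in test.groupby(race_col):
                if grp[outcome].nunique() < 2:
                    continue  # AUC is undefined for a single-class group
                scores = model.predict_proba(grp[features])[:, 1]
                auc = roc_auc_score(grp[outcome], scores)
                records.append(
                    {"bootstrap": b, "model": name, "race": race, "auc": auc}
                )
    results = pd.DataFrame(records)
    # 95% percentile intervals of per-race AUC across bootstrap iterations.
    return (
        results.groupby(["model", "race"])["auc"]
        .quantile([0.025, 0.5, 0.975])
        .unstack()
    )
```

Under these assumptions, a call such as `bootstrap_race_auc(df, feature_cols, "has_access", "race")` (with hypothetical column names) would return per-model, per-race AUC medians with 95% percentile intervals, analogous to the race-specific performance comparisons the abstract describes.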