
Gender Bias in Artificial Intelligence: Severity Prediction at an Early Stage of COVID-19

Bibliographic Details
Main Authors: Chung, Heewon, Park, Chul, Kang, Wu Seong, Lee, Jinseok
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2021
Subjects: Physiology
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8667070/
https://www.ncbi.nlm.nih.gov/pubmed/34912242
http://dx.doi.org/10.3389/fphys.2021.778720
_version_ 1784614323437436928
author Chung, Heewon
Park, Chul
Kang, Wu Seong
Lee, Jinseok
author_facet Chung, Heewon
Park, Chul
Kang, Wu Seong
Lee, Jinseok
author_sort Chung, Heewon
collection PubMed
description Artificial intelligence (AI) technologies have been applied in various medical domains to predict patient outcomes with high accuracy. As AI becomes more widely adopted, the problem of model bias is increasingly apparent. In this study, we investigate the model bias that can occur when a model is trained on data from only one gender, and we aim to present new insights into the bias issue. For the investigation, we considered an AI model that predicts severity at an early stage based on the medical records of coronavirus disease (COVID-19) patients. For 5,601 confirmed COVID-19 patients, we used 37 medical-record features, namely, basic patient information, physical indices, initial examination findings, clinical findings, comorbidities, and general blood test results at an early stage. To investigate gender-based AI model bias, we trained and evaluated two separate models: one trained using only the male group, and the other using only the female group. When the model trained on the male-group data was applied to the female testing data, overall performance decreased: sensitivity from 0.93 to 0.86, specificity from 0.92 to 0.86, accuracy from 0.92 to 0.86, balanced accuracy from 0.93 to 0.86, and area under the curve (AUC) from 0.97 to 0.94. Similarly, when the model trained on the female-group data was applied to the male testing data, overall performance again decreased: sensitivity from 0.97 to 0.90, specificity from 0.96 to 0.91, accuracy from 0.96 to 0.91, balanced accuracy from 0.96 to 0.90, and AUC from 0.97 to 0.95. Furthermore, when we evaluated each gender-dependent model on test data from the same gender used for training, the resulting accuracy was still lower than that of the unbiased model.
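The abstract describes a cross-group evaluation protocol: train a severity classifier on one gender's records, then score it on held-out data from both genders using sensitivity, specificity, accuracy, balanced accuracy, and AUC. The sketch below illustrates that protocol under stated assumptions: a scikit-learn workflow, a logistic-regression stand-in for the authors' model, and synthetic 37-feature data in place of the 5,601-patient dataset, none of which reproduce the study's actual pipeline.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (accuracy_score, balanced_accuracy_score,
                             confusion_matrix, roc_auc_score)
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Synthetic stand-in for one gender group's 37 early-stage features;
    # the severity label is loosely tied to the first feature, for
    # illustration only (not the study's real data).
    X = rng.normal(shift, 1.0, size=(n, 37))
    y = (X[:, 0] + rng.normal(0.0, 1.0, n) > shift).astype(int)
    return X, y

def evaluate(model, X, y):
    # Compute the five metrics reported in the abstract for one test set.
    pred = model.predict(X)
    tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": accuracy_score(y, pred),
        "balanced_accuracy": balanced_accuracy_score(y, pred),
        "auc": roc_auc_score(y, model.predict_proba(X)[:, 1]),
    }

# Two groups with deliberately different feature distributions, so a
# single-gender model degrades on the other group's data.
X_m, y_m = make_group(1000, shift=0.5)
X_f, y_f = make_group(1000, shift=-0.5)
Xm_tr, Xm_te, ym_tr, ym_te = train_test_split(X_m, y_m, test_size=0.3, random_state=0)
Xf_tr, Xf_te, yf_tr, yf_te = train_test_split(X_f, y_f, test_size=0.3, random_state=0)

# Train on the male group only, then test on both genders' held-out data.
model_m = LogisticRegression(max_iter=1000).fit(Xm_tr, ym_tr)
print("same-gender test: ", evaluate(model_m, Xm_te, ym_te))
print("cross-gender test:", evaluate(model_m, Xf_te, yf_te))

The opposite direction is symmetric: fit on Xf_tr and score on Xm_te and Xf_te.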
format Online
Article
Text
id pubmed-8667070
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-8667070 2021-12-14 Gender Bias in Artificial Intelligence: Severity Prediction at an Early Stage of COVID-19 Chung, Heewon Park, Chul Kang, Wu Seong Lee, Jinseok Front Physiol Physiology Frontiers Media S.A. 2021-11-29 /pmc/articles/PMC8667070/ /pubmed/34912242 http://dx.doi.org/10.3389/fphys.2021.778720 Text en Copyright © 2021 Chung, Park, Kang and Lee. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Physiology
Chung, Heewon
Park, Chul
Kang, Wu Seong
Lee, Jinseok
Gender Bias in Artificial Intelligence: Severity Prediction at an Early Stage of COVID-19
title Gender Bias in Artificial Intelligence: Severity Prediction at an Early Stage of COVID-19
title_full Gender Bias in Artificial Intelligence: Severity Prediction at an Early Stage of COVID-19
title_fullStr Gender Bias in Artificial Intelligence: Severity Prediction at an Early Stage of COVID-19
title_full_unstemmed Gender Bias in Artificial Intelligence: Severity Prediction at an Early Stage of COVID-19
title_short Gender Bias in Artificial Intelligence: Severity Prediction at an Early Stage of COVID-19
title_sort gender bias in artificial intelligence: severity prediction at an early stage of covid-19
topic Physiology
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8667070/
https://www.ncbi.nlm.nih.gov/pubmed/34912242
http://dx.doi.org/10.3389/fphys.2021.778720
work_keys_str_mv AT chungheewon genderbiasinartificialintelligenceseveritypredictionatanearlystageofcovid19
AT parkchul genderbiasinartificialintelligenceseveritypredictionatanearlystageofcovid19
AT kangwuseong genderbiasinartificialintelligenceseveritypredictionatanearlystageofcovid19
AT leejinseok genderbiasinartificialintelligenceseveritypredictionatanearlystageofcovid19