Mitigating the impact of biased artificial intelligence in emergency decision-making

BACKGROUND: Prior research has shown that artificial intelligence (AI) systems often encode biases against minority subgroups. However, little work has focused on ways to mitigate the harm discriminatory algorithms can cause in high-stakes settings such as medicine. METHODS: In this study, we experimentally evaluated the impact biased AI recommendations have on emergency decisions, where participants respond to mental health crises by calling for either medical or police assistance. We recruited 438 clinicians and 516 non-experts to participate in our web-based experiment. We evaluated participant decision-making with and without advice from biased and unbiased AI systems. We also varied the style of the AI advice, framing it either as prescriptive recommendations or descriptive flags. RESULTS: Participant decisions are unbiased without AI advice. However, both clinicians and non-experts are influenced by prescriptive recommendations from a biased algorithm, choosing police help more often in emergencies involving African-American or Muslim men. Crucially, using descriptive flags rather than prescriptive recommendations allows respondents to retain their original, unbiased decision-making. CONCLUSIONS: Our work demonstrates the practical danger of using biased models in health contexts, and suggests that appropriately framing decision support can mitigate the effects of AI bias. These findings must be carefully considered in the many real-world clinical scenarios where inaccurate or biased models may be used to inform important decisions.

Bibliographic Details
Main Authors: Adam, Hammaad; Balagopalan, Aparna; Alsentzer, Emily; Christia, Fotini; Ghassemi, Marzyeh
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK, 2022-11-21
Journal: Commun Med (Lond)
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9681767/
https://www.ncbi.nlm.nih.gov/pubmed/36414774
http://dx.doi.org/10.1038/s43856-022-00214-4
Collection: PubMed
Record ID: pubmed-9681767
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Copyright: © The Author(s) 2022. Open access under the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).