
Considerations for addressing bias in artificial intelligence for health equity

Health equity is a primary goal of healthcare stakeholders: patients and their advocacy groups, clinicians, other providers and their professional societies, bioethicists, payors and value based care organizations, regulatory agencies, legislators, and creators of artificial intelligence/machine learning (AI/ML)-enabled medical devices. Lack of equitable access to diagnosis and treatment may be improved through new digital health technologies, especially AI/ML, but these may also exacerbate disparities, depending on how bias is addressed. We propose an expanded Total Product Lifecycle (TPLC) framework for healthcare AI/ML, describing the sources and impacts of undesirable bias in AI/ML systems in each phase, how these can be analyzed using appropriate metrics, and how they can be potentially mitigated. The goal of these “Considerations” is to educate stakeholders on how potential AI/ML bias may impact healthcare outcomes and how to identify and mitigate inequities; to initiate a discussion between stakeholders on these issues, in order to ensure health equity along the expanded AI/ML TPLC framework, and ultimately, better health outcomes for all.


Bibliographic Details
Main Authors: Abràmoff, Michael D., Tarver, Michelle E., Loyo-Berrios, Nilsa, Trujillo, Sylvia, Char, Danton, Obermeyer, Ziad, Eydelman, Malvina B., Maisel, William H.
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK, 2023-09-12
Journal: NPJ Digit Med
Subjects: Perspective
Rights: © The Author(s) 2023. Open Access under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10497548/
https://www.ncbi.nlm.nih.gov/pubmed/37700029
http://dx.doi.org/10.1038/s41746-023-00913-9