
A translational perspective towards clinical AI fairness

Artificial intelligence (AI) has demonstrated the ability to extract insights from data, but the fairness of such data-driven insights remains a concern in high-stakes fields. Despite extensive developments, issues of AI fairness in clinical contexts have not been adequately addressed. A fair model is normally expected to perform equally across subgroups defined by sensitive variables (e.g., age, gender/sex, race/ethnicity, socio-economic status, etc.). Various fairness measurements have been developed to detect differences between subgroups as evidence of bias, and bias mitigation methods are designed to reduce the differences detected. This perspective of fairness, however, is misaligned with some key considerations in clinical contexts. The set of sensitive variables used in healthcare applications must be carefully examined for relevance and justified by clear clinical motivations. In addition, clinical AI fairness should closely investigate the ethical implications of fairness measurements (e.g., potential conflicts between group- and individual-level fairness) to select suitable and objective metrics. Generally defining AI fairness as “equality” is not necessarily reasonable in clinical settings, as differences may have clinical justifications and do not indicate biases. Instead, “equity” would be an appropriate objective of clinical AI fairness. Moreover, clinical feedback is essential to developing fair and well-performing AI models, and efforts should be made to actively involve clinicians in the process. The adaptation of AI fairness towards healthcare is not self-evident due to misalignments between technical developments and clinical considerations. Multidisciplinary collaboration between AI researchers, clinicians, and ethicists is necessary to bridge the gap and translate AI fairness into real-life benefits.


Bibliographic Details
Main Authors: Liu, Mingxuan, Ning, Yilin, Teixayavong, Salinelat, Mertens, Mayli, Xu, Jie, Ting, Daniel Shu Wei, Cheng, Lionel Tim-Ee, Ong, Jasmine Chiat Ling, Teo, Zhen Ling, Tan, Ting Fang, RaviChandran, Narrendar, Wang, Fei, Celi, Leo Anthony, Ong, Marcus Eng Hock, Liu, Nan
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10502051/
https://www.ncbi.nlm.nih.gov/pubmed/37709945
http://dx.doi.org/10.1038/s41746-023-00918-4
author Liu, Mingxuan
Ning, Yilin
Teixayavong, Salinelat
Mertens, Mayli
Xu, Jie
Ting, Daniel Shu Wei
Cheng, Lionel Tim-Ee
Ong, Jasmine Chiat Ling
Teo, Zhen Ling
Tan, Ting Fang
RaviChandran, Narrendar
Wang, Fei
Celi, Leo Anthony
Ong, Marcus Eng Hock
Liu, Nan
collection PubMed
description Artificial intelligence (AI) has demonstrated the ability to extract insights from data, but the fairness of such data-driven insights remains a concern in high-stakes fields. Despite extensive developments, issues of AI fairness in clinical contexts have not been adequately addressed. A fair model is normally expected to perform equally across subgroups defined by sensitive variables (e.g., age, gender/sex, race/ethnicity, socio-economic status, etc.). Various fairness measurements have been developed to detect differences between subgroups as evidence of bias, and bias mitigation methods are designed to reduce the differences detected. This perspective of fairness, however, is misaligned with some key considerations in clinical contexts. The set of sensitive variables used in healthcare applications must be carefully examined for relevance and justified by clear clinical motivations. In addition, clinical AI fairness should closely investigate the ethical implications of fairness measurements (e.g., potential conflicts between group- and individual-level fairness) to select suitable and objective metrics. Generally defining AI fairness as “equality” is not necessarily reasonable in clinical settings, as differences may have clinical justifications and do not indicate biases. Instead, “equity” would be an appropriate objective of clinical AI fairness. Moreover, clinical feedback is essential to developing fair and well-performing AI models, and efforts should be made to actively involve clinicians in the process. The adaptation of AI fairness towards healthcare is not self-evident due to misalignments between technical developments and clinical considerations. Multidisciplinary collaboration between AI researchers, clinicians, and ethicists is necessary to bridge the gap and translate AI fairness into real-life benefits.
format Online
Article
Text
id pubmed-10502051
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Nature Publishing Group UK
record_format MEDLINE/PubMed
spelling pubmed-105020512023-09-16 A translational perspective towards clinical AI fairness NPJ Digit Med Perspective Nature Publishing Group UK 2023-09-14 /pmc/articles/PMC10502051/ /pubmed/37709945 http://dx.doi.org/10.1038/s41746-023-00918-4 Text en © The Author(s) 2023 Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).
title A translational perspective towards clinical AI fairness
topic Perspective
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10502051/
https://www.ncbi.nlm.nih.gov/pubmed/37709945
http://dx.doi.org/10.1038/s41746-023-00918-4