Not in my AI: Moral engagement and disengagement in health care AI development

Machine learning predictive analytics (MLPA) are utilized increasingly in health care, but can pose harms to patients, clinicians, health systems, and the public. The dynamic nature of this technology creates unique challenges to evaluating safety and efficacy and minimizing harms. In response, regulators have proposed an approach that would shift more responsibility to MLPA developers for mitigating potential harms. To be effective, this approach requires MLPA developers to recognize, accept, and act on responsibility for mitigating harms. In interviews of 40 MLPA developers of health care applications in the United States, we found that a subset of ML developers made statements reflecting moral disengagement, representing several different potential rationales that could create distance between personal accountability and harms. However, we also found a different subset of ML developers who expressed recognition of their role in creating potential hazards, the moral weight of their design decisions, and a sense of responsibility for mitigating harms. We also found evidence of moral conflict and uncertainty about responsibility for averting harms as an individual developer working in a company. These findings suggest possible facilitators and barriers to the development of ethical ML that could act through encouragement of moral engagement or discouragement of moral disengagement. Regulatory approaches that depend on the ability of ML developers to recognize, accept, and act on responsibility for mitigating harms might have limited success without education and guidance for ML developers about the extent of their responsibilities and how to implement them.

Bibliographic Details
Main Authors: Nichol, Ariadne A., Halley, Meghan C., Federico, Carole A., Cho, Mildred K., Sankar, Pamela L.
Format: Online Article Text
Language: English
Published: 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9782696/
https://www.ncbi.nlm.nih.gov/pubmed/36541003
_version_ 1784857404667592704
author Nichol, Ariadne A.
Halley, Meghan C.
Federico, Carole A.
Cho, Mildred K.
Sankar, Pamela L.
author_facet Nichol, Ariadne A.
Halley, Meghan C.
Federico, Carole A.
Cho, Mildred K.
Sankar, Pamela L.
author_sort Nichol, Ariadne A.
collection PubMed
description Machine learning predictive analytics (MLPA) are utilized increasingly in health care, but can pose harms to patients, clinicians, health systems, and the public. The dynamic nature of this technology creates unique challenges to evaluating safety and efficacy and minimizing harms. In response, regulators have proposed an approach that would shift more responsibility to MLPA developers for mitigating potential harms. To be effective, this approach requires MLPA developers to recognize, accept, and act on responsibility for mitigating harms. In interviews of 40 MLPA developers of health care applications in the United States, we found that a subset of ML developers made statements reflecting moral disengagement, representing several different potential rationales that could create distance between personal accountability and harms. However, we also found a different subset of ML developers who expressed recognition of their role in creating potential hazards, the moral weight of their design decisions, and a sense of responsibility for mitigating harms. We also found evidence of moral conflict and uncertainty about responsibility for averting harms as an individual developer working in a company. These findings suggest possible facilitators and barriers to the development of ethical ML that could act through encouragement of moral engagement or discouragement of moral disengagement. Regulatory approaches that depend on the ability of ML developers to recognize, accept, and act on responsibility for mitigating harms might have limited success without education and guidance for ML developers about the extent of their responsibilities and how to implement them.
format Online
Article
Text
id pubmed-9782696
institution National Center for Biotechnology Information
language English
publishDate 2023
record_format MEDLINE/PubMed
spelling pubmed-9782696 2023-01-01 Not in my AI: Moral engagement and disengagement in health care AI development Nichol, Ariadne A. Halley, Meghan C. Federico, Carole A. Cho, Mildred K. Sankar, Pamela L. Pac Symp Biocomput Article Machine learning predictive analytics (MLPA) are utilized increasingly in health care, but can pose harms to patients, clinicians, health systems, and the public. The dynamic nature of this technology creates unique challenges to evaluating safety and efficacy and minimizing harms. In response, regulators have proposed an approach that would shift more responsibility to MLPA developers for mitigating potential harms. To be effective, this approach requires MLPA developers to recognize, accept, and act on responsibility for mitigating harms. In interviews of 40 MLPA developers of health care applications in the United States, we found that a subset of ML developers made statements reflecting moral disengagement, representing several different potential rationales that could create distance between personal accountability and harms. However, we also found a different subset of ML developers who expressed recognition of their role in creating potential hazards, the moral weight of their design decisions, and a sense of responsibility for mitigating harms. We also found evidence of moral conflict and uncertainty about responsibility for averting harms as an individual developer working in a company. These findings suggest possible facilitators and barriers to the development of ethical ML that could act through encouragement of moral engagement or discouragement of moral disengagement. Regulatory approaches that depend on the ability of ML developers to recognize, accept, and act on responsibility for mitigating harms might have limited success without education and guidance for ML developers about the extent of their responsibilities and how to implement them. 2023 /pmc/articles/PMC9782696/ /pubmed/36541003 Text en https://creativecommons.org/licenses/by-nc/4.0/ Open Access chapter published by World Scientific Publishing Company and distributed under the terms of the Creative Commons Attribution Non-Commercial (CC BY-NC) 4.0 License.
spellingShingle Article
Nichol, Ariadne A.
Halley, Meghan C.
Federico, Carole A.
Cho, Mildred K.
Sankar, Pamela L.
Not in my AI: Moral engagement and disengagement in health care AI development
title Not in my AI: Moral engagement and disengagement in health care AI development
title_full Not in my AI: Moral engagement and disengagement in health care AI development
title_fullStr Not in my AI: Moral engagement and disengagement in health care AI development
title_full_unstemmed Not in my AI: Moral engagement and disengagement in health care AI development
title_short Not in my AI: Moral engagement and disengagement in health care AI development
title_sort not in my ai: moral engagement and disengagement in health care ai development
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9782696/
https://www.ncbi.nlm.nih.gov/pubmed/36541003
work_keys_str_mv AT nicholariadnea notinmyaimoralengagementanddisengagementinhealthcareaidevelopment
AT halleymeghanc notinmyaimoralengagementanddisengagementinhealthcareaidevelopment
AT federicocarolea notinmyaimoralengagementanddisengagementinhealthcareaidevelopment
AT chomildredk notinmyaimoralengagementanddisengagementinhealthcareaidevelopment
AT sankarpamelal notinmyaimoralengagementanddisengagementinhealthcareaidevelopment