
Developer Perspectives on Potential Harms of Machine Learning Predictive Analytics in Health Care: Qualitative Analysis


Bibliographic Details
Main authors: Nichol, Ariadne A, Sankar, Pamela L, Halley, Meghan C, Federico, Carole A, Cho, Mildred K
Format: Online Article Text
Language: English
Published: JMIR Publications 2023
Subjects:
Online access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10690528/
https://www.ncbi.nlm.nih.gov/pubmed/37971798
http://dx.doi.org/10.2196/47609
author Nichol, Ariadne A
Sankar, Pamela L
Halley, Meghan C
Federico, Carole A
Cho, Mildred K
collection PubMed
description BACKGROUND: Machine learning predictive analytics (MLPA) is increasingly used in health care to reduce costs and improve efficacy; it also has the potential to harm patients and trust in health care. Academic and regulatory leaders have proposed a variety of principles and guidelines to address the challenges of evaluating the safety of machine learning–based software in the health care context, but accepted practices do not yet exist. However, there appears to be a shift toward process-based regulatory paradigms that rely heavily on self-regulation. At the same time, little research has examined the perspectives of MLPA developers themselves about these harms, although their role will be essential in overcoming the “principles-to-practice” gap. OBJECTIVE: The objective of this study was to understand how developers of health care MLPA products perceived the potential harms of those products and how they responded to recognized harms. METHODS: We interviewed 40 individuals who were developing MLPA tools for health care at 15 US-based organizations, including data scientists, software engineers, and those in mid- and high-level management roles. These 15 organizations were selected to represent a range of organizational types and sizes from the 106 that we had previously identified. We asked developers about their perspectives on the potential harms of their work, the factors that influence these harms, and their role in mitigating them. We used standard qualitative analysis of transcribed interviews to identify themes in the data. RESULTS: We found that MLPA developers recognized a range of potential harms of MLPA to individuals, social groups, and the health care system, such as issues of privacy, bias, and system disruption. They also identified drivers of these harms, both related to the characteristics of machine learning and specific to the health care and commercial contexts in which the products are developed. MLPA developers also described strategies to respond to these drivers and potentially mitigate the harms. Opportunities included balancing algorithm performance goals against potential harms, emphasizing iterative integration of health care expertise, and fostering shared company values. However, their recognition of their own responsibility to address potential harms varied widely. CONCLUSIONS: Even though MLPA developers recognized that their products can harm patients, the public, and even health systems, robust procedures for assessing the potential for harms and the need for mitigation do not exist. Our findings suggest that, to the extent that new oversight paradigms rely on self-regulation, they will face serious challenges if harms are driven by features that developers consider inescapable in health care and business environments. Furthermore, effective self-regulation will require MLPA developers to accept responsibility for safety and efficacy and to know how to act accordingly. Our results suggest that, at the very least, substantial education will be necessary to fill the “principles-to-practice” gap.
format Online Article Text
id pubmed-10690528
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher JMIR Publications
record_format MEDLINE/PubMed
spelling pubmed-10690528 2023-12-02 J Med Internet Res Original Paper JMIR Publications 2023-11-16 /pmc/articles/PMC10690528/ /pubmed/37971798 http://dx.doi.org/10.2196/47609 Text en ©Ariadne A Nichol, Pamela L Sankar, Meghan C Halley, Carole A Federico, Mildred K Cho. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 16.11.2023.
https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.
title Developer Perspectives on Potential Harms of Machine Learning Predictive Analytics in Health Care: Qualitative Analysis
topic Original Paper