Population Preferences for Performance and Explainability of Artificial Intelligence in Health Care: Choice-Based Conjoint Survey


Bibliographic Details
Main authors: Ploug, Thomas; Sundby, Anna; Moeslund, Thomas B; Holm, Søren
Format: Online Article Text
Language: English
Published: JMIR Publications 2021
Subjects:
Online access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8713089/
https://www.ncbi.nlm.nih.gov/pubmed/34898454
http://dx.doi.org/10.2196/26611
author Ploug, Thomas
Sundby, Anna
Moeslund, Thomas B
Holm, Søren
collection PubMed
description
BACKGROUND: Certain types of artificial intelligence (AI), that is, deep learning models, can outperform health care professionals in particular domains. Such models hold considerable promise for improved diagnostics, treatment, and prevention, as well as more cost-efficient health care. They are, however, opaque in the sense that their exact reasoning cannot be fully explicated. Different stakeholders have emphasized the importance of the transparency/explainability of AI decision making. Transparency/explainability may come at the cost of performance. There is a need for a public policy regulating the use of AI in health care that balances the societal interests in high performance as well as in transparency/explainability. A public policy should consider the wider public’s interests in such features of AI.
OBJECTIVE: This study elicited the public’s preferences for the performance and explainability of AI decision making in health care and determined whether these preferences depend on respondent characteristics, including trust in health and technology and fears and hopes regarding AI.
METHODS: We conducted a choice-based conjoint survey of public preferences for attributes of AI decision making in health care in a representative sample of the adult Danish population. Initial focus group interviews yielded 6 attributes playing a role in the respondents’ views on the use of AI decision support in health care: (1) type of AI decision, (2) level of explanation, (3) performance/accuracy, (4) responsibility for the final decision, (5) possibility of discrimination, and (6) severity of the disease to which the AI is applied. In total, 100 unique choice sets were developed using fractional factorial design. In a 12-task survey, respondents were asked about their preference for AI system use in hospitals in relation to 3 different scenarios.
RESULTS: Of the 1678 potential respondents, 1027 (61.2%) participated. The respondents consider the physician having the final responsibility for treatment decisions the most important attribute, with 46.8% of the total weight of attributes, followed by explainability of the decision (27.3%) and whether the system has been tested for discrimination (14.8%). Other factors, such as gender, age, level of education, whether respondents live rurally or in towns, respondents’ trust in health and technology, and respondents’ fears and hopes regarding AI, do not play a significant role in the majority of cases.
CONCLUSIONS: The 3 factors that are most important to the public are, in descending order of importance, (1) that physicians are ultimately responsible for diagnostics and treatment planning, (2) that the AI decision support is explainable, and (3) that the AI system has been tested for discrimination. Public policy on AI system use in health care should give priority to such AI system use and ensure that patients are provided with information.
format Online Article Text
id pubmed-8713089
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher JMIR Publications
record_format MEDLINE/PubMed
spelling pubmed-8713089 2022-01-14 Population Preferences for Performance and Explainability of Artificial Intelligence in Health Care: Choice-Based Conjoint Survey Ploug, Thomas; Sundby, Anna; Moeslund, Thomas B; Holm, Søren. J Med Internet Res, Original Paper. JMIR Publications 2021-12-13 /pmc/articles/PMC8713089/ /pubmed/34898454 http://dx.doi.org/10.2196/26611 Text en ©Thomas Ploug, Anna Sundby, Thomas B Moeslund, Søren Holm. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 13.12.2021. This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.
title Population Preferences for Performance and Explainability of Artificial Intelligence in Health Care: Choice-Based Conjoint Survey
topic Original Paper
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8713089/
https://www.ncbi.nlm.nih.gov/pubmed/34898454
http://dx.doi.org/10.2196/26611