
Patients’ Perceptions Toward Human–Artificial Intelligence Interaction in Health Care: Experimental Study


Bibliographic Details
Main Authors: Esmaeilzadeh, Pouyan, Mirzaei, Tala, Dharanikota, Spurthy
Format: Online Article Text
Language: English
Published: JMIR Publications 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8663518/
https://www.ncbi.nlm.nih.gov/pubmed/34842535
http://dx.doi.org/10.2196/25856
_version_ 1784613655745134592
author Esmaeilzadeh, Pouyan
Mirzaei, Tala
Dharanikota, Spurthy
author_facet Esmaeilzadeh, Pouyan
Mirzaei, Tala
Dharanikota, Spurthy
author_sort Esmaeilzadeh, Pouyan
collection PubMed
description BACKGROUND: Artificial intelligence (AI) is expected to become an integral part of health care services in the near future and to be incorporated into several aspects of clinical care, such as prognosis, diagnostics, and care planning. Thus, many technology companies have invested in producing AI clinical applications. Patients are among the most important beneficiaries who will potentially interact with these technologies and applications; thus, patients’ perceptions may affect the widespread adoption of clinical AI. Patients need to be assured that AI clinical applications will not harm them and that they will instead benefit from using AI technology for health care purposes. Although human-AI interaction can enhance health care outcomes, the possible dimensions of concern and risk should be addressed before AI is integrated into routine clinical care.
OBJECTIVE: The main objective of this study was to examine how potential users (patients) perceive the benefits, risks, and use of AI clinical applications for their health care purposes, and how their perceptions may differ across three health care service encounter scenarios.
METHODS: We designed a 2×3 experiment that crossed type of health condition (ie, acute or chronic) with three types of clinical encounter between patients and physicians (ie, AI clinical applications as substituting technology, AI clinical applications as augmenting technology, and no AI as a traditional in-person visit). We used an online survey to collect data from 634 individuals in the United States.
RESULTS: The interactions between the types of health care service encounters and health conditions significantly influenced individuals’ perceptions of privacy concerns, trust issues, communication barriers, concerns about transparency in regulatory standards, liability risks, benefits, and intention to use across the six scenarios. We found no significant differences among scenarios in perceptions of performance risk and social biases.
CONCLUSIONS: The results imply that incompatibility with instrumental, technical, ethical, or regulatory values can be a reason for rejecting AI applications in health care. Various risks thus remain associated with implementing AI applications in diagnostics and treatment recommendations for patients with both acute and chronic illnesses. The concerns are also evident when AI applications are used as recommendation systems under physicians’ experience, wisdom, and control. Prior to the widespread rollout of AI, more studies are needed to identify the challenges that may raise concerns about implementing and using AI applications. This study could provide researchers and managers with critical insights into the determinants of individuals’ intention to use AI clinical applications. Regulatory agencies should establish normative standards and evaluation guidelines for implementing AI in health care in cooperation with health care institutions. Regular audits and ongoing monitoring and reporting systems can be used to continuously evaluate the safety, quality, transparency, and ethical factors of AI clinical applications.
format Online
Article
Text
id pubmed-8663518
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher JMIR Publications
record_format MEDLINE/PubMed
spelling pubmed-86635182022-01-05 Patients’ Perceptions Toward Human–Artificial Intelligence Interaction in Health Care: Experimental Study Esmaeilzadeh, Pouyan Mirzaei, Tala Dharanikota, Spurthy J Med Internet Res Original Paper JMIR Publications 2021-11-25 /pmc/articles/PMC8663518/ /pubmed/34842535 http://dx.doi.org/10.2196/25856 Text en ©Pouyan Esmaeilzadeh, Tala Mirzaei, Spurthy Dharanikota. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 25.11.2021.
https://creativecommons.org/licenses/by/4.0/This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.
spellingShingle Original Paper
Esmaeilzadeh, Pouyan
Mirzaei, Tala
Dharanikota, Spurthy
Patients’ Perceptions Toward Human–Artificial Intelligence Interaction in Health Care: Experimental Study
title Patients’ Perceptions Toward Human–Artificial Intelligence Interaction in Health Care: Experimental Study
title_full Patients’ Perceptions Toward Human–Artificial Intelligence Interaction in Health Care: Experimental Study
title_fullStr Patients’ Perceptions Toward Human–Artificial Intelligence Interaction in Health Care: Experimental Study
title_full_unstemmed Patients’ Perceptions Toward Human–Artificial Intelligence Interaction in Health Care: Experimental Study
title_short Patients’ Perceptions Toward Human–Artificial Intelligence Interaction in Health Care: Experimental Study
title_sort patients’ perceptions toward human–artificial intelligence interaction in health care: experimental study
topic Original Paper
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8663518/
https://www.ncbi.nlm.nih.gov/pubmed/34842535
http://dx.doi.org/10.2196/25856
work_keys_str_mv AT esmaeilzadehpouyan patientsperceptionstowardhumanartificialintelligenceinteractioninhealthcareexperimentalstudy
AT mirzaeitala patientsperceptionstowardhumanartificialintelligenceinteractioninhealthcareexperimentalstudy
AT dharanikotaspurthy patientsperceptionstowardhumanartificialintelligenceinteractioninhealthcareexperimentalstudy