When Does Physician Use of AI Increase Liability?

An increasing number of automated and artificial intelligence (AI) systems make medical treatment recommendations, including personalized recommendations, which can deviate from standard care. Legal scholars argue that following such nonstandard treatment recommendations will increase liability in medical malpractice, undermining the use of potentially beneficial medical AI. However, such liability depends in part on lay judgments by jurors: when physicians use AI systems, under which circumstances would jurors hold physicians liable? Methods: To determine potential jurors’ judgments of liability, we conducted an online experimental study of a nationally representative sample of 2,000 U.S. adults. Each participant read 1 of 4 scenarios in which an AI system provides a treatment recommendation to a physician. The scenarios varied the AI recommendation (standard or nonstandard care) and the physician’s decision (to accept or reject that recommendation). Subsequently, the physician’s decision caused harm. Participants then assessed the physician’s liability. Results: Our results indicate that physicians who receive advice from an AI system to provide standard care can reduce the risk of liability by accepting, rather than rejecting, that advice, all else being equal. However, when an AI system recommends nonstandard care, there is no similar shielding effect of rejecting that advice and so providing standard care. Conclusion: The tort law system is unlikely to undermine the use of AI precision medicine tools and may even encourage the use of these tools.

Bibliographic Details
Main Authors: Tobia, Kevin; Nielsen, Aileen; Stremitzer, Alexander
Format: Online Article (Text)
Language: English
Published: Society of Nuclear Medicine, January 2021
Journal: J Nucl Med
Subjects: Artificial Intelligence
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8679587/
https://www.ncbi.nlm.nih.gov/pubmed/32978285
http://dx.doi.org/10.2967/jnumed.120.256032
Rights: © 2021 by the Society of Nuclear Medicine and Molecular Imaging. Immediate Open Access under the Creative Commons Attribution 4.0 International License (CC BY), which allows users to share and adapt with attribution, excluding materials credited to previous publications. License: https://creativecommons.org/licenses/by/4.0/. Details: http://jnm.snmjournals.org/site/misc/permission.xhtml