Artificial intelligence in health care: accountability and safety

Bibliographic Details
Main Authors: Habli, Ibrahim; Lawton, Tom; Porter, Zoe
Format: Online Article Text
Language: English
Published: World Health Organization, 2020
Subjects: Policy & Practice
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7133468/
https://www.ncbi.nlm.nih.gov/pubmed/32284648
http://dx.doi.org/10.2471/BLT.19.237487
Abstract:
The prospect of patient harm caused by the decisions made by an artificial intelligence-based clinical tool is something to which current practices of accountability and safety worldwide have not yet adjusted. We focus on two aspects of clinical artificial intelligence used for decision-making: moral accountability for harm to patients; and safety assurance to protect patients against such harm. Artificial intelligence-based tools are challenging the standard clinical practices of assigning blame and assuring safety. Human clinicians and safety engineers have weaker control over the decisions reached by artificial intelligence systems and less knowledge and understanding of precisely how the artificial intelligence systems reach their decisions. We illustrate this analysis by applying it to an example of an artificial intelligence-based system developed for use in the treatment of sepsis. The paper ends with practical suggestions for ways forward to mitigate these concerns. We argue for a need to include artificial intelligence developers and systems safety engineers in our assessments of moral accountability for patient harm. Meanwhile, none of the actors in the model robustly fulfil the traditional conditions of moral accountability for the decisions of an artificial intelligence system. We should therefore update our conceptions of moral accountability in this context. We also need to move from a static to a dynamic model of assurance, accepting that considerations of safety are not fully resolvable during the design of the artificial intelligence system before the system has been deployed.
Journal: Bull World Health Organ (Policy & Practice). Published 2020-04-01; online 2020-02-25.
(c) 2020 The authors; licensee World Health Organization.
This is an open access article distributed under the terms of the Creative Commons Attribution IGO License (http://creativecommons.org/licenses/by/3.0/igo/legalcode), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. In any reproduction of this article there should not be any suggestion that WHO or this article endorse any specific organization or products. The use of the WHO logo is not permitted. This notice should be preserved along with the article's original URL.