Re-focusing explainability in medicine
Main Authors: | Arbelaez Ossa, Laura; Starke, Georg; Lorenzini, Giorgia; Vogt, Julia E; Shaw, David M; Elger, Bernice Simone |
Format: | Online Article Text |
Language: | English |
Published: | SAGE Publications, 2022 |
Subjects: | Review Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8841907/ https://www.ncbi.nlm.nih.gov/pubmed/35173981 http://dx.doi.org/10.1177/20552076221074488 |
author | Arbelaez Ossa, Laura; Starke, Georg; Lorenzini, Giorgia; Vogt, Julia E; Shaw, David M; Elger, Bernice Simone |
author_sort | Arbelaez Ossa, Laura |
collection | PubMed |
description | Using artificial intelligence to improve patient care is a cutting-edge approach, but its implementation in clinical routine has been limited by significant concerns about understanding its behavior. One major barrier is the explainability dilemma: how much explanation is required to use artificial intelligence safely in healthcare? A key issue is the lack of consensus on the definition of explainability among experts, regulators, and healthcare professionals, resulting in a wide variety of terminology and expectations. This paper aims to fill that gap by defining minimal explainability standards that serve the views and needs of essential stakeholders in healthcare. Specifically, we propose minimal explainability criteria that can support doctors’ understanding, meet patients’ needs, and fulfill legal requirements. Explainability therefore need not be exhaustive but sufficient for doctors and patients to comprehend an artificial intelligence model’s clinical implications, so that the model can be integrated safely into clinical practice. Minimally acceptable standards for explainability are thus context-dependent and should respond to the specific needs and potential risks of each clinical scenario to support a responsible and ethical implementation of artificial intelligence. |
format | Online Article Text |
id | pubmed-8841907 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | SAGE Publications |
record_format | MEDLINE/PubMed |
spelling | pubmed-8841907 2022-02-15. Re-focusing explainability in medicine. Arbelaez Ossa, Laura; Starke, Georg; Lorenzini, Giorgia; Vogt, Julia E; Shaw, David M; Elger, Bernice Simone. Digit Health, Review Article. SAGE Publications, 2022-02-11. /pmc/articles/PMC8841907/ /pubmed/35173981 http://dx.doi.org/10.1177/20552076221074488. Text en. © The Author(s) 2022. This article is distributed under the terms of the Creative Commons Attribution 4.0 License (https://creativecommons.org/licenses/by/4.0/), which permits any use, reproduction, and distribution of the work without further permission provided the original work is attributed as specified on the SAGE and Open Access page (https://us.sagepub.com/en-us/nam/open-access-at-sage). |
title | Re-focusing explainability in medicine |
topic | Review Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8841907/ https://www.ncbi.nlm.nih.gov/pubmed/35173981 http://dx.doi.org/10.1177/20552076221074488 |