The Next Generation of Medical Decision Support: A Roadmap Toward Transparent Expert Companions

The increasing quality and performance of artificial intelligence (AI) in general, and machine learning (ML) in particular, has led to wider use of these approaches in everyday life. As part of this development, ML classifiers have also gained importance for diagnosing diseases in biomedical engineering and the medical sciences. However, many of these ubiquitous high-performing ML algorithms are black boxes in nature, leading to opaque and incomprehensible systems that complicate human interpretation of individual predictions and of the prediction process as a whole. This poses a serious challenge for human decision makers seeking to develop trust, which is much needed in life-changing decision tasks. This paper addresses the question of how expert companion systems for decision support can be designed to be interpretable, and therefore transparent and comprehensible, for humans. In addition, it demonstrates an approach to interactive ML and human-in-the-loop learning that integrates human expert knowledge into ML models, so that humans and machines act as companions in critical decision tasks. We especially address the problem of Semantic Alignment between ML classifiers and their human users as a prerequisite for semantically relevant and useful explanations and interactions. Our roadmap paper presents and discusses an interdisciplinary yet integrated Comprehensible Artificial Intelligence (cAI) transition framework for the task of medical diagnosis. We explain and integrate relevant concepts and research areas to provide the reader with a hands-on cookbook for achieving the transition from opaque black-box models to interactive, transparent, comprehensible, and trustworthy systems. To make our approach tangible, we present suitable state-of-the-art methods for the medical domain and include a realization concept for our framework. The emphasis is on the concept of Mutual Explanations (ME), which we introduce as a dialog-based, incremental process designed not only to build trust among human ML users but also to give them stronger participation in the learning process.

Bibliographic Details
Main Authors: Bruckert, Sebastian; Finzel, Bettina; Schmid, Ute
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2020
Subjects: Artificial Intelligence
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7861251/
https://www.ncbi.nlm.nih.gov/pubmed/33733193
http://dx.doi.org/10.3389/frai.2020.507973
Collection: PubMed
Record ID: pubmed-7861251
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: Front Artif Intell
Published Online: 2020-09-24

Copyright © 2020 Bruckert, Finzel and Schmid. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY, http://creativecommons.org/licenses/by/4.0/). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.