Interpretability of Clinical Decision Support Systems Based on Artificial Intelligence from Technological and Medical Perspective: A Systematic Review
Main authors: | Xu, Qian; Xie, Wenzhao; Liao, Bolin; Hu, Chao; Qin, Lu; Yang, Zhengzijin; Xiong, Huan; Lyu, Yi; Zhou, Yue; Luo, Aijing |
Format: | Online Article Text |
Language: | English |
Published: | Hindawi, 2023 |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9918364/ https://www.ncbi.nlm.nih.gov/pubmed/36776958 http://dx.doi.org/10.1155/2023/9919269 |
_version_ | 1784886593197178880 |
author | Xu, Qian; Xie, Wenzhao; Liao, Bolin; Hu, Chao; Qin, Lu; Yang, Zhengzijin; Xiong, Huan; Lyu, Yi; Zhou, Yue; Luo, Aijing |
author_facet | Xu, Qian; Xie, Wenzhao; Liao, Bolin; Hu, Chao; Qin, Lu; Yang, Zhengzijin; Xiong, Huan; Lyu, Yi; Zhou, Yue; Luo, Aijing |
author_sort | Xu, Qian |
collection | PubMed |
description | BACKGROUND: Artificial intelligence (AI) has developed rapidly, and its application extends to clinical decision support systems (CDSS) for improving healthcare quality. However, the interpretability of AI-driven CDSS poses significant challenges to widespread application. OBJECTIVE: This study is a review of the knowledge-based and data-based CDSS literature regarding interpretability in health care. It highlights the relevance of interpretability for CDSS and the areas for improvement from technological and medical perspectives. METHODS: A systematic search was conducted on the interpretability-related literature published from 2011 to 2020 and indexed in five databases: Web of Science, PubMed, ScienceDirect, Cochrane, and Scopus. Journal articles that focus on the interpretability of CDSS were included for analysis. Experienced researchers also participated in manually reviewing the selected articles for inclusion/exclusion and categorization. RESULTS: Based on the inclusion and exclusion criteria, 20 articles from 16 journals were finally selected for this review. Interpretability, which means a transparent model structure, a clear relationship between input and output, and the explainability of AI algorithms, is essential for CDSS application in the healthcare setting. Methods for improving the interpretability of CDSS include ante hoc methods, such as fuzzy logic, decision rules, logistic regression, decision trees for knowledge-based AI, and other white box models, and post hoc methods, such as feature importance, sensitivity analysis, visualization, and activation maximization for black box models. A number of factors, such as data type, biomarkers, human-AI interaction, and the needs of clinicians and patients, can affect the interpretability of CDSS. CONCLUSIONS: The review explores the meaning of the interpretability of CDSS and summarizes current methods for improving interpretability from technological and medical perspectives. The results contribute to the understanding of the interpretability of AI-based CDSS in health care. Future studies should focus on establishing a formalism for defining interpretability, identifying the properties of interpretability, and developing appropriate, objective metrics for interpretability; in addition, users' demands for interpretability and how to express and provide explanations are also directions for future research. |
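The ante hoc / post hoc distinction in the abstract can be illustrated with a minimal sketch (entirely hypothetical, not from the reviewed article; the toy "clinical" data and all function names below are invented for illustration): a logistic regression is ante hoc interpretable because its learned coefficients can be read directly, while a perturbation-based sensitivity analysis is a post hoc check that would work even on a black box model.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logreg(X, y, lr=0.1, epochs=2000):
    """Batch gradient descent on log loss; returns (weights, bias)."""
    n_feat = len(X[0])
    w = [0.0] * n_feat
    b = 0.0
    for _ in range(epochs):
        grad_w = [0.0] * n_feat
        grad_b = 0.0
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            for j in range(n_feat):
                grad_w[j] += err * xi[j]
            grad_b += err
        w = [wj - lr * gj / len(X) for wj, gj in zip(w, grad_w)]
        b -= lr * grad_b / len(X)
    return w, b

def predict(w, b, x):
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)

def sensitivity(w, b, x, j, delta=0.1):
    """Post hoc sensitivity analysis: change in model output when
    feature j is perturbed by delta, model internals untouched."""
    x_pert = list(x)
    x_pert[j] += delta
    return predict(w, b, x_pert) - predict(w, b, x)

# Invented two-feature dataset: feature 0 drives the label, feature 1 is noise.
X = [[0.1, 0.5], [0.2, 0.4], [0.9, 0.5], [0.8, 0.6], [0.15, 0.55], [0.85, 0.45]]
y = [0, 0, 1, 1, 0, 1]
w, b = train_logreg(X, y)
# Ante hoc reading: |w[0]| dominates |w[1]|, so the model's reliance on
# feature 0 is visible directly in its structure.
```

The same `sensitivity` probe could be pointed at any opaque predictor, which is the sense in which post hoc methods complement, rather than replace, white box models.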
format | Online Article Text |
id | pubmed-9918364 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Hindawi |
record_format | MEDLINE/PubMed |
spelling | pubmed-9918364 2023-02-11 Interpretability of Clinical Decision Support Systems Based on Artificial Intelligence from Technological and Medical Perspective: A Systematic Review Xu, Qian; Xie, Wenzhao; Liao, Bolin; Hu, Chao; Qin, Lu; Yang, Zhengzijin; Xiong, Huan; Lyu, Yi; Zhou, Yue; Luo, Aijing J Healthc Eng Review Article BACKGROUND: Artificial intelligence (AI) has developed rapidly, and its application extends to clinical decision support systems (CDSS) for improving healthcare quality. However, the interpretability of AI-driven CDSS poses significant challenges to widespread application. OBJECTIVE: This study is a review of the knowledge-based and data-based CDSS literature regarding interpretability in health care. It highlights the relevance of interpretability for CDSS and the areas for improvement from technological and medical perspectives. METHODS: A systematic search was conducted on the interpretability-related literature published from 2011 to 2020 and indexed in five databases: Web of Science, PubMed, ScienceDirect, Cochrane, and Scopus. Journal articles that focus on the interpretability of CDSS were included for analysis. Experienced researchers also participated in manually reviewing the selected articles for inclusion/exclusion and categorization. RESULTS: Based on the inclusion and exclusion criteria, 20 articles from 16 journals were finally selected for this review. Interpretability, which means a transparent model structure, a clear relationship between input and output, and the explainability of AI algorithms, is essential for CDSS application in the healthcare setting. Methods for improving the interpretability of CDSS include ante hoc methods, such as fuzzy logic, decision rules, logistic regression, decision trees for knowledge-based AI, and other white box models, and post hoc methods, such as feature importance, sensitivity analysis, visualization, and activation maximization for black box models. A number of factors, such as data type, biomarkers, human-AI interaction, and the needs of clinicians and patients, can affect the interpretability of CDSS. CONCLUSIONS: The review explores the meaning of the interpretability of CDSS and summarizes current methods for improving interpretability from technological and medical perspectives. The results contribute to the understanding of the interpretability of AI-based CDSS in health care. Future studies should focus on establishing a formalism for defining interpretability, identifying the properties of interpretability, and developing appropriate, objective metrics for interpretability; in addition, users' demands for interpretability and how to express and provide explanations are also directions for future research. Hindawi 2023-02-03 /pmc/articles/PMC9918364/ /pubmed/36776958 http://dx.doi.org/10.1155/2023/9919269 Text en Copyright © 2023 Qian Xu et al. https://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. |
spellingShingle | Review Article; Xu, Qian; Xie, Wenzhao; Liao, Bolin; Hu, Chao; Qin, Lu; Yang, Zhengzijin; Xiong, Huan; Lyu, Yi; Zhou, Yue; Luo, Aijing; Interpretability of Clinical Decision Support Systems Based on Artificial Intelligence from Technological and Medical Perspective: A Systematic Review |
title | Interpretability of Clinical Decision Support Systems Based on Artificial Intelligence from Technological and Medical Perspective: A Systematic Review |
title_full | Interpretability of Clinical Decision Support Systems Based on Artificial Intelligence from Technological and Medical Perspective: A Systematic Review |
title_fullStr | Interpretability of Clinical Decision Support Systems Based on Artificial Intelligence from Technological and Medical Perspective: A Systematic Review |
title_full_unstemmed | Interpretability of Clinical Decision Support Systems Based on Artificial Intelligence from Technological and Medical Perspective: A Systematic Review |
title_short | Interpretability of Clinical Decision Support Systems Based on Artificial Intelligence from Technological and Medical Perspective: A Systematic Review |
title_sort | interpretability of clinical decision support systems based on artificial intelligence from technological and medical perspective: a systematic review |
topic | Review Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9918364/ https://www.ncbi.nlm.nih.gov/pubmed/36776958 http://dx.doi.org/10.1155/2023/9919269 |
work_keys_str_mv | AT xuqian interpretabilityofclinicaldecisionsupportsystemsbasedonartificialintelligencefromtechnologicalandmedicalperspectiveasystematicreview AT xiewenzhao interpretabilityofclinicaldecisionsupportsystemsbasedonartificialintelligencefromtechnologicalandmedicalperspectiveasystematicreview AT liaobolin interpretabilityofclinicaldecisionsupportsystemsbasedonartificialintelligencefromtechnologicalandmedicalperspectiveasystematicreview AT huchao interpretabilityofclinicaldecisionsupportsystemsbasedonartificialintelligencefromtechnologicalandmedicalperspectiveasystematicreview AT qinlu interpretabilityofclinicaldecisionsupportsystemsbasedonartificialintelligencefromtechnologicalandmedicalperspectiveasystematicreview AT yangzhengzijin interpretabilityofclinicaldecisionsupportsystemsbasedonartificialintelligencefromtechnologicalandmedicalperspectiveasystematicreview AT xionghuan interpretabilityofclinicaldecisionsupportsystemsbasedonartificialintelligencefromtechnologicalandmedicalperspectiveasystematicreview AT lyuyi interpretabilityofclinicaldecisionsupportsystemsbasedonartificialintelligencefromtechnologicalandmedicalperspectiveasystematicreview AT zhouyue interpretabilityofclinicaldecisionsupportsystemsbasedonartificialintelligencefromtechnologicalandmedicalperspectiveasystematicreview AT luoaijing interpretabilityofclinicaldecisionsupportsystemsbasedonartificialintelligencefromtechnologicalandmedicalperspectiveasystematicreview |