
Interpretable AI for bio-medical applications

This paper presents the use of two popular explainability tools, Local Interpretable Model-Agnostic Explanations (LIME) and Shapley Additive exPlanations (SHAP), to explain the predictions made by a trained deep neural network. The deep neural network used in this work is trained on the UCI Breast Cancer Wisconsin dataset. The network classifies masses found in patients as benign or malignant based on 30 features that describe the mass. LIME and SHAP are then used to explain the individual predictions made by the trained model. The explanations provide further insight into the relationship between the input features and the predictions, and the SHAP methodology additionally provides a more holistic view of the effect of the inputs on the output predictions. The results also highlight the commonalities between the insights gained using LIME and SHAP. Although this paper focuses on deep neural networks trained on the UCI Breast Cancer Wisconsin dataset, the methodology can be applied to other architectures and other applications. The deep neural network trained in this work achieves a high level of accuracy, and analyzing the model with LIME and SHAP adds the much-desired benefit of providing explanations for the recommendations made by the trained model.
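The workflow summarized above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' code: it assumes scikit-learn's bundled copy of the Breast Cancer Wisconsin (Diagnostic) dataset (30 features), a small MLPClassifier as a stand-in for the paper's deep neural network, LIME's LimeTabularExplainer for a local explanation of one prediction, and SHAP's model-agnostic KernelExplainer for per-feature contributions.

# Sketch of the abstract's workflow: train a classifier on the Breast Cancer
# Wisconsin data, then explain its predictions with LIME and SHAP.
# The MLP here is only a stand-in for the paper's deep neural network.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()  # 30 features describing each mass
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=0)

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0))
model.fit(X_train, y_train)

# LIME: local explanation for a single test instance.
lime_explainer = LimeTabularExplainer(
    X_train,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification")
lime_exp = lime_explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=10)
print(lime_exp.as_list())  # (feature, weight) pairs for this prediction

def predict_class1(X):
    # Probability of class index 1 as a single output, so the SHAP
    # values below come back as a 2-D (samples x features) array.
    return model.predict_proba(X)[:, 1]

# SHAP: model-agnostic KernelExplainer over a small background sample,
# giving per-feature contributions plus a dataset-level summary view.
background = shap.sample(X_train, 100)
shap_explainer = shap.KernelExplainer(predict_class1, background)
shap_values = shap_explainer.shap_values(X_test[:10])
shap.summary_plot(shap_values, X_test[:10],
                  feature_names=list(data.feature_names))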


Bibliographic Details
Main Authors: Sathyan, Anoop; Weinberg, Abraham Itzhak; Cohen, Kelly
Format: Online Article Text
Language: English
Published: 2022
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10074303/
https://www.ncbi.nlm.nih.gov/pubmed/37025127
http://dx.doi.org/10.20517/ces.2022.41
Collection: PubMed
Record ID: pubmed-10074303
Institution: National Center for Biotechnology Information
Record format: MEDLINE/PubMed
Published in: Complex Eng Syst, December 2022 (published online 28 December 2022)
License: Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, sharing, adaptation, distribution and reproduction in any medium or format, for any purpose, even commercially, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.