Interpretable AI for bio-medical applications
This paper presents the use of two popular explainability tools, Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), to explain the predictions made by a trained deep neural network. The deep neural network used in this work is trained on the UCI Bre...
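SHAP attributes a model's prediction to its input features using Shapley values from cooperative game theory. As a minimal illustrative sketch (not the paper's code, and using a hypothetical linear scoring function in place of the trained network), the exact Shapley value of each feature can be computed by averaging its marginal contribution to the prediction over all feature orderings:

```python
from itertools import permutations

# Hypothetical "model": a linear score over three features, standing in
# for the trained network's prediction function (weights are made up).
WEIGHTS = [2.0, -1.0, 0.5]

def model(x):
    return sum(w * v for w, v in zip(WEIGHTS, x))

def shapley_values(x, baseline):
    """Exact Shapley values: average each feature's marginal contribution
    over every ordering (tractable only for a handful of features)."""
    n = len(x)
    phi = [0.0] * n
    orderings = list(permutations(range(n)))
    for order in orderings:
        current = list(baseline)       # start from the baseline input
        prev = model(current)
        for i in order:
            current[i] = x[i]          # reveal feature i
            now = model(current)
            phi[i] += now - prev       # marginal contribution of i
            prev = now
    return [p / len(orderings) for p in phi]

x = [1.0, 1.0, 1.0]
baseline = [0.0, 0.0, 0.0]
phi = shapley_values(x, baseline)
# For a linear model this reduces to w_i * (x_i - baseline_i),
# and the attributions sum to model(x) - model(baseline).
```

The SHAP library approximates this computation efficiently for real models; the brute-force enumeration above is only feasible because the example has three features.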
Main authors: Sathyan, Anoop; Weinberg, Abraham Itzhak; Cohen, Kelly
Format: Online Article Text
Language: English
Published: 2022
Online access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10074303/
https://www.ncbi.nlm.nih.gov/pubmed/37025127
http://dx.doi.org/10.20517/ces.2022.41
Similar items
- Genetic Fuzzy Based Scalable System of Distributed Robots for a Collaborative Task
  by: Sathyan, Anoop, et al.
  Published: (2020)
- Artificial Intelligence (AI)-Empowered Echocardiography Interpretation: A State-of-the-Art Review
  by: Akkus, Zeynettin, et al.
  Published: (2021)
- IoT and AI-Based Application for Automatic Interpretation of the Affective State of Children Diagnosed with Autism
  by: Popescu, Aura-Loredana, et al.
  Published: (2022)
- A method for AI assisted human interpretation of neonatal EEG
  by: Gomez-Quintana, Sergi, et al.
  Published: (2022)
- SpliceAI-visual: a free online tool to improve SpliceAI splicing variant interpretation
  by: de Sainte Agathe, Jean-Madeleine, et al.
  Published: (2023)