Reliable interpretability of biology-inspired deep neural networks


Bibliographic Details
Main Authors: Esser-Skala, Wolfgang; Fortelny, Nikolaus
Format: Online, Article, Text
Language: English
Published: Nature Publishing Group UK, 2023
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10564878/
https://www.ncbi.nlm.nih.gov/pubmed/37816807
http://dx.doi.org/10.1038/s41540-023-00310-8
author Esser-Skala, Wolfgang
Fortelny, Nikolaus
collection PubMed
description Deep neural networks display impressive performance but suffer from limited interpretability. Biology-inspired deep learning, where the architecture of the computational graph is based on biological knowledge, enables unique interpretability where real-world concepts are encoded in hidden nodes, which can be ranked by importance and thereby interpreted. In such models trained on single-cell transcriptomes, we previously demonstrated that node-level interpretations lack robustness upon repeated training and are influenced by biases in biological knowledge. Similar studies are missing for related models. Here, we test and extend our methodology for reliable interpretability in P-NET, a biology-inspired model trained on patient mutation data. We observe variability of interpretations and susceptibility to knowledge biases, and identify the network properties that drive interpretation biases. We further present an approach to control the robustness and biases of interpretations, which leads to more specific interpretations. In summary, our study reveals the broad importance of methods to ensure robust and bias-aware interpretability in biology-inspired deep learning.
format Online
Article
Text
id pubmed-10564878
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Nature Publishing Group UK
record_format MEDLINE/PubMed
spelling Reliable interpretability of biology-inspired deep neural networks. Esser-Skala, Wolfgang; Fortelny, Nikolaus. NPJ Syst Biol Appl, Nature Publishing Group UK, published online 2023-10-10. © The Author(s) 2023. Open Access under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).
title Reliable interpretability of biology-inspired deep neural networks
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10564878/
https://www.ncbi.nlm.nih.gov/pubmed/37816807
http://dx.doi.org/10.1038/s41540-023-00310-8
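
The abstract above describes a two-part reliability check for biology-inspired models: node-level interpretations should be stable across repeated training runs, and importance scores should not simply mirror biases in the prior knowledge, such as a node's connectivity. Below is a minimal sketch of that check, assuming a hypothetical `train_and_score_nodes` stand-in for an actual training run plus attribution method (e.g., P-NET with gradient-based node importance); it illustrates the idea and is not the authors' pipeline.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical stand-in for "train the model once and return one importance
# score per hidden (pathway) node". In practice this would be a full training
# run followed by an attribution method; here, scores mix a fixed signal,
# a connectivity-driven bias, and run-to-run noise purely for illustration.
def train_and_score_nodes(n_nodes: int, node_degree: np.ndarray) -> np.ndarray:
    signal = np.linspace(1.0, 0.0, n_nodes)          # "true" node relevance
    bias = 0.5 * node_degree / node_degree.max()     # knowledge/connectivity bias
    noise = rng.normal(scale=0.3, size=n_nodes)      # variability between runs
    return signal + bias + noise

n_nodes, n_runs = 200, 10
node_degree = rng.integers(1, 50, size=n_nodes)      # edges per node in the prior-knowledge graph

# Repeat training and collect importance scores: shape (n_runs, n_nodes).
scores = np.stack([train_and_score_nodes(n_nodes, node_degree) for _ in range(n_runs)])

# (a) Robustness: pairwise Spearman correlation of node rankings across runs.
pairwise = [spearmanr(scores[i], scores[j]).statistic
            for i in range(n_runs) for j in range(i + 1, n_runs)]
print(f"mean rank stability across runs: {np.mean(pairwise):.2f}")

# (b) Knowledge bias: does mean importance merely track node connectivity?
bias_corr = spearmanr(scores.mean(axis=0), node_degree).statistic
print(f"correlation of importance with node degree: {bias_corr:.2f}")
```

A high degree-importance correlation combined with low run-to-run rank stability would reproduce the pattern the paper reports. In the spirit of the bias control the abstract mentions, one mitigation is to compare each node's score against a null distribution obtained by retraining on shuffled labels and to interpret only nodes that exceed it.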