Global importance analysis: An interpretability method to quantify importance of genomic features in deep neural networks
Deep neural networks have demonstrated improved performance at predicting the sequence specificities of DNA- and RNA-binding proteins compared to previous methods that rely on k-mers and position weight matrices. To gain insights into why a DNN makes a given prediction, model interpretability method...
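The abstract is truncated before it describes the method itself, but the core idea named in the title, quantifying a genomic feature's importance by its average effect on model predictions, can be illustrated with a short sketch. This is a minimal illustration only, assuming a Keras-style `model.predict` interface; the function and variable names here are hypothetical placeholders, not the paper's code.

```python
# Minimal sketch of a global importance analysis (GIA)-style test:
# embed a pattern into a population of background sequences and
# measure the average change in model predictions. `model`,
# `backgrounds`, `pattern`, and `position` are assumed inputs.
import numpy as np

def global_importance(model, backgrounds, pattern, position):
    """Mean change in predictions when `pattern` is embedded at `position`.

    backgrounds: (N, L, 4) one-hot DNA sequences drawn from a null model.
    pattern:     (M, 4) one-hot pattern to embed, with M <= L - position.
    """
    embedded = backgrounds.copy()
    embedded[:, position:position + len(pattern), :] = pattern
    # Global importance = E[f(x_with_pattern)] - E[f(x_background)]
    return np.mean(model.predict(embedded) - model.predict(backgrounds))
```

Because the effect is averaged over a population of background sequences rather than computed for one input, this kind of measure is global, in contrast to single-sequence attribution maps.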
Main Authors: Koo, Peter K., Majdandzic, Antonio, Ploenzke, Matthew, Anand, Praveen, Paul, Steffan B.
Format: Online Article (Text)
Language: English
Published: Public Library of Science, 2021
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8118286/ https://www.ncbi.nlm.nih.gov/pubmed/33983921 http://dx.doi.org/10.1371/journal.pcbi.1008925
Similar Items
- Correcting gradient-based interpretations of deep neural networks for genomics
  by: Majdandzic, Antonio, et al.
  Published: (2023)
- Quantifying the Importance of Rapid Adjustments for Global Precipitation Changes
  by: Myhre, G., et al.
  Published: (2018)
- Interpreting Deep Neural Networks and their Predictions
  by: Samek, Wojciech
  Published: (2018)
- Quantifying the Importance of Firms by Means of Reputation and Network Control
  by: Zhang, Yan, et al.
  Published: (2021)
- Interpreting Cis-Regulatory Interactions from Large-Scale Deep Neural Networks for Genomics
  by: Toneyan, Shushan, et al.
  Published: (2023)