An intrinsically interpretable neural network architecture for sequence-to-function learning
MOTIVATION: Sequence-based deep learning approaches have been shown to predict a multitude of functional genomic readouts, including regions of open chromatin and RNA expression of genes. However, a major limitation of current methods is that model interpretation relies on computationally demanding...
Main Authors: Balcı, Ali Tuğrul; Ebeid, Mark Maher; Benos, Panayiotis V; Kostka, Dennis; Chikina, Maria
Format: Online Article Text
Language: English
Published: Cold Spring Harbor Laboratory, 2023
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9900791/ https://www.ncbi.nlm.nih.gov/pubmed/36747873 http://dx.doi.org/10.1101/2023.01.25.525572
Similar Items
- An intrinsically interpretable neural network architecture for sequence-to-function learning
  by: Balcı, Ali Tuğrul, et al.
  Published: (2023)
- Causal network perturbations for instance-specific analysis of single cell and disease samples
  by: Buschur, Kristina L, et al.
  Published: (2020)
- Learning, Memory, and the Role of Neural Network Architecture
  by: Hermundstad, Ann M., et al.
  Published: (2011)
- Correlator convolutional neural networks as an interpretable architecture for image-like quantum matter data
  by: Miles, Cole, et al.
  Published: (2021)
- Parsimonious neural networks learn interpretable physical laws
  by: Desai, Saaketh, et al.
  Published: (2021)