FLAN: feature-wise latent additive neural models for biological applications
MOTIVATION: Interpretability has become a necessary feature for machine learning models deployed in critical scenarios, e.g. the legal system and healthcare. In these situations, algorithmic decisions may have (potentially negative) long-lasting effects on the end-user affected by the decision. While deep...
Main Authors: Nguyen, An-Phi; Vasilaki, Stefania; Martínez, María Rodríguez
Format: Online Article Text
Language: English
Published: Oxford University Press, 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10199769/ | https://www.ncbi.nlm.nih.gov/pubmed/37031956 | http://dx.doi.org/10.1093/bib/bbad056
Similar Items
- DeepFeature: feature selection in nonimage data using convolutional neural network
  by: Sharma, Alok, et al.
  Published: (2021)
- An interpretable single-cell RNA sequencing data clustering method based on latent Dirichlet allocation
  by: Yang, Qi, et al.
  Published: (2023)
- MathFeature: feature extraction package for DNA, RNA and protein sequences based on mathematical descriptors
  by: Bonidia, Robson P, et al.
  Published: (2021)
- Assessing polygenic risk score models for applications in populations with under-represented genomics data: an example of Vietnam
  by: Pham, Duy, et al.
  Published: (2022)
- Locating transcription factor binding sites by fully convolutional neural network
  by: Zhang, Qinhu, et al.
  Published: (2021)