Benchmarking Attention-Based Interpretability of Deep Learning in Multivariate Time Series Predictions
The adaptation of deep learning models within safety-critical systems cannot rely only on good prediction performance but needs to provide interpretable and robust explanations for their decisions. When modeling complex sequences, attention mechanisms are regarded as the established approach to supp...
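For context on the technique the abstract refers to, the following is a minimal sketch (not the paper's benchmark models) of how learned attention weights over a multivariate time series can be read out as a per-time-step importance signal; the AttentionPooling class, the hidden_dim value, and the toy tensor shapes are illustrative assumptions.

```python
# Minimal sketch, assuming a simple additive-attention pooling layer (illustrative,
# not the paper's implementation). The softmax-normalized scores can be inspected
# as a rough per-time-step "explanation" of the model's summary of the series.
import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    """Scores each time step, softmax-normalizes the scores over time, and
    returns both the attention-weighted summary and the attention weights."""
    def __init__(self, n_features: int, hidden_dim: int = 32):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(n_features, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, x: torch.Tensor):
        # x: (batch, time, features)
        weights = torch.softmax(self.score(x), dim=1)   # (batch, time, 1)
        summary = (weights * x).sum(dim=1)              # (batch, features)
        return summary, weights.squeeze(-1)             # weights: (batch, time)

# Toy usage: 8 series of length 50 with 4 variables.
x = torch.randn(8, 50, 4)
pool = AttentionPooling(n_features=4)
summary, attn = pool(x)
print(attn.shape)  # torch.Size([8, 50]) -- candidate importance per time step
```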
Main Authors: Barić, Domjan; Fumić, Petar; Horvatić, Davor; Lipic, Tomislav
Format: Online Article Text
Language: English
Published: MDPI, 2021
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7912396/
https://www.ncbi.nlm.nih.gov/pubmed/33503822
http://dx.doi.org/10.3390/e23020143
Similar Items
- Human-Centric AI: The Symbiosis of Human and Artificial Intelligence
  by: Horvatić, Davor, et al.
  Published: (2021)
- Predicting the Lifetime of Dynamic Networks Experiencing Persistent Random Attacks
  by: Podobnik, Boris, et al.
  Published: (2015)
- Interpretable machine learning approach for neuron-centric analysis of human cortical cytoarchitecture
  by: Štajduhar, Andrija, et al.
  Published: (2023)
- Deep Multivariate Time Series Embedding Clustering via Attentive-Gated Autoencoder
  by: Ienco, Dino, et al.
  Published: (2020)
- Evaluation of interpretability methods for multivariate time series forecasting
  by: Ozyegen, Ozan, et al.
  Published: (2021)