Interpreting the decisions of CNNs via influence functions
Understanding the decisions of deep neural networks relies on the interpretability of the model, which provides explanations that are understandable to human beings and helps avoid biases in model predictions. This study investigates and interprets the model output based on images from the training data...
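For orientation, influence functions in this setting typically quantify how upweighting a single training image would change the loss on a given test image. The expression below is a general sketch of that standard formulation (in the style of Koh and Liang's definition), not a reproduction of the article's own equations; the symbols L (loss), \hat{\theta} (fitted parameters), and H (empirical Hessian) are illustrative assumptions.

\[
\mathcal{I}_{\mathrm{up,loss}}(z, z_{\mathrm{test}})
  = -\,\nabla_{\theta} L(z_{\mathrm{test}}, \hat{\theta})^{\top}
     H_{\hat{\theta}}^{-1}\,
     \nabla_{\theta} L(z, \hat{\theta}),
\qquad
H_{\hat{\theta}} = \frac{1}{n}\sum_{i=1}^{n} \nabla_{\theta}^{2} L(z_i, \hat{\theta})
\]

A large positive influence value indicates a training image whose removal would most increase the test loss, i.e., a training example the prediction strongly depends on.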
Main Authors: Aamir, Aisha; Tamosiunaite, Minija; Wörgötter, Florentin
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2023
Online Access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10410673/
https://www.ncbi.nlm.nih.gov/pubmed/37564901
http://dx.doi.org/10.3389/fncom.2023.1172883
Similar Items
- Erratum to: Self-influencing synaptic plasticity: recurrent changes of synaptic weights can lead to specific functional properties
  by: Tamosiunaite, Minija, et al.
  Published: (2010)
- Simulated mental imagery for robotic task planning
  by: Li, Shijia, et al.
  Published: (2023)
- Perceptual influence of elementary three-dimensional geometry: (1) objectness
  by: Wörgötter, Florentin, et al.
  Published: (2015)
- One-Shot Multi-Path Planning Using Fully Convolutional Networks in a Comparison to Other Algorithms
  by: Kulvicius, Tomas, et al.
  Published: (2021)
- Learning to reach by reinforcement learning using a receptive field based function approximation approach with continuous actions
  by: Tamosiunaite, Minija, et al.
  Published: (2009)