Interpreting the decisions of CNNs via influence functions

An understanding of deep neural network decisions is based on the interpretability of the model, which provides explanations that are understandable to human beings and helps avoid biases in model predictions. This study investigates and interprets the model output on the basis of images from the training dataset, i.e., it debugs the results of a network model in relation to the training dataset. Our objective was to understand the behavior (specifically, class prediction) of deep learning models through the analysis of perturbations of the loss function. We calculated influence scores for the VGG16 network at different hidden layers across three types of disturbances applied to the original images of the ImageNet dataset: texture, style, and background elimination. The global and layer-wise influence scores allowed us to identify the most influential training images for a given testing set. We illustrated our findings using the influence scores by highlighting the types of disturbances that bias the predictions of the network. According to our results, layer-wise influence analysis pairs well with local interpretability methods such as Shapley values to demonstrate significant differences between disturbed image subgroups. In image classification tasks in particular, our layer-wise interpretability approach plays a pivotal role in identifying classification bias in pre-trained convolutional neural networks, thus providing useful insights for retraining specific hidden layers.
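
In this line of work, the influence score of a training example z on a test example z_test is typically the classic influence-function quantity of Koh and Liang (2017), −∇θL(z_test, θ̂)ᵀ H⁻¹ ∇θL(z, θ̂), evaluated at the trained parameters θ̂. The sketch below is only a minimal illustration of that general technique, not the paper's VGG16 or layer-wise procedure: it assumes a toy logistic-regression model, placeholder data, an explicit Hessian, and an arbitrary damping term.

```python
# Minimal sketch of an influence-function score, assuming the standard
# formulation I(z, z_test) = -grad L(z_test)^T H^{-1} grad L(z).
# A toy logistic-regression model stands in for VGG16, and the Hessian is
# built explicitly (large networks need Hessian-vector products instead).
import torch

torch.manual_seed(0)

# Placeholder data: 20 "training images" reduced to 2 features each.
X_train = torch.randn(20, 2)
y_train = (X_train[:, 0] > 0).float()
x_test = torch.randn(1, 2)
y_test = torch.tensor([1.0])

w = torch.zeros(2, requires_grad=True)  # toy model parameters

def loss_fn(x, y, params):
    return torch.nn.functional.binary_cross_entropy_with_logits(x @ params, y)

# Stand-in training loop so the parameters sit near a loss minimum.
opt = torch.optim.SGD([w], lr=0.5)
for _ in range(200):
    opt.zero_grad()
    loss_fn(X_train, y_train, w).backward()
    opt.step()

# s_test = H^{-1} grad L(z_test); a small damping term keeps H invertible.
g_test = torch.autograd.grad(loss_fn(x_test, y_test, w), w)[0]
H = torch.autograd.functional.hessian(lambda p: loss_fn(X_train, y_train, p), w)
s_test = torch.linalg.solve(H + 1e-3 * torch.eye(2), g_test)

# Score each training point against the test example; large-magnitude scores
# flag the training points that most strongly affect the test prediction
# (sign conventions vary across implementations).
influences = []
for i in range(len(X_train)):
    g_i = torch.autograd.grad(loss_fn(X_train[i:i + 1], y_train[i:i + 1], w), w)[0]
    influences.append(float(-g_i @ s_test))

print(sorted(range(len(influences)), key=lambda i: abs(influences[i]), reverse=True)[:3])
```

For a network the size of VGG16, the explicit Hessian is intractable; implementations typically approximate s_test with stochastic Hessian-vector products (e.g., conjugate gradients or LiSSA).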

Bibliographic Details
Main Authors: Aamir, Aisha; Tamosiunaite, Minija; Wörgötter, Florentin
Format: Online Article (Text)
Language: English
Published: Frontiers Media S.A., 2023
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10410673/
https://www.ncbi.nlm.nih.gov/pubmed/37564901
http://dx.doi.org/10.3389/fncom.2023.1172883

collection PubMed
id pubmed-10410673
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
journal Front Comput Neurosci
publishDate 2023-07-26
rights Copyright © 2023 Aamir, Tamosiunaite and Wörgötter. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY): https://creativecommons.org/licenses/by/4.0/. The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.