
Semantic Interpretation for Convolutional Neural Networks: What Makes a Cat a Cat?

The interpretability of deep neural networks has attracted increasing attention in recent years, and several methods have been created to interpret the “black box” model. Fundamental limitations remain, however, that impede the pace of understanding the networks, especially the extraction of understandable semantic space. In this work, the framework of semantic explainable artificial intelligence (S‐XAI) is introduced, which utilizes a sample compression method based on the distinctive row‐centered principal component analysis (PCA) that is different from the conventional column‐centered PCA to obtain common traits of samples from the convolutional neural network (CNN), and extracts understandable semantic spaces on the basis of discovered semantically sensitive neurons and visualization techniques. Statistical interpretation of the semantic space is also provided, and the concept of semantic probability is proposed. The experimental results demonstrate that S‐XAI is effective in providing a semantic interpretation for the CNN, and offers broad usage, including trustworthiness assessment and semantic sample searching.
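The abstract contrasts the conventional column‐centered PCA with the row‐centered variant used for sample compression. A minimal sketch of that distinction is given below; it is not the authors' code, and the matrix `X` (flattened CNN feature maps for one class), the `pca` helper, and the toy dimensions are all hypothetical illustrations.

```python
import numpy as np

def pca(X, n_components, row_centered=False):
    """Top principal directions of X, shape (n_samples, n_features).

    row_centered=False: subtract each feature's (column) mean, the
    conventional formulation.
    row_centered=True:  subtract each sample's own (row) mean, so the
    leading components reflect traits shared across samples, which is
    the idea the abstract attributes to S-XAI's compression step.
    """
    if row_centered:
        Xc = X - X.mean(axis=1, keepdims=True)  # center each row (sample)
    else:
        Xc = X - X.mean(axis=0, keepdims=True)  # center each column (feature)
    # SVD of the centered matrix; rows of Vt are the principal directions
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:n_components]

# Toy usage: 32 "cat" samples with 128-dimensional features
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 128))
common_traits = pca(X, n_components=4, row_centered=True)
print(common_traits.shape)  # (4, 128)
```

The only difference between the two variants is the centering axis; how S‐XAI then maps these directions to semantically sensitive neurons is detailed in the full paper.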

Bibliographic Details
Main Authors: Xu, Hao; Chen, Yuntian; Zhang, Dongxiao
Format: Online Article Text
Language: English
Published: John Wiley and Sons Inc., 2022-10-10
Journal: Adv Sci (Weinh)
Subjects: Research Articles
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9762288/
https://www.ncbi.nlm.nih.gov/pubmed/36216585
http://dx.doi.org/10.1002/advs.202204723
Record ID: pubmed-9762288
Collection: PubMed
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
License: © 2022 The Authors. Advanced Science published by Wiley‐VCH GmbH. This is an open access article under the terms of the Creative Commons Attribution 4.0 License (https://creativecommons.org/licenses/by/4.0/), which permits use, distribution and reproduction in any medium, provided the original work is properly cited.