Semantic Interpretation for Convolutional Neural Networks: What Makes a Cat a Cat? (Adv. Sci. 35/2022)

Bibliographic Details
Main Authors: Xu, Hao; Chen, Yuntian; Zhang, Dongxiao
Format: Online Article Text
Language: English
Published: John Wiley and Sons Inc. 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9762285/
http://dx.doi.org/10.1002/advs.202270221
Description
Summary: Interpretable Machine Learning. The semantic explainable AI (S-XAI) discovers what makes a cat be recognized as a cat by a convolutional neural network, extracting common traits and establishing a semantic space from diversified samples of cats. The visualized common traits contain identifiable semantic concepts such as eyes, noses, and whiskers, which give a semantic interpretation of the convolutional neural network. S-XAI shows promise for trustworthiness assessment and semantic sample searching. More details can be found in article number 2204723 by Hao Xu, Yuntian Chen, and Dongxiao Zhang.
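
As an aside, the general idea of locating features that respond strongly and consistently across many samples of one class can be sketched in a few lines of Python. The sketch below is only an illustration under assumed choices (a pretrained ResNet-18 backbone, its layer3 stage, the helper name common_channels, and placeholder image paths); it is not the authors' S-XAI procedure, merely a rough stand-in for finding "common traits" in CNN activations.

    # Illustrative sketch only (assumed setup, not the S-XAI method from the article):
    # rank feature channels of a pretrained CNN by how strongly and consistently they
    # respond across a batch of same-class images (e.g., cats).
    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from PIL import Image

    # Pretrained backbone, truncated after an intermediate stage (layer3 of ResNet-18).
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    feature_extractor = torch.nn.Sequential(
        backbone.conv1, backbone.bn1, backbone.relu, backbone.maxpool,
        backbone.layer1, backbone.layer2, backbone.layer3,
    ).eval()

    preprocess = T.Compose([
        T.Resize(256), T.CenterCrop(224), T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    def common_channels(image_paths, top_k=10):
        """Return indices of channels with strong, low-variance responses over a set
        of images of the same class (hypothetical helper; needs at least two images)."""
        batch = torch.stack([preprocess(Image.open(p).convert("RGB")) for p in image_paths])
        with torch.no_grad():
            feats = feature_extractor(batch)        # (N, C, H, W) feature maps
        per_image = feats.mean(dim=(2, 3))          # spatially averaged response, (N, C)
        mean_resp = per_image.mean(dim=0)           # average response over the class samples
        std_resp = per_image.std(dim=0)             # spread of the response across samples
        score = mean_resp / (std_resp + 1e-6)       # prefer strong *and* consistent channels
        return torch.topk(score, k=top_k).indices.tolist()

    # Example usage (paths are placeholders):
    # print(common_channels(["cat_001.jpg", "cat_002.jpg", "cat_003.jpg"]))

Channels selected this way are only a crude proxy: the article's S-XAI goes further by visualizing the shared traits and organizing them into a semantic space with nameable concepts.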