
Attribution-Driven Explanation of the Deep Neural Network Model via Conditional Microstructure Image Synthesis

The materials science community has been increasingly interested in harnessing the power of deep learning to solve various domain challenges. However, despite their effectiveness in building highly predictive models, e.g., predicting material properties from microstructure imaging, their opaque nature makes it fundamentally challenging to extract meaningful domain knowledge from deep neural networks. In this work, we propose a technique for interpreting the behavior of deep learning models by injecting domain-specific attributes as tunable “knobs” in the material optimization analysis pipeline. By incorporating the material concepts in a generative modeling framework, we are able to explain what structure-to-property linkages these black-box models have learned, which provides scientists with a tool to leverage the full potential of deep learning for domain discoveries.
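
The abstract describes the attribute-"knob" idea only at a high level. As a rough, hypothetical sketch (not the authors' implementation), one way to realize it is to sweep a single conditioning attribute of a generative model while querying the black-box property predictor on the synthesized microstructures; the generator, predictor, and attribute layout below are assumed placeholders.

    # Hypothetical sketch (not the authors' code): sweep one attribute "knob" of a
    # conditional microstructure generator and record how a black-box property
    # predictor responds, exposing the structure-to-property trend it has learned.
    import numpy as np
    import torch

    def attribute_sensitivity(generator, predictor, base_attrs, knob_index,
                              knob_values, latent_dim=128, n_samples=16):
        """Mean predicted property for each setting of the swept attribute.

        generator(z, attrs) -> batch of synthetic microstructure images (assumed)
        predictor(images)   -> predicted property per image (assumed)
        base_attrs          -> 1-D tensor of attribute values held fixed
        """
        # Reuse the same latent noise for every setting so only the knob changes.
        z = torch.randn(n_samples, latent_dim)
        responses = []
        for value in knob_values:
            attrs = base_attrs.clone()
            attrs[knob_index] = value  # turn the single "knob"
            with torch.no_grad():
                images = generator(z, attrs.expand(n_samples, -1))
                responses.append(predictor(images).mean().item())
        return np.array(responses)

    # Example: if attribute 2 encoded a (hypothetical) grain-size descriptor,
    # curve = attribute_sensitivity(gen, model, base_attrs, knob_index=2,
    #                               knob_values=np.linspace(0.0, 1.0, 11))
    # would show whether the predicted property trends with that attribute.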

Bibliographic Details
Main Authors: Liu, Shusen, Kailkhura, Bhavya, Zhang, Jize, Hiszpanski, Anna M., Robertson, Emily, Loveland, Donald, Zhong, Xiaoting, Han, T. Yong-Jin
Format: Online Article Text
Language: English
Published: American Chemical Society 2022
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8793074/
https://www.ncbi.nlm.nih.gov/pubmed/35097261
http://dx.doi.org/10.1021/acsomega.1c04796
_version_ 1784640517465702400
author Liu, Shusen
Kailkhura, Bhavya
Zhang, Jize
Hiszpanski, Anna M.
Robertson, Emily
Loveland, Donald
Zhong, Xiaoting
Han, T. Yong-Jin
author_facet Liu, Shusen
Kailkhura, Bhavya
Zhang, Jize
Hiszpanski, Anna M.
Robertson, Emily
Loveland, Donald
Zhong, Xiaoting
Han, T. Yong-Jin
author_sort Liu, Shusen
collection PubMed
description The materials science community has been increasingly interested in harnessing the power of deep learning to solve various domain challenges. However, despite their effectiveness in building highly predictive models, e.g., predicting material properties from microstructure imaging, their opaque nature makes it fundamentally challenging to extract meaningful domain knowledge from deep neural networks. In this work, we propose a technique for interpreting the behavior of deep learning models by injecting domain-specific attributes as tunable “knobs” in the material optimization analysis pipeline. By incorporating the material concepts in a generative modeling framework, we are able to explain what structure-to-property linkages these black-box models have learned, which provides scientists with a tool to leverage the full potential of deep learning for domain discoveries.
format Online
Article
Text
id pubmed-8793074
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher American Chemical Society
record_format MEDLINE/PubMed
spelling pubmed-8793074 2022-01-28 Attribution-Driven Explanation of the Deep Neural Network Model via Conditional Microstructure Image Synthesis Liu, Shusen Kailkhura, Bhavya Zhang, Jize Hiszpanski, Anna M. Robertson, Emily Loveland, Donald Zhong, Xiaoting Han, T. Yong-Jin ACS Omega The materials science community has been increasingly interested in harnessing the power of deep learning to solve various domain challenges. However, despite their effectiveness in building highly predictive models, e.g., predicting material properties from microstructure imaging, their opaque nature makes it fundamentally challenging to extract meaningful domain knowledge from deep neural networks. In this work, we propose a technique for interpreting the behavior of deep learning models by injecting domain-specific attributes as tunable “knobs” in the material optimization analysis pipeline. By incorporating the material concepts in a generative modeling framework, we are able to explain what structure-to-property linkages these black-box models have learned, which provides scientists with a tool to leverage the full potential of deep learning for domain discoveries. American Chemical Society 2022-01-07 /pmc/articles/PMC8793074/ /pubmed/35097261 http://dx.doi.org/10.1021/acsomega.1c04796 Text en © 2022 The Authors. Published by American Chemical Society https://creativecommons.org/licenses/by-nc-nd/4.0/ Permits non-commercial access and re-use, provided that author attribution and integrity are maintained; but does not permit creation of adaptations or other derivative works (https://creativecommons.org/licenses/by-nc-nd/4.0/).
spellingShingle Liu, Shusen
Kailkhura, Bhavya
Zhang, Jize
Hiszpanski, Anna M.
Robertson, Emily
Loveland, Donald
Zhong, Xiaoting
Han, T. Yong-Jin
Attribution-Driven Explanation of the Deep Neural Network Model via Conditional Microstructure Image Synthesis
title Attribution-Driven Explanation of the Deep Neural Network Model via Conditional Microstructure Image Synthesis
title_full Attribution-Driven Explanation of the Deep Neural Network Model via Conditional Microstructure Image Synthesis
title_fullStr Attribution-Driven Explanation of the Deep Neural Network Model via Conditional Microstructure Image Synthesis
title_full_unstemmed Attribution-Driven Explanation of the Deep Neural Network Model via Conditional Microstructure Image Synthesis
title_short Attribution-Driven Explanation of the Deep Neural Network Model via Conditional Microstructure Image Synthesis
title_sort attribution-driven explanation of the deep neural network model via conditional microstructure image synthesis
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8793074/
https://www.ncbi.nlm.nih.gov/pubmed/35097261
http://dx.doi.org/10.1021/acsomega.1c04796
work_keys_str_mv AT liushusen attributiondrivenexplanationofthedeepneuralnetworkmodelviaconditionalmicrostructureimagesynthesis
AT kailkhurabhavya attributiondrivenexplanationofthedeepneuralnetworkmodelviaconditionalmicrostructureimagesynthesis
AT zhangjize attributiondrivenexplanationofthedeepneuralnetworkmodelviaconditionalmicrostructureimagesynthesis
AT hiszpanskiannam attributiondrivenexplanationofthedeepneuralnetworkmodelviaconditionalmicrostructureimagesynthesis
AT robertsonemily attributiondrivenexplanationofthedeepneuralnetworkmodelviaconditionalmicrostructureimagesynthesis
AT lovelanddonald attributiondrivenexplanationofthedeepneuralnetworkmodelviaconditionalmicrostructureimagesynthesis
AT zhongxiaoting attributiondrivenexplanationofthedeepneuralnetworkmodelviaconditionalmicrostructureimagesynthesis
AT hantyongjin attributiondrivenexplanationofthedeepneuralnetworkmodelviaconditionalmicrostructureimagesynthesis