Attribution-Driven Explanation of the Deep Neural Network Model via Conditional Microstructure Image Synthesis
Main Authors: 
Format: Online Article Text
Language: English
Published: American Chemical Society, 2022
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8793074/ · https://www.ncbi.nlm.nih.gov/pubmed/35097261 · http://dx.doi.org/10.1021/acsomega.1c04796
Summary: The materials science community has been increasingly interested in harnessing the power of deep learning to solve various domain challenges. However, despite their effectiveness in building highly predictive models, e.g., for predicting material properties from microstructure images, their opaque nature poses fundamental challenges to extracting meaningful domain knowledge from deep neural networks. In this work, we propose a technique for interpreting the behavior of deep learning models by injecting domain-specific attributes as tunable “knobs” into the material optimization analysis pipeline. By incorporating these material concepts in a generative modeling framework, we are able to explain what structure-to-property linkages these black-box models have learned, giving scientists a tool to leverage the full potential of deep learning for domain discoveries.
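The sketch below illustrates the general workflow the summary describes: a conditional generator exposes domain attributes as "knobs", synthesized microstructure images are passed through a trained black-box property predictor, and the model's response to sweeping one knob reveals the structure-to-property linkage it has learned. This is a minimal illustrative sketch, not the authors' implementation; the module names (`MicrostructureGenerator`, `PropertyPredictor`), the attribute list, and all shapes and hyperparameters are hypothetical placeholders.

```python
# Minimal sketch of attribution via conditional microstructure synthesis.
# NOT the paper's code: all class names, attributes, and shapes are assumed.
import torch
import torch.nn as nn

LATENT_DIM = 64
N_ATTRIBUTES = 3          # e.g., grain size, phase fraction, porosity (illustrative)
IMAGE_SIZE = 32

class MicrostructureGenerator(nn.Module):
    """Toy conditional generator: noise + attribute 'knobs' -> microstructure image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + N_ATTRIBUTES, 256),
            nn.ReLU(),
            nn.Linear(256, IMAGE_SIZE * IMAGE_SIZE),
            nn.Sigmoid(),
        )

    def forward(self, z, attributes):
        x = torch.cat([z, attributes], dim=1)
        return self.net(x).view(-1, 1, IMAGE_SIZE, IMAGE_SIZE)

class PropertyPredictor(nn.Module):
    """Stand-in for the trained black-box model mapping images to a property value."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(8, 1),
        )

    def forward(self, images):
        return self.net(images)

def attribute_sweep(generator, predictor, attr_index, values, n_samples=64):
    """Vary one attribute knob while holding the others fixed, and record the
    black-box model's mean predicted property at each knob setting."""
    responses = []
    with torch.no_grad():
        z = torch.randn(n_samples, LATENT_DIM)            # shared noise for comparability
        base = torch.full((n_samples, N_ATTRIBUTES), 0.5)  # other knobs held at mid-range
        for v in values:
            attrs = base.clone()
            attrs[:, attr_index] = v
            images = generator(z, attrs)
            responses.append(predictor(images).mean().item())
    return responses

if __name__ == "__main__":
    # In practice both models would be pretrained; here they are randomly initialized.
    gen, pred = MicrostructureGenerator(), PropertyPredictor()
    knob_values = [0.0, 0.25, 0.5, 0.75, 1.0]
    curve = attribute_sweep(gen, pred, attr_index=0, values=knob_values)
    for v, r in zip(knob_values, curve):
        print(f"attribute[0] = {v:.2f} -> mean predicted property = {r:.4f}")
```

The resulting response curve is what makes the black box interpretable in this framing: if the predicted property changes systematically as a single material attribute is dialed up or down, that attribute is one the predictive model has implicitly linked to the property.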