
A generalised deep learning-based surrogate model for homogenisation utilising material property encoding and physics-based bounds


Bibliographic Details
Main Authors: Nakka, Rajesh, Harursampath, Dineshkumar, Ponnusami, Sathiskumar A
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10241949/
https://www.ncbi.nlm.nih.gov/pubmed/37277405
http://dx.doi.org/10.1038/s41598-023-34823-3
Description
Summary: The use of surrogate models based on Convolutional Neural Networks (CNNs) is increasing significantly in microstructure analysis and property prediction. One shortcoming of existing models is their limited ability to incorporate material information. In this context, a simple method is developed for encoding material properties into the microstructure image, so that the model learns material information in addition to the structure-property relationship. These ideas are demonstrated by developing a CNN model applicable to fibre-reinforced composite materials with fibre-to-matrix elastic modulus ratios between 5 and 250 and fibre volume fractions between 25% and 75%, which span the practical range end-to-end. Learning convergence curves, with the mean absolute percentage error as the metric of interest, are used to find the optimal number of training samples and to demonstrate model performance. The generality of the trained model is showcased through its predictions on completely unseen microstructures, whose samples are drawn from the extrapolated domain of fibre volume fractions and elastic modulus contrasts. Furthermore, to make the predictions physically admissible, models are trained by enforcing the Hashin–Shtrikman bounds, which leads to enhanced model performance in the extrapolated domain.
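The physics-based constraint mentioned in the abstract relies on the Hashin–Shtrikman bounds for two-phase composites. As a rough sketch of what such bounds look like (this is not the paper's implementation; it assumes isotropic phases, with phase 1 the softer matrix and phase 2 the stiffer fibre), the classical expressions for the effective bulk and shear moduli can be written as:

```python
def hashin_shtrikman_bounds(K1, G1, K2, G2, f2):
    """Classical Hashin-Shtrikman bounds on the effective bulk (K) and
    shear (G) moduli of an isotropic two-phase composite.

    Phase 1 is the softer phase (matrix), phase 2 the stiffer phase
    (fibre); f2 is the volume fraction of phase 2.
    Returns ((K_lower, K_upper), (G_lower, G_upper)).
    """
    f1 = 1.0 - f2

    # Lower bound: soft phase (1) as the comparison medium.
    K_lo = K1 + f2 / (1.0 / (K2 - K1) + 3.0 * f1 / (3.0 * K1 + 4.0 * G1))
    G_lo = G1 + f2 / (1.0 / (G2 - G1)
                      + 6.0 * f1 * (K1 + 2.0 * G1)
                      / (5.0 * G1 * (3.0 * K1 + 4.0 * G1)))

    # Upper bound: stiff phase (2) as the comparison medium.
    K_hi = K2 + f1 / (1.0 / (K1 - K2) + 3.0 * f2 / (3.0 * K2 + 4.0 * G2))
    G_hi = G2 + f1 / (1.0 / (G1 - G2)
                      + 6.0 * f2 * (K2 + 2.0 * G2)
                      / (5.0 * G2 * (3.0 * K2 + 4.0 * G2)))

    return (K_lo, K_hi), (G_lo, G_hi)


# Illustrative phase properties (hypothetical values, in GPa): a soft
# matrix (K1, G1) and a stiff fibre (K2, G2) at 50% fibre volume fraction.
(K_lo, K_hi), (G_lo, G_hi) = hashin_shtrikman_bounds(3.0, 1.5, 30.0, 15.0, 0.5)
```

In a model of the kind the abstract describes, predictions falling outside such bounds are physically inadmissible, so enforcing them during training is a natural way to regularise the extrapolation behaviour.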