
A Meta-Model to Predict the Drag Coefficient of a Particle Translating in Viscoelastic Fluids: A Machine Learning Approach


Bibliographic Details
Main Authors: Faroughi, Salah A., Roriz, Ana I., Fernandes, Célio
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8838701/
https://www.ncbi.nlm.nih.gov/pubmed/35160419
http://dx.doi.org/10.3390/polym14030430
Description
Summary: This study presents a framework based on Machine Learning (ML) models to predict the drag coefficient of a spherical particle translating in viscoelastic fluids. For the purpose of training and testing the ML models, two datasets were generated using direct numerical simulations (DNSs) of the viscoelastic unbounded flow of Oldroyd-B (OB-set containing 12,120 data points) and Giesekus (GI-set containing 4950 data points) fluids past a spherical particle. The kinematic input features were selected to be the Reynolds number, Re, the Weissenberg number, Wi, the polymeric retardation ratio, ζ, and the shear-thinning mobility parameter, α. The ML models, specifically Random Forest (RF), Deep Neural Network (DNN) and Extreme Gradient Boosting (XGBoost), were all trained, validated, and tested, and their best architectures were obtained using a 10-fold cross-validation method. All the ML models achieved remarkable accuracy on these datasets; however, the XGBoost model yielded the highest R² and the lowest root mean square error (RMSE) and mean absolute percentage error (MAPE) measures. Additionally, a blind dataset was generated using DNSs, in which the input feature coverage lay outside the scope of the training sets or was interpolated within them. The ML models were tested against this blind dataset to further assess their generalization capability. The DNN model achieved the highest R² and the lowest RMSE and MAPE measures when inferred on this blind dataset. Finally, we developed a meta-model using a stacking technique to ensemble the RF, XGBoost and DNN models and output a prediction based on the individual learners' predictions and a DNN meta-regressor. The meta-model consistently outperformed the individual models on all datasets.
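
The abstract describes a stacked ensemble in which RF, XGBoost and DNN base learners feed a DNN meta-regressor, with base predictions generated via 10-fold cross-validation. The following is a minimal sketch of that arrangement, assuming scikit-learn and xgboost are available; the MLPRegressor stands in for the paper's DNN, and all hyperparameters, feature names (Re, Wi, ζ, α) and the synthetic data are illustrative assumptions, not the authors' actual setup.

# Minimal sketch of the stacking meta-model outlined in the abstract.
# Assumptions: scikit-learn and xgboost are installed; MLPRegressor stands
# in for the paper's DNN learners; hyperparameters, feature names
# (Re, Wi, zeta, alpha) and the synthetic data are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from xgboost import XGBRegressor

# Base learners: Random Forest, XGBoost and a (stand-in) deep neural network.
base_learners = [
    ("rf", RandomForestRegressor(n_estimators=300, random_state=0)),
    ("xgb", XGBRegressor(n_estimators=500, learning_rate=0.05, random_state=0)),
    ("dnn", MLPRegressor(hidden_layer_sizes=(64, 64, 32), max_iter=2000, random_state=0)),
]

# Meta-regressor: a second neural network combines the base predictions,
# mirroring the DNN meta-regressor of the study; base predictions are
# produced out-of-fold with 10-fold cross-validation, as in the paper.
meta_model = StackingRegressor(
    estimators=base_learners,
    final_estimator=MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
    cv=10,
)

# Placeholder data: rows of (Re, Wi, zeta, alpha) mapped to a drag coefficient.
# Random numbers are used here only so the sketch runs end to end.
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 4))
y = rng.uniform(size=500)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

meta_model.fit(X_train, y_train)
print(f"held-out R^2: {meta_model.score(X_test, y_test):.3f}")

On the real DNS datasets, X would hold the four kinematic input features and y the simulated drag coefficient; the stacked model is then scored with R², RMSE and MAPE, as reported in the study.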