
Variational and Deep Learning Segmentation of Very-Low-Contrast X-ray Computed Tomography Images of Carbon/Epoxy Woven Composites

Bibliographic Details
Main Authors: Sinchuk, Yuriy, Kibleur, Pierre, Aelterman, Jan, Boone, Matthieu N., Van Paepegem, Wim
Format: Online Article Text
Language: English
Published: MDPI 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7079634/
https://www.ncbi.nlm.nih.gov/pubmed/32093177
http://dx.doi.org/10.3390/ma13040936
Description
Summary: The purpose of this work is to find an effective image segmentation method for lab-based micro-tomography (µ-CT) data of carbon fiber reinforced polymers (CFRP) with an insufficient contrast-to-noise ratio. The segmentation is the first step in creating a realistic, µ-CT-based geometry for finite element modelling of textile composites at the meso-scale. Noise in X-ray imaging data of carbon/polymer composites makes this segmentation challenging because of the very low X-ray contrast between fiber and polymer and the unclear fiber gradients. To the best of our knowledge, segmentation of µ-CT images of carbon/polymer textile composites from low-resolution data (voxel size close to the fiber diameter) remains poorly documented. In this paper, we propose and evaluate different approaches for solving the segmentation problem: variational on the one hand and deep-learning-based on the other. In the authors' view, both strategies provide a novel and reliable basis for the segmentation of µ-CT data of CFRP woven composites. The predictions of both approaches were evaluated against a manual segmentation of the volume, which constitutes our "ground truth" and provides quantitative data on the segmentation accuracy. The highest segmentation accuracy (about 4.7% in terms of voxel-wise Dice similarity) was achieved using the deep learning approach with a U-Net neural network.
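
As a point of reference for the evaluation metric named in the abstract, the sketch below shows one common way a voxel-wise Dice similarity coefficient between a predicted segmentation and a manually segmented "ground truth" volume could be computed with NumPy. This is not taken from the paper; the function name, the per-class treatment, and the label values in the usage example are illustrative assumptions.

```python
import numpy as np

def dice_coefficient(prediction, ground_truth, label):
    """Voxel-wise Dice similarity for one class label in two 3D label volumes."""
    pred_mask = (prediction == label)
    true_mask = (ground_truth == label)
    intersection = np.logical_and(pred_mask, true_mask).sum()
    denominator = pred_mask.sum() + true_mask.sum()
    if denominator == 0:
        # Neither volume contains this class; treat as perfect agreement.
        return 1.0
    return 2.0 * intersection / denominator

# Hypothetical usage: compare a network prediction against the manual
# segmentation (file names and label value are assumptions for illustration).
# prediction = np.load("unet_prediction.npy")
# ground_truth = np.load("manual_segmentation.npy")
# print(dice_coefficient(prediction, ground_truth, label=1))
```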