
Impact of Training Data, Ground Truth and Shape Variability in the Deep Learning-Based Semantic Segmentation of HeLa Cells Observed with Electron Microscopy


Bibliographic Details
Main Authors: Karabağ, Cefa, Ortega-Ruíz, Mauricio Alberto, Reyes-Aldasoro, Constantino Carlos
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10058680/
https://www.ncbi.nlm.nih.gov/pubmed/36976110
http://dx.doi.org/10.3390/jimaging9030059
_version_ 1785016691907887104
author Karabağ, Cefa
Ortega-Ruíz, Mauricio Alberto
Reyes-Aldasoro, Constantino Carlos
author_facet Karabağ, Cefa
Ortega-Ruíz, Mauricio Alberto
Reyes-Aldasoro, Constantino Carlos
author_sort Karabağ, Cefa
collection PubMed
description This paper investigates the impact of the amount of training data and the shape variability on the segmentation provided by the deep learning architecture U-Net. Further, the correctness of the ground truth (GT) was also evaluated. The input data consisted of a three-dimensional set of images of HeLa cells observed with an electron microscope with dimensions [Formula: see text]. From there, a smaller region of interest (ROI) of [Formula: see text] was cropped and manually delineated to obtain the ground truth necessary for a quantitative evaluation. A qualitative evaluation was performed on the [Formula: see text] slices due to the lack of ground truth. Pairs of patches of data and labels for the classes nucleus, nuclear envelope, cell and background were generated to train U-Net architectures from scratch. Several training strategies were followed, and the results were compared against a traditional image processing algorithm. The correctness of the GT, that is, the inclusion of one or more nuclei within the region of interest, was also evaluated. The impact of the extent of training data was evaluated by comparing the results from 36,000 pairs of data and label patches, extracted from the odd slices in the central region, with those from 135,000 patches obtained from every other slice in the set. Then, 135,000 patches from several cells from the [Formula: see text] slices were generated automatically using the image processing algorithm. Finally, the two sets of 135,000 pairs were combined to train once more with 270,000 pairs. As would be expected, the accuracy and Jaccard similarity index improved as the number of pairs increased for the ROI. This was also observed qualitatively for the [Formula: see text] slices. When the [Formula: see text] slices were segmented with U-Nets trained with 135,000 pairs, the architecture trained with automatically generated pairs provided better results than the architecture trained with the pairs from the manually segmented ground truths. This suggests that the pairs that were extracted automatically from many cells provided a better representation of the four classes of the various cells in the [Formula: see text] slice than those pairs that were manually segmented from a single cell. Finally, the two sets of 135,000 pairs were combined, and the U-Net trained with these provided the best results.
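
The description above evaluates segmentations with accuracy and the Jaccard similarity index over four classes (nucleus, nuclear envelope, cell and background). As an illustration only, the following minimal Python sketch computes a per-class Jaccard index between a predicted label image and a ground-truth label image; the label-to-class mapping, the function name and the toy arrays are assumptions made for the example and are not taken from the authors' code.

    # Hypothetical sketch (not the authors' implementation): per-class Jaccard
    # similarity index, |P ∩ G| / |P ∪ G|, for a multi-class label image.
    import numpy as np

    CLASSES = {0: "background", 1: "cell", 2: "nuclear envelope", 3: "nucleus"}

    def jaccard_per_class(prediction, ground_truth):
        """Return {class name: Jaccard index} for every label present."""
        scores = {}
        for label, name in CLASSES.items():
            pred = prediction == label
            gt = ground_truth == label
            union = np.logical_or(pred, gt).sum()
            if union == 0:
                continue  # class absent from both images; nothing to score
            intersection = np.logical_and(pred, gt).sum()
            scores[name] = intersection / union
        return scores

    # Toy usage with two small label images containing values 0-3
    pred = np.array([[0, 1, 1], [2, 3, 3], [0, 0, 1]])
    gt   = np.array([[0, 1, 1], [2, 2, 3], [0, 1, 1]])
    print(jaccard_per_class(pred, gt))
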
format Online
Article
Text
id pubmed-10058680
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-10058680 2023-03-30 Impact of Training Data, Ground Truth and Shape Variability in the Deep Learning-Based Semantic Segmentation of HeLa Cells Observed with Electron Microscopy Karabağ, Cefa Ortega-Ruíz, Mauricio Alberto Reyes-Aldasoro, Constantino Carlos J Imaging Article This paper investigates the impact of the amount of training data and the shape variability on the segmentation provided by the deep learning architecture U-Net. Further, the correctness of the ground truth (GT) was also evaluated. The input data consisted of a three-dimensional set of images of HeLa cells observed with an electron microscope with dimensions [Formula: see text]. From there, a smaller region of interest (ROI) of [Formula: see text] was cropped and manually delineated to obtain the ground truth necessary for a quantitative evaluation. A qualitative evaluation was performed on the [Formula: see text] slices due to the lack of ground truth. Pairs of patches of data and labels for the classes nucleus, nuclear envelope, cell and background were generated to train U-Net architectures from scratch. Several training strategies were followed, and the results were compared against a traditional image processing algorithm. The correctness of the GT, that is, the inclusion of one or more nuclei within the region of interest, was also evaluated. The impact of the extent of training data was evaluated by comparing the results from 36,000 pairs of data and label patches, extracted from the odd slices in the central region, with those from 135,000 patches obtained from every other slice in the set. Then, 135,000 patches from several cells from the [Formula: see text] slices were generated automatically using the image processing algorithm. Finally, the two sets of 135,000 pairs were combined to train once more with 270,000 pairs. As would be expected, the accuracy and Jaccard similarity index improved as the number of pairs increased for the ROI. This was also observed qualitatively for the [Formula: see text] slices. When the [Formula: see text] slices were segmented with U-Nets trained with 135,000 pairs, the architecture trained with automatically generated pairs provided better results than the architecture trained with the pairs from the manually segmented ground truths. This suggests that the pairs that were extracted automatically from many cells provided a better representation of the four classes of the various cells in the [Formula: see text] slice than those pairs that were manually segmented from a single cell. Finally, the two sets of 135,000 pairs were combined, and the U-Net trained with these provided the best results. MDPI 2023-03-01 /pmc/articles/PMC10058680/ /pubmed/36976110 http://dx.doi.org/10.3390/jimaging9030059 Text en © 2023 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
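
The record also describes generating pairs of data and label patches from selected slices to train U-Net from scratch. The short sketch below shows one plausible way to cut a slice and its label image into aligned patch pairs; the patch size, stride, function name and random toy data are illustrative assumptions and do not reflect the authors' pipeline.

    # Illustrative sketch (not the authors' code): extract aligned data/label
    # patch pairs from one EM slice and its per-pixel class labels.
    import numpy as np

    def extract_patch_pairs(image, labels, patch_size=128, stride=64):
        """Yield (data_patch, label_patch) pairs sampled on a regular grid."""
        assert image.shape == labels.shape, "image and labels must be aligned"
        rows, cols = image.shape
        for r in range(0, rows - patch_size + 1, stride):
            for c in range(0, cols - patch_size + 1, stride):
                yield (image[r:r + patch_size, c:c + patch_size],
                       labels[r:r + patch_size, c:c + patch_size])

    # Toy usage with random data standing in for a slice and its 4-class labels
    slice_img = np.random.rand(512, 512)
    slice_lab = np.random.randint(0, 4, size=(512, 512))
    pairs = list(extract_patch_pairs(slice_img, slice_lab))
    print(len(pairs), "patch pairs")  # 49 pairs for 512x512 with these settings
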
spellingShingle Article
Karabağ, Cefa
Ortega-Ruíz, Mauricio Alberto
Reyes-Aldasoro, Constantino Carlos
Impact of Training Data, Ground Truth and Shape Variability in the Deep Learning-Based Semantic Segmentation of HeLa Cells Observed with Electron Microscopy
title Impact of Training Data, Ground Truth and Shape Variability in the Deep Learning-Based Semantic Segmentation of HeLa Cells Observed with Electron Microscopy
title_full Impact of Training Data, Ground Truth and Shape Variability in the Deep Learning-Based Semantic Segmentation of HeLa Cells Observed with Electron Microscopy
title_fullStr Impact of Training Data, Ground Truth and Shape Variability in the Deep Learning-Based Semantic Segmentation of HeLa Cells Observed with Electron Microscopy
title_full_unstemmed Impact of Training Data, Ground Truth and Shape Variability in the Deep Learning-Based Semantic Segmentation of HeLa Cells Observed with Electron Microscopy
title_short Impact of Training Data, Ground Truth and Shape Variability in the Deep Learning-Based Semantic Segmentation of HeLa Cells Observed with Electron Microscopy
title_sort impact of training data, ground truth and shape variability in the deep learning-based semantic segmentation of hela cells observed with electron microscopy
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10058680/
https://www.ncbi.nlm.nih.gov/pubmed/36976110
http://dx.doi.org/10.3390/jimaging9030059
work_keys_str_mv AT karabagcefa impactoftrainingdatagroundtruthandshapevariabilityinthedeeplearningbasedsemanticsegmentationofhelacellsobservedwithelectronmicroscopy
AT ortegaruizmauricioalberto impactoftrainingdatagroundtruthandshapevariabilityinthedeeplearningbasedsemanticsegmentationofhelacellsobservedwithelectronmicroscopy
AT reyesaldasoroconstantinocarlos impactoftrainingdatagroundtruthandshapevariabilityinthedeeplearningbasedsemanticsegmentationofhelacellsobservedwithelectronmicroscopy