Fusion of encoder-decoder deep networks improves delineation of multiple nuclear phenotypes
BACKGROUND: Nuclear segmentation is an important step for profiling aberrant regions of histology sections. If nuclear segmentation can be resolved, then new biomarkers of nuclear phenotypes and their organization can be predicted for the application of precision medicine. However, segmentation is a...
Main Authors: | Khoshdeli, Mina; Winkelmaier, Garrett; Parvin, Bahram |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | BioMed Central, 2018 |
Subjects: | |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6081825/ https://www.ncbi.nlm.nih.gov/pubmed/30086715 http://dx.doi.org/10.1186/s12859-018-2285-0 |
_version_ | 1783345717234892800 |
---|---|
author | Khoshdeli, Mina Winkelmaier, Garrett Parvin, Bahram |
author_facet | Khoshdeli, Mina Winkelmaier, Garrett Parvin, Bahram |
author_sort | Khoshdeli, Mina |
collection | PubMed |
description | BACKGROUND: Nuclear segmentation is an important step for profiling aberrant regions of histology sections. If nuclear segmentation can be resolved, then new biomarkers of nuclear phenotypes and their organization can be predicted for the application of precision medicine. However, segmentation is a complex problem as a result of variations in nuclear geometry (e.g., size, shape), nuclear type (e.g., epithelial, fibroblast), nuclear phenotypes (e.g., vesicular, aneuploid), and overlapping nuclei. The problem is further complicated by variations in sample preparation (e.g., fixation, staining). Our hypothesis is that (i) deep learning techniques can learn the complex phenotypic signatures that arise in tumor sections, and (ii) fusion of different representations (e.g., regions, boundaries) contributes to improved nuclear segmentation. RESULTS: We demonstrate that training deep encoder-decoder convolutional networks overcomes the complexities associated with multiple nuclear phenotypes, and we evaluate alternative deep learning architectures, weighing improved performance against simplicity of design. In addition, improved nuclear segmentation is achieved by color decomposition and by combining region- and boundary-based features through a fusion network. The trained models have been evaluated against approximately 19,000 manually annotated nuclei, and object-level Precision, Recall, F1-score, and Standard Error are reported, with the best F1-score being 0.91. Raw training images, annotated images, processed images, and source code are released as part of Additional file 1. CONCLUSIONS: There are two intrinsic barriers to nuclear segmentation in H&E-stained images, which correspond to the diversity of nuclear phenotypes and the perceptual boundaries between adjacent cells. We demonstrate that (i) the encoder-decoder architecture can learn complex phenotypes that include the vesicular type; (ii) delineation of overlapping nuclei is enhanced by fusion of region- and edge-based networks; (iii) fusion of ENets produces an improved result over fusion of UNets; and (iv) fusion of networks is better than multitask learning. We suggest that our protocol enables processing a large cohort of whole slide images for applications in precision medicine. ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (10.1186/s12859-018-2285-0) contains supplementary material, which is available to authorized users. |
format | Online Article Text |
id | pubmed-6081825 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2018 |
publisher | BioMed Central |
record_format | MEDLINE/PubMed |
spelling | pubmed-6081825 2018-08-09. Fusion of encoder-decoder deep networks improves delineation of multiple nuclear phenotypes. Khoshdeli, Mina; Winkelmaier, Garrett; Parvin, Bahram. BMC Bioinformatics, Research Article (abstract as in the description field above). BioMed Central 2018-08-07 /pmc/articles/PMC6081825/ /pubmed/30086715 http://dx.doi.org/10.1186/s12859-018-2285-0 Text en © The Author(s) 2018. Open Access: This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated. |
spellingShingle | Research Article Khoshdeli, Mina Winkelmaier, Garrett Parvin, Bahram Fusion of encoder-decoder deep networks improves delineation of multiple nuclear phenotypes |
title | Fusion of encoder-decoder deep networks improves delineation of multiple nuclear phenotypes |
title_full | Fusion of encoder-decoder deep networks improves delineation of multiple nuclear phenotypes |
title_fullStr | Fusion of encoder-decoder deep networks improves delineation of multiple nuclear phenotypes |
title_full_unstemmed | Fusion of encoder-decoder deep networks improves delineation of multiple nuclear phenotypes |
title_short | Fusion of encoder-decoder deep networks improves delineation of multiple nuclear phenotypes |
title_sort | fusion of encoder-decoder deep networks improves delineation of multiple nuclear phenotypes |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6081825/ https://www.ncbi.nlm.nih.gov/pubmed/30086715 http://dx.doi.org/10.1186/s12859-018-2285-0 |
work_keys_str_mv | AT khoshdelimina fusionofencoderdecoderdeepnetworksimprovesdelineationofmultiplenuclearphenotypes AT winkelmaiergarrett fusionofencoderdecoderdeepnetworksimprovesdelineationofmultiplenuclearphenotypes AT parvinbahram fusionofencoderdecoderdeepnetworksimprovesdelineationofmultiplenuclearphenotypes |
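The description field above highlights two methodological points: fusing region- and boundary-based network outputs so that touching nuclei can be separated, and reporting object-level Precision, Recall, and F1. The short Python sketch below only illustrates those two ideas; the function names, the hand-crafted multiplicative fusion rule, and the toy counts are hypothetical and are not taken from the authors' released code, which, per the abstract, learns the fusion with a convolutional network.

```python
# Illustrative sketch only -- not the authors' released code.
import numpy as np
from scipy import ndimage  # used only to label connected components


def fuse_region_and_boundary(region_prob, boundary_prob, threshold=0.5):
    """Suppress the region probability map along predicted boundaries, then
    threshold and label, so one merged foreground blob splits into per-nucleus
    objects. The multiplicative rule is a hand-crafted stand-in for a learned
    fusion network."""
    fused = region_prob * (1.0 - boundary_prob)
    labels, _ = ndimage.label(fused > threshold)
    return labels


def object_level_scores(n_annotated, n_predicted, n_matched):
    """Object-level Precision/Recall/F1 from counts of annotated nuclei,
    predicted nuclei, and one-to-one matches (the matching criterion is not
    restated in this record)."""
    precision = n_matched / n_predicted if n_predicted else 0.0
    recall = n_matched / n_annotated if n_annotated else 0.0
    denom = precision + recall
    return precision, recall, (2 * precision * recall / denom) if denom else 0.0


# Tiny synthetic example: two touching nuclei separated by a boundary response.
region = np.zeros((8, 12)); region[2:6, 2:10] = 0.9    # one merged foreground region
boundary = np.zeros((8, 12)); boundary[:, 6] = 0.9     # predicted edge between the two nuclei
print(fuse_region_and_boundary(region, boundary).max())  # -> 2 separated objects
print(object_level_scores(100, 95, 88))                   # toy counts, not the paper's results
```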