
Generative Adversarial Domain Adaptation for Nucleus Quantification in Images of Tissue Immunohistochemically Stained for Ki-67


Bibliographic Details
Main Authors: Zhang, Xuhong, Cornish, Toby C., Yang, Lin, Bennett, Tellen D., Ghosh, Debashis, Xing, Fuyong
Format: Online Article Text
Language: English
Published: American Society of Clinical Oncology 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7397778/
https://www.ncbi.nlm.nih.gov/pubmed/32730116
http://dx.doi.org/10.1200/CCI.19.00108
_version_ 1783565833387114496
author Zhang, Xuhong
Cornish, Toby C.
Yang, Lin
Bennett, Tellen D.
Ghosh, Debashis
Xing, Fuyong
author_facet Zhang, Xuhong
Cornish, Toby C.
Yang, Lin
Bennett, Tellen D.
Ghosh, Debashis
Xing, Fuyong
author_sort Zhang, Xuhong
collection PubMed
description PURPOSE: We focus on the problem of scarcity of annotated training data for nucleus recognition in Ki-67 immunohistochemistry (IHC)–stained pancreatic neuroendocrine tumor (NET) images. We hypothesize that deep learning–based domain adaptation is helpful for nucleus recognition when image annotations are unavailable in target data sets. METHODS: We considered 2 different institutional pancreatic NET data sets: one (ie, source) containing 38 cases with 114 annotated images and the other (ie, target) containing 72 cases with 20 annotated images. The gold standards were manually annotated by 1 pathologist. We developed a novel deep learning–based domain adaptation framework to count different types of nuclei (ie, immunopositive tumor, immunonegative tumor, nontumor nuclei). We compared the proposed method with several recent fully supervised deep learning models, such as fully convolutional network-8s (FCN-8s), U-Net, fully convolutional regression network (FCRN) A, FCRN B, and fully residual convolutional network (FRCN). We also evaluated the proposed method by learning with a mixture of converted source images and real target annotations. RESULTS: Our method achieved an F1 score of 81.3% and 62.3% for nucleus detection and classification in the target data set, respectively. Our method outperformed FCN-8s (53.6% and 43.6% for nucleus detection and classification, respectively), U-Net (61.1% and 47.6%), FCRN A (63.4% and 55.8%), and FCRN B (68.2% and 60.6%) in terms of F1 score and was competitive with FRCN (81.7% and 70.7%). In addition, learning with a mixture of converted source images and only a small set of real target labels could further boost the performance. CONCLUSION: This study demonstrates that deep learning–based domain adaptation is helpful for nucleus recognition in Ki-67 IHC–stained images when target data annotations are not available. It would improve the applicability of deep learning models designed for downstream supervised learning tasks on different data sets.
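For reference, the detection F1 scores reported above are the harmonic mean of precision and recall over matched nuclei. Below is a minimal Python sketch of how such a detection F1 can be computed by pairing predicted nucleus centroids with ground-truth annotations; the greedy nearest-neighbor matching and the 15-pixel match radius are illustrative assumptions, not details taken from the paper.

# Minimal sketch: detection F1 from predicted vs. ground-truth nucleus centroids.
# The greedy nearest-neighbor matching and the 15-pixel radius are illustrative
# assumptions; the abstract does not state the exact matching protocol.
import math

def detection_f1(pred, truth, max_dist=15.0):
    """pred, truth: lists of (x, y) centroids; max_dist: match radius in pixels."""
    unmatched = list(truth)
    tp = 0
    for px, py in pred:
        best_i, best_d = None, max_dist
        for i, (tx, ty) in enumerate(unmatched):
            d = math.hypot(px - tx, py - ty)
            if d <= best_d:
                best_i, best_d = i, d
        if best_i is not None:
            tp += 1
            unmatched.pop(best_i)  # each annotated nucleus can be matched only once
    fp = len(pred) - tp   # predictions with no nearby annotation
    fn = len(truth) - tp  # annotated nuclei that were missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Example: 3 predicted centroids against 4 annotated nuclei.
print(detection_f1([(10, 10), (52, 48), (200, 200)],
                   [(12, 9), (50, 50), (90, 90), (300, 300)]))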
format Online
Article
Text
id pubmed-7397778
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher American Society of Clinical Oncology
record_format MEDLINE/PubMed
spelling pubmed-7397778 2021-07-30 Generative Adversarial Domain Adaptation for Nucleus Quantification in Images of Tissue Immunohistochemically Stained for Ki-67 Zhang, Xuhong Cornish, Toby C. Yang, Lin Bennett, Tellen D. Ghosh, Debashis Xing, Fuyong JCO Clin Cancer Inform ORIGINAL REPORTS PURPOSE: We focus on the problem of scarcity of annotated training data for nucleus recognition in Ki-67 immunohistochemistry (IHC)–stained pancreatic neuroendocrine tumor (NET) images. We hypothesize that deep learning–based domain adaptation is helpful for nucleus recognition when image annotations are unavailable in target data sets. METHODS: We considered 2 different institutional pancreatic NET data sets: one (ie, source) containing 38 cases with 114 annotated images and the other (ie, target) containing 72 cases with 20 annotated images. The gold standards were manually annotated by 1 pathologist. We developed a novel deep learning–based domain adaptation framework to count different types of nuclei (ie, immunopositive tumor, immunonegative tumor, nontumor nuclei). We compared the proposed method with several recent fully supervised deep learning models, such as fully convolutional network-8s (FCN-8s), U-Net, fully convolutional regression network (FCRN) A, FCRN B, and fully residual convolutional network (FRCN). We also evaluated the proposed method by learning with a mixture of converted source images and real target annotations. RESULTS: Our method achieved an F1 score of 81.3% and 62.3% for nucleus detection and classification in the target data set, respectively. Our method outperformed FCN-8s (53.6% and 43.6% for nucleus detection and classification, respectively), U-Net (61.1% and 47.6%), FCRN A (63.4% and 55.8%), and FCRN B (68.2% and 60.6%) in terms of F1 score and was competitive with FRCN (81.7% and 70.7%). In addition, learning with a mixture of converted source images and only a small set of real target labels could further boost the performance. CONCLUSION: This study demonstrates that deep learning–based domain adaptation is helpful for nucleus recognition in Ki-67 IHC–stained images when target data annotations are not available. It would improve the applicability of deep learning models designed for downstream supervised learning tasks on different data sets. American Society of Clinical Oncology 2020-07-30 /pmc/articles/PMC7397778/ /pubmed/32730116 http://dx.doi.org/10.1200/CCI.19.00108 Text en © 2020 by American Society of Clinical Oncology https://creativecommons.org/licenses/by/4.0/ Licensed under the Creative Commons Attribution 4.0 License: https://creativecommons.org/licenses/by/4.0/
spellingShingle ORIGINAL REPORTS
Zhang, Xuhong
Cornish, Toby C.
Yang, Lin
Bennett, Tellen D.
Ghosh, Debashis
Xing, Fuyong
Generative Adversarial Domain Adaptation for Nucleus Quantification in Images of Tissue Immunohistochemically Stained for Ki-67
title Generative Adversarial Domain Adaptation for Nucleus Quantification in Images of Tissue Immunohistochemically Stained for Ki-67
title_full Generative Adversarial Domain Adaptation for Nucleus Quantification in Images of Tissue Immunohistochemically Stained for Ki-67
title_fullStr Generative Adversarial Domain Adaptation for Nucleus Quantification in Images of Tissue Immunohistochemically Stained for Ki-67
title_full_unstemmed Generative Adversarial Domain Adaptation for Nucleus Quantification in Images of Tissue Immunohistochemically Stained for Ki-67
title_short Generative Adversarial Domain Adaptation for Nucleus Quantification in Images of Tissue Immunohistochemically Stained for Ki-67
title_sort generative adversarial domain adaptation for nucleus quantification in images of tissue immunohistochemically stained for ki-67
topic ORIGINAL REPORTS
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7397778/
https://www.ncbi.nlm.nih.gov/pubmed/32730116
http://dx.doi.org/10.1200/CCI.19.00108
work_keys_str_mv AT zhangxuhong generativeadversarialdomainadaptationfornucleusquantificationinimagesoftissueimmunohistochemicallystainedforki67
AT cornishtobyc generativeadversarialdomainadaptationfornucleusquantificationinimagesoftissueimmunohistochemicallystainedforki67
AT yanglin generativeadversarialdomainadaptationfornucleusquantificationinimagesoftissueimmunohistochemicallystainedforki67
AT bennetttellend generativeadversarialdomainadaptationfornucleusquantificationinimagesoftissueimmunohistochemicallystainedforki67
AT ghoshdebashis generativeadversarialdomainadaptationfornucleusquantificationinimagesoftissueimmunohistochemicallystainedforki67
AT xingfuyong generativeadversarialdomainadaptationfornucleusquantificationinimagesoftissueimmunohistochemicallystainedforki67