Attentive Continuous Generative Self-training for Unsupervised Domain Adaptive Medical Image Translation
Self-training is an important class of unsupervised domain adaptation (UDA) approaches that are used to mitigate the problem of domain shift, when applying knowledge learned from a labeled source domain to unlabeled and heterogeneous target domains. While self-training-based UDA has shown considerable promise on discriminative tasks, including classification and segmentation, through reliable pseudo-label filtering based on the maximum softmax probability, there is a paucity of prior work on self-training-based UDA for generative tasks, including image modality translation. To fill this gap, in this work, we seek to develop a generative self-training (GST) framework for domain adaptive image translation with continuous value prediction and regression objectives. Specifically, we quantify both aleatoric and epistemic uncertainties within our GST using variational Bayes learning to measure the reliability of synthesized data. We also introduce a self-attention scheme that de-emphasizes the background region to prevent it from dominating the training process. The adaptation is then carried out by an alternating optimization scheme with target domain supervision that focuses attention on the regions with reliable pseudo-labels. We evaluated our framework on two cross-scanner/center, inter-subject translation tasks, including tagged-to-cine magnetic resonance (MR) image translation and T1-weighted MR-to-fractional anisotropy translation. Extensive validations with unpaired target domain data showed that our GST yielded superior synthesis performance in comparison to adversarial training UDA methods.
Main authors: | Liu, Xiaofeng; Prince, Jerry L.; Xing, Fangxu; Zhuo, Jiachen; Reese, Timothy; Stone, Maureen; El Fakhri, Georges; Woo, Jonghye |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Cornell University, 2023 |
Subjects: | |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10246114/ https://www.ncbi.nlm.nih.gov/pubmed/37292465 |
_version_ | 1785054980672061440 |
---|---|
author | Liu, Xiaofeng Prince, Jerry L. Xing, Fangxu Zhuo, Jiachen Reese, Timothy Stone, Maureen El Fakhri, Georges Woo, Jonghye |
author_facet | Liu, Xiaofeng Prince, Jerry L. Xing, Fangxu Zhuo, Jiachen Reese, Timothy Stone, Maureen El Fakhri, Georges Woo, Jonghye |
author_sort | Liu, Xiaofeng |
collection | PubMed |
description | Self-training is an important class of unsupervised domain adaptation (UDA) approaches that are used to mitigate the problem of domain shift, when applying knowledge learned from a labeled source domain to unlabeled and heterogeneous target domains. While self-training-based UDA has shown considerable promise on discriminative tasks, including classification and segmentation, through reliable pseudo-label filtering based on the maximum softmax probability, there is a paucity of prior work on self-training-based UDA for generative tasks, including image modality translation. To fill this gap, in this work, we seek to develop a generative self-training (GST) framework for domain adaptive image translation with continuous value prediction and regression objectives. Specifically, we quantify both aleatoric and epistemic uncertainties within our GST using variational Bayes learning to measure the reliability of synthesized data. We also introduce a self-attention scheme that de-emphasizes the background region to prevent it from dominating the training process. The adaptation is then carried out by an alternating optimization scheme with target domain supervision that focuses attention on the regions with reliable pseudo-labels. We evaluated our framework on two cross-scanner/center, inter-subject translation tasks, including tagged-to-cine magnetic resonance (MR) image translation and T1-weighted MR-to-fractional anisotropy translation. Extensive validations with unpaired target domain data showed that our GST yielded superior synthesis performance in comparison to adversarial training UDA methods. |
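The description above outlines the core mechanism: a source-trained translator produces continuous pseudo-labels on target-domain images, their reliability is measured via predictive uncertainty, and an attention-weighted regression loss is applied only where the pseudo-labels are trustworthy. A minimal sketch of one such gating-and-loss step follows; it assumes Monte Carlo-dropout-style stochastic forward passes as the uncertainty estimator, and the function names and the threshold `tau` are hypothetical, not taken from the paper:

```python
import numpy as np

def pseudo_label_filter(mc_preds, tau=0.05):
    """Gate continuous pseudo-labels by epistemic uncertainty.

    mc_preds : (T, H, W) array of T stochastic forward passes of the
               source-trained translator on one target-domain image
               (e.g., with dropout kept active at inference).
    tau      : uncertainty threshold (hypothetical value).

    Returns the pseudo-label map (predictive mean) and a binary
    reliability mask marking pixels whose variance across passes
    is below tau.
    """
    pseudo = mc_preds.mean(axis=0)          # predictive mean -> pseudo-label
    epistemic = mc_preds.var(axis=0)        # disagreement across passes
    mask = (epistemic < tau).astype(float)  # keep only reliable pixels
    return pseudo, mask

def attentive_l1_loss(pred, pseudo, mask, attention):
    """Attention-weighted L1 loss over reliable pixels only.

    The attention map (values in [0, 1]) de-emphasizes the background
    region so that it cannot dominate the self-training objective.
    """
    w = mask * attention
    return float((w * np.abs(pred - pseudo)).sum() / (w.sum() + 1e-8))
```

In the paper's alternating-optimization view, these two steps would correspond to the pseudo-label generation phase and the target-domain supervision phase, repeated as the translator is updated.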
format | Online Article Text |
id | pubmed-10246114 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Cornell University |
record_format | MEDLINE/PubMed |
spelling | pubmed-102461142023-06-08 Attentive Continuous Generative Self-training for Unsupervised Domain Adaptive Medical Image Translation Liu, Xiaofeng Prince, Jerry L. Xing, Fangxu Zhuo, Jiachen Reese, Timothy Stone, Maureen El Fakhri, Georges Woo, Jonghye ArXiv Article Self-training is an important class of unsupervised domain adaptation (UDA) approaches that are used to mitigate the problem of domain shift, when applying knowledge learned from a labeled source domain to unlabeled and heterogeneous target domains. While self-training-based UDA has shown considerable promise on discriminative tasks, including classification and segmentation, through reliable pseudo-label filtering based on the maximum softmax probability, there is a paucity of prior work on self-training-based UDA for generative tasks, including image modality translation. To fill this gap, in this work, we seek to develop a generative self-training (GST) framework for domain adaptive image translation with continuous value prediction and regression objectives. Specifically, we quantify both aleatoric and epistemic uncertainties within our GST using variational Bayes learning to measure the reliability of synthesized data. We also introduce a self-attention scheme that de-emphasizes the background region to prevent it from dominating the training process. The adaptation is then carried out by an alternating optimization scheme with target domain supervision that focuses attention on the regions with reliable pseudo-labels. We evaluated our framework on two cross-scanner/center, inter-subject translation tasks, including tagged-to-cine magnetic resonance (MR) image translation and T1-weighted MR-to-fractional anisotropy translation. Extensive validations with unpaired target domain data showed that our GST yielded superior synthesis performance in comparison to adversarial training UDA methods. 
Cornell University 2023-05-23 /pmc/articles/PMC10246114/ /pubmed/37292465 Text en https://creativecommons.org/licenses/by/4.0/This work is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/) , which allows reusers to distribute, remix, adapt, and build upon the material in any medium or format, so long as attribution is given to the creator. The license allows for commercial use. |
spellingShingle | Article Liu, Xiaofeng Prince, Jerry L. Xing, Fangxu Zhuo, Jiachen Reese, Timothy Stone, Maureen El Fakhri, Georges Woo, Jonghye Attentive Continuous Generative Self-training for Unsupervised Domain Adaptive Medical Image Translation |
title | Attentive Continuous Generative Self-training for Unsupervised Domain Adaptive Medical Image Translation |
title_full | Attentive Continuous Generative Self-training for Unsupervised Domain Adaptive Medical Image Translation |
title_fullStr | Attentive Continuous Generative Self-training for Unsupervised Domain Adaptive Medical Image Translation |
title_full_unstemmed | Attentive Continuous Generative Self-training for Unsupervised Domain Adaptive Medical Image Translation |
title_short | Attentive Continuous Generative Self-training for Unsupervised Domain Adaptive Medical Image Translation |
title_sort | attentive continuous generative self-training for unsupervised domain adaptive medical image translation |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10246114/ https://www.ncbi.nlm.nih.gov/pubmed/37292465 |
work_keys_str_mv | AT liuxiaofeng attentivecontinuousgenerativeselftrainingforunsuperviseddomainadaptivemedicalimagetranslation AT princejerryl attentivecontinuousgenerativeselftrainingforunsuperviseddomainadaptivemedicalimagetranslation AT xingfangxu attentivecontinuousgenerativeselftrainingforunsuperviseddomainadaptivemedicalimagetranslation AT zhuojiachen attentivecontinuousgenerativeselftrainingforunsuperviseddomainadaptivemedicalimagetranslation AT reesetimothy attentivecontinuousgenerativeselftrainingforunsuperviseddomainadaptivemedicalimagetranslation AT stonemaureen attentivecontinuousgenerativeselftrainingforunsuperviseddomainadaptivemedicalimagetranslation AT elfakhrigeorges attentivecontinuousgenerativeselftrainingforunsuperviseddomainadaptivemedicalimagetranslation AT woojonghye attentivecontinuousgenerativeselftrainingforunsuperviseddomainadaptivemedicalimagetranslation |