Attentive Continuous Generative Self-training for Unsupervised Domain Adaptive Medical Image Translation
Self-training is an important class of unsupervised domain adaptation (UDA) approaches used to mitigate the problem of domain shift when applying knowledge learned from a labeled source domain to unlabeled and heterogeneous target domains. While self-training-based UDA has shown considerab...
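For context, the following is a minimal, generic sketch of the self-training idea described in the abstract (not the authors' specific method): a model trained on labeled source data assigns pseudo-labels to unlabeled target data, and confident pseudo-labels are fed back as training targets. All names, thresholds, and data here are illustrative assumptions.

```python
# Generic self-training loop for UDA (illustrative sketch only).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy tensors standing in for a labeled source domain and a shifted,
# unlabeled target domain.
src_x, src_y = torch.randn(64, 16), torch.randint(0, 3, (64,))
tgt_x = torch.randn(64, 16) + 0.5

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 3))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

# 1) Supervised training on the labeled source domain.
for _ in range(100):
    opt.zero_grad()
    ce(model(src_x), src_y).backward()
    opt.step()

# 2) Self-training rounds: pseudo-label target samples, keep confident ones,
#    and retrain on source labels plus target pseudo-labels.
for _ in range(3):
    with torch.no_grad():
        probs = model(tgt_x).softmax(dim=1)
        conf, pseudo_y = probs.max(dim=1)
    keep = conf > 0.8  # confidence threshold (assumed value)
    for _ in range(50):
        opt.zero_grad()
        loss = ce(model(src_x), src_y)
        if keep.any():
            loss = loss + ce(model(tgt_x[keep]), pseudo_y[keep])
        loss.backward()
        opt.step()
```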
Main Authors: Liu, Xiaofeng; Prince, Jerry L.; Xing, Fangxu; Zhuo, Jiachen; Reese, Timothy; Stone, Maureen; El Fakhri, Georges; Woo, Jonghye
Format: Online Article Text
Language: English
Published: Cornell University, 2023
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10246114/ https://www.ncbi.nlm.nih.gov/pubmed/37292465
Similar Items
- Unsupervised Black-Box Model Domain Adaptation for Brain Tumor Segmentation
  by: Liu, Xiaofeng, et al.
  Published: (2022)
- Incremental Learning for Heterogeneous Structure Segmentation in Brain Tumor MRI
  by: Liu, Xiaofeng, et al.
  Published: (2023)
- Posterior Estimation Using Deep Learning: A Simulation Study of Compartmental Modeling in Dynamic PET
  by: Liu, Xiaofeng, et al.
  Published: (2023)
- Progressively Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization for Image-to-Image Translation
  by: Lee, Hong-Yu, et al.
  Published: (2023)
- Investigation of Bias in Continuous Medical Image Label Fusion
  by: Xing, Fangxu, et al.
  Published: (2016)