Synthetic Source Universal Domain Adaptation through Contrastive Learning
Main author: Cho, Jungchan
Format: Online Article Text
Language: English
Published: MDPI, 2021
Online access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8620052/ https://www.ncbi.nlm.nih.gov/pubmed/34833615 http://dx.doi.org/10.3390/s21227539
Field | Value
---|---
_version_ | 1784605132237832192
author | Cho, Jungchan |
author_facet | Cho, Jungchan |
author_sort | Cho, Jungchan |
collection | PubMed |
description | Universal domain adaptation (UDA) is a crucial research topic for efficient deep learning model training using data from various imaging sensors. However, its development is hindered by the unlabeled target data. Moreover, the absence of prior knowledge about the source and target domains makes it more challenging for UDA to train models. I hypothesize that the degradation of trained models in the target domain is caused by the lack of a direct training loss to improve the discriminative power of the target domain data. As a result, the target data adapted to the source representations are biased toward the source domain. I found that the degradation was more pronounced when I used synthetic data for the source domain and real data for the target domain. In this paper, I propose a UDA method with target domain contrastive learning. The proposed method enables models to leverage synthetic data for the source domain and to improve the discriminativeness of target features in an unsupervised manner. In addition, the target domain feature extraction network is shared with the source domain classification task, preventing unnecessary computational growth. Extensive experimental results on VisDA-2017 and MNIST to SVHN demonstrated that the proposed method significantly outperforms the baseline by 2.7% and 5.1%, respectively.
format | Online Article Text |
id | pubmed-8620052 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-8620052 2021-11-27 Synthetic Source Universal Domain Adaptation through Contrastive Learning Cho, Jungchan Sensors (Basel) Article. Abstract as in the description field above. MDPI 2021-11-12 /pmc/articles/PMC8620052/ /pubmed/34833615 http://dx.doi.org/10.3390/s21227539 Text en © 2021 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle | Article Cho, Jungchan Synthetic Source Universal Domain Adaptation through Contrastive Learning |
title | Synthetic Source Universal Domain Adaptation through Contrastive Learning |
title_full | Synthetic Source Universal Domain Adaptation through Contrastive Learning |
title_fullStr | Synthetic Source Universal Domain Adaptation through Contrastive Learning |
title_full_unstemmed | Synthetic Source Universal Domain Adaptation through Contrastive Learning |
title_short | Synthetic Source Universal Domain Adaptation through Contrastive Learning |
title_sort | synthetic source universal domain adaptation through contrastive learning |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8620052/ https://www.ncbi.nlm.nih.gov/pubmed/34833615 http://dx.doi.org/10.3390/s21227539 |
work_keys_str_mv | AT chojungchan syntheticsourceuniversaldomainadaptationthroughcontrastivelearning |
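The abstract describes improving the discriminativeness of unlabeled target-domain features with a contrastive loss computed on top of a feature extractor shared with the source classifier. The record does not include the paper's actual formulation, so the sketch below is only a minimal illustration of a generic InfoNCE-style (NT-Xent) contrastive loss over two augmented views of the same target batch; the function name, NumPy implementation, feature shapes, and temperature value are all assumptions, not the author's code.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """InfoNCE-style contrastive loss between two views of target features.

    z1, z2: (n, d) arrays of features for the same n unlabeled target
    samples under two different augmentations (hypothetical shapes).
    """
    # Cosine similarity via L2-normalized features.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    z = np.concatenate([z1, z2], axis=0)          # (2n, d) stacked views
    sim = z @ z.T / temperature                   # (2n, 2n) pairwise scores
    np.fill_diagonal(sim, -np.inf)                # a sample is not its own pair
    n = z1.shape[0]
    # Row i's positive is the other augmented view of the same sample.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # Numerically stable log-sum-exp over each row (softmax denominator).
    m = sim.max(axis=1, keepdims=True)
    logsumexp = m[:, 0] + np.log(np.exp(sim - m).sum(axis=1))
    # Cross-entropy: pull positive pairs together, push all others apart.
    loss = logsumexp - sim[np.arange(2 * n), pos]
    return loss.mean()
```

Minimizing a loss of this kind on unlabeled target batches is one standard way to train target-feature discriminativeness without labels, which matches the abstract's stated goal; the paper's exact loss and augmentation pipeline may differ.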