Object-stable unsupervised dual contrastive learning image-to-image translation with query-selected attention and convolutional block attention module


Bibliographic Details
Main Authors: Oh, Yunseok, Oh, Seonhye, Noh, Sangwoo, Kim, Hangyu, Seo, Hyeon
Format: Online Article Text
Language: English
Published: Public Library of Science 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10627467/
https://www.ncbi.nlm.nih.gov/pubmed/37930987
http://dx.doi.org/10.1371/journal.pone.0293885
_version_ 1785131533831503872
author Oh, Yunseok
Oh, Seonhye
Noh, Sangwoo
Kim, Hangyu
Seo, Hyeon
author_facet Oh, Yunseok
Oh, Seonhye
Noh, Sangwoo
Kim, Hangyu
Seo, Hyeon
author_sort Oh, Yunseok
collection PubMed
description Recently, contrastive learning has gained popularity in the field of unsupervised image-to-image (I2I) translation. In a previous study, a query-selected attention (QS-Attn) module, which employed an attention matrix with a probability distribution, was used to maximize the mutual information between the source and translated images. This module selected significant queries using an entropy metric computed from the attention matrix. However, it often selected many queries with equal significance measures, leading to an excessive focus on the background. In this study, we propose a dual-learning framework combining QS-Attn with a convolutional block attention module (CBAM), called the object-stable dual contrastive learning generative adversarial network (OS-DCLGAN). We utilize CBAM, which learns what and where to emphasize or suppress, thereby refining intermediate features effectively. CBAM is integrated before the QS-Attn module to capture significant domain information for I2I translation tasks. The proposed framework outperformed recently introduced approaches in various I2I translation tasks, demonstrating its effectiveness and versatility. The code is available at https://github.com/RedPotatoChip/OSUDL
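The description above combines two mechanisms: entropy-based query selection (QS-Attn picks attention rows whose distributions are most concentrated, i.e. lowest entropy) and CBAM-style channel gating (pooled descriptors pass through a shared MLP to produce a per-channel sigmoid gate). The following is a minimal NumPy sketch of those two ideas only, not the authors' implementation; the function names, shapes, and plain MLP weights are illustrative assumptions:

```python
import numpy as np

def qs_attn_select(features, k):
    """Entropy-based query selection (QS-Attn idea, sketched).

    features: (N, C) flattened spatial features; each row acts as a query
    attending over all rows. Returns indices of the k queries whose
    attention rows have the lowest entropy (most concentrated attention).
    """
    logits = features @ features.T / np.sqrt(features.shape[1])  # (N, N)
    logits -= logits.max(axis=1, keepdims=True)                  # stability
    attn = np.exp(logits)
    attn /= attn.sum(axis=1, keepdims=True)                      # row softmax
    entropy = -(attn * np.log(attn + 1e-12)).sum(axis=1)         # per-query H
    return np.argsort(entropy)[:k]                               # lowest first

def channel_gate(fmap, mlp_w1, mlp_w2):
    """CBAM-style channel attention (sketched).

    fmap: (C, H, W). Average- and max-pooled channel descriptors pass
    through a shared two-layer MLP; the summed outputs are squashed by a
    sigmoid into a per-channel gate that rescales the feature map.
    """
    avg = fmap.mean(axis=(1, 2))                                 # (C,)
    mx = fmap.max(axis=(1, 2))                                   # (C,)
    mlp = lambda v: mlp_w2 @ np.maximum(mlp_w1 @ v, 0.0)         # shared MLP
    gate = 1.0 / (1.0 + np.exp(-(mlp(avg) + mlp(mx))))           # sigmoid
    return fmap * gate[:, None, None]
```

In the paper's framework the gating step precedes query selection, so object-relevant channels are amplified before entropy is measured; the sketch keeps the two steps separate for clarity.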
format Online
Article
Text
id pubmed-10627467
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Public Library of Science
record_format MEDLINE/PubMed
spelling pubmed-106274672023-11-07 Object-stable unsupervised dual contrastive learning image-to-image translation with query-selected attention and convolutional block attention module Oh, Yunseok Oh, Seonhye Noh, Sangwoo Kim, Hangyu Seo, Hyeon PLoS One Research Article Recently, contrastive learning has gained popularity in the field of unsupervised image-to-image (I2I) translation. In a previous study, a query-selected attention (QS-Attn) module, which employed an attention matrix with a probability distribution, was used to maximize the mutual information between the source and translated images. This module selected significant queries using an entropy metric computed from the attention matrix. However, it often selected many queries with equal significance measures, leading to an excessive focus on the background. In this study, we proposed a dual-learning framework with QS-Attn and convolutional block attention module (CBAM) called object-stable dual contrastive learning generative adversarial network (OS-DCLGAN). In this paper, we utilize a CBAM, which learns what and where to emphasize or suppress, thereby refining intermediate features effectively. This CBAM was integrated before the QS-Attn module to capture significant domain information for I2I translation tasks. The proposed framework outperformed recently introduced approaches in various I2I translation tasks, showing its effectiveness and versatility. The code is available at https://github.com/RedPotatoChip/OSUDL Public Library of Science 2023-11-06 /pmc/articles/PMC10627467/ /pubmed/37930987 http://dx.doi.org/10.1371/journal.pone.0293885 Text en © 2023 Oh et al https://creativecommons.org/licenses/by/4.0/This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/) , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
spellingShingle Research Article
Oh, Yunseok
Oh, Seonhye
Noh, Sangwoo
Kim, Hangyu
Seo, Hyeon
Object-stable unsupervised dual contrastive learning image-to-image translation with query-selected attention and convolutional block attention module
title Object-stable unsupervised dual contrastive learning image-to-image translation with query-selected attention and convolutional block attention module
title_full Object-stable unsupervised dual contrastive learning image-to-image translation with query-selected attention and convolutional block attention module
title_fullStr Object-stable unsupervised dual contrastive learning image-to-image translation with query-selected attention and convolutional block attention module
title_full_unstemmed Object-stable unsupervised dual contrastive learning image-to-image translation with query-selected attention and convolutional block attention module
title_short Object-stable unsupervised dual contrastive learning image-to-image translation with query-selected attention and convolutional block attention module
title_sort object-stable unsupervised dual contrastive learning image-to-image translation with query-selected attention and convolutional block attention module
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10627467/
https://www.ncbi.nlm.nih.gov/pubmed/37930987
http://dx.doi.org/10.1371/journal.pone.0293885
work_keys_str_mv AT ohyunseok objectstableunsuperviseddualcontrastivelearningimagetoimagetranslationwithqueryselectedattentionandconvolutionalblockattentionmodule
AT ohseonhye objectstableunsuperviseddualcontrastivelearningimagetoimagetranslationwithqueryselectedattentionandconvolutionalblockattentionmodule
AT nohsangwoo objectstableunsuperviseddualcontrastivelearningimagetoimagetranslationwithqueryselectedattentionandconvolutionalblockattentionmodule
AT kimhangyu objectstableunsuperviseddualcontrastivelearningimagetoimagetranslationwithqueryselectedattentionandconvolutionalblockattentionmodule
AT seohyeon objectstableunsuperviseddualcontrastivelearningimagetoimagetranslationwithqueryselectedattentionandconvolutionalblockattentionmodule