Unsupervised Exemplar-Domain Aware Image-to-Image Translation
Image-to-image translation converts an image of one style into another of a target style while preserving the original content. A desirable translator should be capable of generating diverse results in a controllable many-to-many fashion. To this end, we design a novel deep translator, namely the exemplar-domain aware image-to-image translator (EDIT for short). From a logical perspective, the translator needs to perform two main functions, i.e., feature extraction and style transfer. With this logical network partition in mind, the generator of our EDIT comprises a set of blocks configured by shared parameters, with the remaining blocks configured by varied parameters exported by an exemplar-domain aware parameter network, explicitly imitating the functionalities of extraction and mapping. The principle behind this is that, for images from multiple domains, the content features can be obtained by a common extractor, while (re-)stylization is achieved by mapping the extracted features to different targets (domains and exemplars). In addition, a discriminator is employed during the training phase to guarantee that the output satisfies the distribution of the target domain. Our EDIT can flexibly and effectively work on multiple domains and arbitrary exemplars within a single, unified model. We conduct experiments to show the efficacy of our design, and reveal its advantages over other state-of-the-art methods both quantitatively and qualitatively.
Main Authors: Fu, Yuanbin; Ma, Jiayi; Guo, Xiaojie
Format: Online Article Text
Language: English
Published: MDPI, 2021
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8147429/ https://www.ncbi.nlm.nih.gov/pubmed/34063192 http://dx.doi.org/10.3390/e23050565
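As a rough illustration of the architecture the abstract describes — a generator whose early blocks share parameters across all domains (content extraction), while later blocks use weights emitted by an exemplar-domain aware parameter network (stylization) — here is a minimal NumPy sketch. All names, shapes, and the linear-layer simplification are hypothetical illustrations, not taken from the paper; the actual EDIT model uses convolutional blocks and an adversarial discriminator during training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared extractor: one set of weights reused for every domain and exemplar
# (hypothetical shapes; the real model uses convolutional blocks).
W_shared = rng.standard_normal((64, 32))

def extract(image_feat):
    """Content extraction with parameters shared across all domains."""
    return np.tanh(image_feat @ W_shared)

# Exemplar-domain aware parameter network: maps a domain one-hot vector plus
# an exemplar style code to the weights of the stylization block — i.e., the
# "varied parameters exported by a parameter network" mentioned in the abstract.
n_domains, style_dim = 3, 8
W_param = rng.standard_normal((n_domains + style_dim, 32 * 32))

def stylization_weights(domain_id, style_code):
    cond = np.concatenate([np.eye(n_domains)[domain_id], style_code])
    return (cond @ W_param).reshape(32, 32)

def translate(image_feat, domain_id, style_code):
    content = extract(image_feat)                         # shared blocks
    W_style = stylization_weights(domain_id, style_code)  # exported per domain/exemplar
    return content @ W_style                              # varied blocks

# The same content features, restyled toward two different domains/exemplars.
x = rng.standard_normal(64)
out_a = translate(x, domain_id=0, style_code=rng.standard_normal(style_dim))
out_b = translate(x, domain_id=1, style_code=rng.standard_normal(style_dim))
```

Because the extractor is shared, only the exported stylization weights change between `out_a` and `out_b`, which is what lets one unified model serve multiple domains and arbitrary exemplars.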
| Field | Value |
|---|---|
| _version_ | 1783697628762996736 |
| author | Fu, Yuanbin; Ma, Jiayi; Guo, Xiaojie |
| author_facet | Fu, Yuanbin; Ma, Jiayi; Guo, Xiaojie |
| author_sort | Fu, Yuanbin |
| collection | PubMed |
| description | Image-to-image translation converts an image of one style into another of a target style while preserving the original content. A desirable translator should be capable of generating diverse results in a controllable many-to-many fashion. To this end, we design a novel deep translator, namely the exemplar-domain aware image-to-image translator (EDIT for short). From a logical perspective, the translator needs to perform two main functions, i.e., feature extraction and style transfer. With this logical network partition in mind, the generator of our EDIT comprises a set of blocks configured by shared parameters, with the remaining blocks configured by varied parameters exported by an exemplar-domain aware parameter network, explicitly imitating the functionalities of extraction and mapping. The principle behind this is that, for images from multiple domains, the content features can be obtained by a common extractor, while (re-)stylization is achieved by mapping the extracted features to different targets (domains and exemplars). In addition, a discriminator is employed during the training phase to guarantee that the output satisfies the distribution of the target domain. Our EDIT can flexibly and effectively work on multiple domains and arbitrary exemplars within a single, unified model. We conduct experiments to show the efficacy of our design, and reveal its advantages over other state-of-the-art methods both quantitatively and qualitatively. |
| format | Online Article Text |
| id | pubmed-8147429 |
| institution | National Center for Biotechnology Information |
| language | English |
| publishDate | 2021 |
| publisher | MDPI |
| record_format | MEDLINE/PubMed |
| spelling | pubmed-8147429 2021-05-26 Unsupervised Exemplar-Domain Aware Image-to-Image Translation. Fu, Yuanbin; Ma, Jiayi; Guo, Xiaojie. Entropy (Basel), Article. Image-to-image translation converts an image of one style into another of a target style while preserving the original content. A desirable translator should be capable of generating diverse results in a controllable many-to-many fashion. To this end, we design a novel deep translator, namely the exemplar-domain aware image-to-image translator (EDIT for short). From a logical perspective, the translator needs to perform two main functions, i.e., feature extraction and style transfer. With this logical network partition in mind, the generator of our EDIT comprises a set of blocks configured by shared parameters, with the remaining blocks configured by varied parameters exported by an exemplar-domain aware parameter network, explicitly imitating the functionalities of extraction and mapping. The principle behind this is that, for images from multiple domains, the content features can be obtained by a common extractor, while (re-)stylization is achieved by mapping the extracted features to different targets (domains and exemplars). In addition, a discriminator is employed during the training phase to guarantee that the output satisfies the distribution of the target domain. Our EDIT can flexibly and effectively work on multiple domains and arbitrary exemplars within a single, unified model. We conduct experiments to show the efficacy of our design, and reveal its advantages over other state-of-the-art methods both quantitatively and qualitatively. MDPI 2021-05-02 /pmc/articles/PMC8147429/ /pubmed/34063192 http://dx.doi.org/10.3390/e23050565 Text en © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
| spellingShingle | Article; Fu, Yuanbin; Ma, Jiayi; Guo, Xiaojie; Unsupervised Exemplar-Domain Aware Image-to-Image Translation |
| title | Unsupervised Exemplar-Domain Aware Image-to-Image Translation |
| title_full | Unsupervised Exemplar-Domain Aware Image-to-Image Translation |
| title_fullStr | Unsupervised Exemplar-Domain Aware Image-to-Image Translation |
| title_full_unstemmed | Unsupervised Exemplar-Domain Aware Image-to-Image Translation |
| title_short | Unsupervised Exemplar-Domain Aware Image-to-Image Translation |
| title_sort | unsupervised exemplar-domain aware image-to-image translation |
| topic | Article |
| url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8147429/ https://www.ncbi.nlm.nih.gov/pubmed/34063192 http://dx.doi.org/10.3390/e23050565 |
| work_keys_str_mv | AT fuyuanbin unsupervisedexemplardomainawareimagetoimagetranslation; AT majiayi unsupervisedexemplardomainawareimagetoimagetranslation; AT guoxiaojie unsupervisedexemplardomainawareimagetoimagetranslation |