GD-StarGAN: Multi-domain image-to-image translation in garment design
In the field of fashion design, designing a garment image according to a texture essentially means reshaping the texture image, and image-to-image translation based on Generative Adversarial Networks (GANs) can do this well, saving fashion designers a great deal of time and effort. GAN-based image-...
Main Authors: | Shen, Yangyun; Huang, Runnan; Huang, Wenkai |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Public Library of Science 2020 |
Subjects: | Research Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7173925/ https://www.ncbi.nlm.nih.gov/pubmed/32315361 http://dx.doi.org/10.1371/journal.pone.0231719 |
_version_ | 1783524535160537088 |
---|---|
author | Shen, Yangyun; Huang, Runnan; Huang, Wenkai
author_facet | Shen, Yangyun; Huang, Runnan; Huang, Wenkai
author_sort | Shen, Yangyun |
collection | PubMed |
description | In the field of fashion design, designing a garment image according to a texture essentially means reshaping the texture image, and image-to-image translation based on Generative Adversarial Networks (GANs) can do this well, saving fashion designers a great deal of time and effort. GAN-based image-to-image translation has made great progress in recent years. One such model, StarGAN, achieves multi-domain image-to-image translation using only a single generator and a single discriminator. This paper details the use of StarGAN for garment design: users need only input an image and a garment-type label to generate garment images carrying the texture of the input image. However, the quality of the images generated by StarGAN proved unsatisfactory. This paper therefore introduces improvements to the structure of the StarGAN generator and to its loss function, yielding a model better suited to garment design, called GD-StarGAN. Using a dataset of garments spanning seven categories, this paper demonstrates that GD-StarGAN substantially outperforms StarGAN for garment design, especially in texture quality. (A minimal, illustrative sketch of this image-plus-label generation interface follows the record below.) |
format | Online Article Text |
id | pubmed-7173925 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2020 |
publisher | Public Library of Science |
record_format | MEDLINE/PubMed |
spelling | pubmed-7173925 2020-04-27 GD-StarGAN: Multi-domain image-to-image translation in garment design Shen, Yangyun; Huang, Runnan; Huang, Wenkai PLoS One Research Article In the field of fashion design, designing a garment image according to a texture essentially means reshaping the texture image, and image-to-image translation based on Generative Adversarial Networks (GANs) can do this well, saving fashion designers a great deal of time and effort. GAN-based image-to-image translation has made great progress in recent years. One such model, StarGAN, achieves multi-domain image-to-image translation using only a single generator and a single discriminator. This paper details the use of StarGAN for garment design: users need only input an image and a garment-type label to generate garment images carrying the texture of the input image. However, the quality of the images generated by StarGAN proved unsatisfactory. This paper therefore introduces improvements to the structure of the StarGAN generator and to its loss function, yielding a model better suited to garment design, called GD-StarGAN. Using a dataset of garments spanning seven categories, this paper demonstrates that GD-StarGAN substantially outperforms StarGAN for garment design, especially in texture quality. Public Library of Science 2020-04-21 /pmc/articles/PMC7173925/ /pubmed/32315361 http://dx.doi.org/10.1371/journal.pone.0231719 Text en © 2020 Shen et al http://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. |
spellingShingle | Research Article; Shen, Yangyun; Huang, Runnan; Huang, Wenkai; GD-StarGAN: Multi-domain image-to-image translation in garment design
title | GD-StarGAN: Multi-domain image-to-image translation in garment design |
title_full | GD-StarGAN: Multi-domain image-to-image translation in garment design |
title_fullStr | GD-StarGAN: Multi-domain image-to-image translation in garment design |
title_full_unstemmed | GD-StarGAN: Multi-domain image-to-image translation in garment design |
title_short | GD-StarGAN: Multi-domain image-to-image translation in garment design |
title_sort | gd-stargan: multi-domain image-to-image translation in garment design |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7173925/ https://www.ncbi.nlm.nih.gov/pubmed/32315361 http://dx.doi.org/10.1371/journal.pone.0231719 |
work_keys_str_mv | AT shenyangyun gdstarganmultidomainimagetoimagetranslationingarmentdesign AT huangrunnan gdstarganmultidomainimagetoimagetranslationingarmentdesign AT huangwenkai gdstarganmultidomainimagetoimagetranslationingarmentdesign |
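
The abstract above describes the core interface of StarGAN-style garment design: a single generator receives a texture image together with a target garment-type label and produces a garment image in that domain. The PyTorch sketch below illustrates only that image-plus-label conditioning pattern under stated assumptions; the `TinyGenerator` module, its layer sizes, and the use of the seven garment categories as a one-hot label are hypothetical and do not reproduce the authors' GD-StarGAN architecture or loss functions.

```python
# Minimal sketch (not the authors' GD-StarGAN): a single generator conditioned on
# a target garment-type label, in the StarGAN style described in the abstract.
import torch
import torch.nn as nn

NUM_DOMAINS = 7  # assumption: the seven garment categories act as the domain labels


class TinyGenerator(nn.Module):
    """Toy single generator taking an RGB image plus a spatially broadcast label."""

    def __init__(self, num_domains: int = NUM_DOMAINS):
        super().__init__()
        # Input channels = 3 (RGB) + num_domains (one-hot label replicated over H x W).
        self.net = nn.Sequential(
            nn.Conv2d(3 + num_domains, 32, kernel_size=7, padding=3),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, kernel_size=7, padding=3),
            nn.Tanh(),  # outputs in [-1, 1], matching images normalized to that range
        )

    def forward(self, image: torch.Tensor, label: torch.Tensor) -> torch.Tensor:
        # Replicate the one-hot label over the spatial dimensions and concatenate it
        # with the image along the channel axis, so one generator serves all domains.
        b, _, h, w = image.shape
        label_map = label.view(b, -1, 1, 1).expand(b, label.size(1), h, w)
        return self.net(torch.cat([image, label_map], dim=1))


# Usage: an input texture image plus a garment-type label -> a generated garment image.
texture = torch.randn(1, 3, 128, 128)        # stand-in for a normalized texture photo
target = torch.zeros(1, NUM_DOMAINS)
target[0, 2] = 1.0                           # choose one of the seven garment types
fake_garment = TinyGenerator()(texture, target)
print(fake_garment.shape)                    # torch.Size([1, 3, 128, 128])
```

Broadcasting the label and concatenating it with the image channels is the standard StarGAN conditioning mechanism that lets one generator serve all domains; GD-StarGAN's reported changes to the generator structure and loss function are not represented in this sketch.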