
UVIRT—Unsupervised Virtual Try-on Using Disentangled Clothing and Person Features


Bibliographic Details
Main Authors: Tsunashima, Hideki; Arase, Kosuke; Lam, Antony; Kataoka, Hirokatsu
Format: Online Article Text
Language: English
Published: MDPI 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7582289/
https://www.ncbi.nlm.nih.gov/pubmed/33023177
http://dx.doi.org/10.3390/s20195647
_version_ 1783599156663681024
author Tsunashima, Hideki
Arase, Kosuke
Lam, Antony
Kataoka, Hirokatsu
author_facet Tsunashima, Hideki
Arase, Kosuke
Lam, Antony
Kataoka, Hirokatsu
author_sort Tsunashima, Hideki
collection PubMed
description Virtual try-on is the ability to realistically superimpose clothing onto a target person. Due to its importance to the multi-billion-dollar e-commerce industry, the problem has received significant attention in recent years. To date, most virtual try-on methods have been supervised approaches, namely using annotated data such as clothes-parsing semantic segmentation masks and paired images. These approaches incur a very high annotation cost. Even existing weakly supervised virtual try-on methods still use annotated data or pre-trained networks as auxiliary information, and the annotation costs remain significantly high. Moreover, the strategy of using pre-trained networks is not appropriate for practical scenarios due to latency. In this paper, we propose Unsupervised VIRtual Try-on using disentangled representation (UVIRT). UVIRT extracts a clothes feature and a person feature from a clothes image and a person image, respectively, and then exchanges the clothes features to achieve virtual try-on. This is all achieved in an unsupervised manner, so UVIRT has the advantage that it requires no annotated data, pre-trained networks, or even category labels. In the experiments, we qualitatively and quantitatively compare our UVIRT method with supervised methods on the MPV dataset (which has paired images) and on a Consumer-to-Consumer (C2C) marketplace dataset (which has unpaired images). UVIRT outperforms the conventional supervised method on the C2C marketplace dataset and achieves comparable results on the MPV dataset.
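The description above outlines the core mechanism: encode a person image and a clothes image into disentangled person and clothes features, exchange the clothes features, and decode the result. The following is a minimal PyTorch-style sketch of that feature-exchange idea only; the module names, layer sizes, and architecture here are illustrative assumptions, not the paper's actual UVIRT implementation, losses, or training procedure.

```python
import torch
import torch.nn as nn

class FeatureSwapTryOn(nn.Module):
    """Sketch of disentangled feature exchange for virtual try-on.

    Hypothetical architecture for illustration; see the paper for the
    actual UVIRT encoders, decoder, losses, and unsupervised training.
    """

    def __init__(self, feat_dim: int = 256):
        super().__init__()
        # Separate encoders disentangle person identity/pose from clothing.
        self.person_encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.clothes_encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder renders an image from the combined feature maps.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(2 * feat_dim, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, person_img: torch.Tensor, clothes_img: torch.Tensor):
        person_feat = self.person_encoder(person_img)     # who / pose
        clothes_feat = self.clothes_encoder(clothes_img)  # what to wear
        # Exchange: pair the person feature with the clothes feature taken
        # from the *other* image, then decode the try-on result.
        return self.decoder(torch.cat([person_feat, clothes_feat], dim=1))

# Usage: render a catalog clothes image onto a person photo.
model = FeatureSwapTryOn()
person = torch.randn(1, 3, 256, 256)   # person image (batch of 1)
clothes = torch.randn(1, 3, 256, 256)  # clothes image (batch of 1)
try_on = model(person, clothes)        # (1, 3, 256, 256) try-on image
```

Because both encodings come from unpaired images, this kind of swap is what lets the method train without segmentation masks or paired supervision.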
format Online
Article
Text
id pubmed-7582289
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-7582289 2020-10-28 UVIRT—Unsupervised Virtual Try-on Using Disentangled Clothing and Person Features Tsunashima, Hideki; Arase, Kosuke; Lam, Antony; Kataoka, Hirokatsu. Sensors (Basel), Article. MDPI 2020-10-02 /pmc/articles/PMC7582289/ /pubmed/33023177 http://dx.doi.org/10.3390/s20195647 Text en © 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Tsunashima, Hideki
Arase, Kosuke
Lam, Antony
Kataoka, Hirokatsu
UVIRT—Unsupervised Virtual Try-on Using Disentangled Clothing and Person Features
title UVIRT—Unsupervised Virtual Try-on Using Disentangled Clothing and Person Features
title_full UVIRT—Unsupervised Virtual Try-on Using Disentangled Clothing and Person Features
title_fullStr UVIRT—Unsupervised Virtual Try-on Using Disentangled Clothing and Person Features
title_full_unstemmed UVIRT—Unsupervised Virtual Try-on Using Disentangled Clothing and Person Features
title_short UVIRT—Unsupervised Virtual Try-on Using Disentangled Clothing and Person Features
title_sort uvirt—unsupervised virtual try-on using disentangled clothing and person features
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7582289/
https://www.ncbi.nlm.nih.gov/pubmed/33023177
http://dx.doi.org/10.3390/s20195647
work_keys_str_mv AT tsunashimahideki uvirtunsupervisedvirtualtryonusingdisentangledclothingandpersonfeatures
AT arasekosuke uvirtunsupervisedvirtualtryonusingdisentangledclothingandpersonfeatures
AT lamantony uvirtunsupervisedvirtualtryonusingdisentangledclothingandpersonfeatures
AT kataokahirokatsu uvirtunsupervisedvirtualtryonusingdisentangledclothingandpersonfeatures