Dress-up: deep neural framework for image-based human appearance transfer
The fashion industry is on the brink of radical transformation. The emergence of Artificial Intelligence (AI) in fashion applications creates many opportunities for this industry and makes fashion a better space for everyone. In this context, we propose a virtual try-on interface to stimu...
Main Authors: | Ghodhbani, Hajer; Neji, Mohamed; Qahtani, Abdulrahman M.; Almutiry, Omar; Dhahri, Habib; Alimi, Adel M. |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Springer US, 2022 |
Subjects: | |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9652136/ https://www.ncbi.nlm.nih.gov/pubmed/36404934 http://dx.doi.org/10.1007/s11042-022-14127-w |
_version_ | 1784828402219352064 |
---|---|
author | Ghodhbani, Hajer; Neji, Mohamed; Qahtani, Abdulrahman M.; Almutiry, Omar; Dhahri, Habib; Alimi, Adel M. |
author_facet | Ghodhbani, Hajer; Neji, Mohamed; Qahtani, Abdulrahman M.; Almutiry, Omar; Dhahri, Habib; Alimi, Adel M. |
author_sort | Ghodhbani, Hajer |
collection | PubMed |
description | The fashion industry is on the brink of radical transformation. The emergence of Artificial Intelligence (AI) in fashion applications creates many opportunities for this industry and makes fashion a better space for everyone. In this context, we propose a virtual try-on interface to stimulate consumers' purchase intentions and facilitate their online buying decisions. In this paper, we present a flexible person generation system for virtual try-on that addresses the task of human appearance transfer across images while preserving the texture details and structural coherence of the generated outfit. This challenging task has drawn increasing attention and driven major developments in intelligent fashion applications. However, it poses several challenges, especially when there are wide divergences between the source and target images. To solve this problem, we propose a flexible person generation framework called Dress-up for the 2D virtual try-on task. Dress-up is an end-to-end generation pipeline with three modules based on image-to-image translation; it sequentially interchanges garments between images and produces dressing effects not achievable by existing works. The core idea of our solution is to explicitly encode the body pose and the target clothes in a pre-processing module based on semantic segmentation. A conditional adversarial network then generates the target segmentation, which is fed in turn to the alignment and translation networks to produce the final output. The novelty of this work lies in realizing high-quality appearance transfer across images by reconstructing garments on a person in different orders and looks simply from semantic maps and 2D images, without using 3D modeling. Our system can produce dressing effects and yields significant improvements over state-of-the-art methods on the widely used DeepFashion dataset.
Extensive evaluations show that Dress-up outperforms other recent methods in terms of output quality and handles a wide range of editing functions for which there is no direct supervision. Different types of results were computed to verify the performance of the proposed framework and demonstrate the robustness and effectiveness of our method. |
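The abstract describes a three-module pipeline: a segmentation-based pre-processing step, a conditional adversarial network that predicts the target segmentation, and alignment and translation networks that produce the final image. The sketch below illustrates only the data flow between these stages; every function name and return value is a hypothetical stub, not the authors' actual implementation.

```python
# Illustrative data-flow sketch of the Dress-up pipeline described above.
# Each stage is a stub that tags its input, so the composition of the
# three modules can be traced; real networks would replace every function.

def preprocess(source_img: str, target_clothes: str) -> dict:
    """Pre-processing module: encode body pose and target clothes
    via semantic segmentation (stubbed as labeled strings)."""
    return {"pose": f"pose({source_img})",
            "clothes_seg": f"seg({target_clothes})"}

def generate_target_segmentation(encoded: dict) -> str:
    """Conditional adversarial network (stub): predicts the target
    semantic layout from the encoded pose and clothes."""
    return f"layout[{encoded['pose']}+{encoded['clothes_seg']}]"

def align(layout: str, target_clothes: str) -> str:
    """Alignment network (stub): warps the garment onto the layout."""
    return f"warped({target_clothes}->{layout})"

def translate(layout: str, warped: str) -> str:
    """Translation network (stub): renders the final try-on image."""
    return f"render({warped}|{layout})"

def dress_up(source_img: str, target_clothes: str) -> str:
    """End-to-end composition of the three modules."""
    enc = preprocess(source_img, target_clothes)
    layout = generate_target_segmentation(enc)
    warped = align(layout, target_clothes)
    return translate(layout, warped)

print(dress_up("person.jpg", "shirt.jpg"))
```

The point of the sketch is only the ordering: segmentation output conditions the adversarial layout generator, whose layout then feeds the alignment and translation stages, matching the sequence stated in the abstract.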
format | Online Article Text |
id | pubmed-9652136 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Springer US |
record_format | MEDLINE/PubMed |
spelling | pubmed-9652136 2022-11-14 Dress-up: deep neural framework for image-based human appearance transfer Ghodhbani, Hajer; Neji, Mohamed; Qahtani, Abdulrahman M.; Almutiry, Omar; Dhahri, Habib; Alimi, Adel M. Multimed Tools Appl Article The fashion industry is on the brink of radical transformation. The emergence of Artificial Intelligence (AI) in fashion applications creates many opportunities for this industry and makes fashion a better space for everyone. In this context, we propose a virtual try-on interface to stimulate consumers' purchase intentions and facilitate their online buying decisions. In this paper, we present a flexible person generation system for virtual try-on that addresses the task of human appearance transfer across images while preserving the texture details and structural coherence of the generated outfit. This challenging task has drawn increasing attention and driven major developments in intelligent fashion applications. However, it poses several challenges, especially when there are wide divergences between the source and target images. To solve this problem, we propose a flexible person generation framework called Dress-up for the 2D virtual try-on task. Dress-up is an end-to-end generation pipeline with three modules based on image-to-image translation; it sequentially interchanges garments between images and produces dressing effects not achievable by existing works. The core idea of our solution is to explicitly encode the body pose and the target clothes in a pre-processing module based on semantic segmentation. A conditional adversarial network then generates the target segmentation, which is fed in turn to the alignment and translation networks to produce the final output.
The novelty of this work lies in realizing high-quality appearance transfer across images by reconstructing garments on a person in different orders and looks simply from semantic maps and 2D images, without using 3D modeling. Our system can produce dressing effects and yields significant improvements over state-of-the-art methods on the widely used DeepFashion dataset. Extensive evaluations show that Dress-up outperforms other recent methods in terms of output quality and handles a wide range of editing functions for which there is no direct supervision. Different types of results were computed to verify the performance of the proposed framework and demonstrate the robustness and effectiveness of our method. Springer US 2022-11-12 2023 /pmc/articles/PMC9652136/ /pubmed/36404934 http://dx.doi.org/10.1007/s11042-022-14127-w Text en © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2022. Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law. This article is made available via the PMC Open Access Subset for unrestricted research re-use and secondary analysis in any form or by any means with acknowledgement of the original source. These permissions are granted for the duration of the World Health Organization (WHO) declaration of COVID-19 as a global pandemic. |
spellingShingle | Article Ghodhbani, Hajer; Neji, Mohamed; Qahtani, Abdulrahman M.; Almutiry, Omar; Dhahri, Habib; Alimi, Adel M. Dress-up: deep neural framework for image-based human appearance transfer |
title | Dress-up: deep neural framework for image-based human appearance transfer |
title_full | Dress-up: deep neural framework for image-based human appearance transfer |
title_fullStr | Dress-up: deep neural framework for image-based human appearance transfer |
title_full_unstemmed | Dress-up: deep neural framework for image-based human appearance transfer |
title_short | Dress-up: deep neural framework for image-based human appearance transfer |
title_sort | dress-up: deep neural framework for image-based human appearance transfer |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9652136/ https://www.ncbi.nlm.nih.gov/pubmed/36404934 http://dx.doi.org/10.1007/s11042-022-14127-w |
work_keys_str_mv | AT ghodhbanihajer dressupdeepneuralframeworkforimagebasedhumanappearancetransfer AT nejimohamed dressupdeepneuralframeworkforimagebasedhumanappearancetransfer AT qahtaniabdulrahmanm dressupdeepneuralframeworkforimagebasedhumanappearancetransfer AT almutiryomar dressupdeepneuralframeworkforimagebasedhumanappearancetransfer AT dhahrihabib dressupdeepneuralframeworkforimagebasedhumanappearancetransfer AT alimiadelm dressupdeepneuralframeworkforimagebasedhumanappearancetransfer |