Human Mesh Reconstruction with Generative Adversarial Networks from Single RGB Images
Applications related to smart cities require virtual cities in the experimental development stage. To build a virtual city that is close to a real city, a large number of human models of various types need to be created. To reduce the cost of acquiring models, this paper proposes a method to recons...

| Main Authors: | Gao, Rui; Wen, Mingyun; Park, Jisun; Cho, Kyungeun |
|---|---|
| Format: | Online Article Text |
| Language: | English |
| Published: | MDPI 2021 |
| Subjects: | Article |
| Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7917667/ https://www.ncbi.nlm.nih.gov/pubmed/33672934 http://dx.doi.org/10.3390/s21041350 |
_version_ | 1783657750026256384 |
---|---|
author | Gao, Rui; Wen, Mingyun; Park, Jisun; Cho, Kyungeun
author_facet | Gao, Rui; Wen, Mingyun; Park, Jisun; Cho, Kyungeun
author_sort | Gao, Rui |
collection | PubMed |
description | Applications related to smart cities require virtual cities in the experimental development stage. To build a virtual city that is close to a real city, a large number of human models of various types need to be created. To reduce the cost of acquiring models, this paper proposes a method to reconstruct 3D human meshes from single images captured using a normal camera. It presents a method for reconstructing the complete mesh of the human body from a single RGB image using a generative adversarial network consisting of a newly designed shape–pose-based generator (based on deep convolutional neural networks) and an enhanced multi-source discriminator. Using a machine learning approach, the reliance on multiple sensors is reduced and 3D human meshes can be recovered using a single camera, thereby reducing the cost of building smart cities. The proposed method achieves an accuracy of 92.1% in body shape recovery; it can also process 34 images per second. The method proposed in this paper significantly improves performance compared with previous state-of-the-art approaches. Given single-view images of various humans, our results can be used to generate various 3D human models, which can facilitate 3D human modeling work for simulating virtual cities. Since our method can also restore the poses of the humans in the image, it is possible to create various human poses by providing corresponding images with specific human poses. |
format | Online Article Text |
id | pubmed-7917667 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-7917667 2021-03-02 Human Mesh Reconstruction with Generative Adversarial Networks from Single RGB Images Gao, Rui; Wen, Mingyun; Park, Jisun; Cho, Kyungeun Sensors (Basel) Article Applications related to smart cities require virtual cities in the experimental development stage. To build a virtual city that is close to a real city, a large number of human models of various types need to be created. To reduce the cost of acquiring models, this paper proposes a method to reconstruct 3D human meshes from single images captured using a normal camera. It presents a method for reconstructing the complete mesh of the human body from a single RGB image using a generative adversarial network consisting of a newly designed shape–pose-based generator (based on deep convolutional neural networks) and an enhanced multi-source discriminator. Using a machine learning approach, the reliance on multiple sensors is reduced and 3D human meshes can be recovered using a single camera, thereby reducing the cost of building smart cities. The proposed method achieves an accuracy of 92.1% in body shape recovery; it can also process 34 images per second. The method proposed in this paper significantly improves performance compared with previous state-of-the-art approaches. Given single-view images of various humans, our results can be used to generate various 3D human models, which can facilitate 3D human modeling work for simulating virtual cities. Since our method can also restore the poses of the humans in the image, it is possible to create various human poses by providing corresponding images with specific human poses. MDPI 2021-02-14 /pmc/articles/PMC7917667/ /pubmed/33672934 http://dx.doi.org/10.3390/s21041350 Text en © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Gao, Rui Wen, Mingyun Park, Jisun Cho, Kyungeun Human Mesh Reconstruction with Generative Adversarial Networks from Single RGB Images |
title | Human Mesh Reconstruction with Generative Adversarial Networks from Single RGB Images |
title_full | Human Mesh Reconstruction with Generative Adversarial Networks from Single RGB Images |
title_fullStr | Human Mesh Reconstruction with Generative Adversarial Networks from Single RGB Images |
title_full_unstemmed | Human Mesh Reconstruction with Generative Adversarial Networks from Single RGB Images |
title_short | Human Mesh Reconstruction with Generative Adversarial Networks from Single RGB Images |
title_sort | human mesh reconstruction with generative adversarial networks from single rgb images |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7917667/ https://www.ncbi.nlm.nih.gov/pubmed/33672934 http://dx.doi.org/10.3390/s21041350 |
work_keys_str_mv | AT gaorui humanmeshreconstructionwithgenerativeadversarialnetworksfromsinglergbimages AT wenmingyun humanmeshreconstructionwithgenerativeadversarialnetworksfromsinglergbimages AT parkjisun humanmeshreconstructionwithgenerativeadversarialnetworksfromsinglergbimages AT chokyungeun humanmeshreconstructionwithgenerativeadversarialnetworksfromsinglergbimages |
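The abstract in this record describes the method only at the architecture level: a CNN-based generator regresses body shape and pose parameters from a single RGB image, and an adversarial discriminator judges whether the predicted parameters describe a plausible human. The sketch below is a rough, hypothetical illustration of that kind of setup, not the authors' implementation; it assumes an SMPL-style parameterization (10 shape coefficients, 72 axis–angle pose values), and the class names, backbone choice, and layer sizes are all assumptions.

```python
# Hypothetical sketch of a shape--pose GAN for single-image human mesh recovery.
# Not the paper's code: SMPL-style parameterization and all sizes are assumed.
import torch
import torch.nn as nn
from torchvision.models import resnet50


class ShapePoseGenerator(nn.Module):
    """CNN encoder that regresses shape and pose parameters from one RGB image."""

    def __init__(self, n_shape=10, n_pose=72):
        super().__init__()
        backbone = resnet50(weights=None)   # image feature extractor
        backbone.fc = nn.Identity()         # keep the 2048-d pooled feature
        self.backbone = backbone
        self.head = nn.Sequential(
            nn.Linear(2048, 1024), nn.ReLU(),
            nn.Linear(1024, n_shape + n_pose),  # concatenated [beta | theta]
        )
        self.n_shape = n_shape

    def forward(self, img):                  # img: (B, 3, 224, 224)
        feat = self.backbone(img)
        params = self.head(feat)
        return params[:, :self.n_shape], params[:, self.n_shape:]  # shape, pose


class ParamDiscriminator(nn.Module):
    """Scores whether a (shape, pose) vector looks like a real human configuration."""

    def __init__(self, n_shape=10, n_pose=72):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_shape + n_pose, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, 1),                # real/fake logit
        )

    def forward(self, shape, pose):
        return self.net(torch.cat([shape, pose], dim=1))


# Example (hypothetical usage):
# g, d = ShapePoseGenerator(), ParamDiscriminator()
# shape, pose = g(torch.randn(2, 3, 224, 224))
# realism_logit = d(shape, pose)
```

In an adversarial training loop of this kind, the generator would typically be fit with a reconstruction or keypoint loss plus an adversarial term, while the discriminator learns to separate predicted parameters from parameters taken from motion-capture data. The enhanced multi-source discriminator mentioned in the abstract presumably draws on additional input sources beyond the bare parameters; this sketch omits that detail.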