Geometry Sampling-Based Adaption to DCGAN for 3D Face Generation †
Despite progress in the past decades, 3D shape acquisition techniques are still a threshold for various 3D face-based applications and have therefore attracted extensive research. Moreover, advanced 2D data generation models based on deep networks may not be directly applicable to 3D objects because of the different dimensionality of 2D and 3D data.
| Main Authors: | Luo, Guoliang; Xiong, Guoming; Huang, Xiaojun; Zhao, Xin; Tong, Yang; Chen, Qiang; Zhu, Zhiliang; Lei, Haopeng; Lin, Juncong |
|---|---|
| Format: | Online Article Text |
| Language: | English |
| Published: | MDPI, 2023 |
| Subjects: | Article |
| Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9964279/ https://www.ncbi.nlm.nih.gov/pubmed/36850534 http://dx.doi.org/10.3390/s23041937 |
_version_ | 1784896464183361536 |
---|---|
author | Luo, Guoliang; Xiong, Guoming; Huang, Xiaojun; Zhao, Xin; Tong, Yang; Chen, Qiang; Zhu, Zhiliang; Lei, Haopeng; Lin, Juncong
author_facet | Luo, Guoliang; Xiong, Guoming; Huang, Xiaojun; Zhao, Xin; Tong, Yang; Chen, Qiang; Zhu, Zhiliang; Lei, Haopeng; Lin, Juncong
author_sort | Luo, Guoliang |
collection | PubMed |
description | Despite progress in the past decades, 3D shape acquisition techniques are still a threshold for various 3D face-based applications and have therefore attracted extensive research. Moreover, advanced 2D data generation models based on deep networks may not be directly applicable to 3D objects because of the different dimensionality of 2D and 3D data. In this work, we propose two novel sampling methods to represent 3D faces as matrix-like structured data that can better fit deep networks, namely (1) a geometric sampling method for the structured representation of 3D faces based on the intersection of iso-geodesic curves and radial curves, and (2) a depth-like map sampling method using the average depth of grid cells on the front surface. The above sampling methods can bridge the gap between unstructured 3D face models and powerful deep networks for an unsupervised generative 3D face model. In particular, the above approaches can obtain the structured representation of 3D faces, which enables us to adapt the 3D faces to the Deep Convolution Generative Adversarial Network (DCGAN) for 3D face generation to obtain better 3D faces with different expressions. We demonstrated the effectiveness of our generative model by producing a large variety of 3D faces with different expressions using the two novel down-sampling methods mentioned above. |
format | Online Article Text |
id | pubmed-9964279 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-9964279; 2023-02-26; Geometry Sampling-Based Adaption to DCGAN for 3D Face Generation †; Luo, Guoliang; Xiong, Guoming; Huang, Xiaojun; Zhao, Xin; Tong, Yang; Chen, Qiang; Zhu, Zhiliang; Lei, Haopeng; Lin, Juncong; Sensors (Basel); Article; [abstract identical to the description field above]; MDPI; 2023-02-09; /pmc/articles/PMC9964279/; /pubmed/36850534; http://dx.doi.org/10.3390/s23041937; Text; en; © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle | Article; Luo, Guoliang; Xiong, Guoming; Huang, Xiaojun; Zhao, Xin; Tong, Yang; Chen, Qiang; Zhu, Zhiliang; Lei, Haopeng; Lin, Juncong; Geometry Sampling-Based Adaption to DCGAN for 3D Face Generation †
title | Geometry Sampling-Based Adaption to DCGAN for 3D Face Generation † |
title_full | Geometry Sampling-Based Adaption to DCGAN for 3D Face Generation † |
title_fullStr | Geometry Sampling-Based Adaption to DCGAN for 3D Face Generation † |
title_full_unstemmed | Geometry Sampling-Based Adaption to DCGAN for 3D Face Generation † |
title_short | Geometry Sampling-Based Adaption to DCGAN for 3D Face Generation † |
title_sort | geometry sampling-based adaption to dcgan for 3d face generation † |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9964279/ https://www.ncbi.nlm.nih.gov/pubmed/36850534 http://dx.doi.org/10.3390/s23041937 |
work_keys_str_mv | AT luoguoliang geometrysamplingbasedadaptiontodcganfor3dfacegeneration AT xiongguoming geometrysamplingbasedadaptiontodcganfor3dfacegeneration AT huangxiaojun geometrysamplingbasedadaptiontodcganfor3dfacegeneration AT zhaoxin geometrysamplingbasedadaptiontodcganfor3dfacegeneration AT tongyang geometrysamplingbasedadaptiontodcganfor3dfacegeneration AT chenqiang geometrysamplingbasedadaptiontodcganfor3dfacegeneration AT zhuzhiliang geometrysamplingbasedadaptiontodcganfor3dfacegeneration AT leihaopeng geometrysamplingbasedadaptiontodcganfor3dfacegeneration AT linjuncong geometrysamplingbasedadaptiontodcganfor3dfacegeneration |
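The description field above outlines two structured-sampling schemes: (1) intersecting iso-geodesic curves with radial curves, and (2) averaging depth over grid cells on the front surface. Below is a minimal NumPy/SciPy sketch of those two ideas, not the authors' code: the function names and parameters (`nose_tip_idx`, `n_levels`, `n_rays`, `resolution`) are illustrative assumptions, and geodesic distance is approximated by shortest paths along mesh edges rather than true surface geodesics.

```python
# A minimal sketch (not the authors' implementation) of the two sampling ideas
# described in the record above. Names and parameters are illustrative assumptions.

import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import dijkstra


def geodesic_grid_sampling(vertices, edges, nose_tip_idx, n_levels=32, n_rays=32):
    """Sampling method (1): one representative point per cell of an
    (iso-geodesic level x radial sector) grid centred on a reference vertex
    (e.g. the nose tip). Returns an (n_levels, n_rays, 3) array."""
    n = len(vertices)
    # Approximate geodesic distances from the nose tip via edge-graph Dijkstra.
    lengths = np.linalg.norm(vertices[edges[:, 0]] - vertices[edges[:, 1]], axis=1)
    graph = coo_matrix((lengths, (edges[:, 0], edges[:, 1])), shape=(n, n))
    dist = dijkstra(graph, directed=False, indices=nose_tip_idx)
    dist[~np.isfinite(dist)] = 0.0  # guard against disconnected vertices

    # Radial angle of each vertex around the nose tip in the frontal (x-y) plane.
    rel = vertices[:, :2] - vertices[nose_tip_idx, :2]
    angle = np.arctan2(rel[:, 1], rel[:, 0])

    # Bin vertices into (geodesic level, radial sector) cells; average each cell.
    level = np.clip((dist / (dist.max() + 1e-9) * n_levels).astype(int), 0, n_levels - 1)
    ray = np.clip(((angle + np.pi) / (2 * np.pi) * n_rays).astype(int), 0, n_rays - 1)
    grid = np.zeros((n_levels, n_rays, 3))
    for lv in range(n_levels):
        for r in range(n_rays):
            cell = (level == lv) & (ray == r)
            if cell.any():
                grid[lv, r] = vertices[cell].mean(axis=0)
    return grid


def depth_map_sampling(vertices, resolution=64):
    """Sampling method (2): average depth (z) of front-surface points in each
    cell of a regular x-y grid, giving a (resolution x resolution) depth map."""
    x, y, z = vertices[:, 0], vertices[:, 1], vertices[:, 2]
    xi = np.clip(((x - x.min()) / (np.ptp(x) + 1e-9) * resolution).astype(int), 0, resolution - 1)
    yi = np.clip(((y - y.min()) / (np.ptp(y) + 1e-9) * resolution).astype(int), 0, resolution - 1)
    depth = np.zeros((resolution, resolution))
    count = np.zeros((resolution, resolution))
    np.add.at(depth, (yi, xi), z)   # sum of depths per cell
    np.add.at(count, (yi, xi), 1)   # number of points per cell
    return np.divide(depth, count, out=np.zeros_like(depth), where=count > 0)
```

Either routine produces a fixed-size 2D matrix, which is the property that lets an unstructured face mesh be treated as a single-channel image and passed to an off-the-shelf DCGAN generator/discriminator pair.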