TED-Face: Texture-Enhanced Deep Face Reconstruction in the Wild
We present TED-Face, a new method for recovering high-fidelity 3D facial geometry and appearance with enhanced textures from single-view images. While vision-based face reconstruction has received intensive research in the past decades due to its broad applications, it remains a challenging problem because human eyes are particularly sensitive to numerically minute yet perceptually significant details. Previous methods that seek to minimize reconstruction errors within a low-dimensional face space can suffer from this issue and generate close yet low-fidelity approximations. The loss of high-frequency texture details is a key factor in their process, which we propose to address by learning to recover both dense radiance residuals and sparse facial texture features from a single image, in addition to the variables solved by previous work—shape, appearance, illumination, and camera. We integrate the estimation of all these factors in a single unified deep neural network and train it on several popular face reconstruction datasets. We also introduce two new metrics, visual fidelity (VIF) and structural similarity (SSIM), to compensate for the fact that reconstruction error is not a consistent perceptual metric of quality. On the popular FaceWarehouse facial reconstruction benchmark, our proposed system achieves a VIF score of 0.4802 and an SSIM score of 0.9622, improving over the state-of-the-art Deep3D method by 6.69% and 0.86%, respectively. On the widely used LS3D-300W dataset, we obtain a VIF score of 0.3922 and an SSIM score of 0.9079 for indoor images, and the scores for outdoor images are 0.4100 and 0.9160, respectively, which also represent an improvement over those of Deep3D. These results show that our method is able to recover visually more realistic facial appearance details compared with previous methods.
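The abstract's headline numbers are SSIM and VIF scores plus relative improvements over Deep3D. As an illustration only (not the authors' code), the sketch below shows how an SSIM score between a source photograph and a rendered reconstruction could be computed with scikit-image, and how a relative-improvement percentage such as the reported 6.69% could be derived. The file names and the scikit-image dependency are assumptions, and the Deep3D baseline scores are not given in this record.

```python
# Minimal sketch, not the authors' implementation: SSIM between a face photo
# and a rendered reconstruction, plus the relative-improvement percentage
# quoted in the abstract. File paths are placeholders; scikit-image is assumed.
from skimage import io
from skimage.metrics import structural_similarity as ssim


def relative_improvement(ours: float, baseline: float) -> float:
    """Percentage gain of `ours` over `baseline`, e.g. the reported 6.69% VIF
    improvement over Deep3D (the baseline scores are not listed in this record)."""
    return 100.0 * (ours - baseline) / baseline


photo = io.imread("face_photo.png")    # ground-truth image (placeholder path)
render = io.imread("face_render.png")  # reconstructed rendering (placeholder path)

# SSIM over 8-bit RGB images: channel_axis=-1 marks the color axis,
# data_range=255 matches uint8 pixel values.
score = ssim(photo, render, channel_axis=-1, data_range=255)
print(f"SSIM: {score:.4f}")
```

VIF (visual information fidelity) is not included in scikit-image and would require a separate implementation or a third-party package, so it is omitted from this sketch.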
Main Authors: | Huang, Ying; Fang, Lin; Hu, Shanfeng |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI 2023 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10385218/ https://www.ncbi.nlm.nih.gov/pubmed/37514819 http://dx.doi.org/10.3390/s23146525 |
_version_ | 1785081351217610752 |
---|---|
author | Huang, Ying; Fang, Lin; Hu, Shanfeng |
author_facet | Huang, Ying; Fang, Lin; Hu, Shanfeng |
author_sort | Huang, Ying |
collection | PubMed |
description | We present TED-Face, a new method for recovering high-fidelity 3D facial geometry and appearance with enhanced textures from single-view images. While vision-based face reconstruction has received intensive research in the past decades due to its broad applications, it remains a challenging problem because human eyes are particularly sensitive to numerically minute yet perceptually significant details. Previous methods that seek to minimize reconstruction errors within a low-dimensional face space can suffer from this issue and generate close yet low-fidelity approximations. The loss of high-frequency texture details is a key factor in their process, which we propose to address by learning to recover both dense radiance residuals and sparse facial texture features from a single image, in addition to the variables solved by previous work—shape, appearance, illumination, and camera. We integrate the estimation of all these factors in a single unified deep neural network and train it on several popular face reconstruction datasets. We also introduce two new metrics, visual fidelity (VIF) and structural similarity (SSIM), to compensate for the fact that reconstruction error is not a consistent perceptual metric of quality. On the popular FaceWarehouse facial reconstruction benchmark, our proposed system achieves a VIF score of 0.4802 and an SSIM score of 0.9622, improving over the state-of-the-art Deep3D method by 6.69% and 0.86%, respectively. On the widely used LS3D-300W dataset, we obtain a VIF score of 0.3922 and an SSIM score of 0.9079 for indoor images, and the scores for outdoor images are 0.4100 and 0.9160, respectively, which also represent an improvement over those of Deep3D. These results show that our method is able to recover visually more realistic facial appearance details compared with previous methods. |
format | Online Article Text |
id | pubmed-10385218 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-10385218 2023-07-30 TED-Face: Texture-Enhanced Deep Face Reconstruction in the Wild Huang, Ying Fang, Lin Hu, Shanfeng Sensors (Basel) Article We present TED-Face, a new method for recovering high-fidelity 3D facial geometry and appearance with enhanced textures from single-view images. While vision-based face reconstruction has received intensive research in the past decades due to its broad applications, it remains a challenging problem because human eyes are particularly sensitive to numerically minute yet perceptually significant details. Previous methods that seek to minimize reconstruction errors within a low-dimensional face space can suffer from this issue and generate close yet low-fidelity approximations. The loss of high-frequency texture details is a key factor in their process, which we propose to address by learning to recover both dense radiance residuals and sparse facial texture features from a single image, in addition to the variables solved by previous work—shape, appearance, illumination, and camera. We integrate the estimation of all these factors in a single unified deep neural network and train it on several popular face reconstruction datasets. We also introduce two new metrics, visual fidelity (VIF) and structural similarity (SSIM), to compensate for the fact that reconstruction error is not a consistent perceptual metric of quality. On the popular FaceWarehouse facial reconstruction benchmark, our proposed system achieves a VIF score of 0.4802 and an SSIM score of 0.9622, improving over the state-of-the-art Deep3D method by 6.69% and 0.86%, respectively. On the widely used LS3D-300W dataset, we obtain a VIF score of 0.3922 and an SSIM score of 0.9079 for indoor images, and the scores for outdoor images are 0.4100 and 0.9160, respectively, which also represent an improvement over those of Deep3D. These results show that our method is able to recover visually more realistic facial appearance details compared with previous methods. MDPI 2023-07-19 /pmc/articles/PMC10385218/ /pubmed/37514819 http://dx.doi.org/10.3390/s23146525 Text en © 2023 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Huang, Ying Fang, Lin Hu, Shanfeng TED-Face: Texture-Enhanced Deep Face Reconstruction in the Wild |
title | TED-Face: Texture-Enhanced Deep Face Reconstruction in the Wild |
title_full | TED-Face: Texture-Enhanced Deep Face Reconstruction in the Wild |
title_fullStr | TED-Face: Texture-Enhanced Deep Face Reconstruction in the Wild |
title_full_unstemmed | TED-Face: Texture-Enhanced Deep Face Reconstruction in the Wild |
title_short | TED-Face: Texture-Enhanced Deep Face Reconstruction in the Wild |
title_sort | ted-face: texture-enhanced deep face reconstruction in the wild |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10385218/ https://www.ncbi.nlm.nih.gov/pubmed/37514819 http://dx.doi.org/10.3390/s23146525 |
work_keys_str_mv | AT huangying tedfacetextureenhanceddeepfacereconstructioninthewild AT fanglin tedfacetextureenhanceddeepfacereconstructioninthewild AT hushanfeng tedfacetextureenhanceddeepfacereconstructioninthewild |