
A Lightweight Monocular 3D Face Reconstruction Method Based on Improved 3D Morphing Models


Bibliographic Details

Main Authors: You, Xingyi; Wang, Yue; Zhao, Xiaohu
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10422318/
https://www.ncbi.nlm.nih.gov/pubmed/37571497
http://dx.doi.org/10.3390/s23156713
author You, Xingyi
Wang, Yue
Zhao, Xiaohu
collection PubMed
description In the past few years, 3D Morphing Model (3DMM)-based methods have achieved remarkable results in single-image 3D face reconstruction. However, the methods that achieve high-fidelity 3D face texture generation mostly rely on deep convolutional neural networks during parameter fitting, which increases the number of network layers and the computational burden of the model and reduces inference speed. Existing methods that improve speed by using lightweight networks for parameter fitting do so at the expense of reconstruction accuracy. To address these problems, we improve the 3D morphing model pipeline and propose an efficient, lightweight network: Mobile-FaceRNet. First, we combine depthwise separable convolution with multi-scale representation to fit the 3DMM parameters; then, we introduce a residual attention module during network training to strengthen the network's focus on important features, ensuring high-fidelity facial texture reconstruction; finally, we design a new perceptual loss function that better balances the smoothness constraints and image similarity. Experimental results show that the proposed method not only achieves high-precision reconstruction while remaining lightweight, but is also more robust to pose variation and occlusion.
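The abstract names depthwise separable convolution as the basis of the lightweight parameter-fitting network. As a rough illustration only (not the paper's Mobile-FaceRNet code; all shapes and names below are hypothetical), a depthwise separable convolution factors a standard convolution into a per-channel spatial filter followed by a 1×1 channel-mixing step, which is what cuts the multiply count:

```python
import numpy as np

def depthwise_separable_conv(x, depthwise_k, pointwise_w):
    """Depthwise separable convolution on a single feature map.

    x:            (H, W, C_in) input
    depthwise_k:  (k, k, C_in) one k x k spatial kernel per input channel
    pointwise_w:  (C_in, C_out) 1x1 channel-mixing weights
    Returns an (H - k + 1, W - k + 1, C_out) map ("valid" padding, stride 1).
    """
    H, W, C_in = x.shape
    k = depthwise_k.shape[0]
    out_h, out_w = H - k + 1, W - k + 1

    # Depthwise stage: each channel is filtered independently by its own kernel.
    dw = np.empty((out_h, out_w, C_in))
    for c in range(C_in):
        for i in range(out_h):
            for j in range(out_w):
                dw[i, j, c] = np.sum(x[i:i+k, j:j+k, c] * depthwise_k[:, :, c])

    # Pointwise stage: a 1x1 convolution mixes channels (a matmul per pixel).
    return dw @ pointwise_w

# Multiplies per output pixel: a standard conv costs k*k*C_in*C_out,
# the separable form costs k*k*C_in + C_in*C_out.
```

This factorization is the standard lightweight-CNN building block (as in MobileNet-style architectures); how the paper combines it with multi-scale representation is specific to Mobile-FaceRNet and is not reproduced here.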
format Online
Article
Text
id pubmed-10422318
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-10422318 2023-08-13 A Lightweight Monocular 3D Face Reconstruction Method Based on Improved 3D Morphing Models. You, Xingyi; Wang, Yue; Zhao, Xiaohu. Sensors (Basel), Article. MDPI 2023-07-27 /pmc/articles/PMC10422318/ /pubmed/37571497 http://dx.doi.org/10.3390/s23156713 Text en © 2023 by the authors. Licensee MDPI, Basel, Switzerland.
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title A Lightweight Monocular 3D Face Reconstruction Method Based on Improved 3D Morphing Models
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10422318/
https://www.ncbi.nlm.nih.gov/pubmed/37571497
http://dx.doi.org/10.3390/s23156713