68 landmarks are efficient for 3D face alignment: what about more?: 3D face alignment method applied to face recognition
Main authors: | , , , |
Format: | Online Article Text |
Language: | English |
Published: | Springer US, 2023 |
Subjects: | |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10066970/ https://www.ncbi.nlm.nih.gov/pubmed/37362713 http://dx.doi.org/10.1007/s11042-023-14770-x |
Summary: | This paper proposes a 3D face alignment method for 2D face images in the wild with noisy landmarks. The objective is to recognize individuals from a single profile image. We first extract more than 68 landmarks using a bag of features, which yields a bag of visible and invisible facial keypoints. We then reconstruct a 3D face model and obtain a triangular mesh by meshing the extracted keypoints; because the number of keypoints differs from face to face, this step is very challenging. Next, we process the 3D face with the butterfly and BPA algorithms to enforce correlation and regularity between 3D face regions. Indeed, 2D-to-3D annotations yield a much higher-quality 3D reconstructed face model without the need for any additional 3D Morphable Model. Finally, we carry out alignment and pose-correction steps to obtain a frontal pose by fitting the rendered 3D reconstructed face to the 2D face and performing pose normalization, which improves face recognition rates. The recognition step is based on deep learning: feature learning and face identification are performed with deep convolutional neural networks (DCNNs). The proposed method is evaluated on three popular benchmarks: the YTF, LFW, and BIWI databases. Compared with the best recognition results reported on these benchmarks, it achieves comparable or better recognition performance. |
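The meshing step in the summary (turning a variable-sized bag of keypoints into a triangular mesh) can be sketched as follows. The paper uses the Ball-Pivoting Algorithm (BPA); as a simplified stand-in, this sketch triangulates the 2D projection of synthetic keypoints with a Delaunay triangulation, which likewise works for any number of points per face. The keypoint coordinates here are invented for illustration, not data from the paper.

```python
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical keypoints: random positions standing in for the >68 facial
# landmarks the paper extracts (real input would come from a detector).
rng = np.random.default_rng(0)
xy = rng.uniform(-1.0, 1.0, size=(120, 2))       # 2D positions on the face
z = 0.5 * np.exp(-2.0 * (xy ** 2).sum(axis=1))   # synthetic depth (nose bump)
keypoints = np.column_stack([xy, z])             # (N, 3) point cloud

# Simplified stand-in for BPA: triangulate the 2D projection so every face
# yields a triangular mesh regardless of how many keypoints it has.
tri = Delaunay(keypoints[:, :2])
faces = tri.simplices                            # (M, 3) vertex indices

print(keypoints.shape, faces.shape)
```

A production pipeline would instead run BPA on the 3D point cloud (e.g. via a mesh-processing library), since a 2D Delaunay triangulation cannot capture self-occluding surface regions.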