Action Unit Detection by Learning the Deformation Coefficients of a 3D Morphable Model

Bibliographic Details
Main Authors: Ariano, Luigi; Ferrari, Claudio; Berretti, Stefano; Del Bimbo, Alberto
Format: Online Article Text
Language: English
Published: MDPI 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7830313/
https://www.ncbi.nlm.nih.gov/pubmed/33467595
http://dx.doi.org/10.3390/s21020589
_version_ 1783641381362728960
author Ariano, Luigi
Ferrari, Claudio
Berretti, Stefano
Del Bimbo, Alberto
author_facet Ariano, Luigi
Ferrari, Claudio
Berretti, Stefano
Del Bimbo, Alberto
author_sort Ariano, Luigi
collection PubMed
description Facial Action Units (AUs) correspond to the deformation/contraction of individual facial muscles or their combinations. As such, each AU affects just a small portion of the face, with deformations that are asymmetric in many cases. Generating and analyzing AUs in 3D is particularly relevant for the potential applications it can enable. In this paper, we propose a solution for 3D AU detection and synthesis by developing on a newly defined 3D Morphable Model (3DMM) of the face. Differently from most of the 3DMMs existing in the literature, which mainly model global variations of the face and show limitations in adapting to local and asymmetric deformations, the proposed solution is specifically devised to cope with such difficult morphings. During a training phase, the deformation coefficients are learned that enable the 3DMM to deform to 3D target scans showing neutral and facial expression of the same individual, thus decoupling expression from identity deformations. Then, such deformation coefficients are used, on the one hand, to train an AU classifier, on the other, they can be applied to a 3D neutral scan to generate AU deformations in a subject-independent manner. The proposed approach for AU detection is validated on the Bosphorus dataset, reporting competitive results with respect to the state-of-the-art, even in a challenging cross-dataset setting. We further show the learned coefficients are general enough to synthesize realistic 3D face instances with AUs activation.
format Online
Article
Text
id pubmed-7830313
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-7830313 2021-01-26 Action Unit Detection by Learning the Deformation Coefficients of a 3D Morphable Model Ariano, Luigi Ferrari, Claudio Berretti, Stefano Del Bimbo, Alberto Sensors (Basel) Article MDPI 2021-01-15 /pmc/articles/PMC7830313/ /pubmed/33467595 http://dx.doi.org/10.3390/s21020589 Text en © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Ariano, Luigi
Ferrari, Claudio
Berretti, Stefano
Del Bimbo, Alberto
Action Unit Detection by Learning the Deformation Coefficients of a 3D Morphable Model
title Action Unit Detection by Learning the Deformation Coefficients of a 3D Morphable Model
title_full Action Unit Detection by Learning the Deformation Coefficients of a 3D Morphable Model
title_fullStr Action Unit Detection by Learning the Deformation Coefficients of a 3D Morphable Model
title_full_unstemmed Action Unit Detection by Learning the Deformation Coefficients of a 3D Morphable Model
title_short Action Unit Detection by Learning the Deformation Coefficients of a 3D Morphable Model
title_sort action unit detection by learning the deformation coefficients of a 3d morphable model
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7830313/
https://www.ncbi.nlm.nih.gov/pubmed/33467595
http://dx.doi.org/10.3390/s21020589
work_keys_str_mv AT arianoluigi actionunitdetectionbylearningthedeformationcoefficientsofa3dmorphablemodel
AT ferrariclaudio actionunitdetectionbylearningthedeformationcoefficientsofa3dmorphablemodel
AT berrettistefano actionunitdetectionbylearningthedeformationcoefficientsofa3dmorphablemodel
AT delbimboalberto actionunitdetectionbylearningthedeformationcoefficientsofa3dmorphablemodel
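The description field above outlines the core idea: expression deformations are encoded as coefficients of a linear 3D Morphable Model, learned by fitting the model from a subject's neutral scan to an expressive scan of the same subject, and then reused either as features for an AU classifier or to synthesize the AU on another neutral face. The sketch below is not the authors' implementation; it is a minimal illustration of that kind of coefficient fitting, assuming a generic linear deformation basis and meshes already in dense point-to-point correspondence. All names, array shapes, and the ridge regularizer are hypothetical.

```python
import numpy as np

def fit_deformation_coefficients(neutral, target, components, reg=1e-3):
    # Solve min_a ||(target - neutral).ravel() - C a||^2 + reg * ||a||^2 (ridge regression).
    delta = (target - neutral).reshape(-1)            # per-vertex displacement, shape (3N,)
    C = components                                    # deformation components, shape (3N, K)
    A = C.T @ C + reg * np.eye(C.shape[1])            # regularized normal equations
    b = C.T @ delta
    return np.linalg.solve(A, b)                      # coefficient vector, shape (K,)

def apply_coefficients(neutral, components, coeffs):
    # Transfer a deformation to a (possibly different) neutral scan with the same topology.
    return neutral + (components @ coeffs).reshape(neutral.shape)

# Toy example with random stand-in data (real use would load registered 3D scans):
rng = np.random.default_rng(0)
n_vertices, n_components = 5000, 40                   # assumed sizes, for illustration only
neutral = rng.normal(size=(n_vertices, 3))
components = rng.normal(size=(3 * n_vertices, n_components))
coeffs_true = rng.normal(size=n_components)
expressive = apply_coefficients(neutral, components, coeffs_true)

coeffs = fit_deformation_coefficients(neutral, expressive, components)
print(np.allclose(coeffs, coeffs_true, atol=1e-6))    # recovered coefficients match
# The recovered coefficients can serve as a feature vector for an AU classifier,
# or be applied to another subject's neutral mesh to synthesize the same AU activation.
```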