Synthesising Facial Macro- and Micro-Expressions Using Reference Guided Style Transfer
Long video datasets of facial macro- and micro-expressions remain in strong demand given the current dominance of data-hungry deep learning methods. Few methods exist for generating long videos that contain micro-expressions. Moreover, there is a lack of performance metrics to quantify the generated data…
Main Authors: Yap, Chuin Hong; Cunningham, Ryan; Davison, Adrian K.; Yap, Moi Hoon
Format: Online Article Text
Language: English
Published: MDPI, 2021
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8404916/ https://www.ncbi.nlm.nih.gov/pubmed/34460778 http://dx.doi.org/10.3390/jimaging7080142
_version_ | 1783746232942854144 |
author | Yap, Chuin Hong; Cunningham, Ryan; Davison, Adrian K.; Yap, Moi Hoon
author_facet | Yap, Chuin Hong; Cunningham, Ryan; Davison, Adrian K.; Yap, Moi Hoon
author_sort | Yap, Chuin Hong |
collection | PubMed |
description | Long video datasets of facial macro- and micro-expressions remain in strong demand given the current dominance of data-hungry deep learning methods. Few methods exist for generating long videos that contain micro-expressions. Moreover, there is a lack of performance metrics to quantify the generated data. To address these research gaps, we introduce a new approach to generate synthetic long videos and recommend assessment methods to inspect dataset quality. For synthetic long video generation, we use the state-of-the-art generative adversarial network style transfer method, StarGANv2. Using StarGANv2 pre-trained on the CelebA dataset, we transfer the style of a reference image from SAMM long videos (a facial micro- and macro-expression long video dataset) onto a source image from the FFHQ dataset to generate a synthetic dataset (SAMM-SYNTH). We evaluate SAMM-SYNTH through an analysis of the facial action units detected by OpenFace. For quantitative measurement, our findings show high correlation between the original and synthetic data on two Action Units (AUs), AU12 and AU6, with Pearson's correlations of 0.74 and 0.72, respectively. This is further supported by the evaluation method proposed by OpenFace, which gives high scores of 0.85 and 0.59 on those AUs. Additionally, optical flow is used to visually compare the original facial movements and the transferred facial movements. With this article, we publish our dataset to enable future research and to increase the data pool of micro-expression research, especially in the spotting task.
format | Online Article Text |
id | pubmed-8404916 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-8404916 2021-10-28 Synthesising Facial Macro- and Micro-Expressions Using Reference Guided Style Transfer Yap, Chuin Hong; Cunningham, Ryan; Davison, Adrian K.; Yap, Moi Hoon J Imaging Article Long video datasets of facial macro- and micro-expressions remain in strong demand given the current dominance of data-hungry deep learning methods. Few methods exist for generating long videos that contain micro-expressions. Moreover, there is a lack of performance metrics to quantify the generated data. To address these research gaps, we introduce a new approach to generate synthetic long videos and recommend assessment methods to inspect dataset quality. For synthetic long video generation, we use the state-of-the-art generative adversarial network style transfer method, StarGANv2. Using StarGANv2 pre-trained on the CelebA dataset, we transfer the style of a reference image from SAMM long videos (a facial micro- and macro-expression long video dataset) onto a source image from the FFHQ dataset to generate a synthetic dataset (SAMM-SYNTH). We evaluate SAMM-SYNTH through an analysis of the facial action units detected by OpenFace. For quantitative measurement, our findings show high correlation between the original and synthetic data on two Action Units (AUs), AU12 and AU6, with Pearson's correlations of 0.74 and 0.72, respectively. This is further supported by the evaluation method proposed by OpenFace, which gives high scores of 0.85 and 0.59 on those AUs. Additionally, optical flow is used to visually compare the original facial movements and the transferred facial movements. With this article, we publish our dataset to enable future research and to increase the data pool of micro-expression research, especially in the spotting task. MDPI 2021-08-11 /pmc/articles/PMC8404916/ /pubmed/34460778 http://dx.doi.org/10.3390/jimaging7080142 Text en © 2021 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle | Article; Yap, Chuin Hong; Cunningham, Ryan; Davison, Adrian K.; Yap, Moi Hoon; Synthesising Facial Macro- and Micro-Expressions Using Reference Guided Style Transfer
title | Synthesising Facial Macro- and Micro-Expressions Using Reference Guided Style Transfer |
title_full | Synthesising Facial Macro- and Micro-Expressions Using Reference Guided Style Transfer |
title_fullStr | Synthesising Facial Macro- and Micro-Expressions Using Reference Guided Style Transfer |
title_full_unstemmed | Synthesising Facial Macro- and Micro-Expressions Using Reference Guided Style Transfer |
title_short | Synthesising Facial Macro- and Micro-Expressions Using Reference Guided Style Transfer |
title_sort | synthesising facial macro- and micro-expressions using reference guided style transfer |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8404916/ https://www.ncbi.nlm.nih.gov/pubmed/34460778 http://dx.doi.org/10.3390/jimaging7080142 |
work_keys_str_mv | AT yapchuinhong synthesisingfacialmacroandmicroexpressionsusingreferenceguidedstyletransfer AT cunninghamryan synthesisingfacialmacroandmicroexpressionsusingreferenceguidedstyletransfer AT davisonadriank synthesisingfacialmacroandmicroexpressionsusingreferenceguidedstyletransfer AT yapmoihoon synthesisingfacialmacroandmicroexpressionsusingreferenceguidedstyletransfer |
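
For readers who want to reproduce the AU-based evaluation described in the abstract above, here is a minimal sketch. It assumes OpenFace 2.x per-frame CSV output with its standard intensity columns (AU06_r, AU12_r); the file names and the simple frame-count alignment are hypothetical illustrations, not the authors' exact pipeline.

```python
# Sketch: correlate OpenFace AU intensity traces of an original SAMM long video
# against its SAMM-SYNTH counterpart (illustrative; not the paper's own code).
import pandas as pd
from scipy.stats import pearsonr

def au_trace(csv_path: str, au: str) -> pd.Series:
    """Load one AU intensity column from an OpenFace per-frame CSV."""
    df = pd.read_csv(csv_path)
    df.columns = df.columns.str.strip()  # OpenFace headers often carry leading spaces
    return df[au]

# AU6 (cheek raiser) and AU12 (lip corner puller), the two AUs reported above.
for au in ("AU06_r", "AU12_r"):
    orig = au_trace("original.csv", au)    # hypothetical path: OpenFace output for the SAMM video
    synth = au_trace("synthetic.csv", au)  # hypothetical path: OpenFace output for the synthetic video
    n = min(len(orig), len(synth))         # naive alignment: truncate to the shorter trace
    r, p = pearsonr(orig[:n], synth[:n])
    print(f"{au}: Pearson r = {r:.2f} (p = {p:.3g})")
```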
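The abstract also uses dense optical flow to visually compare original and transferred facial movements. The sketch below renders the flow between two frames as the usual HSV colour map; the choice of Farnebäck flow and the onset/apex frame names are assumptions for illustration, since the record does not specify the algorithm used.

```python
# Sketch: visualise dense optical flow between an onset and an apex frame,
# once for the original clip and once for its synthetic counterpart.
# The flow algorithm (Farnebäck) and file names are illustrative assumptions.
import cv2
import numpy as np

def flow_hsv(frame_a_path: str, frame_b_path: str) -> np.ndarray:
    """Farnebäck dense flow between two frames, rendered as an HSV colour map."""
    a = cv2.imread(frame_a_path, cv2.IMREAD_GRAYSCALE)
    b = cv2.imread(frame_b_path, cv2.IMREAD_GRAYSCALE)
    flow = cv2.calcOpticalFlowFarneback(a, b, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hsv = np.zeros((*a.shape, 3), dtype=np.uint8)
    hsv[..., 0] = ang * 180 / np.pi / 2  # hue encodes motion direction
    hsv[..., 1] = 255
    hsv[..., 2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX)  # value encodes magnitude
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)

# Hypothetical frame paths: compare the two motion maps side by side.
cv2.imwrite("flow_original.png", flow_hsv("orig_onset.png", "orig_apex.png"))
cv2.imwrite("flow_synthetic.png", flow_hsv("synth_onset.png", "synth_apex.png"))
```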