A Two-Stage Deep Generative Model for Masked Face Synthesis
Research on face recognition with masked faces has been increasingly important due to the prolonged COVID-19 pandemic. To make face recognition practical and robust, a large amount of face image data should be acquired for training purposes. However, it is difficult to obtain masked face images for each human subject. To cope with this difficulty, this paper proposes a simple yet practical method to synthesize a realistic masked face for an unseen face image. For this, a cascade of two convolutional auto-encoders (CAEs) has been designed. The former CAE generates a pose-alike face wearing a mask pattern, which is expected to fit the input face in terms of pose view. The output of the former CAE is readily fed into the secondary CAE for extracting a segmentation map that localizes the mask region on the face. Using the segmentation map, the mask pattern can be successfully fused with the input face by means of simple image processing techniques. The proposed method relies on face appearance reconstruction without any facial landmark detection or localization techniques. Extensive experiments with the GTAV Face database and Labeled Faces in the Wild (LFW) database show that the two complementary generators could rapidly and accurately produce synthetic faces even for challenging input faces (e.g., low-resolution face of 25 × 25 pixels with out-of-plane rotations).
Main Author: | Lee, Seungho |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI 2022 |
Subjects: | |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9607215/ https://www.ncbi.nlm.nih.gov/pubmed/36298252 http://dx.doi.org/10.3390/s22207903 |
_version_ | 1784818486940270592 |
---|---|
author | Lee, Seungho |
author_facet | Lee, Seungho |
author_sort | Lee, Seungho |
collection | PubMed |
description | Research on face recognition with masked faces has been increasingly important due to the prolonged COVID-19 pandemic. To make face recognition practical and robust, a large amount of face image data should be acquired for training purposes. However, it is difficult to obtain masked face images for each human subject. To cope with this difficulty, this paper proposes a simple yet practical method to synthesize a realistic masked face for an unseen face image. For this, a cascade of two convolutional auto-encoders (CAEs) has been designed. The former CAE generates a pose-alike face wearing a mask pattern, which is expected to fit the input face in terms of pose view. The output of the former CAE is readily fed into the secondary CAE for extracting a segmentation map that localizes the mask region on the face. Using the segmentation map, the mask pattern can be successfully fused with the input face by means of simple image processing techniques. The proposed method relies on face appearance reconstruction without any facial landmark detection or localization techniques. Extensive experiments with the GTAV Face database and Labeled Faces in the Wild (LFW) database show that the two complementary generators could rapidly and accurately produce synthetic faces even for challenging input faces (e.g., low-resolution face of 25 × 25 pixels with out-of-plane rotations). |
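The abstract's final stage fuses the mask pattern onto the input face using the segmentation map and "simple image processing techniques". A minimal sketch of such a fusion step is shown below; it assumes the segmentation map is a soft alpha matte in [0, 1], and the function name `fuse_mask` and the toy image sizes are illustrative, not taken from the paper.

```python
import numpy as np

def fuse_mask(input_face, masked_face, seg_map):
    """Blend the mask region of `masked_face` onto `input_face`.

    seg_map: (H, W) or (H, W, 1) array in [0, 1]; 1 where the second
    CAE localized the mask pattern. This mirrors the kind of
    segmentation-guided alpha blending the abstract describes, not
    the paper's exact implementation.
    """
    alpha = seg_map[..., None] if seg_map.ndim == 2 else seg_map
    # Per-pixel convex combination: mask region comes from the
    # generated masked face, everything else from the input face.
    return alpha * masked_face + (1.0 - alpha) * input_face

# Toy example with 4x4 RGB images.
face = np.zeros((4, 4, 3))    # unseen input face (all black)
masked = np.ones((4, 4, 3))   # generator output with mask (all white)
seg = np.zeros((4, 4))
seg[2:, :] = 1.0              # mask pattern covers the lower half
out = fuse_mask(face, masked, seg)
# Lower half of `out` takes the mask pixels; upper half keeps the face.
```

A binary map gives a hard cut at the mask boundary; a soft (feathered) map in the same formula avoids visible seams, which is one reason a predicted segmentation map is preferable to a fixed template.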
format | Online Article Text |
id | pubmed-9607215 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-9607215 2022-10-28 A Two-Stage Deep Generative Model for Masked Face Synthesis Lee, Seungho Sensors (Basel) Article Research on face recognition with masked faces has been increasingly important due to the prolonged COVID-19 pandemic. To make face recognition practical and robust, a large amount of face image data should be acquired for training purposes. However, it is difficult to obtain masked face images for each human subject. To cope with this difficulty, this paper proposes a simple yet practical method to synthesize a realistic masked face for an unseen face image. For this, a cascade of two convolutional auto-encoders (CAEs) has been designed. The former CAE generates a pose-alike face wearing a mask pattern, which is expected to fit the input face in terms of pose view. The output of the former CAE is readily fed into the secondary CAE for extracting a segmentation map that localizes the mask region on the face. Using the segmentation map, the mask pattern can be successfully fused with the input face by means of simple image processing techniques. The proposed method relies on face appearance reconstruction without any facial landmark detection or localization techniques. Extensive experiments with the GTAV Face database and Labeled Faces in the Wild (LFW) database show that the two complementary generators could rapidly and accurately produce synthetic faces even for challenging input faces (e.g., low-resolution face of 25 × 25 pixels with out-of-plane rotations). MDPI 2022-10-17 /pmc/articles/PMC9607215/ /pubmed/36298252 http://dx.doi.org/10.3390/s22207903 Text en © 2022 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Lee, Seungho A Two-Stage Deep Generative Model for Masked Face Synthesis |
title | A Two-Stage Deep Generative Model for Masked Face Synthesis |
title_full | A Two-Stage Deep Generative Model for Masked Face Synthesis |
title_fullStr | A Two-Stage Deep Generative Model for Masked Face Synthesis |
title_full_unstemmed | A Two-Stage Deep Generative Model for Masked Face Synthesis |
title_short | A Two-Stage Deep Generative Model for Masked Face Synthesis |
title_sort | two-stage deep generative model for masked face synthesis |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9607215/ https://www.ncbi.nlm.nih.gov/pubmed/36298252 http://dx.doi.org/10.3390/s22207903 |
work_keys_str_mv | AT leeseungho atwostagedeepgenerativemodelformaskedfacesynthesis AT leeseungho twostagedeepgenerativemodelformaskedfacesynthesis |