Dilated Skip Convolution for Facial Landmark Detection
Facial landmark detection has gained enormous interest for face-related applications due to its success in facial analysis tasks such as facial recognition, cartoon generation, face tracking and facial expression analysis. Many studies have been proposed and implemented to deal with the challenging...
Main Authors: | Chim, Seyha; Lee, Jin-Gu; Park, Ho-Hyun |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI, 2019 |
Subjects: | |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6960628/ https://www.ncbi.nlm.nih.gov/pubmed/31817213 http://dx.doi.org/10.3390/s19245350 |
_version_ | 1783487815130021888 |
---|---|
author | Chim, Seyha; Lee, Jin-Gu; Park, Ho-Hyun
author_facet | Chim, Seyha; Lee, Jin-Gu; Park, Ho-Hyun
author_sort | Chim, Seyha |
collection | PubMed |
description | Facial landmark detection has gained enormous interest for face-related applications due to its success in facial analysis tasks such as facial recognition, cartoon generation, face tracking and facial expression analysis. Many studies have been proposed and implemented to deal with the challenging problems of localizing facial landmarks in given images, including large appearance variations and partial occlusion. Studies have differed in the way they use the facial appearance and shape information of input images. In our work, we consider facial information within both global and local contexts. We aim to obtain local pixel-level accuracy for local-context information in the first stage and integrate this with knowledge of the spatial relationships between each key point in a whole image for global-context information in the second stage. Thus, the pipeline of our architecture consists of two main components: (1) a local-context subnet, a deep network that generates detection heatmaps via fully convolutional DenseNets with additional kernel convolution filters, and (2) a dilated skip convolution subnet, a combination of dilated convolution and skip-connection networks, that is in charge of robustly refining the local appearance heatmaps. Through this proposed architecture, we demonstrate that our approach achieves state-of-the-art performance on challenging datasets, including LFPW, HELEN, 300W and AFLW2000-3D, by leveraging fully convolutional DenseNets, skip connections and a dilated convolution architecture without further post-processing. (A minimal, illustrative code sketch of this two-stage pipeline follows the record below.) |
format | Online Article Text |
id | pubmed-6960628 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2019 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-6960628 2020-01-23 Dilated Skip Convolution for Facial Landmark Detection Chim, Seyha Lee, Jin-Gu Park, Ho-Hyun Sensors (Basel) Article Facial landmark detection has gained enormous interest for face-related applications due to its success in facial analysis tasks such as facial recognition, cartoon generation, face tracking and facial expression analysis. Many studies have been proposed and implemented to deal with the challenging problems of localizing facial landmarks from given images, including large appearance variations and partial occlusion. Studies have differed in the way they use the facial appearances and shape information of input images. In our work, we consider facial information within both global and local contexts. We aim to obtain local pixel-level accuracy for local-context information in the first stage and integrate this with knowledge of spatial relationships between each key point in a whole image for global-context information in the second stage. Thus, the pipeline of our architecture consists of two main components: (1) a deep network for local-context subnet that generates detection heatmaps via fully convolutional DenseNets with additional kernel convolution filters and (2) a dilated skip convolution subnet—a combination of dilated convolutions and skip-connections networks—that are in charge of robustly refining the local appearance heatmaps. Through this proposed architecture, we demonstrate that our approach achieves state-of-the-art performance on challenging datasets—including LFPW, HELEN, 300W and AFLW2000-3D—by leveraging fully convolutional DenseNets, skip-connections and dilated convolution architecture without further post-processing. MDPI 2019-12-04 /pmc/articles/PMC6960628/ /pubmed/31817213 http://dx.doi.org/10.3390/s19245350 Text en © 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Chim, Seyha Lee, Jin-Gu Park, Ho-Hyun Dilated Skip Convolution for Facial Landmark Detection |
title | Dilated Skip Convolution for Facial Landmark Detection |
title_full | Dilated Skip Convolution for Facial Landmark Detection |
title_fullStr | Dilated Skip Convolution for Facial Landmark Detection |
title_full_unstemmed | Dilated Skip Convolution for Facial Landmark Detection |
title_short | Dilated Skip Convolution for Facial Landmark Detection |
title_sort | dilated skip convolution for facial landmark detection |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6960628/ https://www.ncbi.nlm.nih.gov/pubmed/31817213 http://dx.doi.org/10.3390/s19245350 |
work_keys_str_mv | AT chimseyha dilatedskipconvolutionforfaciallandmarkdetection AT leejingu dilatedskipconvolutionforfaciallandmarkdetection AT parkhohyun dilatedskipconvolutionforfaciallandmarkdetection |
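The description in this record outlines a two-stage pipeline: a fully convolutional DenseNet that produces per-landmark detection heatmaps, followed by a dilated skip convolution subnet that refines those heatmaps with wider spatial context. The paper's exact layer configuration is not given in this record, so the following is only a minimal PyTorch sketch of the second-stage idea: the class name `DilatedSkipRefiner`, the channel width, the dilation rates, and the 68-landmark, 64×64 heatmap shape are illustrative assumptions, and the first-stage local-context subnet is stood in for by a random tensor. This is not the authors' released implementation.

```python
# Minimal, hypothetical PyTorch sketch of a dilated-convolution refinement
# module with skip connections operating on landmark heatmaps. Sizes and
# names are illustrative assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn


class DilatedSkipRefiner(nn.Module):
    """Refines coarse landmark heatmaps with stacked dilated convolutions.

    Each block widens the receptive field via an increasing dilation rate and
    adds its input back (skip connection), so local detections are refined
    with progressively more global context.
    """

    def __init__(self, num_landmarks: int = 68, channels: int = 64,
                 dilations=(1, 2, 4, 8)):
        super().__init__()
        self.proj_in = nn.Conv2d(num_landmarks, channels, kernel_size=1)
        self.blocks = nn.ModuleList()
        for d in dilations:
            self.blocks.append(nn.Sequential(
                # padding=d keeps the spatial size fixed for a 3x3 kernel
                nn.Conv2d(channels, channels, kernel_size=3,
                          padding=d, dilation=d),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            ))
        self.proj_out = nn.Conv2d(channels, num_landmarks, kernel_size=1)

    def forward(self, heatmaps: torch.Tensor) -> torch.Tensor:
        x = self.proj_in(heatmaps)
        for block in self.blocks:
            x = x + block(x)  # skip connection around each dilated block
        # residual refinement of the original local-context heatmaps
        return heatmaps + self.proj_out(x)


if __name__ == "__main__":
    # Stand-in for the local-context subnet: any fully convolutional network
    # (e.g., an FC-DenseNet) that maps a face crop to 68 landmark heatmaps.
    coarse_heatmaps = torch.rand(1, 68, 64, 64)
    refined = DilatedSkipRefiner(num_landmarks=68)(coarse_heatmaps)
    print(refined.shape)  # torch.Size([1, 68, 64, 64])
```

Because the dilated blocks keep the spatial resolution fixed while growing the receptive field, a refinement stage of this kind can fold in global spatial relationships between key points without sacrificing the pixel-level accuracy of the first-stage heatmaps, which matches the motivation stated in the abstract.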