Gaze in the Dark: Gaze Estimation in a Low-Light Environment with Generative Adversarial Networks
In smart interactive environments, such as digital museums or digital exhibition halls, it is important to accurately understand the user’s intent to ensure successful and natural interaction with the exhibition. In the context of predicting user intent, gaze estimation technology has been considered one of the most effective indicators among recently developed interaction techniques (e.g., face orientation estimation, body tracking, and gesture recognition). Previous gaze estimation techniques, however, are known to be effective only in a controlled lab environment under normal lighting conditions. In this study, we propose a novel deep learning-based approach to achieve a successful gaze estimation under various low-light conditions, which is anticipated to be more practical for smart interaction scenarios. The proposed approach utilizes a generative adversarial network (GAN) to enhance users’ eye images captured under low-light conditions, thereby restoring missing information for gaze estimation. Afterward, the GAN-recovered images are fed into the convolutional neural network architecture as input data to estimate the direction of the user gaze. Our experimental results on the modified MPIIGaze dataset demonstrate that the proposed approach achieves an average performance improvement of 4.53%–8.9% under low and dark light conditions, which is a promising step toward further research.
Main Authors: | Kim, Jung-Hwa; Jeong, Jin-Woo |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI 2020 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7506593/ https://www.ncbi.nlm.nih.gov/pubmed/32878209 http://dx.doi.org/10.3390/s20174935 |
_version_ | 1783585050118324224 |
---|---|
author | Kim, Jung-Hwa; Jeong, Jin-Woo
author_facet | Kim, Jung-Hwa; Jeong, Jin-Woo
author_sort | Kim, Jung-Hwa |
collection | PubMed |
description | In smart interactive environments, such as digital museums or digital exhibition halls, it is important to accurately understand the user’s intent to ensure successful and natural interaction with the exhibition. In the context of predicting user intent, gaze estimation technology has been considered one of the most effective indicators among recently developed interaction techniques (e.g., face orientation estimation, body tracking, and gesture recognition). Previous gaze estimation techniques, however, are known to be effective only in a controlled lab environment under normal lighting conditions. In this study, we propose a novel deep learning-based approach to achieve a successful gaze estimation under various low-light conditions, which is anticipated to be more practical for smart interaction scenarios. The proposed approach utilizes a generative adversarial network (GAN) to enhance users’ eye images captured under low-light conditions, thereby restoring missing information for gaze estimation. Afterward, the GAN-recovered images are fed into the convolutional neural network architecture as input data to estimate the direction of the user gaze. Our experimental results on the modified MPIIGaze dataset demonstrate that the proposed approach achieves an average performance improvement of 4.53%–8.9% under low and dark light conditions, which is a promising step toward further research. |
format | Online Article Text |
id | pubmed-7506593 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2020 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-75065932020-09-26 Gaze in the Dark: Gaze Estimation in a Low-Light Environment with Generative Adversarial Networks Kim, Jung-Hwa Jeong, Jin-Woo Sensors (Basel) Article In smart interactive environments, such as digital museums or digital exhibition halls, it is important to accurately understand the user’s intent to ensure successful and natural interaction with the exhibition. In the context of predicting user intent, gaze estimation technology has been considered one of the most effective indicators among recently developed interaction techniques (e.g., face orientation estimation, body tracking, and gesture recognition). Previous gaze estimation techniques, however, are known to be effective only in a controlled lab environment under normal lighting conditions. In this study, we propose a novel deep learning-based approach to achieve a successful gaze estimation under various low-light conditions, which is anticipated to be more practical for smart interaction scenarios. The proposed approach utilizes a generative adversarial network (GAN) to enhance users’ eye images captured under low-light conditions, thereby restoring missing information for gaze estimation. Afterward, the GAN-recovered images are fed into the convolutional neural network architecture as input data to estimate the direction of the user gaze. Our experimental results on the modified MPIIGaze dataset demonstrate that the proposed approach achieves an average performance improvement of 4.53%–8.9% under low and dark light conditions, which is a promising step toward further research. MDPI 2020-08-31 /pmc/articles/PMC7506593/ /pubmed/32878209 http://dx.doi.org/10.3390/s20174935 Text en © 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Kim, Jung-Hwa Jeong, Jin-Woo Gaze in the Dark: Gaze Estimation in a Low-Light Environment with Generative Adversarial Networks |
title | Gaze in the Dark: Gaze Estimation in a Low-Light Environment with Generative Adversarial Networks |
title_full | Gaze in the Dark: Gaze Estimation in a Low-Light Environment with Generative Adversarial Networks |
title_fullStr | Gaze in the Dark: Gaze Estimation in a Low-Light Environment with Generative Adversarial Networks |
title_full_unstemmed | Gaze in the Dark: Gaze Estimation in a Low-Light Environment with Generative Adversarial Networks |
title_short | Gaze in the Dark: Gaze Estimation in a Low-Light Environment with Generative Adversarial Networks |
title_sort | gaze in the dark: gaze estimation in a low-light environment with generative adversarial networks |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7506593/ https://www.ncbi.nlm.nih.gov/pubmed/32878209 http://dx.doi.org/10.3390/s20174935 |
work_keys_str_mv | AT kimjunghwa gazeinthedarkgazeestimationinalowlightenvironmentwithgenerativeadversarialnetworks AT jeongjinwoo gazeinthedarkgazeestimationinalowlightenvironmentwithgenerativeadversarialnetworks |
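The abstract describes a two-stage pipeline: a GAN generator first enhances a low-light eye image, and a CNN then regresses the gaze direction from the enhanced image. The following is a minimal PyTorch sketch of such a pipeline for illustration only; the layer shapes, the 36×60 grayscale eye-patch size (a common MPIIGaze convention), and the names `EnhancerG`, `GazeCNN`, and `estimate_gaze` are assumptions made for the sketch, not the authors' published architecture or code.

```python
# Hypothetical sketch (not the authors' released code): a two-stage inference pipeline in
# PyTorch, assuming (a) a GAN generator that maps a low-light grayscale eye crop to an
# enhanced crop, and (b) a small CNN that regresses the gaze direction (pitch, yaw).
import torch
import torch.nn as nn


class EnhancerG(nn.Module):
    """Toy GAN generator: fully convolutional net that restores a 1x36x60 low-light eye image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),  # enhanced image in [0, 1]
        )

    def forward(self, x):
        return self.net(x)


class GazeCNN(nn.Module):
    """Toy gaze regressor: convolutional features -> (pitch, yaw) angles."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 20, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(20, 50, 5), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(50 * 6 * 12, 500), nn.ReLU(),  # 50x6x12 feature map for a 36x60 input
            nn.Linear(500, 2),                       # gaze angles (pitch, yaw)
        )

    def forward(self, x):
        return self.head(self.features(x))


def estimate_gaze(low_light_eye, enhancer, gaze_net):
    """Enhance a low-light eye crop with the generator, then regress the gaze direction."""
    with torch.no_grad():
        enhanced = enhancer(low_light_eye)
        return gaze_net(enhanced)


if __name__ == "__main__":
    # MPIIGaze-style normalized eye patches are commonly 36x60 grayscale; batch of 1 here.
    dummy_eye = torch.rand(1, 1, 36, 60)
    angles = estimate_gaze(dummy_eye, EnhancerG().eval(), GazeCNN().eval())
    print(angles.shape)  # torch.Size([1, 2]) -> (pitch, yaw)
```

In the setting the abstract describes, the generator would be trained adversarially to recover normal-light appearance from low-light eye images before the regressor is applied to its outputs; the sketch only shows how the two models would be chained at inference time.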