
Gaze Estimation Approach Using Deep Differential Residual Network


Bibliographic Details
Main Authors: Huang, Longzhao, Li, Yujie, Wang, Xu, Wang, Haoyu, Bouridane, Ahmed, Chaddad, Ahmad
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9322334/
https://www.ncbi.nlm.nih.gov/pubmed/35891141
http://dx.doi.org/10.3390/s22145462
author Huang, Longzhao
Li, Yujie
Wang, Xu
Wang, Haoyu
Bouridane, Ahmed
Chaddad, Ahmad
collection PubMed
description Gaze estimation, a method to determine where a person is looking given an image of the person's full face, is a valuable clue for understanding human intention. As in other domains of computer vision, deep learning (DL) methods have gained recognition in gaze estimation. However, gaze calibration problems remain in this domain, preventing existing methods from further improving performance. An effective solution is to directly predict the difference information between two human eyes, as in the differential network (Diff-Nn). However, this solution loses accuracy when only one inference image is used. We propose a differential residual model (DRNet), combined with a new loss function, to make use of the difference information between two eye images, treating it as auxiliary information. We assess the proposed model (DRNet) mainly on two public datasets: (1) MpiiGaze and (2) Eyediap. Considering only eye features, DRNet outperforms state-of-the-art gaze estimation methods, with angular errors of 4.57 and 6.14 on the MpiiGaze and Eyediap datasets, respectively. Furthermore, the experimental results also demonstrate that DRNet is extremely robust to noisy images.
format Online
Article
Text
id pubmed-9322334
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-9322334 2022-07-27 Sensors (Basel), Article. MDPI 2022-07-21 /pmc/articles/PMC9322334/ /pubmed/35891141 http://dx.doi.org/10.3390/s22145462 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title Gaze Estimation Approach Using Deep Differential Residual Network
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9322334/
https://www.ncbi.nlm.nih.gov/pubmed/35891141
http://dx.doi.org/10.3390/s22145462