Convolutional Neural Network-Based Technique for Gaze Estimation on Mobile Devices
Eye tracking is becoming a very popular, useful, and important technology. Many eye tracking technologies are currently expensive and only available to large corporations. Some of them necessitate explicit personal calibration, which makes them unsuitable for use in real-world or uncontrolled environments…
Main authors: | Akinyelu, Andronicus A.; Blignaut, Pieter |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A., 2022 |
Subjects: | Artificial Intelligence |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8826079/ https://www.ncbi.nlm.nih.gov/pubmed/35156012 http://dx.doi.org/10.3389/frai.2021.796825 |
_version_ | 1784647356812099584 |
---|---|
author | Akinyelu, Andronicus A. Blignaut, Pieter |
author_facet | Akinyelu, Andronicus A. Blignaut, Pieter |
author_sort | Akinyelu, Andronicus A. |
collection | PubMed |
description | Eye tracking is becoming a very popular, useful, and important technology. Many eye tracking technologies are currently expensive and only available to large corporations. Some of them necessitate explicit personal calibration, which makes them unsuitable for use in real-world or uncontrolled environments. Explicit personal calibration can also be cumbersome and degrades the user experience. To address these issues, this study proposes a Convolutional Neural Network (CNN) based calibration-free technique for improved gaze estimation in unconstrained environments. The proposed technique consists of two components, namely a face component and a 39-point facial landmark component. The face component is used to extract the gaze estimation features from the eyes, while the 39-point facial landmark component is used to encode the shape and location of the eyes (within the face) into the network. Adding this information can make the network learn free-head and eye movements. Another CNN model was designed in this study primarily for the sake of comparison. The CNN model accepts only the face images as input. Different experiments were performed, and the experimental result reveals that the proposed technique outperforms the second model. Fine-tuning was also performed using the VGG16 pre-trained model. Experimental results show that the fine-tuned results of the proposed technique perform better than the fine-tuned results of the second model. Overall, the results show that 39-point facial landmarks can be used to improve the performance of CNN-based gaze estimation models. |
format | Online Article Text |
id | pubmed-8826079 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-88260792022-02-10 Convolutional Neural Network-Based Technique for Gaze Estimation on Mobile Devices Akinyelu, Andronicus A. Blignaut, Pieter Front Artif Intell Artificial Intelligence Eye tracking is becoming a very popular, useful, and important technology. Many eye tracking technologies are currently expensive and only available to large corporations. Some of them necessitate explicit personal calibration, which makes them unsuitable for use in real-world or uncontrolled environments. Explicit personal calibration can also be cumbersome and degrades the user experience. To address these issues, this study proposes a Convolutional Neural Network (CNN) based calibration-free technique for improved gaze estimation in unconstrained environments. The proposed technique consists of two components, namely a face component and a 39-point facial landmark component. The face component is used to extract the gaze estimation features from the eyes, while the 39-point facial landmark component is used to encode the shape and location of the eyes (within the face) into the network. Adding this information can make the network learn free-head and eye movements. Another CNN model was designed in this study primarily for the sake of comparison. The CNN model accepts only the face images as input. Different experiments were performed, and the experimental result reveals that the proposed technique outperforms the second model. Fine-tuning was also performed using the VGG16 pre-trained model. Experimental results show that the fine-tuned results of the proposed technique perform better than the fine-tuned results of the second model. Overall, the results show that 39-point facial landmarks can be used to improve the performance of CNN-based gaze estimation models. Frontiers Media S.A. 2022-01-26 /pmc/articles/PMC8826079/ /pubmed/35156012 http://dx.doi.org/10.3389/frai.2021.796825 Text en Copyright © 2022 Akinyelu and Blignaut. 
https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Artificial Intelligence Akinyelu, Andronicus A. Blignaut, Pieter Convolutional Neural Network-Based Technique for Gaze Estimation on Mobile Devices |
title | Convolutional Neural Network-Based Technique for Gaze Estimation on Mobile Devices |
title_full | Convolutional Neural Network-Based Technique for Gaze Estimation on Mobile Devices |
title_fullStr | Convolutional Neural Network-Based Technique for Gaze Estimation on Mobile Devices |
title_full_unstemmed | Convolutional Neural Network-Based Technique for Gaze Estimation on Mobile Devices |
title_short | Convolutional Neural Network-Based Technique for Gaze Estimation on Mobile Devices |
title_sort | convolutional neural network-based technique for gaze estimation on mobile devices |
topic | Artificial Intelligence |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8826079/ https://www.ncbi.nlm.nih.gov/pubmed/35156012 http://dx.doi.org/10.3389/frai.2021.796825 |
work_keys_str_mv | AT akinyeluandronicusa convolutionalneuralnetworkbasedtechniqueforgazeestimationonmobiledevices AT blignautpieter convolutionalneuralnetworkbasedtechniqueforgazeestimationonmobiledevices |
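The abstract above describes a two-input architecture: a face branch that extracts appearance features from the eyes, and a 39-point facial landmark branch that encodes eye shape and location, with the two fused before regressing a gaze point. As a rough illustration of that late-fusion idea only (not the authors' code; all layer sizes, weights, and input dimensions here are hypothetical stand-ins for a trained CNN), a minimal NumPy sketch might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical input sizes: a 64x64 grayscale face crop and 39 (x, y) landmarks.
FACE_DIM = 64 * 64
LANDMARK_DIM = 39 * 2

# Randomly initialised weights stand in for trained convolutional/dense layers.
W_face = rng.normal(0, 0.01, (FACE_DIM, 32))      # face branch -> 32 features
W_lmk = rng.normal(0, 0.01, (LANDMARK_DIM, 16))   # landmark branch -> 16 features
W_out = rng.normal(0, 0.01, (32 + 16, 2))         # fused features -> (x, y) gaze

def relu(x):
    return np.maximum(x, 0.0)

def estimate_gaze(face_img, landmarks):
    """Two-branch forward pass: encode face appearance and landmark geometry
    separately, concatenate the embeddings, regress a 2-D on-screen gaze point."""
    f = relu(face_img.reshape(-1) @ W_face)   # appearance features from the face
    l = relu(landmarks.reshape(-1) @ W_lmk)   # eye shape/location features
    fused = np.concatenate([f, l])            # late fusion of both branches
    return fused @ W_out                      # (x, y) regression output

face = rng.random((64, 64))
lmks = rng.random((39, 2))
gaze = estimate_gaze(face, lmks)
print(gaze.shape)  # (2,)
```

The design choice the abstract argues for is visible in the fusion step: feeding landmark geometry alongside the face image gives the network explicit information about head pose and eye position, which is what lets it cope with free head movement without per-user calibration.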