Deep-SAGA: a deep-learning-based system for automatic gaze annotation from eye-tracking data
Main authors: | Deane, Oliver; Toth, Eszter; Yeo, Sang-Hoon |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Springer US, 2022 |
Subjects: | Article |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10126076/ https://www.ncbi.nlm.nih.gov/pubmed/35650384 http://dx.doi.org/10.3758/s13428-022-01833-4 |
_version_ | 1785030159759310848
author | Deane, Oliver; Toth, Eszter; Yeo, Sang-Hoon
author_sort | Deane, Oliver |
collection | PubMed |
description | With continued advancements in portable eye-tracker technology liberating experimenters from the restraints of artificial laboratory designs, research can now collect gaze data from real-world, natural navigation. However, the field lacks a robust method for achieving this, as past approaches relied upon the time-consuming manual annotation of eye-tracking data, while previous attempts at automation lack the necessary versatility for in-the-wild navigation trials consisting of complex and dynamic scenes. Here, we propose a system capable of informing researchers of where and what a user’s gaze is focused upon at any one time. The system achieves this by first running footage recorded on a head-mounted camera through a deep-learning-based object detection algorithm called Masked Region-based Convolutional Neural Network (Mask R-CNN). The algorithm’s output is combined with frame-by-frame gaze coordinates measured by an eye-tracking device synchronized with the head-mounted camera to detect and annotate, without any manual intervention, what a user looked at for each frame of the provided footage. The effectiveness of the presented methodology was legitimized by a comparison between the system output and that of manual coders. High levels of agreement between the two validated the system as a preferable data collection technique as it was capable of processing data at a significantly faster rate than its human counterpart. Support for the system’s practicality was then further demonstrated via a case study exploring the mediatory effects of gaze behaviors on an environment-driven attentional bias. |
format | Online Article Text |
id | pubmed-10126076 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Springer US |
record_format | MEDLINE/PubMed |
spelling | pubmed-10126076 2023-04-26. Deane, Oliver; Toth, Eszter; Yeo, Sang-Hoon. Deep-SAGA: a deep-learning-based system for automatic gaze annotation from eye-tracking data. Behav Res Methods. Article. [Abstract identical to the description field above.] Springer US 2022-06-01 2023 /pmc/articles/PMC10126076/ /pubmed/35650384 http://dx.doi.org/10.3758/s13428-022-01833-4 Text en © The Author(s) 2022. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, provided appropriate credit is given to the original author(s) and the source, a link to the licence is provided, and any changes are indicated.
title | Deep-SAGA: a deep-learning-based system for automatic gaze annotation from eye-tracking data |
topic | Article |
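The description field above outlines the core annotation step: per-frame object detections from Mask R-CNN are combined with the synchronized gaze coordinate for that frame to decide what the user was looking at. The sketch below is only a minimal illustration of that idea, not the authors' Deep-SAGA implementation; it assumes detections arrive as labelled bounding boxes with confidence scores (Mask R-CNN additionally provides segmentation masks), and the function and field names (`annotate_gaze`, `label`, `score`, `bbox`) are hypothetical.

```python
# Illustrative sketch (hypothetical names, not the Deep-SAGA codebase):
# given the object detections for one scene-camera frame and the synchronized
# gaze coordinate, return the label of the detected object the gaze falls on.

from typing import Dict, List, Optional, Tuple


def annotate_gaze(
    detections: List[Dict],        # each: {"label": str, "score": float, "bbox": (x1, y1, x2, y2)}
    gaze_xy: Tuple[float, float],  # gaze point in scene-camera pixel coordinates
) -> Optional[str]:
    """Label of the detected object containing the gaze point, or None for background."""
    gx, gy = gaze_xy
    hits = []
    for det in detections:
        x1, y1, x2, y2 = det["bbox"]
        if x1 <= gx <= x2 and y1 <= gy <= y2:
            hits.append(det)
    if not hits:
        return None  # gaze fell on an undetected region
    # If several detections overlap the gaze point, keep the most confident one.
    return max(hits, key=lambda d: d["score"])["label"]


if __name__ == "__main__":
    frame_detections = [
        {"label": "person", "score": 0.97, "bbox": (100, 50, 220, 400)},
        {"label": "bicycle", "score": 0.88, "bbox": (180, 200, 400, 380)},
    ]
    # Both boxes contain the gaze point; the higher-confidence "person" wins.
    print(annotate_gaze(frame_detections, (200, 300)))
```

In practice one would run this per frame over the whole head-mounted-camera recording, after mapping the eye-tracker's gaze samples into the scene-camera image; using the segmentation masks instead of bounding boxes would resolve overlapping objects more precisely.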