
Gaze-in-wild: A dataset for studying eye and head coordination in everyday activities

Bibliographic Details

Main Authors: Kothari, Rakshit, Yang, Zhizhuo, Kanan, Christopher, Bailey, Reynold, Pelz, Jeff B., Diaz, Gabriel J.
Format: Online, Article, Text
Language: English
Published: Nature Publishing Group UK, 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7018838/
https://www.ncbi.nlm.nih.gov/pubmed/32054884
http://dx.doi.org/10.1038/s41598-020-59251-5
_version_ 1783497405638901760
author Kothari, Rakshit
Yang, Zhizhuo
Kanan, Christopher
Bailey, Reynold
Pelz, Jeff B.
Diaz, Gabriel J.
author_facet Kothari, Rakshit
Yang, Zhizhuo
Kanan, Christopher
Bailey, Reynold
Pelz, Jeff B.
Diaz, Gabriel J.
author_sort Kothari, Rakshit
collection PubMed
description The study of gaze behavior has primarily been constrained to controlled environments in which the head is fixed. Consequently, little effort has been invested in the development of algorithms for the categorization of gaze events (e.g. fixations, pursuits, saccades, gaze shifts) while the head is free, and thus contributes to the velocity signals upon which classification algorithms typically operate. Our approach was to collect a novel, naturalistic, and multimodal dataset of eye + head movements when subjects performed everyday tasks while wearing a mobile eye tracker equipped with an inertial measurement unit and a 3D stereo camera. This Gaze-in-the-Wild dataset (GW) includes eye + head rotational velocities (deg/s), infrared eye images and scene imagery (RGB + D). A portion was labelled by coders into gaze motion events with a mutual agreement of 0.74 sample-based Cohen’s κ. This labelled data was used to train and evaluate two machine learning algorithms, Random Forest and a Recurrent Neural Network model, for gaze event classification. Assessment involved the application of established and novel event-based performance metrics. Classifiers achieve ~87% of human performance in detecting fixations and saccades but fall short (50%) on detecting pursuit movements. Moreover, pursuit classification is far worse in the absence of head movement information. A subsequent analysis of feature significance in our best performing model revealed that classification can be done using only the magnitudes of eye and head movements, potentially removing the need for calibration between the head and eye tracking systems. The GW dataset, trained classifiers and evaluation metrics will be made publicly available with the intention of facilitating growth in the emerging area of head-free gaze event classification.
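The abstract names two concrete computations: inter-coder agreement via sample-based Cohen's κ, and gaze-event classification from the magnitudes of eye and head rotational velocity with a Random Forest. The sketch below illustrates both in a minimal form. It is not the authors' code: the random toy signals, the 4-class label encoding, and the hyperparameters are all placeholder assumptions; only scikit-learn's standard `cohen_kappa_score` and `RandomForestClassifier` APIs are relied upon.

```python
# Minimal sketch (not from the paper) of the two computations the
# abstract describes, using hypothetical toy data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# --- Sample-based Cohen's kappa between two coders ------------------
# Each element is one labelled sample; hypothetical label encoding:
# 0 = fixation, 1 = saccade, 2 = pursuit, 3 = gaze shift.
coder_a = rng.integers(0, 4, size=n)
coder_b = np.where(rng.random(n) < 0.8,          # coders agree ~80% of the time
                   coder_a,
                   rng.integers(0, 4, size=n))
print(f"sample-based Cohen's kappa: {cohen_kappa_score(coder_a, coder_b):.2f}")

# --- Random Forest on magnitude-only features -----------------------
# The abstract reports that classification works using only the
# magnitudes of eye and head rotational velocity (deg/s), so the
# feature matrix here is just [|eye velocity|, |head velocity|].
eye_speed = np.abs(rng.normal(0, 50, size=n))    # placeholder signal, deg/s
head_speed = np.abs(rng.normal(0, 20, size=n))   # placeholder signal, deg/s
X = np.column_stack([eye_speed, head_speed])
y = coder_a                                      # one coder's labels as ground truth

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
print(f"feature importances (eye, head): {clf.feature_importances_}")
```

On real data the features would be per-sample velocity magnitudes from the eye tracker and IMU rather than random draws, and the labels would come from the human coders; the magnitude-only feature choice mirrors the abstract's observation that head/eye calibration may be unnecessary for classification.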
format Online
Article
Text
id pubmed-7018838
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher Nature Publishing Group UK
record_format MEDLINE/PubMed
spelling pubmed-7018838 2020-02-21
Gaze-in-wild: A dataset for studying eye and head coordination in everyday activities
Kothari, Rakshit; Yang, Zhizhuo; Kanan, Christopher; Bailey, Reynold; Pelz, Jeff B.; Diaz, Gabriel J.
Sci Rep, Article
Nature Publishing Group UK, 2020-02-13
/pmc/articles/PMC7018838/ /pubmed/32054884 http://dx.doi.org/10.1038/s41598-020-59251-5
Text en
© The Author(s) 2020. Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
spellingShingle Article
Kothari, Rakshit
Yang, Zhizhuo
Kanan, Christopher
Bailey, Reynold
Pelz, Jeff B.
Diaz, Gabriel J.
Gaze-in-wild: A dataset for studying eye and head coordination in everyday activities
title Gaze-in-wild: A dataset for studying eye and head coordination in everyday activities
title_full Gaze-in-wild: A dataset for studying eye and head coordination in everyday activities
title_fullStr Gaze-in-wild: A dataset for studying eye and head coordination in everyday activities
title_full_unstemmed Gaze-in-wild: A dataset for studying eye and head coordination in everyday activities
title_short Gaze-in-wild: A dataset for studying eye and head coordination in everyday activities
title_sort gaze-in-wild: a dataset for studying eye and head coordination in everyday activities
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7018838/
https://www.ncbi.nlm.nih.gov/pubmed/32054884
http://dx.doi.org/10.1038/s41598-020-59251-5
work_keys_str_mv AT kotharirakshit gazeinwildadatasetforstudyingeyeandheadcoordinationineverydayactivities
AT yangzhizhuo gazeinwildadatasetforstudyingeyeandheadcoordinationineverydayactivities
AT kananchristopher gazeinwildadatasetforstudyingeyeandheadcoordinationineverydayactivities
AT baileyreynold gazeinwildadatasetforstudyingeyeandheadcoordinationineverydayactivities
AT pelzjeffb gazeinwildadatasetforstudyingeyeandheadcoordinationineverydayactivities
AT diazgabrielj gazeinwildadatasetforstudyingeyeandheadcoordinationineverydayactivities