Mobile Eye-Tracking Data Analysis Using Object Detection via YOLO v4

Remote eye tracking has become an important tool for the online analysis of learning processes. Mobile eye trackers can even extend the range of opportunities (in comparison to stationary eye trackers) to real settings, such as classrooms or experimental lab courses. However, the complex and sometimes manual analysis of mobile eye-tracking data often hinders the realization of extensive studies, as this is a very time-consuming process and usually not feasible for real-world situations in which participants move or manipulate objects. In this work, we explore the opportunities to use object recognition models to assign mobile eye-tracking data to real objects during an authentic student lab course. In a comparison of three different Convolutional Neural Networks (CNNs), a Faster Region-Based CNN (Faster R-CNN), You Only Look Once (YOLO) v3, and YOLO v4, we found that YOLO v4, together with an optical flow estimation, provides the fastest results with the highest accuracy for object detection in this setting. The automatic assignment of gaze data to real objects simplifies the time-consuming analysis of mobile eye-tracking data and offers an opportunity for real-time system responses to the user’s gaze. Additionally, we identify and discuss several problems in using object detection for mobile eye-tracking data that need to be considered.
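
As a rough illustration of the assignment step the abstract describes, the Python sketch below maps a single gaze sample onto the YOLO detections of one scene-video frame. This is not the authors' code: the Detection structure, the helper name, and the tie-breaking rule for overlapping boxes (prefer the box whose centre is nearest to the gaze point) are assumptions made for illustration.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # object class, e.g. "oscilloscope" (hypothetical)
    confidence: float   # detector confidence in [0, 1]
    x: float            # top-left corner of the box, pixels
    y: float
    w: float            # box width, pixels
    h: float            # box height, pixels

def assign_gaze_to_object(gaze_x, gaze_y, detections, min_conf=0.5):
    """Return the label of the detection the gaze point falls into, or None.

    When the gaze point lies inside several overlapping boxes, prefer the
    box whose centre is closest to the point (an assumed tie-breaker).
    """
    best_label, best_dist = None, float("inf")
    for d in detections:
        if d.confidence < min_conf:
            continue
        if not (d.x <= gaze_x <= d.x + d.w and d.y <= gaze_y <= d.y + d.h):
            continue
        cx, cy = d.x + d.w / 2, d.y + d.h / 2
        dist = (gaze_x - cx) ** 2 + (gaze_y - cy) ** 2
        if dist < best_dist:
            best_label, best_dist = d.label, dist
    return best_label

# One frame with two (made-up) detections and one gaze sample:
frame = [
    Detection("oscilloscope", 0.91, 100, 80, 220, 160),
    Detection("circuit_board", 0.84, 260, 150, 180, 140),
]
print(assign_gaze_to_object(310, 200, frame))  # -> "circuit_board"

Repeating this per video frame yields a gaze-on-object time series that would otherwise have to be annotated by hand, which is the time saving the abstract points to.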


Bibliographic Details
Main Authors: Kumari, Niharika, Ruf, Verena, Mukhametov, Sergey, Schmidt, Albrecht, Kuhn, Jochen, Küchemann, Stefan
Format: Online Article Text
Language: English
Published: MDPI 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8621024/
https://www.ncbi.nlm.nih.gov/pubmed/34833742
http://dx.doi.org/10.3390/s21227668
_version_ 1784605357898727424
author Kumari, Niharika
Ruf, Verena
Mukhametov, Sergey
Schmidt, Albrecht
Kuhn, Jochen
Küchemann, Stefan
author_facet Kumari, Niharika
Ruf, Verena
Mukhametov, Sergey
Schmidt, Albrecht
Kuhn, Jochen
Küchemann, Stefan
author_sort Kumari, Niharika
collection PubMed
description Remote eye tracking has become an important tool for the online analysis of learning processes. Mobile eye trackers can even extend the range of opportunities (in comparison to stationary eye trackers) to real settings, such as classrooms or experimental lab courses. However, the complex and sometimes manual analysis of mobile eye-tracking data often hinders the realization of extensive studies, as this is a very time-consuming process and usually not feasible for real-world situations in which participants move or manipulate objects. In this work, we explore the opportunities to use object recognition models to assign mobile eye-tracking data to real objects during an authentic student lab course. In a comparison of three different Convolutional Neural Networks (CNNs), a Faster Region-Based CNN (Faster R-CNN), You Only Look Once (YOLO) v3, and YOLO v4, we found that YOLO v4, together with an optical flow estimation, provides the fastest results with the highest accuracy for object detection in this setting. The automatic assignment of gaze data to real objects simplifies the time-consuming analysis of mobile eye-tracking data and offers an opportunity for real-time system responses to the user’s gaze. Additionally, we identify and discuss several problems in using object detection for mobile eye-tracking data that need to be considered.
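
The description above also mentions combining YOLO v4 with an optical flow estimation. One plausible reading, sketched below with OpenCV's sparse Lucas-Kanade flow (cv2.calcOpticalFlowPyrLK), is to run the detector only every few frames and carry boxes forward in between; the propagation scheme (shifting a box by the median displacement of its tracked corners and centre) is an assumption, not the published method.

import cv2
import numpy as np

def propagate_box(prev_gray, next_gray, box):
    """Shift an (x, y, w, h) box from the previous frame into the next one.

    prev_gray and next_gray are consecutive uint8 grayscale scene-video frames.
    """
    x, y, w, h = box
    # Track five reference points on the box: the four corners and the centre.
    pts = np.float32([
        [x, y], [x + w, y], [x, y + h], [x + w, y + h],
        [x + w / 2, y + h / 2],
    ]).reshape(-1, 1, 2)
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None)
    good = status.ravel() == 1
    if not good.any():
        return box  # flow failed for every point; keep the previous box
    # Median displacement of the successfully tracked points.
    dx, dy = np.median((nxt - pts).reshape(-1, 2)[good], axis=0)
    return (x + float(dx), y + float(dy), w, h)

Running the full detector only intermittently and interpolating with flow in between is one way to reconcile the speed and accuracy claims made for the YOLO v4 plus optical flow combination.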
format Online
Article
Text
id pubmed-8621024
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-8621024 2021-11-27 Mobile Eye-Tracking Data Analysis Using Object Detection via YOLO v4 Kumari, Niharika Ruf, Verena Mukhametov, Sergey Schmidt, Albrecht Kuhn, Jochen Küchemann, Stefan Sensors (Basel) Article Remote eye tracking has become an important tool for the online analysis of learning processes. Mobile eye trackers can even extend the range of opportunities (in comparison to stationary eye trackers) to real settings, such as classrooms or experimental lab courses. However, the complex and sometimes manual analysis of mobile eye-tracking data often hinders the realization of extensive studies, as this is a very time-consuming process and usually not feasible for real-world situations in which participants move or manipulate objects. In this work, we explore the opportunities to use object recognition models to assign mobile eye-tracking data to real objects during an authentic student lab course. In a comparison of three different Convolutional Neural Networks (CNNs), a Faster Region-Based CNN (Faster R-CNN), You Only Look Once (YOLO) v3, and YOLO v4, we found that YOLO v4, together with an optical flow estimation, provides the fastest results with the highest accuracy for object detection in this setting. The automatic assignment of gaze data to real objects simplifies the time-consuming analysis of mobile eye-tracking data and offers an opportunity for real-time system responses to the user’s gaze. Additionally, we identify and discuss several problems in using object detection for mobile eye-tracking data that need to be considered. MDPI 2021-11-18 /pmc/articles/PMC8621024/ /pubmed/34833742 http://dx.doi.org/10.3390/s21227668 Text en © 2021 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Kumari, Niharika
Ruf, Verena
Mukhametov, Sergey
Schmidt, Albrecht
Kuhn, Jochen
Küchemann, Stefan
Mobile Eye-Tracking Data Analysis Using Object Detection via YOLO v4
title Mobile Eye-Tracking Data Analysis Using Object Detection via YOLO v4
title_full Mobile Eye-Tracking Data Analysis Using Object Detection via YOLO v4
title_fullStr Mobile Eye-Tracking Data Analysis Using Object Detection via YOLO v4
title_full_unstemmed Mobile Eye-Tracking Data Analysis Using Object Detection via YOLO v4
title_short Mobile Eye-Tracking Data Analysis Using Object Detection via YOLO v4
title_sort mobile eye-tracking data analysis using object detection via yolo v4
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8621024/
https://www.ncbi.nlm.nih.gov/pubmed/34833742
http://dx.doi.org/10.3390/s21227668
work_keys_str_mv AT kumariniharika mobileeyetrackingdataanalysisusingobjectdetectionviayolov4
AT rufverena mobileeyetrackingdataanalysisusingobjectdetectionviayolov4
AT mukhametovsergey mobileeyetrackingdataanalysisusingobjectdetectionviayolov4
AT schmidtalbrecht mobileeyetrackingdataanalysisusingobjectdetectionviayolov4
AT kuhnjochen mobileeyetrackingdataanalysisusingobjectdetectionviayolov4
AT kuchemannstefan mobileeyetrackingdataanalysisusingobjectdetectionviayolov4