
Gaze Behavior Effect on Gaze Data Visualization at Different Abstraction Levels

Many gaze data visualization techniques intuitively show eye movement together with visual stimuli. The eye tracker records a large number of eye movements within a short period. Therefore, visualizing raw gaze data with the visual stimulus appears complicated and obscured, making it difficult to gain insight through visualization. To avoid this complication, we often employ fixation identification algorithms for more abstract visualizations. In the past, many scientists have focused on gaze data abstraction with the attention map and analyzed detailed gaze movement patterns with scanpath visualization. Abstract eye movement patterns change dramatically depending on the fixation identification algorithm used in preprocessing. However, it is difficult to determine how fixation identification algorithms affect gaze movement pattern visualizations. Additionally, scientists often spend much time manually adjusting the parameters of the fixation identification algorithms. In this paper, we propose a gaze-behavior-based data processing method for abstract gaze data visualization. The proposed method classifies raw gaze data using machine learning models for image classification, such as CNN, AlexNet, and LeNet. Additionally, we compare velocity-based identification (I-VT), dispersion-based identification (I-DT), density-based fixation identification, velocity- and dispersion-based identification (I-VDT), and machine-learning-based, behavior-based models on various visualizations at each abstraction level, such as the attention map, scanpath, and abstract gaze movement visualization.
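The abstract refers to velocity-based identification (I-VT), the simplest of the fixation identification algorithms the paper compares. Purely as an illustration of this preprocessing step (not the authors' implementation), a minimal I-VT pass might look like the sketch below; the coordinate units, timestamps, and the velocity_threshold value are assumptions, and such thresholds are exactly the hand-tuned parameters the abstract says scientists spend much time adjusting.

import numpy as np

def ivt_fixations(x, y, t, velocity_threshold=50.0):
    # Minimal I-VT sketch: label each gaze sample as fixation or saccade by
    # point-to-point velocity, then collapse consecutive fixation samples
    # into fixation centroids. Assumes x, y in pixels and t in seconds,
    # strictly increasing; the threshold (pixels/second) is a placeholder.
    x, y, t = map(np.asarray, (x, y, t))
    v = np.hypot(np.diff(x), np.diff(y)) / np.diff(t)  # sample-to-sample speed
    is_fix = np.concatenate([[True], v < velocity_threshold])

    fixations = []  # (centroid_x, centroid_y, start_time, duration)
    start = None
    for i, f in enumerate(is_fix):
        if f and start is None:
            start = i                      # fixation group begins
        elif not f and start is not None:
            fixations.append((x[start:i].mean(), y[start:i].mean(),
                              t[start], t[i - 1] - t[start]))
            start = None                   # saccade sample ends the group
    if start is not None:                  # flush a trailing fixation group
        fixations.append((x[start:].mean(), y[start:].mean(),
                          t[start], t[-1] - t[start]))
    return fixations

The dispersion-based (I-DT), density-based, and I-VDT variants the paper evaluates replace or combine this velocity test with spatial-dispersion and duration criteria, which is why the resulting abstractions, and the visualizations built on them, can differ so dramatically.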


Bibliographic Details
Main Authors: Yoo, Sangbong, Jeong, Seongmin, Jang, Yun
Format: Online Article Text
Language: English
Published: MDPI 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8309511/
https://www.ncbi.nlm.nih.gov/pubmed/34300425
http://dx.doi.org/10.3390/s21144686
_version_ 1783728538801668096
author Yoo, Sangbong
Jeong, Seongmin
Jang, Yun
author_facet Yoo, Sangbong
Jeong, Seongmin
Jang, Yun
author_sort Yoo, Sangbong
collection PubMed
description Many gaze data visualization techniques intuitively show eye movement together with visual stimuli. The eye tracker records a large number of eye movements within a short period. Therefore, visualizing raw gaze data with the visual stimulus appears complicated and obscured, making it difficult to gain insight through visualization. To avoid this complication, we often employ fixation identification algorithms for more abstract visualizations. In the past, many scientists have focused on gaze data abstraction with the attention map and analyzed detailed gaze movement patterns with scanpath visualization. Abstract eye movement patterns change dramatically depending on the fixation identification algorithm used in preprocessing. However, it is difficult to determine how fixation identification algorithms affect gaze movement pattern visualizations. Additionally, scientists often spend much time manually adjusting the parameters of the fixation identification algorithms. In this paper, we propose a gaze-behavior-based data processing method for abstract gaze data visualization. The proposed method classifies raw gaze data using machine learning models for image classification, such as CNN, AlexNet, and LeNet. Additionally, we compare velocity-based identification (I-VT), dispersion-based identification (I-DT), density-based fixation identification, velocity- and dispersion-based identification (I-VDT), and machine-learning-based, behavior-based models on various visualizations at each abstraction level, such as the attention map, scanpath, and abstract gaze movement visualization.
format Online
Article
Text
id pubmed-8309511
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-8309511 2021-07-25 Gaze Behavior Effect on Gaze Data Visualization at Different Abstraction Levels Yoo, Sangbong Jeong, Seongmin Jang, Yun Sensors (Basel) Article Many gaze data visualization techniques intuitively show eye movement together with visual stimuli. The eye tracker records a large number of eye movements within a short period. Therefore, visualizing raw gaze data with the visual stimulus appears complicated and obscured, making it difficult to gain insight through visualization. To avoid this complication, we often employ fixation identification algorithms for more abstract visualizations. In the past, many scientists have focused on gaze data abstraction with the attention map and analyzed detailed gaze movement patterns with scanpath visualization. Abstract eye movement patterns change dramatically depending on the fixation identification algorithm used in preprocessing. However, it is difficult to determine how fixation identification algorithms affect gaze movement pattern visualizations. Additionally, scientists often spend much time manually adjusting the parameters of the fixation identification algorithms. In this paper, we propose a gaze-behavior-based data processing method for abstract gaze data visualization. The proposed method classifies raw gaze data using machine learning models for image classification, such as CNN, AlexNet, and LeNet. Additionally, we compare velocity-based identification (I-VT), dispersion-based identification (I-DT), density-based fixation identification, velocity- and dispersion-based identification (I-VDT), and machine-learning-based, behavior-based models on various visualizations at each abstraction level, such as the attention map, scanpath, and abstract gaze movement visualization. MDPI 2021-07-08 /pmc/articles/PMC8309511/ /pubmed/34300425 http://dx.doi.org/10.3390/s21144686 Text en © 2021 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Yoo, Sangbong
Jeong, Seongmin
Jang, Yun
Gaze Behavior Effect on Gaze Data Visualization at Different Abstraction Levels
title Gaze Behavior Effect on Gaze Data Visualization at Different Abstraction Levels
title_full Gaze Behavior Effect on Gaze Data Visualization at Different Abstraction Levels
title_fullStr Gaze Behavior Effect on Gaze Data Visualization at Different Abstraction Levels
title_full_unstemmed Gaze Behavior Effect on Gaze Data Visualization at Different Abstraction Levels
title_short Gaze Behavior Effect on Gaze Data Visualization at Different Abstraction Levels
title_sort gaze behavior effect on gaze data visualization at different abstraction levels
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8309511/
https://www.ncbi.nlm.nih.gov/pubmed/34300425
http://dx.doi.org/10.3390/s21144686
work_keys_str_mv AT yoosangbong gazebehavioreffectongazedatavisualizationatdifferentabstractionlevels
AT jeongseongmin gazebehavioreffectongazedatavisualizationatdifferentabstractionlevels
AT jangyun gazebehavioreffectongazedatavisualizationatdifferentabstractionlevels