GraFIX: A semiautomatic approach for parsing low- and high-quality eye-tracking data

Fixation durations (FD) have been used widely as a measurement of information processing and attention. However, issues like data quality can seriously influence the accuracy of the fixation detection methods and, thus, affect the validity of our results (Holmqvist, Nyström, & Mulvey, 2012). This is crucial when studying special populations such as infants, where common issues with testing (e.g., high degree of movement, unreliable eye detection, low spatial precision) result in highly variable data quality and render existing FD detection approaches highly time consuming (hand-coding) or imprecise (automatic detection). To address this problem, we present GraFIX, a novel semiautomatic method consisting of a two-step process in which eye-tracking data is initially parsed by using velocity-based algorithms whose input parameters are adapted by the user and then manipulated using the graphical interface, allowing accurate and rapid adjustments of the algorithms’ outcome. The present algorithms (1) smooth the raw data, (2) interpolate missing data points, and (3) apply a number of criteria to automatically evaluate and remove artifactual fixations. The input parameters (e.g., velocity threshold, interpolation latency) can be easily manually adapted to fit each participant. Furthermore, the present application includes visualization tools that facilitate the manual coding of fixations. We assessed this method by performing an intercoder reliability analysis in two groups of infants presenting low- and high-quality data and compared it with previous methods. Results revealed that our two-step approach with adaptable FD detection criteria gives rise to more reliable and stable measures in low- and high-quality data.
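
The abstract describes a pipeline built on velocity-based fixation detection with user-adjustable parameters (e.g., velocity threshold, interpolation latency). The sketch below illustrates that general class of algorithm only; it is not the GraFIX implementation, and the function name detect_fixations as well as all parameter names and default values are assumptions chosen for demonstration.

```python
# Illustrative sketch of velocity-threshold fixation parsing with the three steps
# named in the abstract: gap interpolation, smoothing, and removal of artifactual
# (too-short) fixations. Not the GraFIX source; names and defaults are assumptions.
import numpy as np

def detect_fixations(x, y, sample_rate_hz,
                     velocity_threshold=30.0,   # assumed units: screen units per second
                     max_gap_ms=75.0,           # longest missing-data run to interpolate (assumed)
                     min_fixation_ms=100.0,     # shortest fixation kept (assumed)
                     smooth_window=5):          # moving-average window in samples (assumed)
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    dt_ms = 1000.0 / sample_rate_hz

    def fill_short_gaps(v):
        # Linearly interpolate runs of missing samples (NaN) no longer than max_gap_ms.
        v = v.copy()
        missing = np.isnan(v)
        if missing.all() or not missing.any():
            return v
        idx = np.arange(v.size)
        filled = np.interp(idx, idx[~missing], v[~missing])
        start = None
        for i in range(v.size + 1):
            if i < v.size and missing[i]:
                if start is None:
                    start = i
            elif start is not None:
                if (i - start) * dt_ms <= max_gap_ms:
                    v[start:i] = filled[start:i]
                start = None
        return v

    # 1) Interpolate short gaps in each coordinate.
    x, y = fill_short_gaps(x), fill_short_gaps(y)

    # 2) Smooth the signal with a simple moving average.
    kernel = np.ones(smooth_window) / smooth_window
    xs = np.convolve(x, kernel, mode="same")
    ys = np.convolve(y, kernel, mode="same")

    # 3) Sample-to-sample velocity; samples slower than the threshold count as fixation samples.
    vel = np.hypot(np.diff(xs), np.diff(ys)) * sample_rate_hz
    is_fix = np.concatenate(([False], vel < velocity_threshold)) & ~np.isnan(xs)

    # 4) Group consecutive fixation samples and drop artifactually short fixations.
    fixations, start = [], None
    for i, flag in enumerate(np.append(is_fix, False)):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if (i - start) * dt_ms >= min_fixation_ms:
                fixations.append((start * dt_ms, i * dt_ms))  # (onset, offset) in ms
            start = None
    return fixations

# Example with hypothetical data: detect_fixations(gaze_x, gaze_y, sample_rate_hz=120.0)
```

In GraFIX itself these parameters are adjusted per participant through the graphical interface, and the algorithmic output can then be corrected by hand, rather than being fixed in code as above.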

Bibliographic Details
Main authors: Saez de Urabain, Irati R., Johnson, Mark H., Smith, Tim J.
Format: Online Article Text
Language: English
Journal: Behav Res Methods
Published: Springer US, 2014
Subjects: Article
Online access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4333362/
https://www.ncbi.nlm.nih.gov/pubmed/24671827
http://dx.doi.org/10.3758/s13428-014-0456-0
License: © The Author(s) 2014. Open Access. This article is distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.