Two hours in Hollywood: A manually annotated ground truth data set of eye movements during movie clip watching
In this short article we present our manual annotation of the eye movement events in a subset of the large-scale eye tracking data set Hollywood2. Our labels include fixations, saccades, and smooth pursuits, as well as a noise event type (the latter representing either blinks, loss of tracking, or physically implausible signals). In order to achieve more consistent annotations, the gaze samples were labelled by a novice rater based on rudimentary algorithmic suggestions, and subsequently corrected by an expert rater. Overall, we annotated eye movement events in the recordings corresponding to 50 randomly selected test set clips and 6 training set clips from Hollywood2, which were viewed by 16 observers and amount to a total of approximately 130 minutes of gaze data. In these labels, 62.4% of the samples were attributed to fixations, 9.1% – to saccades, and, notably, 24.2% – to pursuit (the remainder marked as noise). After evaluation of 15 published eye movement classification algorithms on our newly collected annotated data set, we found that the most recent algorithms perform very well on average, and even reach human-level labelling quality for fixations and saccades, but all have a much larger room for improvement when it comes to smooth pursuit classification. The data set is made available at https://gin.g-node.org/ioannis.agtzidis/hollywood2_em.
Main Authors: | Agtzidis, Ioannis; Startsev, Mikhail; Dorr, Michael
Format: | Online Article Text
Language: | English
Published: | Bern Open Publishing, 2020
Subjects: | Research Article
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8005322/ https://www.ncbi.nlm.nih.gov/pubmed/33828806 http://dx.doi.org/10.16910/jemr.13.4.5
_version_ | 1783672103572078592 |
author | Agtzidis, Ioannis; Startsev, Mikhail; Dorr, Michael
author_facet | Agtzidis, Ioannis; Startsev, Mikhail; Dorr, Michael
author_sort | Agtzidis, Ioannis |
collection | PubMed |
description | In this short article we present our manual annotation of the eye movement events in a subset of the large-scale eye tracking data set Hollywood2. Our labels include fixations, saccades, and smooth pursuits, as well as a noise event type (the latter representing either blinks, loss of tracking, or physically implausible signals). In order to achieve more consistent annotations, the gaze samples were labelled by a novice rater based on rudimentary algorithmic suggestions, and subsequently corrected by an expert rater. Overall, we annotated eye movement events in the recordings corresponding to 50 randomly selected test set clips and 6 training set clips from Hollywood2, which were viewed by 16 observers and amount to a total of approximately 130 minutes of gaze data. In these labels, 62.4% of the samples were attributed to fixations, 9.1% – to saccades, and, notably, 24.2% – to pursuit (the remainder marked as noise). After evaluation of 15 published eye movement classification algorithms on our newly collected annotated data set, we found that the most recent algorithms perform very well on average, and even reach human-level labelling quality for fixations and saccades, but all have a much larger room for improvement when it comes to smooth pursuit classification. The data set is made available at https://gin.g-node.org/ioannis.agtzidis/hollywood2_em. |
format | Online Article Text |
id | pubmed-8005322 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2020 |
publisher | Bern Open Publishing |
record_format | MEDLINE/PubMed |
spelling | pubmed-8005322 2021-04-06 Two hours in Hollywood: A manually annotated ground truth data set of eye movements during movie clip watching. Agtzidis, Ioannis; Startsev, Mikhail; Dorr, Michael. J Eye Mov Res, Research Article. Bern Open Publishing, 2020-07-27. /pmc/articles/PMC8005322/ /pubmed/33828806 http://dx.doi.org/10.16910/jemr.13.4.5 Text en. This work is licensed under a Creative Commons Attribution 4.0 International License ( https://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use and redistribution provided that the original author and source are credited.
spellingShingle | Research Article; Agtzidis, Ioannis; Startsev, Mikhail; Dorr, Michael; Two hours in Hollywood: A manually annotated ground truth data set of eye movements during movie clip watching
title | Two hours in Hollywood: A manually annotated ground truth data set of eye movements during movie clip watching |
title_full | Two hours in Hollywood: A manually annotated ground truth data set of eye movements during movie clip watching |
title_fullStr | Two hours in Hollywood: A manually annotated ground truth data set of eye movements during movie clip watching |
title_full_unstemmed | Two hours in Hollywood: A manually annotated ground truth data set of eye movements during movie clip watching |
title_short | Two hours in Hollywood: A manually annotated ground truth data set of eye movements during movie clip watching |
title_sort | two hours in hollywood: a manually annotated ground truth data set of eye movements during movie clip watching |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8005322/ https://www.ncbi.nlm.nih.gov/pubmed/33828806 http://dx.doi.org/10.16910/jemr.13.4.5 |
work_keys_str_mv | AT agtzidisioannis twohoursinhollywoodamanuallyannotatedgroundtruthdatasetofeyemovementsduringmovieclipwatching AT startsevmikhail twohoursinhollywoodamanuallyannotatedgroundtruthdatasetofeyemovementsduringmovieclipwatching AT dorrmichael twohoursinhollywoodamanuallyannotatedgroundtruthdatasetofeyemovementsduringmovieclipwatching |
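The description above reports per-class shares of the annotated gaze samples (62.4% fixation, 9.1% saccade, 24.2% pursuit, remainder noise). As an illustration only, below is a minimal Python sketch of how such proportions could be recomputed from per-sample label files. The directory name, file layout, and label strings are assumptions made for this sketch and are not taken from the published data set; its actual format is documented in the repository at https://gin.g-node.org/ioannis.agtzidis/hollywood2_em.

```python
# Minimal sketch, not the data set's documented interface: assumes a
# hypothetical directory of plain-text files with one event label per
# gaze sample, and recomputes per-class sample proportions.
from collections import Counter
from pathlib import Path


def label_proportions(label_dir: str) -> dict:
    """Count per-sample event labels across all recordings in label_dir
    and return each label's share of the total number of gaze samples."""
    counts = Counter()
    for path in Path(label_dir).glob("*.txt"):  # one file per observer/clip (assumed layout)
        with path.open() as f:
            for line in f:
                label = line.strip()
                if label:
                    counts[label] += 1
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()} if total else {}


if __name__ == "__main__":
    # "hollywood2_em_labels" is a placeholder path, not part of the published data set.
    for label, share in sorted(label_proportions("hollywood2_em_labels").items()):
        print(f"{label}: {share:.1%}")
```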