Eye-tracking glasses in face-to-face interactions: Manual versus automated assessment of areas-of-interest
The assessment of gaze behaviour is essential for understanding the psychology of communication. Mobile eye-tracking glasses are useful to measure gaze behaviour during dynamic interactions. Eye-tracking data can be analysed using manually annotated areas-of-interest. Computer vision algorithms may alternatively be used to reduce not only the manual effort, but also the subjectivity and complexity of these analyses. Using additional re-identification (Re-ID) algorithms, the different participants in the interaction can be distinguished. The aim of this study was to compare the results of manual annotation of mobile eye-tracking data with the results of a computer vision algorithm. We selected the first minute of seven randomly selected eye-tracking videos of consultations between physicians and patients at a Dutch internal medicine outpatient clinic. Three human annotators and a computer vision algorithm annotated the mobile eye-tracking data, after which interrater reliability was assessed between the areas-of-interest annotated by the human annotators and those annotated by the algorithm. Additionally, we explored interrater reliability when using lengthy videos and different area-of-interest shapes. In total, we analysed more than 65 min of eye-tracking video, both manually and with the algorithm. Overall, the absolute normalized difference between the manual and algorithm annotations of face-gaze was less than 2%. Our results show high interrater agreement between the human annotators and the algorithm, with Cohen’s kappa ranging from 0.85 to 0.98. We conclude that computer vision algorithms produce results comparable to those of human annotators, and, because analyses by the algorithm are not subject to annotator fatigue or subjectivity, they can advance eye-tracking analyses.
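To make the reported agreement measures concrete, the sketch below computes Cohen’s kappa and the absolute normalized difference of face-gaze for two frame-by-frame annotation sequences. It is a minimal illustration, not the authors’ pipeline: the binary face/no-face coding per video frame, the rectangular face-box hit-test, and all data are assumptions made for the example (the study also compared other area-of-interest shapes, and its Re-ID step is omitted here).

```python
# Minimal sketch (hypothetical data): frame-by-frame agreement between a
# human annotator and a computer-vision algorithm, each labelling every
# video frame as face-gaze (1) or not (0).
from typing import Sequence

def gaze_in_aoi(gx: float, gy: float,
                box: tuple[float, float, float, float]) -> bool:
    """Assumed AOI test: gaze point inside a rectangular face box (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return x1 <= gx <= x2 and y1 <= gy <= y2

def cohens_kappa(a: Sequence[int], b: Sequence[int]) -> float:
    """Cohen's kappa for two equal-length binary label sequences."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n   # observed agreement
    pa, pb = sum(a) / n, sum(b) / n               # each rater's "face" rate
    p_e = pa * pb + (1 - pa) * (1 - pb)           # agreement expected by chance
    return 1.0 if p_e == 1.0 else (p_o - p_e) / (1 - p_e)

def abs_norm_diff(a: Sequence[int], b: Sequence[int]) -> float:
    """Absolute difference in the proportion of face-gaze frames."""
    return abs(sum(a) / len(a) - sum(b) / len(b))

# Toy labels for ten frames; real input would come from annotation files.
human = [1, 1, 0, 1, 0, 0, 1, 1, 1, 0]
algo  = [1, 1, 0, 1, 0, 1, 1, 1, 1, 0]
print(gaze_in_aoi(0.52, 0.40, (0.30, 0.10, 0.70, 0.55)))  # True: gaze in box
print(f"kappa = {cohens_kappa(human, algo):.2f}")          # agreement beyond chance
print(f"|diff| = {abs_norm_diff(human, algo):.1%}")        # face-gaze share difference
```

On this toy data the two sequences disagree on one of ten frames, giving a kappa of about 0.78 and a face-gaze difference of 10%; the paper’s kappas of 0.85–0.98 and sub-2% differences indicate considerably closer agreement over full recordings.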
Main authors: | Jongerius, Chiara; Callemein, T.; Goedemé, T.; Van Beeck, K.; Romijn, J. A.; Smets, E. M. A.; Hillen, M. A. |
Format: | Online Article Text |
Language: | English |
Published: | Springer US, 2021 |
Subjects: | Article |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8516759/ https://www.ncbi.nlm.nih.gov/pubmed/33742418 http://dx.doi.org/10.3758/s13428-021-01544-2 |
collection | PubMed |
id | pubmed-8516759 |
institution | National Center for Biotechnology Information |
record_format | MEDLINE/PubMed |
spelling | Behav Res Methods, Article. Published online 2021-03-19 by Springer US; made available in PMC 2021-10-29. © The Author(s) 2021. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, provided appropriate credit is given to the original author(s) and the source, a link to the licence is provided, and any changes are indicated; see https://creativecommons.org/licenses/by/4.0/.