
iCatcher+: Robust and Automated Annotation of Infants’ and Young Children’s Gaze Behavior From Videos Collected in Laboratory, Field, and Online Studies

Technological advances in psychological research have enabled large-scale studies of human behavior and streamlined pipelines for automatic processing of data. However, studies of infants and children have not fully reaped these benefits because the behaviors of interest, such as gaze duration and direction, still have to be extracted from video through a laborious process of manual annotation, even when these data are collected online. Recent advances in computer vision raise the possibility of automated annotation of these video data. In this article, we built on a system for automatic gaze annotation in young children, iCatcher, by engineering improvements and then training and testing the system (referred to hereafter as iCatcher+) on three data sets with substantial video and participant variability (214 videos collected in U.S. lab and field sites, 143 videos collected in Senegal field sites, and 265 videos collected via webcams in homes; participant age range = 4 months–3.5 years). When trained on each of these data sets, iCatcher+ performed with near human-level accuracy on held-out videos on distinguishing “LEFT” versus “RIGHT” and “ON” versus “OFF” looking behavior across all data sets. This high performance was achieved at the level of individual frames, experimental trials, and study videos; held across participant demographics (e.g., age, race/ethnicity), participant behavior (e.g., movement, head position), and video characteristics (e.g., luminance); and generalized to a fourth, entirely held-out online data set. We close by discussing next steps required to fully automate the life cycle of online infant and child behavioral studies, representing a key step toward enabling robust and high-throughput developmental research.

Bibliographic Details
Main Authors: Erel, Yotam; Shannon, Katherine Adams; Chu, Junyi; Scott, Kim; Struhl, Melissa Kline; Cao, Peng; Tan, Xincheng; Hart, Peter; Raz, Gal; Piccolo, Sabrina; Mei, Catherine; Potter, Christine; Jaffe-Dax, Sagi; Lew-Williams, Casey; Tenenbaum, Joshua; Fairchild, Katherine; Bermano, Amit; Liu, Shari
Format: Online Article Text
Language: English
Published: 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10471135/
https://www.ncbi.nlm.nih.gov/pubmed/37655047
http://dx.doi.org/10.1177/25152459221147250
collection PubMed
description Technological advances in psychological research have enabled large-scale studies of human behavior and streamlined pipelines for automatic processing of data. However, studies of infants and children have not fully reaped these benefits because the behaviors of interest, such as gaze duration and direction, still have to be extracted from video through a laborious process of manual annotation, even when these data are collected online. Recent advances in computer vision raise the possibility of automated annotation of these video data. In this article, we built on a system for automatic gaze annotation in young children, iCatcher, by engineering improvements and then training and testing the system (referred to hereafter as iCatcher+) on three data sets with substantial video and participant variability (214 videos collected in U.S. lab and field sites, 143 videos collected in Senegal field sites, and 265 videos collected via webcams in homes; participant age range = 4 months–3.5 years). When trained on each of these data sets, iCatcher+ performed with near human-level accuracy on held-out videos on distinguishing “LEFT” versus “RIGHT” and “ON” versus “OFF” looking behavior across all data sets. This high performance was achieved at the level of individual frames, experimental trials, and study videos; held across participant demographics (e.g., age, race/ethnicity), participant behavior (e.g., movement, head position), and video characteristics (e.g., luminance); and generalized to a fourth, entirely held-out online data set. We close by discussing next steps required to fully automate the life cycle of online infant and child behavioral studies, representing a key step toward enabling robust and high-throughput developmental research.
format Online Article Text
id pubmed-10471135
institution National Center for Biotechnology Information
language English
publishDate 2023
record_format MEDLINE/PubMed
spelling pubmed-10471135 2023-08-31
Journal: Adv Methods Pract Psychol Sci
Published: 2023; online 2023-04-18
/pmc/articles/PMC10471135/ /pubmed/37655047 http://dx.doi.org/10.1177/25152459221147250
License: Creative Commons Attribution-NonCommercial 4.0 (CC BY-NC, https://creativecommons.org/licenses/by-nc/4.0/). This article is distributed under the terms of the CC BY-NC 4.0 License, which permits noncommercial use, reproduction, and distribution of the work without further permission provided the original work is attributed as specified on the SAGE and Open Access pages (https://us.sagepub.com/en-us/nam/open-access-at-sage).
topic Article