
A model of face selection in viewing video stories

When typical adults watch TV programs, they show surprisingly stereotyped gaze behaviours, as indicated by the almost simultaneous shifts of their gazes from one face to another. However, a standard saliency model based on low-level physical features alone failed to explain such typical gaze behaviours. To find rules that explain the typical gaze behaviours, we examined temporo-spatial gaze patterns in adults while they viewed video clips with human characters that were played with or without sound, and in the forward or reverse direction. We show the following: 1) the “peak” face scanpath, which followed the face that attracted the largest number of views but ignored other objects in the scene, still retained the key features of actual scanpaths; 2) gaze behaviours remained unchanged whether or not sound was provided; 3) gaze behaviours were sensitive to time reversal; and 4) nearly 60% of the variance in gaze behaviours was explained by face saliency, defined as a function of the face's size, novelty, head movements, and mouth movements. These results suggest that humans share a face-oriented network that integrates several visual features of multiple faces and directs our eyes to the most salient face at each moment.
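The record itself contains no code, but the abstract describes a simple selection rule: assign each visible face a saliency score built from its size, novelty, head movements, and mouth movements, then direct gaze to the most salient face at each moment. The sketch below only illustrates that idea; the linear combination, the weight values, and every name in it (FaceFeatures, face_saliency, predicted_gaze_target) are assumptions made for the example, not taken from the paper.

```python
# Illustrative sketch of a face-selection rule of the kind the abstract describes.
# The functional form (a weighted sum) and the weights are assumptions, not the
# authors' fitted model.

from dataclasses import dataclass

@dataclass
class FaceFeatures:
    size: float          # normalized on-screen face area (0..1)
    novelty: float       # 1.0 when the face has just appeared, decaying toward 0
    head_motion: float   # normalized head-movement magnitude (0..1)
    mouth_motion: float  # normalized mouth-movement magnitude (0..1)

# Hypothetical weights; the paper estimates how much each feature actually contributes.
WEIGHTS = {"size": 0.4, "novelty": 0.3, "head_motion": 0.15, "mouth_motion": 0.15}

def face_saliency(f: FaceFeatures) -> float:
    """Combine the four features into a single saliency score."""
    return (WEIGHTS["size"] * f.size
            + WEIGHTS["novelty"] * f.novelty
            + WEIGHTS["head_motion"] * f.head_motion
            + WEIGHTS["mouth_motion"] * f.mouth_motion)

def predicted_gaze_target(faces: dict[str, FaceFeatures]) -> str:
    """Return the face predicted to attract gaze: the most salient one in the frame."""
    return max(faces, key=lambda name: face_saliency(faces[name]))

# Example frame with two faces: a large, talking face versus a newly appeared face.
frame = {
    "speaker": FaceFeatures(size=0.6, novelty=0.0, head_motion=0.2, mouth_motion=0.8),
    "newcomer": FaceFeatures(size=0.3, novelty=1.0, head_motion=0.1, mouth_motion=0.0),
}
print(predicted_gaze_target(frame))  # -> "newcomer": with these illustrative weights, novelty outweighs the larger, talking face
```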


Bibliographic Details
Main Authors: Suda, Yuki; Kitazawa, Shigeru
Format: Online Article Text
Language: English
Published: Nature Publishing Group, 2015
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4297980/
https://www.ncbi.nlm.nih.gov/pubmed/25597621
http://dx.doi.org/10.1038/srep07666
author Suda, Yuki
Kitazawa, Shigeru
collection PubMed
description When typical adults watch TV programs, they show surprisingly stereotyped gaze behaviours, as indicated by the almost simultaneous shifts of their gazes from one face to another. However, a standard saliency model based on low-level physical features alone failed to explain such typical gaze behaviours. To find rules that explain the typical gaze behaviours, we examined temporo-spatial gaze patterns in adults while they viewed video clips with human characters that were played with or without sound, and in the forward or reverse direction. We show the following: 1) the “peak” face scanpath, which followed the face that attracted the largest number of views but ignored other objects in the scene, still retained the key features of actual scanpaths; 2) gaze behaviours remained unchanged whether or not sound was provided; 3) gaze behaviours were sensitive to time reversal; and 4) nearly 60% of the variance in gaze behaviours was explained by face saliency, defined as a function of the face's size, novelty, head movements, and mouth movements. These results suggest that humans share a face-oriented network that integrates several visual features of multiple faces and directs our eyes to the most salient face at each moment.
format Online
Article
Text
id pubmed-4297980
institution National Center for Biotechnology Information
language English
publishDate 2015
publisher Nature Publishing Group
record_format MEDLINE/PubMed
spelling pubmed-4297980 2015-01-26
A model of face selection in viewing video stories
Suda, Yuki; Kitazawa, Shigeru
Sci Rep, Article
Nature Publishing Group 2015-01-19
/pmc/articles/PMC4297980/
/pubmed/25597621
http://dx.doi.org/10.1038/srep07666
Text en
Copyright © 2015, Macmillan Publishers Limited. All rights reserved. This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (http://creativecommons.org/licenses/by-nc-nd/4.0/). The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in the credit line; if the material is not included under the Creative Commons license, users will need to obtain permission from the license holder in order to reproduce the material. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/4.0/
title A model of face selection in viewing video stories
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4297980/
https://www.ncbi.nlm.nih.gov/pubmed/25597621
http://dx.doi.org/10.1038/srep07666