
Predicting the Valence of a Scene from Observers’ Eye Movements

Multimedia analysis benefits from understanding the emotional content of a scene in a variety of tasks such as video genre classification and content-based image retrieval. Recently, there has been an increasing interest in applying human bio-signals, particularly eye movements, to recognize the emotional gist of a scene such as its valence. In order to determine the emotional category of images using eye movements, the existing methods often learn a classifier using several features that are extracted from eye movements. Although it has been shown that eye movement is potentially useful for recognition of scene valence, the contribution of each feature is not well-studied. To address the issue, we study the contribution of features extracted from eye movements in the classification of images into pleasant, neutral, and unpleasant categories. We assess ten features and their fusion. The features are histogram of saccade orientation, histogram of saccade slope, histogram of saccade length, histogram of saccade duration, histogram of saccade velocity, histogram of fixation duration, fixation histogram, top-ten salient coordinates, and saliency map. We utilize a machine learning approach to analyze the performance of features by learning a support vector machine and exploiting various feature fusion schemes. The experiments reveal that ‘saliency map’, ‘fixation histogram’, ‘histogram of fixation duration’, and ‘histogram of saccade slope’ are the most contributing features. The selected features signify the influence of fixation information and angular behavior of eye movements in the recognition of the valence of images.

Bibliographic Details
Main Authors: R.-Tavakoli, Hamed, Atyabi, Adham, Rantanen, Antti, Laukka, Seppo J., Nefti-Meziani, Samia, Heikkilä, Janne
Format: Online Article Text
Language: English
Published: Public Library of Science 2015
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4583411/
https://www.ncbi.nlm.nih.gov/pubmed/26407322
http://dx.doi.org/10.1371/journal.pone.0138198
author R.-Tavakoli, Hamed
Atyabi, Adham
Rantanen, Antti
Laukka, Seppo J.
Nefti-Meziani, Samia
Heikkilä, Janne
collection PubMed
description Multimedia analysis benefits from understanding the emotional content of a scene in a variety of tasks such as video genre classification and content-based image retrieval. Recently, there has been an increasing interest in applying human bio-signals, particularly eye movements, to recognize the emotional gist of a scene such as its valence. In order to determine the emotional category of images using eye movements, the existing methods often learn a classifier using several features that are extracted from eye movements. Although it has been shown that eye movement is potentially useful for recognition of scene valence, the contribution of each feature is not well-studied. To address the issue, we study the contribution of features extracted from eye movements in the classification of images into pleasant, neutral, and unpleasant categories. We assess ten features and their fusion. The features are histogram of saccade orientation, histogram of saccade slope, histogram of saccade length, histogram of saccade duration, histogram of saccade velocity, histogram of fixation duration, fixation histogram, top-ten salient coordinates, and saliency map. We utilize a machine learning approach to analyze the performance of features by learning a support vector machine and exploiting various feature fusion schemes. The experiments reveal that ‘saliency map’, ‘fixation histogram’, ‘histogram of fixation duration’, and ‘histogram of saccade slope’ are the most contributing features. The selected features signify the influence of fixation information and angular behavior of eye movements in the recognition of the valence of images.
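To make the pipeline described in the abstract concrete, the sketch below builds two of the named feature types (histogram of saccade orientation and histogram of fixation duration) from a scanpath, fuses them by concatenation, and trains a support vector machine. This is an illustrative reconstruction on synthetic data, not the authors' implementation: the helper names, bin counts, and normalization are assumptions, and only two of the paper's ten features are shown.

```python
import numpy as np
from sklearn.svm import SVC

def saccade_orientation_hist(fixations, bins=8):
    """Histogram of saccade orientations; saccades are approximated
    as vectors between consecutive fixation coordinates."""
    dx = np.diff(fixations[:, 0])
    dy = np.diff(fixations[:, 1])
    angles = np.arctan2(dy, dx)  # radians in [-pi, pi]
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi))
    return hist / max(hist.sum(), 1)  # normalize to a distribution

def fixation_duration_hist(durations_ms, bins=8, max_ms=1000):
    """Histogram of fixation durations, clipped to a fixed range."""
    hist, _ = np.histogram(durations_ms, bins=bins, range=(0, max_ms))
    return hist / max(hist.sum(), 1)

def feature_vector(fixations, durations_ms):
    """Early fusion: concatenate the per-feature histograms."""
    return np.concatenate([saccade_orientation_hist(fixations),
                           fixation_duration_hist(durations_ms)])

# Synthetic scanpaths for three valence classes (0 = pleasant,
# 1 = neutral, 2 = unpleasant) -- purely illustrative labels.
rng = np.random.default_rng(0)
X, y = [], []
for label in (0, 1, 2):
    for _ in range(10):
        fix = rng.uniform(0, 800, size=(15, 2))          # (x, y) fixations
        dur = rng.uniform(100, 600 + 200 * label, size=15)  # durations in ms
        X.append(feature_vector(fix, dur))
        y.append(label)

clf = SVC(kernel="rbf").fit(np.array(X), np.array(y))
pred = clf.predict(np.array(X[:3]))  # predicted valence class per scanpath
```

The paper's fusion experiments could be reproduced in this frame by adding the remaining histograms (saccade slope, length, duration, velocity, and the fixation/saliency features) to `feature_vector` and comparing classifiers trained on each subset.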
format Online
Article
Text
id pubmed-4583411
institution National Center for Biotechnology Information
language English
publishDate 2015
publisher Public Library of Science
record_format MEDLINE/PubMed
spelling pubmed-4583411 2015-10-02 Predicting the Valence of a Scene from Observers’ Eye Movements. PLoS One, Research Article.
Public Library of Science 2015-09-25 /pmc/articles/PMC4583411/ /pubmed/26407322 http://dx.doi.org/10.1371/journal.pone.0138198 Text en © 2015 R.-Tavakoli et al http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are properly credited.
title Predicting the Valence of a Scene from Observers’ Eye Movements
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4583411/
https://www.ncbi.nlm.nih.gov/pubmed/26407322
http://dx.doi.org/10.1371/journal.pone.0138198