CorrNet: Fine-Grained Emotion Recognition for Video Watching Using Wearable Physiological Sensors
Recognizing user emotions while they watch short-form videos anytime and anywhere is essential for facilitating video content customization and personalization. However, most works either classify a single emotion per video stimulus, or are restricted to static, desktop environments. To address this, we propose a correlation-based emotion recognition algorithm (CorrNet) to recognize the valence and arousal (V-A) of each instance (fine-grained segment of signals) using only wearable, physiological signals (e.g., electrodermal activity, heart rate). CorrNet takes advantage of features both inside each instance (intra-modality features) and between different instances for the same video stimulus (correlation-based features). We first test our approach on an indoor-desktop affect dataset (CASE), and thereafter on an outdoor-mobile affect dataset (MERCA), which we collected using a smart wristband and a wearable eye tracker. Results show that for subject-independent binary classification (high-low), CorrNet yields promising recognition accuracies: [Formula: see text] and [Formula: see text] for V-A on CASE, and [Formula: see text] and [Formula: see text] for V-A on MERCA. Our findings show that: (1) instance segment lengths between 1–4 s result in the highest recognition accuracies; (2) accuracies between laboratory-grade and wearable sensors are comparable, even under low sampling rates (≤64 Hz); and (3) large amounts of neutral V-A labels, an artifact of continuous affect annotation, result in varied recognition performance.
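The abstract's core idea, segmenting each physiological signal into fixed-length instances, extracting features inside each instance, and correlating instances recorded for the same video stimulus, can be illustrated with a short sketch. The code below is a minimal illustration under stated assumptions (a 2 s segment length, simple statistical features as stand-ins, NumPy, synthetic data); the function names and feature set are hypothetical and not the authors' CorrNet implementation.

```python
# A minimal sketch (NOT the authors' implementation) of the two feature
# families the abstract describes: features computed inside each fixed-length
# signal instance ("intra-modality") and correlations between instances of
# the same video stimulus ("correlation-based"). All names, the 2 s segment
# length, and the statistical features are illustrative assumptions.
import numpy as np

def segment_into_instances(signal: np.ndarray, fs: int, seconds: float = 2.0) -> np.ndarray:
    """Split a 1-D physiological signal (e.g., EDA sampled at fs Hz) into
    non-overlapping instances of `seconds` length; trailing samples dropped."""
    n = int(fs * seconds)
    k = len(signal) // n
    return signal[: k * n].reshape(k, n)

def intra_modality_features(instances: np.ndarray) -> np.ndarray:
    """Simple per-instance statistics; stand-ins for whatever features
    are actually extracted inside each instance."""
    return np.stack(
        [instances.mean(axis=1), instances.std(axis=1),
         instances.min(axis=1), instances.max(axis=1)], axis=1)

def correlation_features(instances: np.ndarray) -> np.ndarray:
    """Pearson correlation of each instance with every other instance of the
    same stimulus; row i summarizes how instance i relates to the rest."""
    return np.corrcoef(instances)  # shape: (k, k)

if __name__ == "__main__":
    fs = 64  # wearable-grade sampling rate noted in the abstract (<= 64 Hz)
    rng = np.random.default_rng(0)
    eda = rng.standard_normal(fs * 60)           # 60 s of synthetic "EDA"
    inst = segment_into_instances(eda, fs, 2.0)  # 30 instances of 2 s each
    feats = np.hstack([intra_modality_features(inst), correlation_features(inst)])
    print(feats.shape)  # (30, 34): one feature vector per instance
```

Each instance then carries both kinds of features, which is what allows a classifier to assign a binary high/low V-A label per fine-grained segment rather than one label per video.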
Main Authors: | Zhang, Tianyi; El Ali, Abdallah; Wang, Chen; Hanjalic, Alan; Cesar, Pablo |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI, 2020 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7795677/ https://www.ncbi.nlm.nih.gov/pubmed/33374281 http://dx.doi.org/10.3390/s21010052 |
_version_ | 1783634501489917952 |
---|---|
author | Zhang, Tianyi; El Ali, Abdallah; Wang, Chen; Hanjalic, Alan; Cesar, Pablo |
author_sort | Zhang, Tianyi |
collection | PubMed |
format | Online Article Text |
id | pubmed-7795677 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2020 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-7795677 2021-01-10 Sensors (Basel) Article MDPI 2020-12-24 /pmc/articles/PMC7795677/ /pubmed/33374281 http://dx.doi.org/10.3390/s21010052 Text en © 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). |
title | CorrNet: Fine-Grained Emotion Recognition for Video Watching Using Wearable Physiological Sensors |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7795677/ https://www.ncbi.nlm.nih.gov/pubmed/33374281 http://dx.doi.org/10.3390/s21010052 |