Deep-Learning-Based Multimodal Emotion Classification for Music Videos
Music videos contain a great deal of visual and acoustic information. Each information source within a music video influences the emotions conveyed through the audio and video, suggesting that only a multimodal approach is capable of achieving efficient affective computing. This paper presents an affective computing system that relies on music, video, and facial expression cues, making it useful for emotional analysis.
Main Authors: | Pandeya, Yagya Raj; Bhattarai, Bhuwan; Lee, Joonwhoan |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI, 2021 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8309938/ https://www.ncbi.nlm.nih.gov/pubmed/34300666 http://dx.doi.org/10.3390/s21144927 |
_version_ | 1783728641079771136 |
---|---|
author | Pandeya, Yagya Raj Bhattarai, Bhuwan Lee, Joonwhoan |
author_facet | Pandeya, Yagya Raj Bhattarai, Bhuwan Lee, Joonwhoan |
author_sort | Pandeya, Yagya Raj |
collection | PubMed |
description | Music videos contain a great deal of visual and acoustic information. Each information source within a music video influences the emotions conveyed through the audio and video, suggesting that only a multimodal approach is capable of achieving efficient affective computing. This paper presents an affective computing system that relies on music, video, and facial expression cues, making it useful for emotional analysis. We applied the audio–video information exchange and boosting methods to regularize the training process and reduced the computational costs by using a separable convolution strategy. In sum, our empirical findings are as follows: (1) Multimodal representations efficiently capture all acoustic and visual emotional clues included in each music video, (2) the computational cost of each neural network is significantly reduced by factorizing the standard 2D/3D convolution into separate channels and spatiotemporal interactions, and (3) information-sharing methods incorporated into multimodal representations are helpful in guiding individual information flow and boosting overall performance. We tested our findings across several unimodal and multimodal networks against various evaluation metrics and visual analyzers. Our best classifier attained 74% accuracy, an f1-score of 0.73, and an area under the curve score of 0.926. |
format | Online Article Text |
id | pubmed-8309938 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-83099382021-07-25 Deep-Learning-Based Multimodal Emotion Classification for Music Videos Pandeya, Yagya Raj Bhattarai, Bhuwan Lee, Joonwhoan Sensors (Basel) Article Music videos contain a great deal of visual and acoustic information. Each information source within a music video influences the emotions conveyed through the audio and video, suggesting that only a multimodal approach is capable of achieving efficient affective computing. This paper presents an affective computing system that relies on music, video, and facial expression cues, making it useful for emotional analysis. We applied the audio–video information exchange and boosting methods to regularize the training process and reduced the computational costs by using a separable convolution strategy. In sum, our empirical findings are as follows: (1) Multimodal representations efficiently capture all acoustic and visual emotional clues included in each music video, (2) the computational cost of each neural network is significantly reduced by factorizing the standard 2D/3D convolution into separate channels and spatiotemporal interactions, and (3) information-sharing methods incorporated into multimodal representations are helpful in guiding individual information flow and boosting overall performance. We tested our findings across several unimodal and multimodal networks against various evaluation metrics and visual analyzers. Our best classifier attained 74% accuracy, an f1-score of 0.73, and an area under the curve score of 0.926. MDPI 2021-07-20 /pmc/articles/PMC8309938/ /pubmed/34300666 http://dx.doi.org/10.3390/s21144927 Text en © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Pandeya, Yagya Raj Bhattarai, Bhuwan Lee, Joonwhoan Deep-Learning-Based Multimodal Emotion Classification for Music Videos |
title | Deep-Learning-Based Multimodal Emotion Classification for Music Videos |
title_full | Deep-Learning-Based Multimodal Emotion Classification for Music Videos |
title_fullStr | Deep-Learning-Based Multimodal Emotion Classification for Music Videos |
title_full_unstemmed | Deep-Learning-Based Multimodal Emotion Classification for Music Videos |
title_short | Deep-Learning-Based Multimodal Emotion Classification for Music Videos |
title_sort | deep-learning-based multimodal emotion classification for music videos |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8309938/ https://www.ncbi.nlm.nih.gov/pubmed/34300666 http://dx.doi.org/10.3390/s21144927 |
work_keys_str_mv | AT pandeyayagyaraj deeplearningbasedmultimodalemotionclassificationformusicvideos AT bhattaraibhuwan deeplearningbasedmultimodalemotionclassificationformusicvideos AT leejoonwhoan deeplearningbasedmultimodalemotionclassificationformusicvideos |
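The abstract's second finding — reducing computational cost by "factorizing the standard 2D/3D convolution into separate channels and spatiotemporal interactions" — can be illustrated with a parameter-count comparison. The sketch below is a generic instance of the (2+1)D and depthwise-separable factorization ideas, not the paper's exact architecture; all channel and kernel sizes are hypothetical.

```python
# Parameter-count sketch (bias terms omitted throughout).

def conv3d_params(c_in, c_out, t, k):
    """Standard 3D convolution with a t x k x k spatiotemporal kernel."""
    return c_in * c_out * t * k * k

def conv2plus1d_params(c_in, c_out, t, k, mid):
    """(2+1)D factorization: a 1 x k x k spatial conv into `mid` channels,
    followed by a t x 1 x 1 temporal conv."""
    spatial = c_in * mid * k * k   # per-frame spatial interactions
    temporal = mid * c_out * t     # interactions across frames
    return spatial + temporal

def depthwise_separable_params(c_in, c_out, k):
    """2D channel factorization: per-channel k x k depthwise conv,
    then a 1 x 1 pointwise conv that mixes channels."""
    return c_in * k * k + c_in * c_out

if __name__ == "__main__":
    c_in, c_out, t, k = 64, 128, 3, 3
    full = conv3d_params(c_in, c_out, t, k)           # 221184 parameters
    fact = conv2plus1d_params(c_in, c_out, t, k, 64)  # 61440 parameters
    print(f"standard 3D: {full}, factorized (2+1)D: {fact}, "
          f"reduction: {full / fact:.1f}x")
```

With these illustrative sizes, the factorized layer uses 3.6x fewer parameters than the full 3D convolution while still covering both spatial and temporal interactions, which is the kind of saving the abstract refers to.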