Music video emotion classification using slow–fast audio–video network and unsupervised feature representation

Bibliographic Details
Main Authors: Pandeya, Yagya Raj, Bhattarai, Bhuwan, Lee, Joonwhoan
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8494760/
https://www.ncbi.nlm.nih.gov/pubmed/34615904
http://dx.doi.org/10.1038/s41598-021-98856-2
_version_ 1784579385607585792
author Pandeya, Yagya Raj
Bhattarai, Bhuwan
Lee, Joonwhoan
author_facet Pandeya, Yagya Raj
Bhattarai, Bhuwan
Lee, Joonwhoan
author_sort Pandeya, Yagya Raj
collection PubMed
description Affective computing has suffered from imprecise annotation because emotions are highly subjective and vague. Music video emotion is complex owing to the diverse textual, acoustic, and visual information, which can take the form of lyrics, the singer's voice, sounds from different instruments, and visual representations. This may be one reason why research in this domain has been limited and no standard dataset had been produced before now. In this study, we propose an unsupervised method for music video emotion analysis that uses music video content from the Internet. We also produced a labelled dataset and compared supervised and unsupervised methods for emotion classification. The music and video information are processed through a multimodal architecture with audio–video information exchange and a boosting method. General 2D and 3D convolution networks were compared with a slow–fast network using filter- and channel-separable convolutions in the multimodal architecture. Several supervised and unsupervised networks were trained end to end, and the results were evaluated with various metrics. The proposed method applies a large dataset to unsupervised emotion classification and interprets the results quantitatively and qualitatively for music videos, which had not been done before. The results show a large gain in classification score from using unsupervised features and information-sharing techniques across the audio and video networks. Our best classifier attained 77% accuracy, an F1-score of 0.77, and an area-under-the-curve score of 0.94 with minimal computational cost.
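The abstract names three architectural ingredients: a slow–fast video pathway, filter/channel-separable convolutions, and audio–video fusion. The sketch below is a minimal PyTorch illustration of how such pieces typically fit together; the layer widths, frame counts, class number, and late-fusion head are assumptions made for illustration, not the authors' published model.

    # Minimal sketch (illustrative assumptions, not the paper's exact model):
    # two video pathways built from channel-separable 3D convolutions plus a
    # small 2D audio branch over a spectrogram, joined by late fusion.
    import torch
    import torch.nn as nn

    class SepConv3d(nn.Module):
        """Channel-separable 3D convolution: depthwise conv then 1x1x1 pointwise conv."""
        def __init__(self, in_ch, out_ch, stride=1):
            super().__init__()
            self.depthwise = nn.Conv3d(in_ch, in_ch, kernel_size=3, stride=stride,
                                       padding=1, groups=in_ch, bias=False)
            self.pointwise = nn.Conv3d(in_ch, out_ch, kernel_size=1, bias=False)
            self.bn = nn.BatchNorm3d(out_ch)
            self.act = nn.ReLU(inplace=True)

        def forward(self, x):
            return self.act(self.bn(self.pointwise(self.depthwise(x))))

    class SlowFastAudioVideo(nn.Module):
        """Slow pathway (few frames, wide channels), fast pathway (many frames,
        narrow channels), and an audio branch; features are concatenated and
        classified. num_classes=6 is an assumed emotion-class count."""
        def __init__(self, num_classes=6):
            super().__init__()
            self.slow = nn.Sequential(SepConv3d(3, 64), SepConv3d(64, 128, stride=2))
            self.fast = nn.Sequential(SepConv3d(3, 8), SepConv3d(8, 16, stride=2))
            self.audio = nn.Sequential(
                nn.Conv2d(1, 32, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.AdaptiveAvgPool2d(1))
            self.pool3d = nn.AdaptiveAvgPool3d(1)
            self.head = nn.Linear(128 + 16 + 32, num_classes)

        def forward(self, slow_clip, fast_clip, mel):
            s = self.pool3d(self.slow(slow_clip)).flatten(1)   # (B, 128)
            f = self.pool3d(self.fast(fast_clip)).flatten(1)   # (B, 16)
            a = self.audio(mel).flatten(1)                     # (B, 32)
            return self.head(torch.cat([s, f, a], dim=1))      # (B, num_classes)

    # Example shapes: the slow clip samples frames sparsely, the fast clip densely.
    model = SlowFastAudioVideo()
    slow = torch.randn(2, 3, 4, 112, 112)    # (batch, RGB, 4 frames, H, W)
    fast = torch.randn(2, 3, 16, 112, 112)   # (batch, RGB, 16 frames, H, W)
    mel = torch.randn(2, 1, 96, 128)         # (batch, 1, mel bins, time steps)
    logits = model(slow, fast, mel)          # -> shape (2, 6)

Late fusion by concatenation is only one of several plausible ways to realize the abstract's "audio–video information exchange"; the paper describes cross-modal exchange during feature extraction as well, which this sketch does not attempt to reproduce.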
format Online
Article
Text
id pubmed-8494760
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher Nature Publishing Group UK
record_format MEDLINE/PubMed
spelling pubmed-8494760 2021-10-08 Music video emotion classification using slow–fast audio–video network and unsupervised feature representation Pandeya, Yagya Raj Bhattarai, Bhuwan Lee, Joonwhoan Sci Rep Article Affective computing has suffered from imprecise annotation because emotions are highly subjective and vague. Music video emotion is complex owing to the diverse textual, acoustic, and visual information, which can take the form of lyrics, the singer's voice, sounds from different instruments, and visual representations. This may be one reason why research in this domain has been limited and no standard dataset had been produced before now. In this study, we propose an unsupervised method for music video emotion analysis that uses music video content from the Internet. We also produced a labelled dataset and compared supervised and unsupervised methods for emotion classification. The music and video information are processed through a multimodal architecture with audio–video information exchange and a boosting method. General 2D and 3D convolution networks were compared with a slow–fast network using filter- and channel-separable convolutions in the multimodal architecture. Several supervised and unsupervised networks were trained end to end, and the results were evaluated with various metrics. The proposed method applies a large dataset to unsupervised emotion classification and interprets the results quantitatively and qualitatively for music videos, which had not been done before. The results show a large gain in classification score from using unsupervised features and information-sharing techniques across the audio and video networks. Our best classifier attained 77% accuracy, an F1-score of 0.77, and an area-under-the-curve score of 0.94 with minimal computational cost. Nature Publishing Group UK 2021-10-06 /pmc/articles/PMC8494760/ /pubmed/34615904 http://dx.doi.org/10.1038/s41598-021-98856-2 Text en © The Author(s) 2021 Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/.
spellingShingle Article
Pandeya, Yagya Raj
Bhattarai, Bhuwan
Lee, Joonwhoan
Music video emotion classification using slow–fast audio–video network and unsupervised feature representation
title Music video emotion classification using slow–fast audio–video network and unsupervised feature representation
title_full Music video emotion classification using slow–fast audio–video network and unsupervised feature representation
title_fullStr Music video emotion classification using slow–fast audio–video network and unsupervised feature representation
title_full_unstemmed Music video emotion classification using slow–fast audio–video network and unsupervised feature representation
title_short Music video emotion classification using slow–fast audio–video network and unsupervised feature representation
title_sort music video emotion classification using slow–fast audio–video network and unsupervised feature representation
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8494760/
https://www.ncbi.nlm.nih.gov/pubmed/34615904
http://dx.doi.org/10.1038/s41598-021-98856-2
work_keys_str_mv AT pandeyayagyaraj musicvideoemotionclassificationusingslowfastaudiovideonetworkandunsupervisedfeaturerepresentation
AT bhattaraibhuwan musicvideoemotionclassificationusingslowfastaudiovideonetworkandunsupervisedfeaturerepresentation
AT leejoonwhoan musicvideoemotionclassificationusingslowfastaudiovideonetworkandunsupervisedfeaturerepresentation