Music video emotion classification using slow–fast audio–video network and unsupervised feature representation
Affective computing has suffered from imprecise annotation because emotions are highly subjective and vague. Music video emotion is complex due to the diverse textual, acoustic, and visual information, which can take the form of lyrics, the singer's voice, sounds from different instruments, and...
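The title refers to a slow–fast audio–video network for multimodal emotion classification. As a rough, hypothetical illustration of the general two-pathway late-fusion idea only (not the authors' published architecture; the layer sizes, the 6-class output, and the tensor shapes below are placeholders), a minimal PyTorch sketch might look like this:

```python
# Hypothetical sketch of a two-pathway ("slow" video / "fast" audio) fusion
# classifier. This is NOT the paper's architecture; all shapes are placeholders.
import torch
import torch.nn as nn

class TwoPathwayEmotionNet(nn.Module):
    def __init__(self, num_classes: int = 6):
        super().__init__()
        # Video pathway: a short, low-frame-rate RGB clip processed with 3D convolutions.
        self.video_path = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),   # -> (B, 16, 1, 1, 1)
            nn.Flatten(),              # -> (B, 16)
        )
        # Audio pathway: a mel spectrogram processed with 2D convolutions.
        self.audio_path = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # -> (B, 16, 1, 1)
            nn.Flatten(),              # -> (B, 16)
        )
        # Late fusion: concatenate the two embeddings, then classify.
        self.classifier = nn.Linear(16 + 16, num_classes)

    def forward(self, video: torch.Tensor, audio: torch.Tensor) -> torch.Tensor:
        # video: (B, 3, T, H, W) RGB clip; audio: (B, 1, n_mels, frames) spectrogram.
        v = self.video_path(video)
        a = self.audio_path(audio)
        return self.classifier(torch.cat([v, a], dim=1))

if __name__ == "__main__":
    model = TwoPathwayEmotionNet(num_classes=6)
    video = torch.randn(2, 3, 8, 64, 64)   # 2 clips, 8 frames each
    audio = torch.randn(2, 1, 64, 128)     # 2 mel spectrograms
    print(model(video, audio).shape)       # torch.Size([2, 6])
```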
Main authors: Pandeya, Yagya Raj; Bhattarai, Bhuwan; Lee, Joonwhoan
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK, 2021
Online access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8494760/
https://www.ncbi.nlm.nih.gov/pubmed/34615904
http://dx.doi.org/10.1038/s41598-021-98856-2
Similar items
- Deep-Learning-Based Multimodal Emotion Classification for Music Videos
  by: Pandeya, Yagya Raj, et al.
  Published: (2021)
- An Instance Segmentation Model for Strawberry Diseases Based on Mask R-CNN
  by: Afzaal, Usman, et al.
  Published: (2021)
- Unsupervised Decoding of Long-Term, Naturalistic Human Neural Recordings with Automated Video and Audio Annotations
  by: Wang, Nancy X. R., et al.
  Published: (2016)
- An introduction to video and audio measurement
  by: Hodges, Peter
  Published: (2013)
- Audio y vídeo digital
  by: Crespo Viñegra, Julio
  Published: (2002)