Multi-Modality Emotion Recognition Model with GAT-Based Multi-Head Inter-Modality Attention
Emotion recognition has been gaining attention in recent years due to its applications in artificial agents. To achieve good performance on this task, much research has been conducted on multi-modality emotion recognition models that leverage the different strengths of each modality. However...
| Main Authors | Fu, Changzeng; Liu, Chaoran; Ishi, Carlos Toshinori; Ishiguro, Hiroshi |
|---|---|
| Format | Online Article Text |
| Language | English |
| Published | MDPI, 2020 |
| Online Access | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7506856/ https://www.ncbi.nlm.nih.gov/pubmed/32872511 http://dx.doi.org/10.3390/s20174894 |
Similar Items
- Skeleton-Based Emotion Recognition Based on Two-Stream Self-Attention Enhanced Spatial-Temporal Graph Convolutional Network
  by: Shi, Jiaqi, et al.
  Published: (2020)
- Emotion recognition based on multi-modal physiological signals and transfer learning
  by: Fu, Zhongzheng, et al.
  Published: (2022)
- Modality attention fusion model with hybrid multi-head self-attention for video understanding
  by: Zhuang, Xuqiang, et al.
  Published: (2022)
- Multi-Modal Residual Perceptron Network for Audio–Video Emotion Recognition
  by: Chang, Xin, et al.
  Published: (2021)
- Multi-Modal Fusion Emotion Recognition Method of Speech Expression Based on Deep Learning
  by: Liu, Dong, et al.
  Published: (2021)