Multi-Modal Representation via Contrastive Learning with Attention Bottleneck Fusion and Attentive Statistics Features
The integration of information from multiple modalities is a highly active area of research. Previous techniques have predominantly focused on fusing shallow features or high-level representations generated by deep unimodal networks, which only capture a subset of the hierarchical relationships across …
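The paper's actual model is not reproduced in this record; the following is a minimal NumPy sketch of the two ideas named in the title, bottleneck-token fusion (a small set of shared tokens cross-attends to each modality) and attentive statistics pooling (attention-weighted mean and standard deviation). All dimensions, the random bottleneck initialization, and the norm-based attention scores are illustrative assumptions, not the authors' design.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values, d):
    # scaled dot-product attention: queries gather from keys_values
    scores = queries @ keys_values.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ keys_values

def bottleneck_fusion(text_feats, image_feats, n_bottleneck=4, seed=0):
    """Fuse two modalities through a small set of shared bottleneck tokens.

    The bottleneck forces cross-modal information to pass through only
    n_bottleneck tokens instead of full pairwise attention.
    """
    d = text_feats.shape[1]
    rng = np.random.default_rng(seed)
    tokens = rng.standard_normal((n_bottleneck, d)) / np.sqrt(d)
    tokens = cross_attention(tokens, text_feats, d)   # read from text
    tokens = cross_attention(tokens, image_feats, d)  # then from image
    return tokens

def attentive_statistics(feats):
    """Attention-weighted mean and standard deviation pooling."""
    # one scalar score per token (sum of features here, purely illustrative)
    w = softmax(feats.sum(axis=1))
    mu = (w[:, None] * feats).sum(axis=0)
    var = (w[:, None] * (feats - mu) ** 2).sum(axis=0)
    return np.concatenate([mu, np.sqrt(var + 1e-8)])

# toy "modalities": 10 text tokens and 20 image patches, 16-dim each
text = np.random.default_rng(1).standard_normal((10, 16))
image = np.random.default_rng(2).standard_normal((20, 16))
fused = bottleneck_fusion(text, image)
embedding = attentive_statistics(fused)
print(fused.shape, embedding.shape)  # (4, 16) (32,)
```

The pooled vector concatenates the weighted mean and standard deviation, so the joint representation keeps both first- and second-order statistics of the fused tokens.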
Main authors: Guo, Qinglang; Liao, Yong; Li, Zhe; Liang, Shenglin
Format: Online Article Text
Language: English
Published: MDPI, 2023
Online access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10606612/
https://www.ncbi.nlm.nih.gov/pubmed/37895542
http://dx.doi.org/10.3390/e25101421
Similar items
- Convolutional Models with Multi-Feature Fusion for Effective Link Prediction in Knowledge Graph Embedding
  by: Guo, Qinglang, et al.
  Published: (2023)
- Multimodal Sentiment Analysis Representations Learning via Contrastive Learning with Condense Attention Fusion
  by: Wang, Huiru, et al.
  Published: (2023)
- Modality attention fusion model with hybrid multi-head self-attention for video understanding
  by: Zhuang, Xuqiang, et al.
  Published: (2022)
- Multi-channel feature fusion attention Dehazing network
  by: Zou, Changjun, et al.
  Published: (2023)
- SSD with multi-scale feature fusion and attention mechanism
  by: Liu, Qiang, et al.
  Published: (2023)