Cross-Modal Search for Social Networks via Adversarial Learning
Main authors:
Format: Online Article Text
Language: English
Published: Hindawi, 2020
Subjects:
Online access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7369674/ https://www.ncbi.nlm.nih.gov/pubmed/32733547 http://dx.doi.org/10.1155/2020/7834953
Summary: Cross-modal search has become a research hotspot in recent years. In contrast to traditional cross-modal search, cross-modal information search on social networks is constrained by data quality, with arbitrary text and low-resolution visual features. In addition, the semantic sparseness of cross-modal data from social networks causes the text and visual modalities to mislead each other. In this paper, we propose a cross-modal search method for social network data that capitalizes on adversarial learning (cross-modal search with adversarial learning: CMSAL). We adopt self-attention-based neural networks to generate modality-oriented representations for further intermodal correlation learning. A search module is implemented based on adversarial learning, through which the discriminator is designed to measure the distribution of generated features from intramodal and intermodal perspectives. Experiments on real-world datasets from Sina Weibo and Wikipedia, which have similar properties to social networks, show that the proposed method outperforms state-of-the-art cross-modal search methods.
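The abstract describes two cooperating components: per-modality self-attention encoders that produce modality-oriented representations in a common space, and an adversarially trained discriminator that measures the distribution of the generated features. The snippet below is a minimal PyTorch sketch of that idea, not the authors' implementation; the layer sizes, the single modality discriminator, and the cosine-similarity alignment loss are illustrative assumptions.

```python
# Minimal sketch of adversarial cross-modal representation learning.
# All dimensions, loss weights, and the toy data are hypothetical.
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """Self-attention encoder producing a modality-oriented representation."""
    def __init__(self, in_dim: int, common_dim: int = 256, heads: int = 4):
        super().__init__()
        self.proj = nn.Linear(in_dim, common_dim)
        self.attn = nn.MultiheadAttention(common_dim, heads, batch_first=True)
        self.out = nn.Linear(common_dim, common_dim)

    def forward(self, x):               # x: (batch, seq_len, in_dim)
        h = self.proj(x)
        h, _ = self.attn(h, h, h)       # self-attention over the sequence
        return self.out(h.mean(dim=1))  # pooled common-space embedding

class ModalityDiscriminator(nn.Module):
    """Predicts which modality an embedding came from (0 = image, 1 = text)."""
    def __init__(self, common_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(common_dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, z):
        return self.net(z)

# One adversarial training step on a toy batch of paired features.
img_enc, txt_enc = ModalityEncoder(2048), ModalityEncoder(300)
disc = ModalityDiscriminator()
gen_opt = torch.optim.Adam(
    list(img_enc.parameters()) + list(txt_enc.parameters()), lr=1e-4)
disc_opt = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

img = torch.randn(8, 10, 2048)   # e.g. 10 region features per image
txt = torch.randn(8, 20, 300)    # e.g. 20 word embeddings per post

# Discriminator step: learn to tell the two modalities apart.
z_img, z_txt = img_enc(img).detach(), txt_enc(txt).detach()
d_loss = bce(disc(z_img), torch.zeros(8, 1)) + bce(disc(z_txt), torch.ones(8, 1))
disc_opt.zero_grad()
d_loss.backward()
disc_opt.step()

# Generator step: fool the discriminator (flipped labels) and pull paired
# embeddings together so search can later rank by cosine similarity.
z_img, z_txt = img_enc(img), txt_enc(txt)
adv = bce(disc(z_img), torch.ones(8, 1)) + bce(disc(z_txt), torch.zeros(8, 1))
align = (1 - nn.functional.cosine_similarity(z_img, z_txt)).mean()
g_loss = adv + align
gen_opt.zero_grad()
g_loss.backward()
gen_opt.step()
```

Per the abstract, CMSAL's discriminator judges the generated features from both intramodal and intermodal perspectives; for brevity, the sketch keeps only the intermodal (image-vs-text) part.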