Visual Re-Ranking via Adaptive Collaborative Hypergraph Learning for Image Retrieval

Bibliographic Details
Main Authors: Bouhlel, Noura, Feki, Ghada, Amar, Chokri Ben
Format: Online Article Text
Language: English
Published: 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7148239/
http://dx.doi.org/10.1007/978-3-030-45439-5_34
Description
Summary: Visual re-ranking has received considerable attention in recent years. It aims to enhance the performance of text-based image retrieval by boosting the rank of relevant images using visual information. Hypergraphs have been widely used for relevance estimation, where the textual results are taken as vertices and the re-ranking problem is formulated as transductive learning on the hypergraph. The potential of hypergraph learning is essentially determined by the hypergraph construction scheme. To this end, in this paper we introduce a novel data representation technique named adaptive collaborative representation for hypergraph learning. In contrast to conventional collaborative representation, we consider data locality to adaptively select relevant, nearby samples for a test sample and discard irrelevant, faraway ones. Moreover, at the feature level, we impose a weight matrix on the representation errors to adaptively highlight important features and reduce the effect of redundant or noisy ones. Finally, we add a nonnegativity constraint on the representation coefficients to enhance the interpretability of the hypergraph. These attractive properties allow the construction of a more informative, higher-quality hypergraph, thereby achieving better retrieval performance than other hypergraph models. Extensive experiments on the public MediaEval benchmarks demonstrate that our re-ranking method achieves consistently superior results compared to state-of-the-art methods.
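
As a rough sketch of how such an adaptive collaborative representation might be formulated (an illustration only; the symbols y, X, W, d, c, and lambda below are assumptions rather than the paper's own notation), a test sample y is coded over a dictionary X of retrieved images by solving a locality-weighted, feature-weighted, nonnegative least-squares problem:

% Illustrative objective; not the paper's exact formulation.
\min_{\mathbf{c} \ge \mathbf{0}} \; \lVert \mathbf{W}(\mathbf{y} - \mathbf{X}\mathbf{c}) \rVert_2^2 \;+\; \lambda \, \lVert \operatorname{diag}(\mathbf{d}) \, \mathbf{c} \rVert_2^2, \qquad d_i \propto \lVert \mathbf{y} - \mathbf{x}_i \rVert_2

Here W weights individual feature dimensions of the reconstruction error, the locality penalties d_i drive the coefficients of faraway samples toward zero, and the nonnegativity constraint keeps the coefficients directly usable as hyperedge weights, which is what would make the resulting hypergraph easier to interpret.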