Feature Fusion and Metric Learning Network for Zero-Shot Sketch-Based Image Retrieval

Bibliographic Details
Main Authors: Zhao, Honggang, Liu, Mingyue, Li, Mingyong
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10047869/
https://www.ncbi.nlm.nih.gov/pubmed/36981390
http://dx.doi.org/10.3390/e25030502
_version_ 1785014035060621312
author Zhao, Honggang
Liu, Mingyue
Li, Mingyong
author_facet Zhao, Honggang
Liu, Mingyue
Li, Mingyong
author_sort Zhao, Honggang
collection PubMed
description Zero-shot sketch-based image retrieval (ZS-SBIR) is an important computer vision problem. The image categories in the test phase are new categories that were not seen in the training stage. Because sketches are extremely abstract, commonly used backbone networks (such as VGG-16 and ResNet-50) cannot handle both sketches and photos. Semantic similarities between corresponding features in photos and sketches are difficult for deep models to capture without textual assistance. To solve this problem, we propose a novel and effective feature embedding model called Attention Map Feature Fusion (AMFF). The AMFF model combines the excellent feature extraction capability of the ResNet-50 network with the strong representation ability of the attention network. By processing the residuals of the ResNet-50 network, the attention map is obtained without introducing external semantic knowledge. Most previous approaches treat the ZS-SBIR problem as a classification problem, which ignores the huge domain gap between sketches and photos. This paper proposes an effective method to optimize the entire network, called domain-aware triplets (DAT). Domain feature discrimination and semantic feature embedding can be learned through DAT. We also use a classification loss function to stabilize the training process and avoid getting trapped in a local optimum. Compared with state-of-the-art methods, our method shows superior performance. For example, on the TU-Berlin dataset we achieved 61.2 ± 1.2% Prec@200; on the Sketchy_c100 dataset we achieved 62.3 ± 3.3% mAP@all and 75.5 ± 1.5% Prec@100.
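The description above names two technical components: an Attention Map Feature Fusion (AMFF) embedding built on ResNet-50, and a domain-aware triplet (DAT) objective combined with a classification loss. The sketch below is a minimal, hypothetical PyTorch illustration of how such a pipeline could be wired up; it is not the authors' implementation, and the fusion rule, attention layer, margin, embedding size, class count, and all names (AttentionMapFusion, domain_aware_triplet_loss, train_step) are assumptions made for illustration only.

```python
# Hypothetical sketch of an AMFF-style embedding and a DAT-style training
# objective, as summarized in the abstract. Not the authors' released code.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50


class AttentionMapFusion(nn.Module):
    """Assumed AMFF-style encoder: a spatial attention map is computed from
    the ResNet-50 residual feature map and used to re-weight it before
    pooling, without introducing external semantic (text) knowledge."""

    def __init__(self, embed_dim=512):
        super().__init__()
        backbone = resnet50(weights=None)
        # Keep everything up to the last residual stage: output (B, 2048, H, W).
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.attn = nn.Conv2d(2048, 1, kernel_size=1)   # 1-channel attention map
        self.proj = nn.Linear(2048, embed_dim)

    def forward(self, x):
        f = self.features(x)                        # (B, 2048, H, W)
        a = torch.sigmoid(self.attn(f))              # (B, 1, H, W)
        f = f + f * a                                 # residual-style fusion (assumed form)
        f = F.adaptive_avg_pool2d(f, 1).flatten(1)    # (B, 2048)
        return F.normalize(self.proj(f), dim=1)       # unit-norm embedding


def domain_aware_triplet_loss(sketch_anchor, photo_pos, photo_neg, margin=0.3):
    """One plausible reading of 'domain-aware triplets': the anchor is a sketch
    and both positive and negative come from the photo domain, so the margin is
    enforced across the sketch-photo gap rather than within one domain."""
    d_pos = (sketch_anchor - photo_pos).pow(2).sum(dim=1)
    d_neg = (sketch_anchor - photo_neg).pow(2).sum(dim=1)
    return F.relu(d_pos - d_neg + margin).mean()


# Joint objective: metric loss on cross-domain triplets plus a classification
# loss over the seen training categories, which the abstract says is used to
# stabilize training. The class count and loss weighting are placeholders.
model = AttentionMapFusion()
classifier = nn.Linear(512, 100)
optimizer = torch.optim.Adam(
    list(model.parameters()) + list(classifier.parameters()), lr=1e-4)


def train_step(sketch, photo_pos, photo_neg, labels):
    z_sk, z_pos, z_neg = model(sketch), model(photo_pos), model(photo_neg)
    loss = domain_aware_triplet_loss(z_sk, z_pos, z_neg) \
         + F.cross_entropy(classifier(z_sk), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under these assumptions, retrieval would simply rank photo embeddings by their distance to the query sketch embedding produced by the same shared encoder.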
format Online
Article
Text
id pubmed-10047869
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-10047869 2023-03-29 Feature Fusion and Metric Learning Network for Zero-Shot Sketch-Based Image Retrieval Zhao, Honggang Liu, Mingyue Li, Mingyong Entropy (Basel) Article Zero-shot sketch-based image retrieval (ZS-SBIR) is an important computer vision problem. The image categories in the test phase are new categories that were not seen in the training stage. Because sketches are extremely abstract, commonly used backbone networks (such as VGG-16 and ResNet-50) cannot handle both sketches and photos. Semantic similarities between corresponding features in photos and sketches are difficult for deep models to capture without textual assistance. To solve this problem, we propose a novel and effective feature embedding model called Attention Map Feature Fusion (AMFF). The AMFF model combines the excellent feature extraction capability of the ResNet-50 network with the strong representation ability of the attention network. By processing the residuals of the ResNet-50 network, the attention map is obtained without introducing external semantic knowledge. Most previous approaches treat the ZS-SBIR problem as a classification problem, which ignores the huge domain gap between sketches and photos. This paper proposes an effective method to optimize the entire network, called domain-aware triplets (DAT). Domain feature discrimination and semantic feature embedding can be learned through DAT. We also use a classification loss function to stabilize the training process and avoid getting trapped in a local optimum. Compared with state-of-the-art methods, our method shows superior performance. For example, on the TU-Berlin dataset we achieved 61.2 ± 1.2% Prec@200; on the Sketchy_c100 dataset we achieved 62.3 ± 3.3% mAP@all and 75.5 ± 1.5% Prec@100. MDPI 2023-03-14 /pmc/articles/PMC10047869/ /pubmed/36981390 http://dx.doi.org/10.3390/e25030502 Text en © 2023 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Zhao, Honggang
Liu, Mingyue
Li, Mingyong
Feature Fusion and Metric Learning Network for Zero-Shot Sketch-Based Image Retrieval
title Feature Fusion and Metric Learning Network for Zero-Shot Sketch-Based Image Retrieval
title_full Feature Fusion and Metric Learning Network for Zero-Shot Sketch-Based Image Retrieval
title_fullStr Feature Fusion and Metric Learning Network for Zero-Shot Sketch-Based Image Retrieval
title_full_unstemmed Feature Fusion and Metric Learning Network for Zero-Shot Sketch-Based Image Retrieval
title_short Feature Fusion and Metric Learning Network for Zero-Shot Sketch-Based Image Retrieval
title_sort feature fusion and metric learning network for zero-shot sketch-based image retrieval
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10047869/
https://www.ncbi.nlm.nih.gov/pubmed/36981390
http://dx.doi.org/10.3390/e25030502
work_keys_str_mv AT zhaohonggang featurefusionandmetriclearningnetworkforzeroshotsketchbasedimageretrieval
AT liumingyue featurefusionandmetriclearningnetworkforzeroshotsketchbasedimageretrieval
AT limingyong featurefusionandmetriclearningnetworkforzeroshotsketchbasedimageretrieval