Hybrid Bag-of-Visual-Words and FeatureWiz Selection for Content-Based Visual Information Retrieval
Format: Online Article Text
Language: English
Published: MDPI, 2023
Online Access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9919877/
https://www.ncbi.nlm.nih.gov/pubmed/36772705
http://dx.doi.org/10.3390/s23031653
Summary: Recently, content-based image retrieval (CBIR) based on the bag-of-visual-words (BoVW) model has been one of the most promising and increasingly active research areas. In this paper, we propose a new CBIR framework based on the visual-words fusion of multiple feature descriptors to achieve improved retrieval performance, where interest points are separately extracted from an image using features from accelerated segment test (FAST) and speeded-up robust features (SURF). The extracted keypoints are then fused into a single keypoint feature vector, and the improved RootSIFT algorithm is applied to describe the region surrounding each keypoint. Afterward, the FeatureWiz algorithm is employed to reduce the features and select the best ones for the BoVW learning model. To create the codebook, K-means clustering is applied to quantize the visual features into a smaller set of visual words. Finally, the feature vectors extracted from the BoVW model are fed into a support vector machine (SVM) classifier for image retrieval. An inverted-index technique based on the cosine distance metric is applied to rank the retrieved images by their similarity to the query image. Experiments on three benchmark datasets (Corel-1000, Caltech-10 and Oxford Flower-17) show that the presented CBIR technique delivers results comparable to other state-of-the-art techniques, achieving average accuracies of 92.94%, 98.40% and 84.94% on these datasets, respectively.
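The core BoVW steps named in the abstract — RootSIFT description followed by quantization against a K-means codebook into a visual-word histogram — can be sketched as follows. This is an illustrative toy, not the authors' implementation: the 2-D "descriptors", the 3-word codebook, and the helper names (`root_sift`, `quantize`, `bovw_histogram`) are all assumptions for the example; real SIFT descriptors are 128-dimensional and the codebook would be learned by K-means over the whole training set.

```python
import math

def root_sift(descriptor, eps=1e-7):
    """RootSIFT transform: L1-normalise a SIFT-style descriptor, then take
    the element-wise square root (the Hellinger-kernel mapping)."""
    l1 = sum(abs(v) for v in descriptor) + eps
    return [math.sqrt(v / l1) for v in descriptor]

def quantize(descriptor, codebook):
    """Assign a descriptor to its nearest visual word (Euclidean distance)."""
    best, best_d = 0, float("inf")
    for i, word in enumerate(codebook):
        d = sum((a - b) ** 2 for a, b in zip(descriptor, word))
        if d < best_d:
            best, best_d = i, d
    return best

def bovw_histogram(descriptors, codebook):
    """Build the bag-of-visual-words histogram for one image: count how many
    of its descriptors fall on each visual word."""
    hist = [0] * len(codebook)
    for desc in descriptors:
        hist[quantize(desc, codebook)] += 1
    return hist

# Toy 2-D descriptors and a hand-picked 3-word codebook (illustrative only).
codebook = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
descs = [root_sift(d) for d in [[4.0, 0.1], [0.1, 4.0], [3.0, 0.2]]]
print(bovw_histogram(descs, codebook))  # → [0, 2, 1]
```

After the L1-normalise-then-sqrt step each descriptor is approximately L2-normalised, which is why comparing RootSIFT vectors with Euclidean distance approximates the Hellinger kernel on the original SIFT vectors.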
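The final retrieval step in the abstract ranks candidate images by cosine similarity between BoVW histograms. A minimal sketch of that ranking is below; note it scores every database image by brute force, whereas the paper uses an inverted index to visit only images sharing visual words with the query. The image ids, histogram values, and function names are hypothetical.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two BoVW histograms."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def rank_images(query_hist, database):
    """Return (image_id, score) pairs sorted by decreasing similarity."""
    scores = [(img_id, cosine_similarity(query_hist, hist))
              for img_id, hist in database.items()]
    return sorted(scores, key=lambda s: s[1], reverse=True)

# Toy database of 3-word BoVW histograms (illustrative values only).
db = {"img_a": [5, 1, 0], "img_b": [0, 3, 4], "img_c": [4, 2, 1]}
query = [6, 1, 0]
print(rank_images(query, db))  # img_a ranks first, then img_c, then img_b
```

Because the histograms are non-negative, cosine similarity here lies in [0, 1]; an image sharing no visual words with the query scores 0, which is exactly the case an inverted index lets the system skip.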