
An effective content-based image retrieval technique for image visuals representation based on the bag-of-visual-words model

For the last three decades, content-based image retrieval (CBIR) has been an active research area, representing a viable solution for retrieving similar images from an image repository. In this article, we propose a novel CBIR technique based on the visual words fusion of the speeded-up robust features (SURF) and fast retina keypoint (FREAK) feature descriptors. SURF is a sparse descriptor, whereas FREAK is a dense descriptor. Moreover, SURF is a scale- and rotation-invariant descriptor that performs better in terms of repeatability, distinctiveness, and robustness. It is robust to noise, detection errors, and geometric and photometric deformations, and it also performs better at low illumination within an image compared with the FREAK descriptor. In contrast, FREAK is a fast, retina-inspired descriptor that performs better on classification-based problems compared with the SURF descriptor. Experimental results show that the proposed technique based on the visual words fusion of the SURF and FREAK descriptors combines the features of both descriptors and resolves the aforementioned issues. The qualitative and quantitative analysis performed on three image collections, namely Corel-1000, Corel-1500, and Caltech-256, shows that the proposed technique based on visual words fusion significantly improves the performance of CBIR compared with the feature fusion of both descriptors and with state-of-the-art image retrieval techniques.

Bibliographic Details
Main Authors: Jabeen, Safia; Mehmood, Zahid; Mahmood, Toqeer; Saba, Tanzila; Rehman, Amjad; Mahmood, Muhammad Tariq
Format: Online Article Text
Language: English
Published: PLoS One, Public Library of Science, 25 April 2018
Subjects: Research Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5919049/
https://www.ncbi.nlm.nih.gov/pubmed/29694429
http://dx.doi.org/10.1371/journal.pone.0194526
Collection: PubMed (record pubmed-5919049, MEDLINE/PubMed format)
Institution: National Center for Biotechnology Information
Rights: © 2018 Jabeen et al. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
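
Note on the described technique: the abstract outlines a bag-of-visual-words (BoVW) representation built from a fusion of SURF and FREAK visual words. The sketch below is only a rough illustration of such a pipeline, assuming opencv-contrib-python (SURF and FREAK live in cv2.xfeatures2d and may be unavailable in builds compiled without the non-free modules) and scikit-learn; the vocabulary construction, hard assignment, histogram concatenation, and all function names are illustrative assumptions, not the authors' implementation.

# Minimal sketch: BoVW with "visual words fusion" of SURF and FREAK descriptors.
# Assumes opencv-contrib-python and scikit-learn; images are assumed to yield
# at least one keypoint each (no None handling for brevity).
import cv2
import numpy as np
from sklearn.cluster import KMeans

surf = cv2.xfeatures2d.SURF_create()    # detector + floating-point descriptor
freak = cv2.xfeatures2d.FREAK_create()  # binary descriptor (describes given keypoints)

def extract(img_gray):
    """Return (surf_descriptors, freak_descriptors) for one grayscale image."""
    kp, surf_des = surf.detectAndCompute(img_gray, None)
    kp, freak_des = freak.compute(img_gray, kp)        # describe the SURF keypoints
    return surf_des, freak_des.astype(np.float32)      # float copy so k-means can cluster them

def build_vocabulary(descriptor_list, k):
    """Cluster descriptors pooled over the training images into k visual words."""
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit(np.vstack(descriptor_list))

def bovw_histogram(descriptors, vocab):
    """Hard-assign descriptors to visual words and return a normalized histogram."""
    words = vocab.predict(descriptors)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(np.float32)
    return hist / max(hist.sum(), 1.0)

def fused_representation(img_gray, surf_vocab, freak_vocab):
    """Visual words fusion: concatenate the SURF and FREAK BoVW histograms."""
    surf_des, freak_des = extract(img_gray)
    return np.concatenate([bovw_histogram(surf_des, surf_vocab),
                           bovw_histogram(freak_des, freak_vocab)])

Retrieval under this sketch would then rank repository images by the distance between their fused histograms and that of the query image; the two vocabulary sizes are free parameters here, chosen by the user rather than taken from the paper.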