
Multi-Color Space Network for Salient Object Detection

Salient object detection (SOD) predicts which objects will attract the attention of an observer surveying a particular scene. Most state-of-the-art SOD methods are top-down mechanisms that apply fully convolutional networks (FCNs) of various structures to RGB images, extract features from them, and train a network. However, owing to the variety of factors that affect visual saliency, it is difficult to secure sufficient features from a single color space. Therefore, in this paper, we propose a multi-color space network (MCSNet) that detects salient objects using various saliency cues. First, the images are converted to the HSV and grayscale color spaces to obtain saliency cues beyond those provided by RGB color information. Each saliency cue is fed into two parallel VGG backbone networks to extract features. Contextual information is obtained from the extracted features using atrous spatial pyramid pooling (ASPP). The features from both paths are passed through an attention module, which highlights channel and spatial features. Finally, the final saliency map is generated using a step-by-step residual refinement module (RRM). Furthermore, the network is trained with a bidirectional loss to supervise the saliency detection results. Experiments on five public benchmark datasets show that the proposed network achieves superior performance in terms of both subjective results and objective metrics.
Bibliographic Details
Main Authors: Lee, Kyungjun; Jeong, Jechang
Format: Online, Article, Text
Language: English
Published: MDPI, 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9101518/
https://www.ncbi.nlm.nih.gov/pubmed/35591278
http://dx.doi.org/10.3390/s22093588
Collection: PubMed
ID: pubmed-9101518
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: Sensors (Basel)
Date Published: 2022-05-09
License: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).