Jointly Feature Learning and Selection for Robust Tracking via a Gating Mechanism
To achieve effective visual tracking, a robust feature representation composed of two separate components (i.e., feature learning and selection) for an object is one of the key issues. A common assumption in visual tracking is that the raw video sequences are clean, while real-world data contains significant noise and irrelevant patterns…
Main Authors: | Zhong, Bineng; Zhang, Jun; Wang, Pengfei; Du, Jixiang; Chen, Duansheng |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Public Library of Science, 2016 |
Subjects: | Research Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5004979/ https://www.ncbi.nlm.nih.gov/pubmed/27575684 http://dx.doi.org/10.1371/journal.pone.0161808 |
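The abstract's core idea is a point-wise gating mechanism: a sigmoid gate is computed for every feature unit and multiplies the learned activation, so task-irrelevant units are suppressed while task-relevant ones pass through. Below is a minimal sketch of such a gated layer; the single fully connected layer, the tanh feature nonlinearity, and all weight shapes are illustrative assumptions rather than the paper's exact CPGDN architecture.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class PointWiseGatedLayer:
    """Element-wise (point-wise) gated feature layer.

    A gate g in [0, 1] is computed per feature unit and multiplies the
    learned representation, so task-irrelevant units can be suppressed.
    The layer form and shapes are illustrative assumptions, not the
    paper's exact CPGDN architecture.
    """

    def __init__(self, in_dim, out_dim, rng=None):
        rng = rng or np.random.default_rng(0)
        scale = 1.0 / np.sqrt(in_dim)
        self.W_f = rng.uniform(-scale, scale, (out_dim, in_dim))  # feature weights
        self.b_f = np.zeros(out_dim)
        self.W_g = rng.uniform(-scale, scale, (out_dim, in_dim))  # gate weights
        self.b_g = np.zeros(out_dim)

    def forward(self, x):
        features = np.tanh(self.W_f @ x + self.b_f)  # raw learned features
        gates = sigmoid(self.W_g @ x + self.b_g)     # per-unit relevance in [0, 1]
        return gates * features                      # point-wise selection

# Toy usage: a 64-dim patch descriptor gated down to 32 selected features.
layer = PointWiseGatedLayer(in_dim=64, out_dim=32)
x = np.random.default_rng(1).standard_normal(64)
h = layer.forward(x)
print(h.shape)  # (32,)
```

Because the gate is computed from the same input as the features, the selection is dynamic: which units are kept changes from sample to sample rather than being a fixed feature subset.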
_version_ | 1782450843412332544 |
---|---|
author | Zhong, Bineng; Zhang, Jun; Wang, Pengfei; Du, Jixiang; Chen, Duansheng |
author_facet | Zhong, Bineng; Zhang, Jun; Wang, Pengfei; Du, Jixiang; Chen, Duansheng |
author_sort | Zhong, Bineng |
collection | PubMed |
description | To achieve effective visual tracking, a robust feature representation composed of two separate components (i.e., feature learning and selection) for an object is one of the key issues. A common assumption in visual tracking is that the raw video sequences are clean, while real-world data contains significant noise and irrelevant patterns. Consequently, the learned features may be noisy and not all relevant. To address this problem, we propose a novel visual tracking method via a convolutional point-wise gated deep network (CPGDN) that jointly performs feature learning and feature selection in a unified framework. The proposed method performs dynamic feature selection on raw features through a gating mechanism, and can therefore adaptively focus on task-relevant patterns (i.e., the target object) while ignoring task-irrelevant patterns (i.e., the background surrounding the target). Specifically, inspired by transfer learning, we first pre-train an object appearance model offline to learn generic image features and then transfer the rich feature hierarchies of the offline pre-trained CPGDN into online tracking. During online tracking, the pre-trained CPGDN model is fine-tuned to adapt to the specific object being tracked. Finally, to alleviate tracker drift, motivated by the observation that a visual target should be an object rather than a non-object, we incorporate an edge-box-based object proposal method to further improve tracking accuracy. Extensive evaluation on the widely used CVPR2013 tracking benchmark validates the robustness and effectiveness of the proposed method. |
format | Online Article Text |
id | pubmed-5004979 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2016 |
publisher | Public Library of Science |
record_format | MEDLINE/PubMed |
spelling | pubmed-5004979 2016-09-12 Jointly Feature Learning and Selection for Robust Tracking via a Gating Mechanism Zhong, Bineng; Zhang, Jun; Wang, Pengfei; Du, Jixiang; Chen, Duansheng PLoS One Research Article To achieve effective visual tracking, a robust feature representation composed of two separate components (i.e., feature learning and selection) for an object is one of the key issues. A common assumption in visual tracking is that the raw video sequences are clean, while real-world data contains significant noise and irrelevant patterns. Consequently, the learned features may be noisy and not all relevant. To address this problem, we propose a novel visual tracking method via a convolutional point-wise gated deep network (CPGDN) that jointly performs feature learning and feature selection in a unified framework. The proposed method performs dynamic feature selection on raw features through a gating mechanism, and can therefore adaptively focus on task-relevant patterns (i.e., the target object) while ignoring task-irrelevant patterns (i.e., the background surrounding the target). Specifically, inspired by transfer learning, we first pre-train an object appearance model offline to learn generic image features and then transfer the rich feature hierarchies of the offline pre-trained CPGDN into online tracking. During online tracking, the pre-trained CPGDN model is fine-tuned to adapt to the specific object being tracked. Finally, to alleviate tracker drift, motivated by the observation that a visual target should be an object rather than a non-object, we incorporate an edge-box-based object proposal method to further improve tracking accuracy. Extensive evaluation on the widely used CVPR2013 tracking benchmark validates the robustness and effectiveness of the proposed method. Public Library of Science 2016-08-30 /pmc/articles/PMC5004979/ /pubmed/27575684 http://dx.doi.org/10.1371/journal.pone.0161808 Text en © 2016 Zhong et al http://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. |
spellingShingle | Research Article; Zhong, Bineng; Zhang, Jun; Wang, Pengfei; Du, Jixiang; Chen, Duansheng; Jointly Feature Learning and Selection for Robust Tracking via a Gating Mechanism |
title | Jointly Feature Learning and Selection for Robust Tracking via a Gating Mechanism |
title_full | Jointly Feature Learning and Selection for Robust Tracking via a Gating Mechanism |
title_fullStr | Jointly Feature Learning and Selection for Robust Tracking via a Gating Mechanism |
title_full_unstemmed | Jointly Feature Learning and Selection for Robust Tracking via a Gating Mechanism |
title_short | Jointly Feature Learning and Selection for Robust Tracking via a Gating Mechanism |
title_sort | jointly feature learning and selection for robust tracking via a gating mechanism |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5004979/ https://www.ncbi.nlm.nih.gov/pubmed/27575684 http://dx.doi.org/10.1371/journal.pone.0161808 |
work_keys_str_mv | AT zhongbineng jointlyfeaturelearningandselectionforrobusttrackingviaagatingmechanism AT zhangjun jointlyfeaturelearningandselectionforrobusttrackingviaagatingmechanism AT wangpengfei jointlyfeaturelearningandselectionforrobusttrackingviaagatingmechanism AT dujixiang jointlyfeaturelearningandselectionforrobusttrackingviaagatingmechanism AT chenduansheng jointlyfeaturelearningandselectionforrobusttrackingviaagatingmechanism |
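The abstract also mentions countering tracker drift with an edge-box-based object proposal step, on the premise that a visual target should be an object rather than a non-object. The sketch below shows one plausible fusion: tracking candidates are re-ranked by blending the appearance-model score with the objectness of the best-overlapping proposal. The linear blend, the `alpha` weight, and the function names are assumptions for illustration, not the paper's published fusion rule.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two [x, y, w, h] boxes."""
    ax1, ay1, aw, ah = box_a
    bx1, by1, bw, bh = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax1 + aw, bx1 + bw), min(ay1 + ah, by1 + bh)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def rerank_with_proposals(candidates, tracker_scores, proposals,
                          objectness, alpha=0.7):
    """Blend appearance-model scores with objectness support.

    Each tracking candidate receives a bonus equal to the objectness of
    its best-overlapping object proposal; `alpha` weighs the two terms.
    This fusion rule is an illustrative assumption, not the paper's
    exact method.
    """
    blended = []
    for box, score in zip(candidates, tracker_scores):
        overlaps = [iou(box, p) * o for p, o in zip(proposals, objectness)]
        support = max(overlaps) if overlaps else 0.0
        blended.append(alpha * score + (1.0 - alpha) * support)
    best = int(np.argmax(blended))
    return candidates[best], blended[best]

# Toy usage: the first candidate has a slightly lower appearance score,
# but strong objectness support from an overlapping proposal wins out.
cands = [[10, 10, 40, 40], [60, 60, 40, 40]]
best_box, score = rerank_with_proposals(
    cands, tracker_scores=[0.55, 0.60],
    proposals=[[12, 12, 38, 38]], objectness=[0.9])
print(best_box, round(score, 3))  # [10, 10, 40, 40] 0.629
```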