Foreground Detection with Deeply Learned Multi-Scale Spatial-Temporal Features
Foreground detection, which extracts moving objects from videos, is an important and fundamental problem in video analysis. Classic methods often build background models based on hand-crafted features. Recent deep neural network (DNN) based methods can learn more effective image features through training…
| Main Authors: | Wang, Yao; Yu, Zujun; Zhu, Liqiang |
|---|---|
| Format: | Online Article Text |
| Language: | English |
| Published: | MDPI, 2018 |
| Subjects: | Article |
| Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6308466/ https://www.ncbi.nlm.nih.gov/pubmed/30518131 http://dx.doi.org/10.3390/s18124269 |
| Field | Value |
|---|---|
_version_ | 1783383195444576256 |
author | Wang, Yao; Yu, Zujun; Zhu, Liqiang |
author_facet | Wang, Yao; Yu, Zujun; Zhu, Liqiang |
author_sort | Wang, Yao |
collection | PubMed |
description | Foreground detection, which extracts moving objects from videos, is an important and fundamental problem in video analysis. Classic methods often build background models based on hand-crafted features. Recent deep neural network (DNN) based methods can learn more effective image features through training, but most of them either do not use temporal features or rely on simple hand-crafted temporal features. In this paper, we propose a new dual multi-scale 3D fully-convolutional neural network for the foreground detection problem. It uses an encoder–decoder structure to establish a mapping from image sequences to pixel-wise classification results. We also propose a two-stage training procedure, which trains the encoder and decoder separately to improve the training results. With its multi-scale architecture, the network can learn deep, hierarchical multi-scale features in both the spatial and temporal domains, which are shown to have good invariance to both spatial and temporal scales. We used the CDnet dataset, currently the largest foreground detection dataset, to evaluate our method. The experimental results show that the proposed method achieves state-of-the-art results in most test scenes compared to current DNN-based methods. |
format | Online Article Text |
id | pubmed-6308466 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2018 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-6308466 2019-01-04 Foreground Detection with Deeply Learned Multi-Scale Spatial-Temporal Features Wang, Yao Yu, Zujun Zhu, Liqiang Sensors (Basel) Article Foreground detection, which extracts moving objects from videos, is an important and fundamental problem in video analysis. Classic methods often build background models based on hand-crafted features. Recent deep neural network (DNN) based methods can learn more effective image features through training, but most of them either do not use temporal features or rely on simple hand-crafted temporal features. In this paper, we propose a new dual multi-scale 3D fully-convolutional neural network for the foreground detection problem. It uses an encoder–decoder structure to establish a mapping from image sequences to pixel-wise classification results. We also propose a two-stage training procedure, which trains the encoder and decoder separately to improve the training results. With its multi-scale architecture, the network can learn deep, hierarchical multi-scale features in both the spatial and temporal domains, which are shown to have good invariance to both spatial and temporal scales. We used the CDnet dataset, currently the largest foreground detection dataset, to evaluate our method. The experimental results show that the proposed method achieves state-of-the-art results in most test scenes compared to current DNN-based methods. MDPI 2018-12-04 /pmc/articles/PMC6308466/ /pubmed/30518131 http://dx.doi.org/10.3390/s18124269 Text en © 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Wang, Yao Yu, Zujun Zhu, Liqiang Foreground Detection with Deeply Learned Multi-Scale Spatial-Temporal Features |
title | Foreground Detection with Deeply Learned Multi-Scale Spatial-Temporal Features |
title_full | Foreground Detection with Deeply Learned Multi-Scale Spatial-Temporal Features |
title_fullStr | Foreground Detection with Deeply Learned Multi-Scale Spatial-Temporal Features |
title_full_unstemmed | Foreground Detection with Deeply Learned Multi-Scale Spatial-Temporal Features |
title_short | Foreground Detection with Deeply Learned Multi-Scale Spatial-Temporal Features |
title_sort | foreground detection with deeply learned multi-scale spatial-temporal features |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6308466/ https://www.ncbi.nlm.nih.gov/pubmed/30518131 http://dx.doi.org/10.3390/s18124269 |
work_keys_str_mv | AT wangyao foregrounddetectionwithdeeplylearnedmultiscalespatialtemporalfeatures AT yuzujun foregrounddetectionwithdeeplylearnedmultiscalespatialtemporalfeatures AT zhuliqiang foregrounddetectionwithdeeplylearnedmultiscalespatialtemporalfeatures |
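To make the record's description more concrete, the following is a minimal, loosely illustrative sketch of a 3D fully-convolutional encoder–decoder that maps a short image sequence to a pixel-wise foreground map, in the spirit of the architecture the abstract describes. It assumes PyTorch; all layer sizes, the clip length, and the class name `Toy3DEncoderDecoder` are hypothetical choices for illustration and are not the authors' actual network or training procedure (no multi-scale branches or two-stage training are reproduced here).

```python
# Illustrative sketch only (assumes PyTorch). Not the network from the paper:
# it merely shows the general shape of a 3D fully-convolutional
# encoder-decoder that maps a frame sequence to a per-pixel foreground map.
import torch
import torch.nn as nn


class Toy3DEncoderDecoder(nn.Module):
    def __init__(self, in_channels: int = 3):
        super().__init__()
        # Encoder: 3D convolutions learn joint spatial-temporal features;
        # pooling only over the spatial dimensions keeps the temporal length.
        self.encoder = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),
        )
        # Decoder: transposed 3D convolutions restore the input resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(32, 16, kernel_size=(1, 2, 2), stride=(1, 2, 2)),
            nn.ReLU(inplace=True),
            nn.ConvTranspose3d(16, 8, kernel_size=(1, 2, 2), stride=(1, 2, 2)),
            nn.ReLU(inplace=True),
        )
        # 1x1x1 convolution produces a single foreground logit per pixel.
        self.head = nn.Conv3d(8, 1, kernel_size=1)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, channels, frames, height, width)
        features = self.encoder(clip)
        restored = self.decoder(features)
        logits = self.head(restored)                # (batch, 1, frames, H, W)
        # Collapse the temporal axis to one mask per clip.
        return torch.sigmoid(logits.mean(dim=2))    # (batch, 1, H, W)


if __name__ == "__main__":
    model = Toy3DEncoderDecoder()
    clip = torch.randn(1, 3, 8, 64, 64)  # one 8-frame RGB clip
    mask = model(clip)
    print(mask.shape)  # torch.Size([1, 1, 64, 64])
```

In a training setup of this kind, the predicted mask would typically be compared against the ground-truth foreground mask with a binary cross-entropy loss; the paper's own two-stage procedure, which trains the encoder and decoder separately, is not shown here.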