
Fast Visual Tracking Based on Convolutional Networks

Bibliographic Details
Main Authors: Huang, Ren-Jie, Tsao, Chun-Yu, Kuo, Yi-Pin, Lai, Yi-Chung, Liu, Chi Chung, Tu, Zhe-Wei, Wang, Jung-Hua, Chang, Chung-Cheng
Format: Online Article Text
Language: English
Published: MDPI 2018
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6111798/
https://www.ncbi.nlm.nih.gov/pubmed/30042339
http://dx.doi.org/10.3390/s18082405
Description
Summary: Recently, an upsurge of deep learning has provided a new direction for the field of computer vision and visual tracking. However, the expensive offline training time and the large number of images required by deep learning have greatly hindered progress. This paper aims to further improve the computational performance of CNT, which is reported to deliver 5 fps in visual tracking. To this end, we propose a method called Fast-CNT, which differs from CNT in three aspects: first, an adaptive k value (rather than a constant 100) is determined for each input video; second, the background filters used in CNT are omitted to save computation time without affecting performance; third, SURF feature points are used in conjunction with the particle filter to address the drift problem in CNT. Extensive experimental results on land and undersea video sequences show that Fast-CNT outperforms CNT by 2 to 10 times in terms of computational efficiency.
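
The third modification, pairing SURF feature points with the particle filter, can be illustrated with a short sketch. The snippet below is a hypothetical illustration rather than the authors' implementation: it assumes an OpenCV contrib build (cv2.xfeatures2d, which needs the non-free modules), a bootstrap particle filter whose state is a 2-D target center, and an invented helper name surf_anchor. It shows one plausible way SURF matches between the target template and the current frame could pull drifting particles back toward the matched target location.

# Hypothetical sketch (not the paper's code): using SURF matches to curb
# particle-filter drift. Requires opencv-contrib-python with non-free modules.
import cv2
import numpy as np

def surf_anchor(template_gray, frame_gray, particles, weights, blend=0.5):
    """Pull particle states toward a SURF-estimated target center.

    particles : (N, 2) array of candidate (x, y) target centers
    weights   : (N,)   particle weights, re-normalized before returning
    blend     : strength of the correction (0 disables it)
    """
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp_t, des_t = surf.detectAndCompute(template_gray, None)
    kp_f, des_f = surf.detectAndCompute(frame_gray, None)
    if des_t is None or des_f is None:
        return particles, weights          # no features: leave the filter as-is

    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_t, des_f, k=2)
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < 0.7 * m[1].distance]  # Lowe ratio test
    if len(good) < 4:
        return particles, weights          # too few matches to trust

    # Estimate the target center as the mean location of matched frame keypoints.
    center = np.mean([kp_f[m.trainIdx].pt for m in good], axis=0)

    # Blend drifting particles toward the SURF estimate ...
    particles = (1.0 - blend) * particles + blend * center
    # ... and down-weight particles that remain far from it.
    dist = np.linalg.norm(particles - center, axis=1)
    weights = weights * np.exp(-0.5 * (dist / (dist.std() + 1e-6)) ** 2)
    return particles, weights / weights.sum()

In this sketch the SURF correction acts as a soft re-anchoring step between the prediction and resampling stages of the particle filter; the blend factor and the Lowe ratio threshold are illustrative values, not figures taken from the paper.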