
ViTT: Vision Transformer Tracker

This paper presents a new model for multi-object tracking (MOT) with a transformer. MOT is a spatiotemporal correlation task among interest objects and one of the crucial technologies of multi-unmanned aerial vehicles (Multi-UAV). The transformer is a self-attentional codec architecture that has been successfully used in natural language processing and is emerging in computer vision. This study proposes the Vision Transformer Tracker (ViTT), which uses a transformer encoder as the backbone and takes images directly as input. Compared with convolution networks, it can model global context at every encoder layer from the beginning, which addresses the challenges of occlusion and complex scenarios. The model simultaneously outputs object locations and corresponding appearance embeddings in a shared network through multi-task learning. Our work demonstrates the superiority and effectiveness of transformer-based networks in complex computer vision tasks and paves the way for applying the pure transformer in MOT. We evaluated the proposed model on the MOT16 dataset, achieving 65.7% MOTA, and obtained a competitive result compared with other typical multi-object trackers.
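The architecture the abstract describes, a transformer-encoder backbone whose shared features feed both a location head and an appearance-embedding head, can be sketched minimally. The following is an illustrative NumPy toy under assumed sizes (16 patch tokens, dimension 32, one single-head encoder layer, 4-dim boxes, 8-dim embeddings), not the authors' implementation:

```python
# Illustrative sketch (not the authors' code): a tiny transformer-encoder
# "backbone" over image patch tokens, with two heads sharing the same
# features, one for box/location regression and one for appearance
# embeddings, mirroring the multi-task design described in the abstract.
# All sizes (token count, dimensions) are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, Wq, Wk, Wv):
    """Single-head self-attention: every patch token attends to every
    other token, giving the global context the abstract emphasizes."""
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    return softmax(scores) @ v

# 16 patch tokens of dimension 32 stand in for an embedded input image.
n_tokens, d = 16, 32
tokens = rng.normal(size=(n_tokens, d))

# One encoder layer (attention + residual) acts as the shared backbone.
Wq, Wk, Wv = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))
features = tokens + self_attention(tokens, Wq, Wk, Wv)

# Two task heads read the SAME features (multi-task learning):
W_loc = rng.normal(size=(d, 4)) * 0.1    # per-token box (cx, cy, w, h)
W_emb = rng.normal(size=(d, 8)) * 0.1    # per-token appearance embedding
locations = features @ W_loc             # shape (16, 4)
embeddings = features @ W_emb            # shape (16, 8)
print(locations.shape, embeddings.shape) # (16, 4) (16, 8)
```

In the paper's setting the embeddings would be matched across frames to link detections into tracks; here they are untrained random projections shown only to illustrate the shared-backbone, two-head layout.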


Bibliographic Details
Main Authors: Zhu, Xiaoning; Jia, Yannan; Jian, Sun; Gu, Lize; Pu, Zhang
Format: Online Article (Text)
Language: English
Published: MDPI 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8402321/
https://www.ncbi.nlm.nih.gov/pubmed/34451049
http://dx.doi.org/10.3390/s21165608
description This paper presents a new model for multi-object tracking (MOT) with a transformer. MOT is a spatiotemporal correlation task among interest objects and one of the crucial technologies of multi-unmanned aerial vehicles (Multi-UAV). The transformer is a self-attentional codec architecture that has been successfully used in natural language processing and is emerging in computer vision. This study proposes the Vision Transformer Tracker (ViTT), which uses a transformer encoder as the backbone and takes images directly as input. Compared with convolution networks, it can model global context at every encoder layer from the beginning, which addresses the challenges of occlusion and complex scenarios. The model simultaneously outputs object locations and corresponding appearance embeddings in a shared network through multi-task learning. Our work demonstrates the superiority and effectiveness of transformer-based networks in complex computer vision tasks and paves the way for applying the pure transformer in MOT. We evaluated the proposed model on the MOT16 dataset, achieving 65.7% MOTA, and obtained a competitive result compared with other typical multi-object trackers.
Published in Sensors (Basel) by MDPI, 2021-08-20 (PMC8402321). © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).