Exploring Self-Supervised Vision Transformers for Gait Recognition in the Wild

The manner of walking (gait) is a powerful biometric that is used as a unique fingerprinting method, allowing unobtrusive behavioral analytics to be performed at a distance without subject cooperation. As opposed to more traditional biometric authentication methods, gait analysis does not require explicit cooperation of the subject and can be performed in low-resolution settings, without requiring the subject’s face to be unobstructed/clearly visible. Most current approaches are developed in a controlled setting, with clean, gold-standard annotated data, which powered the development of neural architectures for recognition and classification. Only recently has gait analysis ventured into using more diverse, large-scale, and realistic datasets to pretrain networks in a self-supervised manner. A self-supervised training regime enables learning diverse and robust gait representations without expensive manual human annotations. Prompted by the ubiquitous use of the transformer model in all areas of deep learning, including computer vision, in this work we explore the use of five different vision transformer architectures directly applied to self-supervised gait recognition. We adapt and pretrain the simple ViT, CaiT, CrossFormer, Token2Token, and TwinsSVT on two different large-scale gait datasets: GREW and DenseGait. We provide extensive results for zero-shot and fine-tuning on two benchmark gait recognition datasets, CASIA-B and FVG, and explore the relationship between the amount of spatial and temporal gait information used by the vision transformer. Our results show that in designing transformer models for processing motion, using a hierarchical approach (i.e., CrossFormer models) on finer-grained movement fares comparatively better than previous whole-skeleton approaches.

Bibliographic Details
Main Authors: Cosma, Adrian, Catruna, Andy, Radoi, Emilian
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10007350/
https://www.ncbi.nlm.nih.gov/pubmed/36904884
http://dx.doi.org/10.3390/s23052680
author Cosma, Adrian
Catruna, Andy
Radoi, Emilian
author_facet Cosma, Adrian
Catruna, Andy
Radoi, Emilian
author_sort Cosma, Adrian
collection PubMed
description The manner of walking (gait) is a powerful biometric that is used as a unique fingerprinting method, allowing unobtrusive behavioral analytics to be performed at a distance without subject cooperation. As opposed to more traditional biometric authentication methods, gait analysis does not require explicit cooperation of the subject and can be performed in low-resolution settings, without requiring the subject’s face to be unobstructed/clearly visible. Most current approaches are developed in a controlled setting, with clean, gold-standard annotated data, which powered the development of neural architectures for recognition and classification. Only recently has gait analysis ventured into using more diverse, large-scale, and realistic datasets to pretrain networks in a self-supervised manner. A self-supervised training regime enables learning diverse and robust gait representations without expensive manual human annotations. Prompted by the ubiquitous use of the transformer model in all areas of deep learning, including computer vision, in this work we explore the use of five different vision transformer architectures directly applied to self-supervised gait recognition. We adapt and pretrain the simple ViT, CaiT, CrossFormer, Token2Token, and TwinsSVT on two different large-scale gait datasets: GREW and DenseGait. We provide extensive results for zero-shot and fine-tuning on two benchmark gait recognition datasets, CASIA-B and FVG, and explore the relationship between the amount of spatial and temporal gait information used by the vision transformer. Our results show that in designing transformer models for processing motion, using a hierarchical approach (i.e., CrossFormer models) on finer-grained movement fares comparatively better than previous whole-skeleton approaches.
format Online
Article
Text
id pubmed-10007350
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-10007350 2023-03-12 Exploring Self-Supervised Vision Transformers for Gait Recognition in the Wild Cosma, Adrian Catruna, Andy Radoi, Emilian Sensors (Basel) Article The manner of walking (gait) is a powerful biometric that is used as a unique fingerprinting method, allowing unobtrusive behavioral analytics to be performed at a distance without subject cooperation. As opposed to more traditional biometric authentication methods, gait analysis does not require explicit cooperation of the subject and can be performed in low-resolution settings, without requiring the subject’s face to be unobstructed/clearly visible. Most current approaches are developed in a controlled setting, with clean, gold-standard annotated data, which powered the development of neural architectures for recognition and classification. Only recently has gait analysis ventured into using more diverse, large-scale, and realistic datasets to pretrain networks in a self-supervised manner. A self-supervised training regime enables learning diverse and robust gait representations without expensive manual human annotations. Prompted by the ubiquitous use of the transformer model in all areas of deep learning, including computer vision, in this work we explore the use of five different vision transformer architectures directly applied to self-supervised gait recognition. We adapt and pretrain the simple ViT, CaiT, CrossFormer, Token2Token, and TwinsSVT on two different large-scale gait datasets: GREW and DenseGait. We provide extensive results for zero-shot and fine-tuning on two benchmark gait recognition datasets, CASIA-B and FVG, and explore the relationship between the amount of spatial and temporal gait information used by the vision transformer. Our results show that in designing transformer models for processing motion, using a hierarchical approach (i.e., CrossFormer models) on finer-grained movement fares comparatively better than previous whole-skeleton approaches. MDPI 2023-03-01 /pmc/articles/PMC10007350/ /pubmed/36904884 http://dx.doi.org/10.3390/s23052680 Text en © 2023 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Cosma, Adrian
Catruna, Andy
Radoi, Emilian
Exploring Self-Supervised Vision Transformers for Gait Recognition in the Wild
title Exploring Self-Supervised Vision Transformers for Gait Recognition in the Wild
title_full Exploring Self-Supervised Vision Transformers for Gait Recognition in the Wild
title_fullStr Exploring Self-Supervised Vision Transformers for Gait Recognition in the Wild
title_full_unstemmed Exploring Self-Supervised Vision Transformers for Gait Recognition in the Wild
title_short Exploring Self-Supervised Vision Transformers for Gait Recognition in the Wild
title_sort exploring self-supervised vision transformers for gait recognition in the wild
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10007350/
https://www.ncbi.nlm.nih.gov/pubmed/36904884
http://dx.doi.org/10.3390/s23052680
work_keys_str_mv AT cosmaadrian exploringselfsupervisedvisiontransformersforgaitrecognitioninthewild
AT catrunaandy exploringselfsupervisedvisiontransformersforgaitrecognitioninthewild
AT radoiemilian exploringselfsupervisedvisiontransformersforgaitrecognitioninthewild