WildGait: Learning Gait Representations from Raw Surveillance Streams

SIMPLE SUMMARY: In this work, we explore self-supervised pretraining for gait recognition. We gather the largest dataset to date of real-world gait sequences automatically annotated through pose tracking (UWG), which offers realistic confounding factors as opposed to current datasets. Results highlight strong performance in scenarios with little training data, and state-of-the-art accuracy on skeleton-based gait recognition when all available training data is used.

ABSTRACT: Gait has important advantages as a biometric for person identification: it is non-invasive and unobtrusive, it does not require cooperation, and it is less likely to be obscured than other biometrics. Existing methods for gait recognition require cooperative scenarios, in which a single person walks multiple times in a straight line in front of a camera. We address the challenges of real-world scenarios in which camera feeds capture multiple people, who in most cases pass in front of the camera only once. We also address privacy concerns by using only the motion information of walking individuals, with no identifiable appearance-based information. We propose a self-supervised learning framework, WildGait, which pre-trains a Spatio-Temporal Graph Convolutional Network on a large number of automatically annotated skeleton sequences obtained from raw, real-world surveillance streams in order to learn useful gait signatures. We collected and compiled Uncooperative Wild Gait (UWG), the largest pretraining dataset to date of anonymized walking skeletons, containing over 38k tracklets of 2D walking skeletons, and we make it available to the research community. Our results surpass the current state of the art in pose-based gait recognition, and the proposed method is reliable for training gait recognition models in unconstrained environments, especially in settings with scarce amounts of annotated data.
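
The record contains no code, but the approach the abstract describes (embedding tracklets of 2D walking skeletons with a spatio-temporal graph convolutional encoder) can be illustrated with a minimal sketch. The PyTorch snippet below is not the authors' implementation: the joint count, adjacency matrix, sequence length, layer sizes, and embedding dimension are all assumptions, and the self-supervised pretraining objective used by WildGait is described in the linked paper, not in this record.

```python
# Minimal sketch (not the authors' released code) of embedding a tracklet of 2D
# walking skeletons with a spatio-temporal graph convolutional encoder.
# Joint count, adjacency, sequence length and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_JOINTS = 18   # e.g. a COCO-style 2D pose skeleton (assumption)
SEQ_LEN = 48      # frames per tracklet (assumption)

# Joint-adjacency matrix with self-loops only, row-normalized. A real skeleton graph
# would also connect anatomically adjacent joints (elbow-wrist, knee-ankle, ...).
ADJ = torch.eye(NUM_JOINTS)
ADJ = ADJ / ADJ.sum(dim=1, keepdim=True)


class STGCNBlock(nn.Module):
    """One spatial graph convolution followed by a temporal convolution."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.spatial = nn.Conv2d(in_ch, out_ch, kernel_size=1)  # per-joint channel mixing
        self.temporal = nn.Conv2d(out_ch, out_ch, kernel_size=(9, 1), padding=(4, 0))
        self.relu = nn.ReLU()

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time, joints)
        x = self.spatial(x)
        x = torch.einsum("bctj,jk->bctk", x, adj)  # aggregate features over the joint graph
        return self.relu(self.temporal(x))


class GaitEncoder(nn.Module):
    """Maps an (x, y, confidence) skeleton sequence to a fixed-size gait embedding."""

    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.block1 = STGCNBlock(3, 64)
        self.block2 = STGCNBlock(64, 128)
        self.head = nn.Linear(128, embed_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        x = self.block1(x, adj)
        x = self.block2(x, adj)
        x = x.mean(dim=(2, 3))                    # global average over time and joints
        return F.normalize(self.head(x), dim=1)   # unit-length gait signature


if __name__ == "__main__":
    # A batch of 8 tracklets: (x, y, confidence) per joint, 48 frames, 18 joints.
    tracklets = torch.randn(8, 3, SEQ_LEN, NUM_JOINTS)
    encoder = GaitEncoder()
    embeddings = encoder(tracklets, ADJ)
    print(embeddings.shape)  # torch.Size([8, 128])
```

In the framework the abstract describes, an encoder of this kind would be pre-trained on the automatically annotated UWG tracklets and its embeddings used as gait signatures for recognition; consult the paper at the DOI below for the actual architecture and training objective.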

Bibliographic Details
Main Authors: Cosma, Adrian; Radoi, Ion Emilian
Journal: Sensors (Basel)
Format: Online article (text)
Language: English
Published: MDPI, 15 December 2021
Subjects: Article
Online Access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8705742/
https://www.ncbi.nlm.nih.gov/pubmed/34960479
http://dx.doi.org/10.3390/s21248387
Collection: PubMed
Record ID: pubmed-8705742
Institution: National Center for Biotechnology Information
Record format: MEDLINE/PubMed
License: © 2021 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).