A View Transformation Model Based on Sparse and Redundant Representation for Human Gait Recognition
| Field | Value |
|---|---|
| Main Authors | |
| Format | Online Article Text |
| Language | English |
| Published | Wolters Kluwer - Medknow, 2020 |
| Subjects | |
| Online Access | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7528990/ , https://www.ncbi.nlm.nih.gov/pubmed/33062606 , http://dx.doi.org/10.4103/jmss.JMSS_59_19 |
Summary: BACKGROUND: Human gait, as an effective behavioral biometric identifier, has received much attention in recent years. However, there are challenges that reduce its performance. In this work, we aim to improve the performance of gait recognition systems under variations in view angle, which present one of the major challenges to gait algorithms. METHODS: We propose employing a view transformation model based on sparse and redundant (SR) representation. More specifically, our proposed method trains a set of corresponding dictionaries for each viewing angle, which are then used in the identification of a probe. In particular, the view transformation is performed by first obtaining the SR representation of the input image using the appropriate dictionary, and then multiplying this representation by the dictionary of the destination angle to obtain the corresponding image at the intended angle. RESULTS: Experiments performed on the CASIA Gait Database, Dataset B, support the satisfactory performance of our method. In most tests, the proposed method outperforms the compared methods, especially for large changes in view angle and in terms of average recognition rate. CONCLUSION: A comparison with state-of-the-art methods in the literature showcases the superior performance of the proposed method, especially in the case of large variations in view angle.
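The abstract describes the view transformation only at a high level. As a rough illustration of the two-step mapping it outlines (sparse coding over a source-angle dictionary, then reconstruction with the corresponding destination-angle dictionary), here is a minimal Python sketch. It is not the paper's implementation: the joint training of view-paired dictionaries is not shown, and all names (`transform_view`, `D_src`, `D_dst`) and parameter choices are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import sparse_encode

def transform_view(x_src, D_src, D_dst, n_nonzero=10):
    """Map a gait image from the source view angle to the destination view.

    Sketch of the transformation described in the abstract:
    1. Find the sparse code of x_src over the source-angle dictionary.
    2. Multiply that code by the destination-angle dictionary (whose atoms
       are assumed to correspond one-to-one with the source atoms) to
       synthesize the image at the intended angle.
    """
    # sparse_encode expects samples as rows and dictionary atoms as rows.
    alpha = sparse_encode(
        x_src.reshape(1, -1),
        D_src,
        algorithm="omp",
        n_nonzero_coefs=n_nonzero,
    )
    # Reconstruction with the destination dictionary yields the transformed image.
    x_dst = alpha @ D_dst
    return x_dst.ravel()

# Toy usage with random data; real dictionaries would come from training on
# view-paired gait images (e.g., gait energy images), which is not shown here.
rng = np.random.default_rng(0)
n_atoms, n_features = 64, 1024
D_src = rng.standard_normal((n_atoms, n_features))
D_dst = rng.standard_normal((n_atoms, n_features))
# Normalize atoms to unit norm, as OMP-based sparse coding assumes.
D_src /= np.linalg.norm(D_src, axis=1, keepdims=True)
D_dst /= np.linalg.norm(D_dst, axis=1, keepdims=True)

x = rng.standard_normal(n_features)          # stand-in for a flattened gait image
x_transformed = transform_view(x, D_src, D_dst)
print(x_transformed.shape)                   # (1024,)
```

Under this reading, the sparse code acts as a view-invariant intermediate representation: because the two dictionaries are trained on corresponding images, the same coefficients reconstruct the same subject at either angle.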