
A View Transformation Model Based on Sparse and Redundant Representation for Human Gait Recognition

Bibliographic Details
Main Authors: Ghebleh, Abbas, Moghaddam, Mohsen Ebrahimi
Format: Online Article Text
Language: English
Published: Wolters Kluwer - Medknow 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7528990/
https://www.ncbi.nlm.nih.gov/pubmed/33062606
http://dx.doi.org/10.4103/jmss.JMSS_59_19
_version_ 1783589357686358016
author Ghebleh, Abbas
Moghaddam, Mohsen Ebrahimi
author_facet Ghebleh, Abbas
Moghaddam, Mohsen Ebrahimi
author_sort Ghebleh, Abbas
collection PubMed
description BACKGROUND: Human gait has received much attention in recent years as an effective behavioral biometric identifier. However, several challenges reduce its performance. In this work, we aim to improve the performance of gait recognition systems under variations in view angle, which present one of the major challenges to gait algorithms. METHODS: We propose employing a view transformation model based on sparse and redundant (SR) representation. More specifically, the proposed method trains a set of corresponding dictionaries for each viewing angle, which are then used in the identification of a probe. In particular, the view transformation is performed by first obtaining the SR representation of the input image using the appropriate dictionary, then multiplying this representation by the dictionary of the destination angle to obtain a corresponding image at the intended angle. RESULTS: Experiments performed on the CASIA Gait Database, Dataset B, support the satisfactory performance of our method. In most tests, the proposed method outperforms the compared methods, especially for large changes in view angle and in average recognition rate. CONCLUSION: A comparison with state-of-the-art methods in the literature showcases the superior performance of the proposed method, especially in the case of large variations in view angle.
format Online
Article
Text
id pubmed-7528990
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher Wolters Kluwer - Medknow
record_format MEDLINE/PubMed
spelling pubmed-75289902020-10-13 A View Transformation Model Based on Sparse and Redundant Representation for Human Gait Recognition Ghebleh, Abbas Moghaddam, Mohsen Ebrahimi J Med Signals Sens Original Article BACKGROUND: Human gait has received much attention in recent years as an effective behavioral biometric identifier. However, several challenges reduce its performance. In this work, we aim to improve the performance of gait recognition systems under variations in view angle, which present one of the major challenges to gait algorithms. METHODS: We propose employing a view transformation model based on sparse and redundant (SR) representation. More specifically, the proposed method trains a set of corresponding dictionaries for each viewing angle, which are then used in the identification of a probe. In particular, the view transformation is performed by first obtaining the SR representation of the input image using the appropriate dictionary, then multiplying this representation by the dictionary of the destination angle to obtain a corresponding image at the intended angle. RESULTS: Experiments performed on the CASIA Gait Database, Dataset B, support the satisfactory performance of our method. In most tests, the proposed method outperforms the compared methods, especially for large changes in view angle and in average recognition rate. CONCLUSION: A comparison with state-of-the-art methods in the literature showcases the superior performance of the proposed method, especially in the case of large variations in view angle.
Wolters Kluwer - Medknow 2020-07-03 /pmc/articles/PMC7528990/ /pubmed/33062606 http://dx.doi.org/10.4103/jmss.JMSS_59_19 Text en Copyright: © 2020 Journal of Medical Signals & Sensors http://creativecommons.org/licenses/by-nc-sa/4.0 This is an open access journal, and articles are distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 License, which allows others to remix, tweak, and build upon the work non-commercially, as long as appropriate credit is given and the new creations are licensed under the identical terms.
spellingShingle Original Article
Ghebleh, Abbas
Moghaddam, Mohsen Ebrahimi
A View Transformation Model Based on Sparse and Redundant Representation for Human Gait Recognition
title A View Transformation Model Based on Sparse and Redundant Representation for Human Gait Recognition
title_full A View Transformation Model Based on Sparse and Redundant Representation for Human Gait Recognition
title_fullStr A View Transformation Model Based on Sparse and Redundant Representation for Human Gait Recognition
title_full_unstemmed A View Transformation Model Based on Sparse and Redundant Representation for Human Gait Recognition
title_short A View Transformation Model Based on Sparse and Redundant Representation for Human Gait Recognition
title_sort view transformation model based on sparse and redundant representation for human gait recognition
topic Original Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7528990/
https://www.ncbi.nlm.nih.gov/pubmed/33062606
http://dx.doi.org/10.4103/jmss.JMSS_59_19
work_keys_str_mv AT gheblehabbas aviewtransformationmodelbasedonsparseandredundantrepresentationforhumangaitrecognition
AT moghaddammohsenebrahimi aviewtransformationmodelbasedonsparseandredundantrepresentationforhumangaitrecognition
AT gheblehabbas viewtransformationmodelbasedonsparseandredundantrepresentationforhumangaitrecognition
AT moghaddammohsenebrahimi viewtransformationmodelbasedonsparseandredundantrepresentationforhumangaitrecognition