Sign2Pose: A Pose-Based Approach for Gloss Prediction Using a Transformer Model
Word-level sign language recognition (WSLR) is the backbone for continuous sign language recognition (CSLR) that infers glosses from sign videos. Finding the relevant gloss from the sign sequence and detecting explicit boundaries of the glosses from sign videos is a persistent challenge. In this pap...
Main Authors: | Eunice, Jennifer; J, Andrew; Sei, Yuichi; Hemanth, D. Jude |
Format: | Online Article Text |
Language: | English |
Published: | MDPI 2023 |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10007493/ https://www.ncbi.nlm.nih.gov/pubmed/36905057 http://dx.doi.org/10.3390/s23052853 |
_version_ | 1784905535152193536 |
author | Eunice, Jennifer; J, Andrew; Sei, Yuichi; Hemanth, D. Jude
author_facet | Eunice, Jennifer; J, Andrew; Sei, Yuichi; Hemanth, D. Jude
author_sort | Eunice, Jennifer |
collection | PubMed |
description | Word-level sign language recognition (WSLR) is the backbone for continuous sign language recognition (CSLR) that infers glosses from sign videos. Finding the relevant gloss from the sign sequence and detecting explicit boundaries of the glosses in sign videos is a persistent challenge. In this paper, we propose a systematic approach for gloss prediction in WSLR using the Sign2Pose gloss prediction transformer model. The primary goal of this work is to enhance WSLR’s gloss prediction accuracy with reduced time and computational overhead. The proposed approach uses hand-crafted features rather than automated feature extraction, which is computationally expensive and less accurate. A modified key frame extraction technique is proposed that uses histogram difference and Euclidean distance metrics to select informative frames and drop redundant ones. To enhance the model’s generalization ability, pose vector augmentation using perspective transformation along with joint angle rotation is performed. Further, for normalization, we employed YOLOv3 (You Only Look Once) to detect the signing space and track the hand gestures of the signers in the frames. Experiments with the proposed model on the WLASL datasets achieved top-1 recognition accuracy of 80.9% on WLASL100 and 64.21% on WLASL300. The performance of the proposed model surpasses state-of-the-art approaches. The integration of key frame extraction, augmentation, and pose estimation improved the performance of the proposed gloss prediction model by increasing the model’s precision in locating minor variations in signers’ body posture. We observed that introducing YOLOv3 improved gloss prediction accuracy and helped prevent model overfitting. Overall, the proposed model showed 17% improved performance on the WLASL100 dataset. |
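The key frame extraction step described in the abstract (histogram difference plus Euclidean distance between pose vectors, used to drop redundant frames) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names, the thresholds, and the simple absolute-difference histogram metric are hypothetical choices made for clarity.

```python
import numpy as np

def histogram_difference(frame_a, frame_b, bins=32):
    """Sum of absolute differences between normalized grayscale
    intensity histograms of two frames."""
    h_a, _ = np.histogram(frame_a, bins=bins, range=(0, 255), density=True)
    h_b, _ = np.histogram(frame_b, bins=bins, range=(0, 255), density=True)
    return float(np.abs(h_a - h_b).sum())

def select_key_frames(frames, poses=None, hist_thresh=0.1, pose_thresh=10.0):
    """Keep the first frame; keep a later frame only if it differs
    enough from the last kept frame, either in intensity histogram or
    in Euclidean distance between flattened pose keypoint vectors.
    Frames failing both tests are treated as redundant and dropped."""
    kept = [0]
    for i in range(1, len(frames)):
        hist_d = histogram_difference(frames[kept[-1]], frames[i])
        pose_d = 0.0
        if poses is not None:
            pose_d = float(np.linalg.norm(poses[i] - poses[kept[-1]]))
        if hist_d > hist_thresh or pose_d > pose_thresh:
            kept.append(i)  # informative frame: keep it
    return kept

# Example: the middle frame is nearly identical to the first and is dropped.
frames = [np.zeros((8, 8)), np.zeros((8, 8)), np.full((8, 8), 200.0)]
print(select_key_frames(frames))  # [0, 2]
```

Dropping near-duplicate frames this way shortens the pose sequence fed to the transformer, which is one source of the reduced time and computational overhead the abstract claims.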
format | Online Article Text |
id | pubmed-10007493 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-10007493 2023-03-12 Sign2Pose: A Pose-Based Approach for Gloss Prediction Using a Transformer Model Eunice, Jennifer; J, Andrew; Sei, Yuichi; Hemanth, D. Jude Sensors (Basel) Article
MDPI 2023-03-06 /pmc/articles/PMC10007493/ /pubmed/36905057 http://dx.doi.org/10.3390/s23052853 Text en © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Eunice, Jennifer; J, Andrew; Sei, Yuichi; Hemanth, D. Jude Sign2Pose: A Pose-Based Approach for Gloss Prediction Using a Transformer Model |
title | Sign2Pose: A Pose-Based Approach for Gloss Prediction Using a Transformer Model |
title_full | Sign2Pose: A Pose-Based Approach for Gloss Prediction Using a Transformer Model |
title_fullStr | Sign2Pose: A Pose-Based Approach for Gloss Prediction Using a Transformer Model |
title_full_unstemmed | Sign2Pose: A Pose-Based Approach for Gloss Prediction Using a Transformer Model |
title_short | Sign2Pose: A Pose-Based Approach for Gloss Prediction Using a Transformer Model |
title_sort | sign2pose: a pose-based approach for gloss prediction using a transformer model |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10007493/ https://www.ncbi.nlm.nih.gov/pubmed/36905057 http://dx.doi.org/10.3390/s23052853 |
work_keys_str_mv | AT eunicejennifer sign2poseaposebasedapproachforglosspredictionusingatransformermodel AT jandrew sign2poseaposebasedapproachforglosspredictionusingatransformermodel AT seiyuichi sign2poseaposebasedapproachforglosspredictionusingatransformermodel AT hemanthdjude sign2poseaposebasedapproachforglosspredictionusingatransformermodel |