
A Portable Sign Language Collection and Translation Platform with Smart Watches Using a BLSTM-Based Multi-Feature Framework

Bibliographic Details
Main Authors: Zhou, Zhenxing, Tam, Vincent W. L., Lam, Edmund Y.
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8877205/
https://www.ncbi.nlm.nih.gov/pubmed/35208457
http://dx.doi.org/10.3390/mi13020333
_version_ 1784658364804890624
author Zhou, Zhenxing
Tam, Vincent W. L.
Lam, Edmund Y.
author_facet Zhou, Zhenxing
Tam, Vincent W. L.
Lam, Edmund Y.
author_sort Zhou, Zhenxing
collection PubMed
description Continuous sign language recognition (CSLR) using different types of sensors to precisely recognize sign language in real time is a very challenging but important research direction in sensor technology. Many previous methods are vision-based, with computationally intensive algorithms that must process a large number of image/video frames possibly contaminated with noise, which can result in a large translation delay. On the other hand, gesture-based CSLR relying on hand movement data captured by wearable devices may require fewer computational resources and less translation time, making it better suited to providing instant translation during real-world communication. However, the limited information provided by wearable sensors often affects the overall performance of such systems. To tackle this issue, we propose a bidirectional long short-term memory (BLSTM)-based multi-feature framework for conducting gesture-based CSLR precisely with two smart watches. In this framework, multiple sets of input features are extracted from the collected gesture data to provide a diverse spectrum of valuable information to the underlying BLSTM model for CSLR. To demonstrate the effectiveness of the proposed framework, we test it on a highly challenging new dataset of Hong Kong Sign Language (HKSL), in which hand movement data are collected from six individual signers for 50 different sentences. The experimental results reveal that the proposed framework attains a much lower word error rate than other existing machine learning or deep learning approaches for gesture-based CSLR. Based on this framework, we further propose a portable sign language collection and translation platform, which can simplify the procedure of collecting gesture-based sign language datasets and recognize sign language from smart watch data in real time, in order to break the communication barrier for sign language users.
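The evaluation metric named in the abstract, word error rate (WER), is the standard measure for CSLR: the word-level edit distance (insertions, deletions, substitutions) between the recognized sentence and the reference gloss sequence, normalized by the reference length. The paper does not give its evaluation code; the following is a minimal, generic sketch of the metric, not the authors' implementation:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER: word-level Levenshtein distance divided by reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i          # delete all remaining reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j          # insert all remaining hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost)  # substitution / match
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# Example: one deleted word out of a four-word reference gives WER 0.25
print(word_error_rate("i go to school", "i go school"))  # → 0.25
```

A lower WER means the recognized sentence is closer to the ground-truth signing; 0.0 is a perfect match, and values above 1.0 are possible when the hypothesis is much longer than the reference.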
format Online
Article
Text
id pubmed-8877205
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-8877205 2022-02-26 A Portable Sign Language Collection and Translation Platform with Smart Watches Using a BLSTM-Based Multi-Feature Framework Zhou, Zhenxing Tam, Vincent W. L. Lam, Edmund Y. Micromachines (Basel) Article MDPI 2022-02-20 /pmc/articles/PMC8877205/ /pubmed/35208457 http://dx.doi.org/10.3390/mi13020333 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Zhou, Zhenxing
Tam, Vincent W. L.
Lam, Edmund Y.
A Portable Sign Language Collection and Translation Platform with Smart Watches Using a BLSTM-Based Multi-Feature Framework
title A Portable Sign Language Collection and Translation Platform with Smart Watches Using a BLSTM-Based Multi-Feature Framework
title_full A Portable Sign Language Collection and Translation Platform with Smart Watches Using a BLSTM-Based Multi-Feature Framework
title_fullStr A Portable Sign Language Collection and Translation Platform with Smart Watches Using a BLSTM-Based Multi-Feature Framework
title_full_unstemmed A Portable Sign Language Collection and Translation Platform with Smart Watches Using a BLSTM-Based Multi-Feature Framework
title_short A Portable Sign Language Collection and Translation Platform with Smart Watches Using a BLSTM-Based Multi-Feature Framework
title_sort portable sign language collection and translation platform with smart watches using a blstm-based multi-feature framework
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8877205/
https://www.ncbi.nlm.nih.gov/pubmed/35208457
http://dx.doi.org/10.3390/mi13020333
work_keys_str_mv AT zhouzhenxing aportablesignlanguagecollectionandtranslationplatformwithsmartwatchesusingablstmbasedmultifeatureframework
AT tamvincentwl aportablesignlanguagecollectionandtranslationplatformwithsmartwatchesusingablstmbasedmultifeatureframework
AT lamedmundy aportablesignlanguagecollectionandtranslationplatformwithsmartwatchesusingablstmbasedmultifeatureframework
AT zhouzhenxing portablesignlanguagecollectionandtranslationplatformwithsmartwatchesusingablstmbasedmultifeatureframework
AT tamvincentwl portablesignlanguagecollectionandtranslationplatformwithsmartwatchesusingablstmbasedmultifeatureframework
AT lamedmundy portablesignlanguagecollectionandtranslationplatformwithsmartwatchesusingablstmbasedmultifeatureframework