
Deep Learning-Based Human Activity Real-Time Recognition for Pedestrian Navigation

Several pedestrian navigation solutions have been proposed to date, and most of them are based on smartphones. Real-time recognition of pedestrian motion mode and smartphone posture is a key issue in navigation. Traditional ML (Machine Learning) classification methods have drawbacks, such as insufficient recognition accuracy and poor real-time performance. This paper presents a real-time recognition scheme for comprehensive human activities that combines deep learning algorithms with MEMS (Micro-Electro-Mechanical System) sensor measurements. In this study, we performed four main experiments: pedestrian motion mode recognition, smartphone posture recognition, real-time comprehensive pedestrian activity recognition, and pedestrian navigation. For recognition, we designed and trained deep learning models using LSTM (Long Short-Term Memory) and CNN (Convolutional Neural Network) networks based on the TensorFlow framework. The accuracy of traditional ML classification methods was also evaluated for comparison. Test results show that the accuracy of motion mode recognition was improved from [Formula: see text], the highest accuracy among traditional methods and obtained by an SVM (Support Vector Machine), to [Formula: see text] (LSTM) and [Formula: see text] (CNN); the accuracy of smartphone posture recognition was improved from [Formula: see text], the highest accuracy among traditional methods and obtained by an NN (Neural Network), to [Formula: see text] (LSTM) and [Formula: see text] (CNN). We present a model conversion procedure based on the trained CNN model and obtain the converted [Formula: see text] model, which can run on Android devices for real-time recognition. Real-time recognition experiments were performed in multiple scenes: a recognition model trained with the CNN network was deployed on a Huawei Mate20 smartphone, and the five most commonly used pedestrian activities were designed and verified. The overall accuracy was up to [Formula: see text]. Overall, the improvement in recognition capability based on deep learning algorithms was significant, so the solution is helpful for recognizing comprehensive pedestrian activities during navigation. On the basis of the trained model, a navigation test was performed; the mean bias was reduced by more than 1.1 m. Accordingly, positioning accuracy was clearly improved, which shows that applying DL (Deep Learning) to pedestrian navigation yields meaningful improvements.
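The abstract describes training LSTM and CNN classifiers on MEMS sensor measurements with TensorFlow and converting the trained CNN so it can run on an Android device for real-time recognition. As a rough illustration of that pipeline (not the authors' actual architecture), the sketch below builds a small 1D CNN over fixed-length IMU windows in Keras and converts it to a TensorFlow Lite model for on-device inference. The window length, channel count, class count, layer sizes, and the build_cnn / activity_cnn.tflite names are all assumptions made for the example.

# Minimal sketch (assumed parameters): 1D-CNN activity classifier over windowed
# IMU data, then TFLite conversion for Android deployment. Illustrative only.
import numpy as np
import tensorflow as tf

WINDOW = 128        # samples per window (assumed, e.g. ~2.5 s at 50 Hz)
CHANNELS = 6        # 3-axis accelerometer + 3-axis gyroscope (assumed)
NUM_CLASSES = 5     # e.g. the five pedestrian activities verified in the paper

def build_cnn(window=WINDOW, channels=CHANNELS, num_classes=NUM_CLASSES):
    """1D CNN over a sensor window: conv -> pool -> conv -> pool -> dense."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(window, channels)),
        tf.keras.layers.Conv1D(32, kernel_size=5, activation="relu"),
        tf.keras.layers.MaxPooling1D(2),
        tf.keras.layers.Conv1D(64, kernel_size=5, activation="relu"),
        tf.keras.layers.MaxPooling1D(2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Placeholder random data standing in for labelled MEMS sensor windows.
    x = np.random.randn(256, WINDOW, CHANNELS).astype("float32")
    y = np.random.randint(0, NUM_CLASSES, size=256)

    model = build_cnn()
    model.fit(x, y, epochs=1, batch_size=32, verbose=0)

    # Convert the trained Keras model to a TensorFlow Lite flatbuffer that an
    # Android app can load with the TFLite Interpreter for real-time inference.
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    tflite_model = converter.convert()
    with open("activity_cnn.tflite", "wb") as f:
        f.write(tflite_model)

In practice, the windows would come from the smartphone's accelerometer and gyroscope streams rather than random arrays, and the exported file would be bundled with the Android app for the kind of on-device, real-time recognition the paper reports.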

Bibliographic Details
Main Authors: Ye, Junhua; Li, Xin; Zhang, Xiangdong; Zhang, Qin; Chen, Wu
Format: Online Article Text
Language: English
Published: MDPI, 2020
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7248737/
https://www.ncbi.nlm.nih.gov/pubmed/32366055
http://dx.doi.org/10.3390/s20092574
Record ID: pubmed-7248737
Collection: PubMed (National Center for Biotechnology Information); record format MEDLINE/PubMed
Journal: Sensors (Basel); published online 2020-04-30
License: © 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).