Intelligent Localization and Deep Human Activity Recognition through IoT Devices
Ubiquitous computing has been an evergreen research area that has managed to attract and sustain the attention of researchers for some time now. As ubiquitous computing applications, human activity recognition and localization have also been widely studied. These applications are used in healthcare...
Main Authors: | Alazeb, Abdulwahab; Azmat, Usman; Al Mudawi, Naif; Alshahrani, Abdullah; Alotaibi, Saud S.; Almujally, Nouf Abdullah; Jalal, Ahmad |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI 2023 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10490618/ https://www.ncbi.nlm.nih.gov/pubmed/37687819 http://dx.doi.org/10.3390/s23177363 |
_version_ | 1785103881584246784 |
---|---|
author | Alazeb, Abdulwahab; Azmat, Usman; Al Mudawi, Naif; Alshahrani, Abdullah; Alotaibi, Saud S.; Almujally, Nouf Abdullah; Jalal, Ahmad |
author_facet | Alazeb, Abdulwahab; Azmat, Usman; Al Mudawi, Naif; Alshahrani, Abdullah; Alotaibi, Saud S.; Almujally, Nouf Abdullah; Jalal, Ahmad |
author_sort | Alazeb, Abdulwahab |
collection | PubMed |
description | Ubiquitous computing has been an evergreen research area that has managed to attract and sustain the attention of researchers for some time now. As ubiquitous computing applications, human activity recognition and localization have also been widely studied. These applications are used in healthcare monitoring, behavior analysis, personal safety, and entertainment. A robust model is proposed in this article that works on IoT data extracted from smartphone and smartwatch sensors to recognize the activities performed by the user and, at the same time, classify the location at which the user performed that particular activity. The system starts by denoising the input signal with a second-order Butterworth filter and then uses a Hamming window to divide the signal into small data chunks. Multiple stacked windows are generated using three windows per stack, which, in turn, prove helpful in producing more reliable features. The stacked data are then passed to two parallel feature extraction blocks, one for human activity recognition and one for human localization. The respective features are extracted for both modules, which reinforces the system’s accuracy. Recursive feature elimination is applied to the features of both categories independently to select the most informative ones. After feature selection, a genetic algorithm is used to generate ten generations of each feature vector for data augmentation, which directly impacts the system’s performance. Finally, a deep neural decision forest is trained to classify the activity and the subject’s location, handling both attributes in parallel. For the evaluation and testing of the proposed system, two openly accessible benchmark datasets, the ExtraSensory dataset and the Sussex-Huawei Locomotion dataset, were used. The system outperformed the available state-of-the-art systems, recognizing human activities with an accuracy of 88.25% and classifying the location with an accuracy of 90.63% on the ExtraSensory dataset, while, for the Sussex-Huawei Locomotion dataset, the respective accuracies were 96.00% and 90.50%. (A minimal sketch of this processing pipeline is given after the record below.) |
format | Online Article Text |
id | pubmed-10490618 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-10490618 2023-09-09 Intelligent Localization and Deep Human Activity Recognition through IoT Devices Alazeb, Abdulwahab; Azmat, Usman; Al Mudawi, Naif; Alshahrani, Abdullah; Alotaibi, Saud S.; Almujally, Nouf Abdullah; Jalal, Ahmad Sensors (Basel) Article MDPI 2023-08-23 /pmc/articles/PMC10490618/ /pubmed/37687819 http://dx.doi.org/10.3390/s23177363 Text en © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Alazeb, Abdulwahab; Azmat, Usman; Al Mudawi, Naif; Alshahrani, Abdullah; Alotaibi, Saud S.; Almujally, Nouf Abdullah; Jalal, Ahmad Intelligent Localization and Deep Human Activity Recognition through IoT Devices |
title | Intelligent Localization and Deep Human Activity Recognition through IoT Devices |
title_full | Intelligent Localization and Deep Human Activity Recognition through IoT Devices |
title_fullStr | Intelligent Localization and Deep Human Activity Recognition through IoT Devices |
title_full_unstemmed | Intelligent Localization and Deep Human Activity Recognition through IoT Devices |
title_short | Intelligent Localization and Deep Human Activity Recognition through IoT Devices |
title_sort | intelligent localization and deep human activity recognition through iot devices |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10490618/ https://www.ncbi.nlm.nih.gov/pubmed/37687819 http://dx.doi.org/10.3390/s23177363 |
work_keys_str_mv | AT alazebabdulwahab intelligentlocalizationanddeephumanactivityrecognitionthroughiotdevices AT azmatusman intelligentlocalizationanddeephumanactivityrecognitionthroughiotdevices AT almudawinaif intelligentlocalizationanddeephumanactivityrecognitionthroughiotdevices AT alshahraniabdullah intelligentlocalizationanddeephumanactivityrecognitionthroughiotdevices AT alotaibisauds intelligentlocalizationanddeephumanactivityrecognitionthroughiotdevices AT almujallynoufabdullah intelligentlocalizationanddeephumanactivityrecognitionthroughiotdevices AT jalalahmad intelligentlocalizationanddeephumanactivityrecognitionthroughiotdevices |
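The processing pipeline described in the abstract (second-order Butterworth denoising, Hamming-window segmentation, stacking of three consecutive windows, feature extraction, and recursive feature elimination) can be approximated with standard tools. The following is a minimal Python sketch, not the authors' implementation: the sampling rate, cut-off frequency, window length, hop size, toy statistical features, and the random-forest estimator driving RFE are all assumed values, and the genetic-algorithm augmentation and the deep neural decision forest classifier are omitted.

```python
# Minimal sketch of the preprocessing/feature pipeline described in the abstract.
# Sampling rate, cut-off frequency, window sizes, and the RFE estimator are
# illustrative assumptions, not values reported by the authors.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

FS = 50          # assumed sampling rate of the smartphone/smartwatch sensor (Hz)
CUTOFF = 3.0     # assumed low-pass cut-off for the Butterworth denoiser (Hz)
WIN = 128        # assumed samples per Hamming window
STEP = 64        # assumed hop size (50% overlap)
STACK = 3        # three windows per stack, as stated in the abstract

def denoise(signal):
    """Second-order Butterworth low-pass filter, applied forward and backward."""
    b, a = butter(N=2, Wn=CUTOFF, btype="low", fs=FS)
    return filtfilt(b, a, signal)

def hamming_windows(signal):
    """Slice the signal into overlapping chunks weighted by a Hamming window."""
    taper = np.hamming(WIN)
    starts = range(0, len(signal) - WIN + 1, STEP)
    return np.array([signal[s:s + WIN] * taper for s in starts])

def stack_windows(windows):
    """Concatenate three consecutive windows so features see a longer context."""
    return np.array([windows[i:i + STACK].ravel()
                     for i in range(len(windows) - STACK + 1)])

def simple_features(stacked):
    """Toy statistical features per stacked window (mean, std, energy, range)."""
    return np.column_stack([
        stacked.mean(axis=1),
        stacked.std(axis=1),
        (stacked ** 2).mean(axis=1),
        stacked.max(axis=1) - stacked.min(axis=1),
    ])

def select_features(X, y, keep=2):
    """Recursive feature elimination with a stand-in estimator (not the paper's)."""
    selector = RFE(RandomForestClassifier(n_estimators=50, random_state=0),
                   n_features_to_select=keep)
    return selector.fit_transform(X, y)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    raw = rng.normal(size=FS * 60)              # one minute of synthetic sensor data
    windows = hamming_windows(denoise(raw))
    X = simple_features(stack_windows(windows))
    y = rng.integers(0, 2, size=len(X))         # dummy activity labels
    X_selected = select_features(X, y)
    print(X.shape, "->", X_selected.shape)
```

In this sketch a plain random forest only supplies the feature ranking inside RFE; the paper's deep neural decision forest, which classifies the activity and the subject's location in parallel, would replace it in the final classification stage.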