A Robust Deep Learning Approach for Position-Independent Smartphone-Based Human Activity Recognition
Recently, modern smartphones equipped with a variety of embedded sensors, such as accelerometers and gyroscopes, have been used as an alternative platform for human activity recognition (HAR), since they are cost-effective and unobtrusive and they facilitate real-time applications. However, the majority of related works have proposed position-dependent HAR, i.e., the target subject has to fix the smartphone in a pre-defined position. Few studies have tackled the problem of position-independent HAR, either by using handcrafted features that are less influenced by the position of the smartphone or by building a position-aware HAR system. The performance of these approaches still needs improvement to produce a reliable smartphone-based HAR system. Thus, in this paper, we propose a deep convolutional neural network model that provides a robust position-independent HAR system. We build and evaluate the proposed model using the RealWorld HAR public dataset. We find that our proposed deep learning model increases the overall performance of position-independent HAR over the state-of-the-art traditional machine learning method from 84% to 88%. In addition, the position detection performance of our model improves markedly, from 89% to 98%. Finally, the recognition time of the proposed model is evaluated in order to validate its applicability to real-time applications.
Main Authors: | Almaslukh, Bandar; Artoli, Abdel Monim; Al-Muhtadi, Jalal
---|---
Format: | Online Article Text
Language: | English
Published: | MDPI, 2018
Subjects: | Article
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6263408/ https://www.ncbi.nlm.nih.gov/pubmed/30388855 http://dx.doi.org/10.3390/s18113726
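The abstract above describes a deep convolutional neural network applied to raw smartphone accelerometer and gyroscope signals for position-independent HAR, but this record does not specify the architecture. As a rough, hypothetical sketch only: the window length (128 samples), the six input channels (tri-axial accelerometer plus tri-axial gyroscope), the layer sizes, and the eight activity classes below are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of a 1D CNN for windowed smartphone inertial data.
# All hyperparameters (window length, channel count, layer widths, class count)
# are assumptions for illustration; they are NOT taken from the cataloged paper.
import torch
import torch.nn as nn

class HarCnn(nn.Module):
    def __init__(self, in_channels: int = 6, num_classes: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=9, padding=4),  # temporal convolution over the window
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 32, 128),  # a 128-sample window pooled twice leaves 32 time steps
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, window_length), e.g. (N, 6, 128)
        return self.classifier(self.features(x))

if __name__ == "__main__":
    model = HarCnn()
    dummy = torch.randn(4, 6, 128)   # a batch of four sensor windows
    print(model(dummy).shape)        # torch.Size([4, 8]) -> per-class activity scores
```

In a position-independent setting, such a network would typically be trained on windows collected from several on-body smartphone positions so that the learned filters generalize across placements; the sketch above only illustrates the general shape of a 1D CNN over inertial windows.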
_version_ | 1783375287797415936 |
---|---|
author | Almaslukh, Bandar; Artoli, Abdel Monim; Al-Muhtadi, Jalal
author_facet | Almaslukh, Bandar; Artoli, Abdel Monim; Al-Muhtadi, Jalal
author_sort | Almaslukh, Bandar |
collection | PubMed |
description | Recently, modern smartphones equipped with a variety of embedded sensors, such as accelerometers and gyroscopes, have been used as an alternative platform for human activity recognition (HAR), since they are cost-effective and unobtrusive and they facilitate real-time applications. However, the majority of related works have proposed position-dependent HAR, i.e., the target subject has to fix the smartphone in a pre-defined position. Few studies have tackled the problem of position-independent HAR, either by using handcrafted features that are less influenced by the position of the smartphone or by building a position-aware HAR system. The performance of these approaches still needs improvement to produce a reliable smartphone-based HAR system. Thus, in this paper, we propose a deep convolutional neural network model that provides a robust position-independent HAR system. We build and evaluate the proposed model using the RealWorld HAR public dataset. We find that our proposed deep learning model increases the overall performance of position-independent HAR over the state-of-the-art traditional machine learning method from 84% to 88%. In addition, the position detection performance of our model improves markedly, from 89% to 98%. Finally, the recognition time of the proposed model is evaluated in order to validate its applicability to real-time applications.
format | Online Article Text |
id | pubmed-6263408 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2018 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-6263408 2018-12-12 A Robust Deep Learning Approach for Position-Independent Smartphone-Based Human Activity Recognition Almaslukh, Bandar; Artoli, Abdel Monim; Al-Muhtadi, Jalal Sensors (Basel) Article Recently, modern smartphones equipped with a variety of embedded sensors, such as accelerometers and gyroscopes, have been used as an alternative platform for human activity recognition (HAR), since they are cost-effective and unobtrusive and they facilitate real-time applications. However, the majority of related works have proposed position-dependent HAR, i.e., the target subject has to fix the smartphone in a pre-defined position. Few studies have tackled the problem of position-independent HAR, either by using handcrafted features that are less influenced by the position of the smartphone or by building a position-aware HAR system. The performance of these approaches still needs improvement to produce a reliable smartphone-based HAR system. Thus, in this paper, we propose a deep convolutional neural network model that provides a robust position-independent HAR system. We build and evaluate the proposed model using the RealWorld HAR public dataset. We find that our proposed deep learning model increases the overall performance of position-independent HAR over the state-of-the-art traditional machine learning method from 84% to 88%. In addition, the position detection performance of our model improves markedly, from 89% to 98%. Finally, the recognition time of the proposed model is evaluated in order to validate its applicability to real-time applications. MDPI 2018-11-01 /pmc/articles/PMC6263408/ /pubmed/30388855 http://dx.doi.org/10.3390/s18113726 Text en © 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
spellingShingle | Article; Almaslukh, Bandar; Artoli, Abdel Monim; Al-Muhtadi, Jalal; A Robust Deep Learning Approach for Position-Independent Smartphone-Based Human Activity Recognition
title | A Robust Deep Learning Approach for Position-Independent Smartphone-Based Human Activity Recognition |
title_full | A Robust Deep Learning Approach for Position-Independent Smartphone-Based Human Activity Recognition |
title_fullStr | A Robust Deep Learning Approach for Position-Independent Smartphone-Based Human Activity Recognition |
title_full_unstemmed | A Robust Deep Learning Approach for Position-Independent Smartphone-Based Human Activity Recognition |
title_short | A Robust Deep Learning Approach for Position-Independent Smartphone-Based Human Activity Recognition |
title_sort | robust deep learning approach for position-independent smartphone-based human activity recognition |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6263408/ https://www.ncbi.nlm.nih.gov/pubmed/30388855 http://dx.doi.org/10.3390/s18113726 |
work_keys_str_mv | AT almaslukhbandar arobustdeeplearningapproachforpositionindependentsmartphonebasedhumanactivityrecognition AT artoliabdelmonim arobustdeeplearningapproachforpositionindependentsmartphonebasedhumanactivityrecognition AT almuhtadijalal arobustdeeplearningapproachforpositionindependentsmartphonebasedhumanactivityrecognition AT almaslukhbandar robustdeeplearningapproachforpositionindependentsmartphonebasedhumanactivityrecognition AT artoliabdelmonim robustdeeplearningapproachforpositionindependentsmartphonebasedhumanactivityrecognition AT almuhtadijalal robustdeeplearningapproachforpositionindependentsmartphonebasedhumanactivityrecognition |