A Fast and Robust Deep Convolutional Neural Networks for Complex Human Activity Recognition Using Smartphone
Main authors:
Format: Online Article; Text
Language: English
Published: MDPI, 2019
Subjects:
Online access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6749356/ | https://www.ncbi.nlm.nih.gov/pubmed/31470521 | http://dx.doi.org/10.3390/s19173731
Summary: Human activity recognition (HAR) techniques play a significant role in healthcare and sports applications by monitoring people's daily behavior. This has spurred the demand for intelligent sensors and contributed to the explosive growth of wearable and mobile devices, which now provide an abundance of human activity data (big data). Powerful algorithms are required to analyze these heterogeneous, high-dimensional streaming data efficiently. This paper proposes a novel fast and robust deep convolutional neural network structure (FR-DCNN) for HAR using a smartphone. It enhances the effectiveness of, and extends the information in, the raw data collected from inertial measurement unit (IMU) sensors by integrating a series of signal processing algorithms and a signal selection module, and it enables fast construction of the DCNN classifier by adding a data compression module. Experimental results on a sampled dataset of 12 complex activities show that the proposed FR-DCNN model achieves both fast computation and high recognition accuracy. The FR-DCNN model needs only 0.0029 s to predict an activity online, with 95.27% accuracy. Meanwhile, it takes only 88 s on average to build the DCNN classifier on the compressed dataset, with little loss of accuracy (94.18%).
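The summary describes windowed IMU signals feeding a deep convolutional classifier. The authors' FR-DCNN structure is not reproduced here; the following is only a minimal sketch of a generic 1D convolutional activity classifier in PyTorch, under assumed values for the window length (128 samples), channel count (6, i.e. a tri-axial accelerometer plus a tri-axial gyroscope), and layer sizes. Only the 12 output classes are taken from the summary (the 12 complex activities).

```python
# Minimal sketch of a 1D CNN over windowed IMU signals for activity recognition.
# NOT the paper's FR-DCNN: window length, channel count, and layer sizes are
# assumptions chosen only to illustrate the general approach.
import torch
import torch.nn as nn

class SimpleIMUCNN(nn.Module):
    def __init__(self, in_channels: int = 6, num_classes: int = 12):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis to one value per filter
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, window_length), e.g. (N, 6, 128)
        return self.classifier(self.features(x).squeeze(-1))

# Usage: classify a batch of 128-sample IMU windows (random data stands in here).
model = SimpleIMUCNN()
windows = torch.randn(8, 6, 128)
logits = model(windows)        # shape (8, 12): one score per activity class
predicted = logits.argmax(dim=1)
```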