Robust Human Activity Recognition by Integrating Image and Accelerometer Sensor Data Using Deep Fusion Network
Studies on deep-learning-based behavioral pattern recognition have recently received considerable attention. However, if there are insufficient data and the activity to be identified is changed, a robust deep learning model cannot be created. This work contributes a generalized deep learning model that is robust to noise not dependent on input signals by extracting features through a deep learning model for each heterogeneous input signal that can maintain performance while minimizing preprocessing of the input signal. We propose a hybrid deep learning model that takes heterogeneous sensor data, an acceleration sensor, and an image as inputs. For accelerometer data, we use a convolutional neural network (CNN) and convolutional block attention module models (CBAM), and apply bidirectional long short-term memory and a residual neural network. The overall accuracy was 94.8% with a skeleton image and accelerometer data, and 93.1% with a skeleton image, coordinates, and accelerometer data after evaluating nine behaviors using the Berkeley Multimodal Human Action Database (MHAD). Furthermore, the accuracy of the investigation was revealed to be 93.4% with inverted images and 93.2% with white noise added to the accelerometer data. Testing with data that included inversion and noise data indicated that the suggested model was robust, with a performance deterioration of approximately 1%.
| Main authors | Kang, Junhyuk; Shin, Jieun; Shin, Jaewon; Lee, Daeho; Choi, Ahyoung |
|---|---|
| Format | Online Article Text |
| Language | English |
| Published | MDPI, 2021 |
| Online access | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8747696/ https://www.ncbi.nlm.nih.gov/pubmed/35009717 http://dx.doi.org/10.3390/s22010174 |
_version_ | 1784630890224156672 |
---|---|
author | Kang, Junhyuk; Shin, Jieun; Shin, Jaewon; Lee, Daeho; Choi, Ahyoung |
author_facet | Kang, Junhyuk; Shin, Jieun; Shin, Jaewon; Lee, Daeho; Choi, Ahyoung |
author_sort | Kang, Junhyuk |
collection | PubMed |
description | Studies on deep-learning-based behavioral pattern recognition have recently received considerable attention. However, if there are insufficient data and the activity to be identified is changed, a robust deep learning model cannot be created. This work contributes a generalized deep learning model that is robust to noise not dependent on input signals by extracting features through a deep learning model for each heterogeneous input signal that can maintain performance while minimizing preprocessing of the input signal. We propose a hybrid deep learning model that takes heterogeneous sensor data, an acceleration sensor, and an image as inputs. For accelerometer data, we use a convolutional neural network (CNN) and convolutional block attention module models (CBAM), and apply bidirectional long short-term memory and a residual neural network. The overall accuracy was 94.8% with a skeleton image and accelerometer data, and 93.1% with a skeleton image, coordinates, and accelerometer data after evaluating nine behaviors using the Berkeley Multimodal Human Action Database (MHAD). Furthermore, the accuracy of the investigation was revealed to be 93.4% with inverted images and 93.2% with white noise added to the accelerometer data. Testing with data that included inversion and noise data indicated that the suggested model was robust, with a performance deterioration of approximately 1%. |
format | Online Article Text |
id | pubmed-8747696 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-8747696 2022-01-11 Robust Human Activity Recognition by Integrating Image and Accelerometer Sensor Data Using Deep Fusion Network Kang, Junhyuk; Shin, Jieun; Shin, Jaewon; Lee, Daeho; Choi, Ahyoung Sensors (Basel) Article Studies on deep-learning-based behavioral pattern recognition have recently received considerable attention. However, if there are insufficient data and the activity to be identified is changed, a robust deep learning model cannot be created. This work contributes a generalized deep learning model that is robust to noise not dependent on input signals by extracting features through a deep learning model for each heterogeneous input signal that can maintain performance while minimizing preprocessing of the input signal. We propose a hybrid deep learning model that takes heterogeneous sensor data, an acceleration sensor, and an image as inputs. For accelerometer data, we use a convolutional neural network (CNN) and convolutional block attention module models (CBAM), and apply bidirectional long short-term memory and a residual neural network. The overall accuracy was 94.8% with a skeleton image and accelerometer data, and 93.1% with a skeleton image, coordinates, and accelerometer data after evaluating nine behaviors using the Berkeley Multimodal Human Action Database (MHAD). Furthermore, the accuracy of the investigation was revealed to be 93.4% with inverted images and 93.2% with white noise added to the accelerometer data. Testing with data that included inversion and noise data indicated that the suggested model was robust, with a performance deterioration of approximately 1%. MDPI 2021-12-28 /pmc/articles/PMC8747696/ /pubmed/35009717 http://dx.doi.org/10.3390/s22010174 Text en © 2021 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Kang, Junhyuk; Shin, Jieun; Shin, Jaewon; Lee, Daeho; Choi, Ahyoung Robust Human Activity Recognition by Integrating Image and Accelerometer Sensor Data Using Deep Fusion Network |
title | Robust Human Activity Recognition by Integrating Image and Accelerometer Sensor Data Using Deep Fusion Network |
title_full | Robust Human Activity Recognition by Integrating Image and Accelerometer Sensor Data Using Deep Fusion Network |
title_fullStr | Robust Human Activity Recognition by Integrating Image and Accelerometer Sensor Data Using Deep Fusion Network |
title_full_unstemmed | Robust Human Activity Recognition by Integrating Image and Accelerometer Sensor Data Using Deep Fusion Network |
title_short | Robust Human Activity Recognition by Integrating Image and Accelerometer Sensor Data Using Deep Fusion Network |
title_sort | robust human activity recognition by integrating image and accelerometer sensor data using deep fusion network |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8747696/ https://www.ncbi.nlm.nih.gov/pubmed/35009717 http://dx.doi.org/10.3390/s22010174 |
work_keys_str_mv | AT kangjunhyuk robusthumanactivityrecognitionbyintegratingimageandaccelerometersensordatausingdeepfusionnetwork AT shinjieun robusthumanactivityrecognitionbyintegratingimageandaccelerometersensordatausingdeepfusionnetwork AT shinjaewon robusthumanactivityrecognitionbyintegratingimageandaccelerometersensordatausingdeepfusionnetwork AT leedaeho robusthumanactivityrecognitionbyintegratingimageandaccelerometersensordatausingdeepfusionnetwork AT choiahyoung robusthumanactivityrecognitionbyintegratingimageandaccelerometersensordatausingdeepfusionnetwork |
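The abstract describes feature-level fusion: each heterogeneous input (skeleton image, accelerometer signal) passes through its own deep feature extractor before the branch features are combined and classified. Below is a minimal NumPy sketch of that fusion pattern only; the linear-plus-ReLU "extractors", the weight shapes, and the shared linear head are illustrative stand-ins for the paper's CNN+CBAM and BiLSTM/ResNet branches, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(x, w):
    """Stand-in branch feature extractor: one linear layer + ReLU.
    (In the paper this role is played by a CNN+CBAM branch for images
    and a BiLSTM / residual-network branch for accelerometer data.)"""
    return np.maximum(x @ w, 0.0)

def softmax(z):
    """Row-wise softmax over class logits."""
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical shapes: 4 samples, 64-dim image input, 32-dim
# accelerometer window, 9 activity classes (as in the MHAD evaluation).
img = rng.normal(size=(4, 64))
acc = rng.normal(size=(4, 32))

w_img = rng.normal(size=(64, 16))  # image-branch weights (random stand-in)
w_acc = rng.normal(size=(32, 16))  # accelerometer-branch weights

# Feature-level fusion: concatenate the per-branch features,
# then classify with a shared linear head.
fused = np.concatenate([extract_features(img, w_img),
                        extract_features(acc, w_acc)], axis=1)  # (4, 32)
w_head = rng.normal(size=(32, 9))
probs = softmax(fused @ w_head)  # (4, 9) class probabilities
```

Because each branch is trained on its own modality before fusion, noise injected into one input (e.g. white noise on the accelerometer) perturbs only that branch's features, which is the intuition behind the roughly 1% degradation the abstract reports.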