Sensor Data Acquisition and Multimodal Sensor Fusion for Human Activity Recognition Using Deep Learning

In this paper, we present a systematic study of on-body sensor positioning and data acquisition for Human Activity Recognition (HAR) systems. We build a testbed consisting of eight body-worn Inertial Measurement Unit (IMU) sensors and an Android mobile device for activity data collection, and we develop a Long Short-Term Memory (LSTM) network framework for training deep learning models on human activity data acquired in both real-world and controlled environments. The experimental results show that activity data sampled at rates as low as 10 Hz from four sensors, at both wrists, the right ankle, and the waist, is sufficient for recognizing Activities of Daily Living (ADLs), including eating and driving. We adopt a two-level ensemble model that combines the class probabilities of multiple sensor modalities, and we demonstrate that classifier-level sensor fusion improves classification performance. By analyzing each sensor's accuracy on different activity types, we derive custom fusion weights that reflect the characteristics of individual activities.
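
The fusion scheme described in the abstract is concrete enough to sketch. The following is a minimal illustration of classifier-level (late) fusion with per-activity sensor weights; it is not the authors' implementation, and the label set, sensor list, weight values, and function names are assumptions made for the example.

```python
import numpy as np

# Hypothetical label set and the four sensor placements the study found
# sufficient: both wrists, right ankle, and waist.
ACTIVITIES = ["walking", "eating", "driving"]
SENSORS = ["left_wrist", "right_wrist", "right_ankle", "waist"]

# Illustrative per-(sensor, activity) weights, standing in for the custom
# weights the paper derives from each sensor's per-activity accuracy.
# Rows follow SENSORS, columns follow ACTIVITIES.
WEIGHTS = np.array([
    [0.2, 0.4, 0.2],   # left_wrist: most informative for eating
    [0.2, 0.4, 0.2],   # right_wrist: most informative for eating
    [0.4, 0.1, 0.2],   # right_ankle: most informative for walking
    [0.2, 0.1, 0.4],   # waist: most informative for driving
])

def fuse(probs: np.ndarray, weights: np.ndarray) -> int:
    """Combine per-sensor class probabilities into one prediction.

    probs:   (n_sensors, n_classes) softmax outputs, one row per modality
             (e.g. from a per-sensor LSTM classifier).
    weights: (n_sensors, n_classes) per-activity sensor weights.
    """
    fused = (probs * weights).sum(axis=0)   # weighted vote per class
    fused /= fused.sum()                    # renormalize to a distribution
    return int(np.argmax(fused))

# Example: four per-sensor classifiers scoring one window of IMU data.
probs = np.array([
    [0.1, 0.7, 0.2],
    [0.2, 0.6, 0.2],
    [0.5, 0.3, 0.2],
    [0.3, 0.3, 0.4],
])
print(ACTIVITIES[fuse(probs, WEIGHTS)])     # -> eating
```

The point of indexing the weights by both sensor and activity is that a modality that is unreliable for one activity (say, an ankle sensor during eating) can still dominate the vote for another (walking), which is how classifier-level fusion can outperform any single sensor.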

Bibliographic Details
Main Authors: Chung, Seungeun; Lim, Jiyoun; Noh, Kyoung Ju; Kim, Gague; Jeong, Hyuntae
Format: Online Article Text
Language: English
Journal: Sensors (Basel)
Published: MDPI, 10 April 2019
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6479605/
https://www.ncbi.nlm.nih.gov/pubmed/30974845
http://dx.doi.org/10.3390/s19071716
Collection: PubMed (record ID pubmed-6479605; record format MEDLINE/PubMed; National Center for Biotechnology Information)
License: © 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).