
Fusing Object Information and Inertial Data for Activity Recognition †

In the field of pervasive computing, wearable devices have been widely used for recognizing human activities. One important area in this research is the recognition of activities of daily living, where inertial sensors and interaction sensors (such as RFID tags with scanners) are especially popular choices as data sources. Using interaction sensors, however, has one drawback: they may not differentiate between a proper interaction and simply touching an object. A positive signal from an interaction sensor is not necessarily caused by a performed activity, e.g., when an object is only touched but no interaction occurs afterwards. There are, however, many scenarios, such as medicine intake, that rely heavily on correctly recognized activities. In our work, we aim to address this limitation and present a multimodal, egocentric activity recognition approach. Our solution relies on object detection that recognizes activity-critical objects in a frame. As it is infeasible to always expect a high-quality camera view, we enrich the vision features with inertial sensor data that monitors the user's arm movement. In this way, we try to overcome the drawbacks of each respective sensor. We present our results of combining inertial and video features to recognize human activities in different types of scenarios, where we achieve an F1-measure of up to 79.6%.


Bibliographic Details
Main Authors: Diete, Alexander; Stuckenschmidt, Heiner
Format: Online Article Text
Language: English
Published: MDPI 2019
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6806148/
https://www.ncbi.nlm.nih.gov/pubmed/31547630
http://dx.doi.org/10.3390/s19194119
_version_ 1783461561288884224
author Diete, Alexander
Stuckenschmidt, Heiner
collection PubMed
description In the field of pervasive computing, wearable devices have been widely used for recognizing human activities. One important area in this research is the recognition of activities of daily living, where inertial sensors and interaction sensors (such as RFID tags with scanners) are especially popular choices as data sources. Using interaction sensors, however, has one drawback: they may not differentiate between a proper interaction and simply touching an object. A positive signal from an interaction sensor is not necessarily caused by a performed activity, e.g., when an object is only touched but no interaction occurs afterwards. There are, however, many scenarios, such as medicine intake, that rely heavily on correctly recognized activities. In our work, we aim to address this limitation and present a multimodal, egocentric activity recognition approach. Our solution relies on object detection that recognizes activity-critical objects in a frame. As it is infeasible to always expect a high-quality camera view, we enrich the vision features with inertial sensor data that monitors the user's arm movement. In this way, we try to overcome the drawbacks of each respective sensor. We present our results of combining inertial and video features to recognize human activities in different types of scenarios, where we achieve an F1-measure of up to 79.6%.
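The F1-measure reported in the abstract is taken here to be the standard F1 score, i.e., the harmonic mean of precision and recall; that reading is an assumption, since the harvested record carries only a formula placeholder for the metric. A minimal LaTeX statement of the standard definition:

% F1 score: harmonic mean of precision and recall (the F1 reading of the
% abstract's metric is an assumption; TP/FP/FN = true positives, false
% positives, false negatives).
\[
  F_1 \;=\; 2 \cdot \frac{\mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}
      \;=\; \frac{2\,\mathrm{TP}}{2\,\mathrm{TP} + \mathrm{FP} + \mathrm{FN}}
\]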
format Online
Article
Text
id pubmed-6806148
institution National Center for Biotechnology Information
language English
publishDate 2019
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-6806148 2019-11-07 Fusing Object Information and Inertial Data for Activity Recognition † Diete, Alexander; Stuckenschmidt, Heiner Sensors (Basel) Article In the field of pervasive computing, wearable devices have been widely used for recognizing human activities. One important area in this research is the recognition of activities of daily living, where inertial sensors and interaction sensors (such as RFID tags with scanners) are especially popular choices as data sources. Using interaction sensors, however, has one drawback: they may not differentiate between a proper interaction and simply touching an object. A positive signal from an interaction sensor is not necessarily caused by a performed activity, e.g., when an object is only touched but no interaction occurs afterwards. There are, however, many scenarios, such as medicine intake, that rely heavily on correctly recognized activities. In our work, we aim to address this limitation and present a multimodal, egocentric activity recognition approach. Our solution relies on object detection that recognizes activity-critical objects in a frame. As it is infeasible to always expect a high-quality camera view, we enrich the vision features with inertial sensor data that monitors the user's arm movement. In this way, we try to overcome the drawbacks of each respective sensor. We present our results of combining inertial and video features to recognize human activities in different types of scenarios, where we achieve an F1-measure of up to 79.6%. MDPI 2019-09-23 /pmc/articles/PMC6806148/ /pubmed/31547630 http://dx.doi.org/10.3390/s19194119 Text en © 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
title Fusing Object Information and Inertial Data for Activity Recognition †
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6806148/
https://www.ncbi.nlm.nih.gov/pubmed/31547630
http://dx.doi.org/10.3390/s19194119