
A multisource fusion framework driven by user-defined knowledge for egocentric activity recognition

Recently, egocentric activity recognition has attracted considerable attention in the pattern recognition and artificial intelligence communities because of its widespread applicability to human systems, including the evaluation of dietary and physical activity and the monitoring of patients and older adults.

Full description

Bibliographic Details
Main Authors: Yu, Haibin, Jia, Wenyan, Li, Zhen, Gong, Feixiang, Yuan, Ding, Zhang, Hong, Sun, Mingui
Format: Online Article Text
Language: English
Published: Springer International Publishing 2019
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6394646/
https://www.ncbi.nlm.nih.gov/pubmed/30881444
http://dx.doi.org/10.1186/s13634-019-0612-x
_version_ 1783398938791903232
author Yu, Haibin
Jia, Wenyan
Li, Zhen
Gong, Feixiang
Yuan, Ding
Zhang, Hong
Sun, Mingui
author_facet Yu, Haibin
Jia, Wenyan
Li, Zhen
Gong, Feixiang
Yuan, Ding
Zhang, Hong
Sun, Mingui
author_sort Yu, Haibin
collection PubMed
description Recently, egocentric activity recognition has attracted considerable attention in the pattern recognition and artificial intelligence communities because of its widespread applicability to human systems, including the evaluation of dietary and physical activity and the monitoring of patients and older adults. In this paper, we present a knowledge-driven multisource fusion framework for the recognition of egocentric activities in daily living (ADL). This framework employs Dezert–Smarandache theory across three information sources: the wearer’s knowledge, images acquired by a wearable camera, and sensor data from wearable inertial measurement units and GPS. A simple likelihood table is designed to provide routine ADL information for each individual. A well-trained convolutional neural network is then used to produce a set of textual tags that, along with routine information and other sensor data, are used to recognize ADLs based on information theory-based statistics and a support vector machine. Our experiments show that the proposed method accurately recognizes 15 predefined ADL classes, including a variety of sedentary activities that have previously been difficult to recognize. When applied to real-life data recorded using a self-constructed wearable device, our method outperforms previous approaches, and an average accuracy of 85.4% is achieved for the 15 ADLs.
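The description above names the fusion pipeline only at a high level: a routine likelihood table supplied by the wearer, textual tags produced by a convolutional neural network from camera images, and inertial/GPS sensor data, combined under Dezert–Smarandache theory before a support vector machine makes the final decision. As a rough illustration of how per-source evidence about ADL classes can be fused, the Python sketch below applies Dempster's rule of combination over singleton hypotheses; it is a simplified stand-in for the paper's DSmT fusion (which handles conflicting evidence differently), and every class name, mass value, and function name here is hypothetical rather than taken from the article.

    # Minimal sketch: fusing per-source beliefs about ADL classes with
    # Dempster's rule of combination over singleton hypotheses. This is an
    # illustrative stand-in for the DSmT fusion described in the abstract;
    # the class names and mass values below are made up, not from the paper.

    def combine_masses(m1, m2):
        # m1, m2: dicts mapping ADL hypothesis -> basic belief mass. Each dict
        # sums to <= 1; any remainder is treated as mass on the whole frame
        # (total ignorance, "don't know").
        theta1 = 1.0 - sum(m1.values())
        theta2 = 1.0 - sum(m2.values())
        hypotheses = set(m1) | set(m2)

        combined = {}
        for h in hypotheses:
            a, b = m1.get(h, 0.0), m2.get(h, 0.0)
            # A singleton survives intersection only with itself or with the
            # whole frame.
            combined[h] = a * b + a * theta2 + theta1 * b
        combined_theta = theta1 * theta2

        # Mass assigned to contradictory intersections (conflict), then
        # renormalize as in Dempster's rule.
        conflict = 1.0 - sum(combined.values()) - combined_theta
        norm = 1.0 - conflict
        return {h: v / norm for h, v in combined.items()}

    if __name__ == "__main__":
        # Hypothetical evidence from the three sources named in the abstract.
        routine = {"eating": 0.5, "watching_tv": 0.2, "walking": 0.1}     # wearer's routine table
        image_tags = {"eating": 0.6, "watching_tv": 0.1, "walking": 0.1}  # CNN tag evidence
        motion = {"eating": 0.2, "watching_tv": 0.2, "walking": 0.4}      # IMU/GPS evidence

        fused = combine_masses(combine_masses(routine, image_tags), motion)
        print(max(fused, key=fused.get), fused)

In the actual framework the fused evidence would then be passed, together with the other features, to the SVM-based recognizer mentioned in the abstract; this sketch covers only the evidence-combination step.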
format Online
Article
Text
id pubmed-6394646
institution National Center for Biotechnology Information
language English
publishDate 2019
publisher Springer International Publishing
record_format MEDLINE/PubMed
spelling pubmed-6394646 2019-03-15 A multisource fusion framework driven by user-defined knowledge for egocentric activity recognition Yu, Haibin Jia, Wenyan Li, Zhen Gong, Feixiang Yuan, Ding Zhang, Hong Sun, Mingui EURASIP J Adv Signal Process Research Recently, egocentric activity recognition has attracted considerable attention in the pattern recognition and artificial intelligence communities because of its widespread applicability to human systems, including the evaluation of dietary and physical activity and the monitoring of patients and older adults. In this paper, we present a knowledge-driven multisource fusion framework for the recognition of egocentric activities in daily living (ADL). This framework employs Dezert–Smarandache theory across three information sources: the wearer’s knowledge, images acquired by a wearable camera, and sensor data from wearable inertial measurement units and GPS. A simple likelihood table is designed to provide routine ADL information for each individual. A well-trained convolutional neural network is then used to produce a set of textual tags that, along with routine information and other sensor data, are used to recognize ADLs based on information theory-based statistics and a support vector machine. Our experiments show that the proposed method accurately recognizes 15 predefined ADL classes, including a variety of sedentary activities that have previously been difficult to recognize. When applied to real-life data recorded using a self-constructed wearable device, our method outperforms previous approaches, and an average accuracy of 85.4% is achieved for the 15 ADLs. Springer International Publishing 2019-02-22 2019 /pmc/articles/PMC6394646/ /pubmed/30881444 http://dx.doi.org/10.1186/s13634-019-0612-x Text en © The Author(s). 2019 Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
spellingShingle Research
Yu, Haibin
Jia, Wenyan
Li, Zhen
Gong, Feixiang
Yuan, Ding
Zhang, Hong
Sun, Mingui
A multisource fusion framework driven by user-defined knowledge for egocentric activity recognition
title A multisource fusion framework driven by user-defined knowledge for egocentric activity recognition
title_full A multisource fusion framework driven by user-defined knowledge for egocentric activity recognition
title_fullStr A multisource fusion framework driven by user-defined knowledge for egocentric activity recognition
title_full_unstemmed A multisource fusion framework driven by user-defined knowledge for egocentric activity recognition
title_short A multisource fusion framework driven by user-defined knowledge for egocentric activity recognition
title_sort multisource fusion framework driven by user-defined knowledge for egocentric activity recognition
topic Research
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6394646/
https://www.ncbi.nlm.nih.gov/pubmed/30881444
http://dx.doi.org/10.1186/s13634-019-0612-x
work_keys_str_mv AT yuhaibin amultisourcefusionframeworkdrivenbyuserdefinedknowledgeforegocentricactivityrecognition
AT jiawenyan amultisourcefusionframeworkdrivenbyuserdefinedknowledgeforegocentricactivityrecognition
AT lizhen amultisourcefusionframeworkdrivenbyuserdefinedknowledgeforegocentricactivityrecognition
AT gongfeixiang amultisourcefusionframeworkdrivenbyuserdefinedknowledgeforegocentricactivityrecognition
AT yuanding amultisourcefusionframeworkdrivenbyuserdefinedknowledgeforegocentricactivityrecognition
AT zhanghong amultisourcefusionframeworkdrivenbyuserdefinedknowledgeforegocentricactivityrecognition
AT sunmingui amultisourcefusionframeworkdrivenbyuserdefinedknowledgeforegocentricactivityrecognition
AT yuhaibin multisourcefusionframeworkdrivenbyuserdefinedknowledgeforegocentricactivityrecognition
AT jiawenyan multisourcefusionframeworkdrivenbyuserdefinedknowledgeforegocentricactivityrecognition
AT lizhen multisourcefusionframeworkdrivenbyuserdefinedknowledgeforegocentricactivityrecognition
AT gongfeixiang multisourcefusionframeworkdrivenbyuserdefinedknowledgeforegocentricactivityrecognition
AT yuanding multisourcefusionframeworkdrivenbyuserdefinedknowledgeforegocentricactivityrecognition
AT zhanghong multisourcefusionframeworkdrivenbyuserdefinedknowledgeforegocentricactivityrecognition
AT sunmingui multisourcefusionframeworkdrivenbyuserdefinedknowledgeforegocentricactivityrecognition