
Synthesizing and Reconstructing Missing Sensory Modalities in Behavioral Context Recognition

Detection of human activities, along with the associated context, is of key importance for various application areas, including assisted living and well-being. To predict a user’s context in daily-life situations, a system needs to learn from multimodal data that are often imbalanced and noisy, with missing values. The model is also likely to encounter missing sensors in real-life conditions (such as a user not wearing a smartwatch), and it fails to infer the context if any of the modalities used for training is missing. In this paper, we propose a method based on an adversarial autoencoder for handling missing sensory features and synthesizing realistic samples. We empirically demonstrate the capability of our method, in comparison with classical approaches for filling in missing values, on a large-scale activity recognition dataset collected in the wild. We develop a fully-connected classification network by extending the encoder, and systematically evaluate its multi-label classification performance when several modalities are missing. Furthermore, we show class-conditional artificial data generation and its visual and quantitative analysis on a context classification task, demonstrating the strong generative power of adversarial autoencoders.

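The following is a minimal sketch, not the authors' actual implementation, of the kind of adversarial-autoencoder imputation described in the abstract, written in PyTorch; the layer sizes, optimizer settings, loss weighting, and masking scheme are illustrative assumptions.

# Hypothetical sketch: adversarial-autoencoder-style imputation of missing sensory features.
import torch
import torch.nn as nn

FEAT_DIM, LATENT_DIM = 64, 16  # assumed feature and latent dimensions

encoder = nn.Sequential(nn.Linear(FEAT_DIM, 128), nn.ReLU(), nn.Linear(128, LATENT_DIM))
decoder = nn.Sequential(nn.Linear(LATENT_DIM, 128), nn.ReLU(), nn.Linear(128, FEAT_DIM))
discriminator = nn.Sequential(nn.Linear(LATENT_DIM, 64), nn.ReLU(), nn.Linear(64, 1))

opt_ae = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def train_step(x, mask):
    # x: (batch, FEAT_DIM) sensor features; mask: 1 where a feature is observed, 0 where missing.
    z = encoder(x * mask)                        # encode with missing modalities zeroed out
    x_hat = decoder(z)
    rec_loss = ((x_hat - x) ** 2 * mask).sum() / mask.sum()   # penalize only observed entries

    # Discriminator: tell the Gaussian prior apart from the encoder's latent codes.
    z_prior = torch.randn_like(z)
    d_loss = bce(discriminator(z_prior), torch.ones(len(x), 1)) + \
             bce(discriminator(z.detach()), torch.zeros(len(x), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Autoencoder: reconstruct observed features and fool the discriminator.
    g_loss = rec_loss + 0.1 * bce(discriminator(z), torch.ones(len(x), 1))
    opt_ae.zero_grad(); g_loss.backward(); opt_ae.step()
    return x_hat.detach()                        # reconstruction can fill in the missing modalities

x = torch.randn(32, FEAT_DIM)
mask = (torch.rand(32, FEAT_DIM) > 0.3).float()  # simulate roughly 30% missing features
imputed = train_step(x, mask)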

Bibliographic Details
Main Authors: Saeed, Aaqib, Ozcelebi, Tanir, Lukkien, Johan
Format: Online Article Text
Language: English
Published: MDPI 2018
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6165109/
https://www.ncbi.nlm.nih.gov/pubmed/30200575
http://dx.doi.org/10.3390/s18092967
_version_ 1783359759358885888
author Saeed, Aaqib
Ozcelebi, Tanir
Lukkien, Johan
author_facet Saeed, Aaqib
Ozcelebi, Tanir
Lukkien, Johan
author_sort Saeed, Aaqib
collection PubMed
description Detection of human activities, along with the associated context, is of key importance for various application areas, including assisted living and well-being. To predict a user’s context in daily-life situations, a system needs to learn from multimodal data that are often imbalanced and noisy, with missing values. The model is also likely to encounter missing sensors in real-life conditions (such as a user not wearing a smartwatch), and it fails to infer the context if any of the modalities used for training is missing. In this paper, we propose a method based on an adversarial autoencoder for handling missing sensory features and synthesizing realistic samples. We empirically demonstrate the capability of our method, in comparison with classical approaches for filling in missing values, on a large-scale activity recognition dataset collected in the wild. We develop a fully-connected classification network by extending the encoder, and systematically evaluate its multi-label classification performance when several modalities are missing. Furthermore, we show class-conditional artificial data generation and its visual and quantitative analysis on a context classification task, demonstrating the strong generative power of adversarial autoencoders.
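A hypothetical follow-on sketch of the classification step mentioned in the description, where the encoder is extended with a fully-connected head and per-label sigmoid outputs for multi-label context recognition; the label count and layer sizes are assumptions, and the encoder is re-declared so the sketch runs standalone.

import torch
import torch.nn as nn

FEAT_DIM, LATENT_DIM, NUM_LABELS = 64, 16, 25    # assumed sizes; NUM_LABELS = number of context labels

# In the described setup this would be the pre-trained encoder of the adversarial autoencoder.
encoder = nn.Sequential(nn.Linear(FEAT_DIM, 128), nn.ReLU(), nn.Linear(128, LATENT_DIM))
head = nn.Sequential(nn.Linear(LATENT_DIM, 64), nn.ReLU(), nn.Linear(64, NUM_LABELS))
model = nn.Sequential(encoder, head)

criterion = nn.BCEWithLogitsLoss()               # independent sigmoid per label -> multi-label
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(32, FEAT_DIM)                    # a batch of (possibly imputed) sensor features
y = (torch.rand(32, NUM_LABELS) > 0.8).float()   # toy multi-label targets

loss = criterion(model(x), y)
optimizer.zero_grad(); loss.backward(); optimizer.step()
probs = torch.sigmoid(model(x))                  # per-label probabilities for each sample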
format Online
Article
Text
id pubmed-6165109
institution National Center for Biotechnology Information
language English
publishDate 2018
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-6165109 2018-10-10 Synthesizing and Reconstructing Missing Sensory Modalities in Behavioral Context Recognition Saeed, Aaqib Ozcelebi, Tanir Lukkien, Johan Sensors (Basel) Article Detection of human activities, along with the associated context, is of key importance for various application areas, including assisted living and well-being. To predict a user’s context in daily-life situations, a system needs to learn from multimodal data that are often imbalanced and noisy, with missing values. The model is also likely to encounter missing sensors in real-life conditions (such as a user not wearing a smartwatch), and it fails to infer the context if any of the modalities used for training is missing. In this paper, we propose a method based on an adversarial autoencoder for handling missing sensory features and synthesizing realistic samples. We empirically demonstrate the capability of our method, in comparison with classical approaches for filling in missing values, on a large-scale activity recognition dataset collected in the wild. We develop a fully-connected classification network by extending the encoder, and systematically evaluate its multi-label classification performance when several modalities are missing. Furthermore, we show class-conditional artificial data generation and its visual and quantitative analysis on a context classification task, demonstrating the strong generative power of adversarial autoencoders. MDPI 2018-09-06 /pmc/articles/PMC6165109/ /pubmed/30200575 http://dx.doi.org/10.3390/s18092967 Text en © 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Saeed, Aaqib
Ozcelebi, Tanir
Lukkien, Johan
Synthesizing and Reconstructing Missing Sensory Modalities in Behavioral Context Recognition
title Synthesizing and Reconstructing Missing Sensory Modalities in Behavioral Context Recognition
title_full Synthesizing and Reconstructing Missing Sensory Modalities in Behavioral Context Recognition
title_fullStr Synthesizing and Reconstructing Missing Sensory Modalities in Behavioral Context Recognition
title_full_unstemmed Synthesizing and Reconstructing Missing Sensory Modalities in Behavioral Context Recognition
title_short Synthesizing and Reconstructing Missing Sensory Modalities in Behavioral Context Recognition
title_sort synthesizing and reconstructing missing sensory modalities in behavioral context recognition
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6165109/
https://www.ncbi.nlm.nih.gov/pubmed/30200575
http://dx.doi.org/10.3390/s18092967
work_keys_str_mv AT saeedaaqib synthesizingandreconstructingmissingsensorymodalitiesinbehavioralcontextrecognition
AT ozcelebitanir synthesizingandreconstructingmissingsensorymodalitiesinbehavioralcontextrecognition
AT lukkienjohan synthesizingandreconstructingmissingsensorymodalitiesinbehavioralcontextrecognition