INIM: Inertial Images Construction with Applications to Activity Recognition
Human activity recognition aims to classify user activity in various applications such as healthcare, gesture recognition, and indoor navigation. In the latter, smartphone location recognition is gaining attention as it enhances indoor positioning accuracy. Commonly, the smartphone’s inertial sensor readings are used as input to a machine learning algorithm that performs the classification. There are several approaches to tackle such a task: feature-based approaches, one-dimensional deep learning algorithms, and two-dimensional deep learning architectures. When using deep learning approaches, feature engineering is unnecessary. In addition, two-dimensional deep learning approaches make it possible to leverage methods from the well-established computer vision domain. In this paper, a framework for smartphone location and human activity recognition, based on the smartphone’s inertial sensors, is proposed. The contributions of this work are a novel time series encoding approach, from inertial signals to inertial images, and transfer learning from the computer vision domain to the inertial sensor classification problem. Four different datasets are employed to demonstrate the benefits of the proposed approach. In addition, as the proposed framework performs classification on inertial sensor readings, it can be applied to other classification tasks using inertial data. It can also be adapted to handle other types of sensory data collected for a classification task.
Main Authors: | Daniel, Nati; Klein, Itzik |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI 2021 |
Subjects: | |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8309892/ https://www.ncbi.nlm.nih.gov/pubmed/34300524 http://dx.doi.org/10.3390/s21144787 |
_version_ | 1783728630074966016 |
---|---|
author | Daniel, Nati; Klein, Itzik |
author_facet | Daniel, Nati; Klein, Itzik |
author_sort | Daniel, Nati |
collection | PubMed |
description | Human activity recognition aims to classify user activity in various applications such as healthcare, gesture recognition, and indoor navigation. In the latter, smartphone location recognition is gaining attention as it enhances indoor positioning accuracy. Commonly, the smartphone’s inertial sensor readings are used as input to a machine learning algorithm that performs the classification. There are several approaches to tackle such a task: feature-based approaches, one-dimensional deep learning algorithms, and two-dimensional deep learning architectures. When using deep learning approaches, feature engineering is unnecessary. In addition, two-dimensional deep learning approaches make it possible to leverage methods from the well-established computer vision domain. In this paper, a framework for smartphone location and human activity recognition, based on the smartphone’s inertial sensors, is proposed. The contributions of this work are a novel time series encoding approach, from inertial signals to inertial images, and transfer learning from the computer vision domain to the inertial sensor classification problem. Four different datasets are employed to demonstrate the benefits of the proposed approach. In addition, as the proposed framework performs classification on inertial sensor readings, it can be applied to other classification tasks using inertial data. It can also be adapted to handle other types of sensory data collected for a classification task. |
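The record does not detail the paper's INIM encoding itself, but the general idea of turning an inertial time series into an image, so that pretrained computer vision models can be applied, can be illustrated with a common alternative encoding: the Gramian Angular Summation Field (GASF). The sketch below is a minimal, self-contained example under that assumption; the function name `gramian_angular_field` and the simulated accelerometer window are illustrative, not taken from the paper.

```python
import numpy as np

def gramian_angular_field(x: np.ndarray) -> np.ndarray:
    """Encode a 1-D time series as a 2-D Gramian Angular Summation Field image."""
    # Rescale the series into [-1, 1] so arccos is defined
    # (assumes a non-constant window).
    x_min, x_max = x.min(), x.max()
    x_scaled = (2.0 * x - x_max - x_min) / (x_max - x_min)
    # Guard against floating-point drift outside [-1, 1].
    x_scaled = np.clip(x_scaled, -1.0, 1.0)
    # Polar encoding: each sample becomes an angle in [0, pi].
    phi = np.arccos(x_scaled)
    # GASF image: pairwise cos(phi_i + phi_j), a symmetric N x N matrix.
    return np.cos(phi[:, None] + phi[None, :])

# Example: a window of 128 simulated accelerometer samples -> a 128x128 image
# that a 2-D convolutional network (possibly pretrained on natural images)
# could classify.
rng = np.random.default_rng(0)
accel_window = np.sin(np.linspace(0, 4 * np.pi, 128)) + 0.1 * rng.standard_normal(128)
image = gramian_angular_field(accel_window)
print(image.shape)  # (128, 128)
```

In a multi-axis setting, one such image per sensor axis could be stacked into channels, which is what makes transfer learning from RGB-image models plausible.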
format | Online Article Text |
id | pubmed-8309892 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-8309892 2021-07-25 INIM: Inertial Images Construction with Applications to Activity Recognition Daniel, Nati; Klein, Itzik Sensors (Basel) Article Human activity recognition aims to classify user activity in various applications such as healthcare, gesture recognition, and indoor navigation. In the latter, smartphone location recognition is gaining attention as it enhances indoor positioning accuracy. Commonly, the smartphone’s inertial sensor readings are used as input to a machine learning algorithm that performs the classification. There are several approaches to tackle such a task: feature-based approaches, one-dimensional deep learning algorithms, and two-dimensional deep learning architectures. When using deep learning approaches, feature engineering is unnecessary. In addition, two-dimensional deep learning approaches make it possible to leverage methods from the well-established computer vision domain. In this paper, a framework for smartphone location and human activity recognition, based on the smartphone’s inertial sensors, is proposed. The contributions of this work are a novel time series encoding approach, from inertial signals to inertial images, and transfer learning from the computer vision domain to the inertial sensor classification problem. Four different datasets are employed to demonstrate the benefits of the proposed approach. In addition, as the proposed framework performs classification on inertial sensor readings, it can be applied to other classification tasks using inertial data. It can also be adapted to handle other types of sensory data collected for a classification task. MDPI 2021-07-13 /pmc/articles/PMC8309892/ /pubmed/34300524 http://dx.doi.org/10.3390/s21144787 Text en © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Daniel, Nati Klein, Itzik INIM: Inertial Images Construction with Applications to Activity Recognition |
title | INIM: Inertial Images Construction with Applications to Activity Recognition |
title_full | INIM: Inertial Images Construction with Applications to Activity Recognition |
title_fullStr | INIM: Inertial Images Construction with Applications to Activity Recognition |
title_full_unstemmed | INIM: Inertial Images Construction with Applications to Activity Recognition |
title_short | INIM: Inertial Images Construction with Applications to Activity Recognition |
title_sort | inim: inertial images construction with applications to activity recognition |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8309892/ https://www.ncbi.nlm.nih.gov/pubmed/34300524 http://dx.doi.org/10.3390/s21144787 |
work_keys_str_mv | AT danielnati iniminertialimagesconstructionwithapplicationstoactivityrecognition AT kleinitzik iniminertialimagesconstructionwithapplicationstoactivityrecognition |