C-MHAD: Continuous Multimodal Human Action Dataset of Simultaneous Video and Inertial Sensing
Existing public domain multi-modal datasets for human action recognition only include actions of interest that have already been segmented from action streams. These datasets cannot be used to study a more realistic action recognition scenario where actions of interest occur randomly and continuously among actions of non-interest or no actions. It is more challenging to recognize actions of interest in continuous action streams since the starts and ends of these actions are not known and need to be determined in an on-the-fly manner. Furthermore, there exists no public domain multi-modal dataset in which video and inertial data are captured simultaneously for continuous action streams. The main objective of this paper is to describe a dataset that is collected and made publicly available, named Continuous Multimodal Human Action Dataset (C-MHAD), in which video and inertial data streams are captured simultaneously in a continuous way. This dataset is then used in an example recognition technique and the results obtained indicate that the fusion of these two sensing modalities increases the F1 scores compared to using each sensing modality individually.
Main Authors: Wei, Haoran; Chopada, Pranav; Kehtarnavaz, Nasser
Format: Online Article Text
Language: English
Published: MDPI, 2020
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7287800/ https://www.ncbi.nlm.nih.gov/pubmed/32443857 http://dx.doi.org/10.3390/s20102905
_version_ | 1783545133207126016
author | Wei, Haoran; Chopada, Pranav; Kehtarnavaz, Nasser
author_facet | Wei, Haoran; Chopada, Pranav; Kehtarnavaz, Nasser
author_sort | Wei, Haoran |
collection | PubMed |
description | Existing public domain multi-modal datasets for human action recognition only include actions of interest that have already been segmented from action streams. These datasets cannot be used to study a more realistic action recognition scenario where actions of interest occur randomly and continuously among actions of non-interest or no actions. It is more challenging to recognize actions of interest in continuous action streams since the starts and ends of these actions are not known and need to be determined in an on-the-fly manner. Furthermore, there exists no public domain multi-modal dataset in which video and inertial data are captured simultaneously for continuous action streams. The main objective of this paper is to describe a dataset that is collected and made publicly available, named Continuous Multimodal Human Action Dataset (C-MHAD), in which video and inertial data streams are captured simultaneously in a continuous way. This dataset is then used in an example recognition technique and the results obtained indicate that the fusion of these two sensing modalities increases the F1 scores compared to using each sensing modality individually.
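The abstract reports results in terms of F1 scores. As a reminder of that metric (the paper itself does not give these numbers; the counts below are purely illustrative), a minimal sketch of computing F1 from true-positive, false-positive, and false-negative counts:

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 is the harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Illustrative counts only, not taken from the paper:
print(round(f1_score(tp=90, fp=10, fn=20), 3))  # → 0.857
```

In a continuous-stream setting, a detected action segment would typically count as a true positive only if it overlaps a ground-truth segment sufficiently, which is why F1 is a natural metric for this task.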
format | Online Article Text |
id | pubmed-7287800 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2020 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-7287800 2020-06-15 C-MHAD: Continuous Multimodal Human Action Dataset of Simultaneous Video and Inertial Sensing Wei, Haoran Chopada, Pranav Kehtarnavaz, Nasser Sensors (Basel) Article Existing public domain multi-modal datasets for human action recognition only include actions of interest that have already been segmented from action streams. These datasets cannot be used to study a more realistic action recognition scenario where actions of interest occur randomly and continuously among actions of non-interest or no actions. It is more challenging to recognize actions of interest in continuous action streams since the starts and ends of these actions are not known and need to be determined in an on-the-fly manner. Furthermore, there exists no public domain multi-modal dataset in which video and inertial data are captured simultaneously for continuous action streams. The main objective of this paper is to describe a dataset that is collected and made publicly available, named Continuous Multimodal Human Action Dataset (C-MHAD), in which video and inertial data streams are captured simultaneously in a continuous way. This dataset is then used in an example recognition technique and the results obtained indicate that the fusion of these two sensing modalities increases the F1 scores compared to using each sensing modality individually. MDPI 2020-05-20 /pmc/articles/PMC7287800/ /pubmed/32443857 http://dx.doi.org/10.3390/s20102905 Text en © 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
spellingShingle | Article Wei, Haoran Chopada, Pranav Kehtarnavaz, Nasser C-MHAD: Continuous Multimodal Human Action Dataset of Simultaneous Video and Inertial Sensing |
title | C-MHAD: Continuous Multimodal Human Action Dataset of Simultaneous Video and Inertial Sensing |
title_full | C-MHAD: Continuous Multimodal Human Action Dataset of Simultaneous Video and Inertial Sensing |
title_fullStr | C-MHAD: Continuous Multimodal Human Action Dataset of Simultaneous Video and Inertial Sensing |
title_full_unstemmed | C-MHAD: Continuous Multimodal Human Action Dataset of Simultaneous Video and Inertial Sensing |
title_short | C-MHAD: Continuous Multimodal Human Action Dataset of Simultaneous Video and Inertial Sensing |
title_sort | c-mhad: continuous multimodal human action dataset of simultaneous video and inertial sensing |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7287800/ https://www.ncbi.nlm.nih.gov/pubmed/32443857 http://dx.doi.org/10.3390/s20102905 |
work_keys_str_mv | AT weihaoran cmhadcontinuousmultimodalhumanactiondatasetofsimultaneousvideoandinertialsensing AT chopadapranav cmhadcontinuousmultimodalhumanactiondatasetofsimultaneousvideoandinertialsensing AT kehtarnavaznasser cmhadcontinuousmultimodalhumanactiondatasetofsimultaneousvideoandinertialsensing |