The VISTA datasets, a combination of inertial sensors and depth cameras data for activity recognition
This paper makes the VISTA database, composed of inertial and visual data, publicly available for gesture and activity recognition. The inertial data were acquired with the SensHand, which can capture the movement of the wrist, thumb, index and middle fingers, while the RGB-D visual data were acquired simultaneously from two different points of view, front and side. The VISTA database was acquired in two experimental phases: in the former, the participants were asked to perform 10 different actions; in the latter, they had to execute five scenes of daily living, each corresponding to a combination of the selected actions. In both phases, Pepper interacted with the participants. The two camera points of view mimic the possible viewpoints of Pepper. Overall, the dataset includes 7682 action instances for the training phase and 3361 action instances for the testing phase. It can serve as a framework for future studies on artificial intelligence techniques for activity recognition using inertial-only data, visual-only data, or a sensor-fusion approach.
Main Authors: Fiorini, Laura; Cornacchia Loizzo, Federica Gabriella; Sorrentino, Alessandra; Rovini, Erika; Di Nuovo, Alessandro; Cavallo, Filippo
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK, 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9117293/ https://www.ncbi.nlm.nih.gov/pubmed/35585077 http://dx.doi.org/10.1038/s41597-022-01324-3
Field | Value
---|---
_version_ | 1784710301765074944
author | Fiorini, Laura; Cornacchia Loizzo, Federica Gabriella; Sorrentino, Alessandra; Rovini, Erika; Di Nuovo, Alessandro; Cavallo, Filippo
author_facet | Fiorini, Laura; Cornacchia Loizzo, Federica Gabriella; Sorrentino, Alessandra; Rovini, Erika; Di Nuovo, Alessandro; Cavallo, Filippo
author_sort | Fiorini, Laura |
collection | PubMed |
description | This paper makes the VISTA database, composed of inertial and visual data, publicly available for gesture and activity recognition. The inertial data were acquired with the SensHand, which can capture the movement of the wrist, thumb, index and middle fingers, while the RGB-D visual data were acquired simultaneously from two different points of view, front and side. The VISTA database was acquired in two experimental phases: in the former, the participants were asked to perform 10 different actions; in the latter, they had to execute five scenes of daily living, each corresponding to a combination of the selected actions. In both phases, Pepper interacted with the participants. The two camera points of view mimic the possible viewpoints of Pepper. Overall, the dataset includes 7682 action instances for the training phase and 3361 action instances for the testing phase. It can serve as a framework for future studies on artificial intelligence techniques for activity recognition using inertial-only data, visual-only data, or a sensor-fusion approach.
format | Online Article Text |
id | pubmed-9117293 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Nature Publishing Group UK |
record_format | MEDLINE/PubMed |
spelling | pubmed-9117293 2022-05-20 The VISTA datasets, a combination of inertial sensors and depth cameras data for activity recognition Fiorini, Laura; Cornacchia Loizzo, Federica Gabriella; Sorrentino, Alessandra; Rovini, Erika; Di Nuovo, Alessandro; Cavallo, Filippo Sci Data Data Descriptor This paper makes the VISTA database, composed of inertial and visual data, publicly available for gesture and activity recognition. The inertial data were acquired with the SensHand, which can capture the movement of the wrist, thumb, index and middle fingers, while the RGB-D visual data were acquired simultaneously from two different points of view, front and side. The VISTA database was acquired in two experimental phases: in the former, the participants were asked to perform 10 different actions; in the latter, they had to execute five scenes of daily living, each corresponding to a combination of the selected actions. In both phases, Pepper interacted with the participants. The two camera points of view mimic the possible viewpoints of Pepper. Overall, the dataset includes 7682 action instances for the training phase and 3361 action instances for the testing phase. It can serve as a framework for future studies on artificial intelligence techniques for activity recognition using inertial-only data, visual-only data, or a sensor-fusion approach. Nature Publishing Group UK 2022-05-18 /pmc/articles/PMC9117293/ /pubmed/35585077 http://dx.doi.org/10.1038/s41597-022-01324-3 Text en © The Author(s) 2022 https://creativecommons.org/licenses/by/4.0/ Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit https://creativecommons.org/licenses/by/4.0/.
spellingShingle | Data Descriptor; Fiorini, Laura; Cornacchia Loizzo, Federica Gabriella; Sorrentino, Alessandra; Rovini, Erika; Di Nuovo, Alessandro; Cavallo, Filippo; The VISTA datasets, a combination of inertial sensors and depth cameras data for activity recognition
title | The VISTA datasets, a combination of inertial sensors and depth cameras data for activity recognition |
title_full | The VISTA datasets, a combination of inertial sensors and depth cameras data for activity recognition |
title_fullStr | The VISTA datasets, a combination of inertial sensors and depth cameras data for activity recognition |
title_full_unstemmed | The VISTA datasets, a combination of inertial sensors and depth cameras data for activity recognition |
title_short | The VISTA datasets, a combination of inertial sensors and depth cameras data for activity recognition |
title_sort | vista datasets, a combination of inertial sensors and depth cameras data for activity recognition |
topic | Data Descriptor |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9117293/ https://www.ncbi.nlm.nih.gov/pubmed/35585077 http://dx.doi.org/10.1038/s41597-022-01324-3 |
work_keys_str_mv | AT fiorinilaura thevistadatasetsacombinationofinertialsensorsanddepthcamerasdataforactivityrecognition AT cornacchialoizzofedericagabriella thevistadatasetsacombinationofinertialsensorsanddepthcamerasdataforactivityrecognition AT sorrentinoalessandra thevistadatasetsacombinationofinertialsensorsanddepthcamerasdataforactivityrecognition AT rovinierika thevistadatasetsacombinationofinertialsensorsanddepthcamerasdataforactivityrecognition AT dinuovoalessandro thevistadatasetsacombinationofinertialsensorsanddepthcamerasdataforactivityrecognition AT cavallofilippo thevistadatasetsacombinationofinertialsensorsanddepthcamerasdataforactivityrecognition AT fiorinilaura vistadatasetsacombinationofinertialsensorsanddepthcamerasdataforactivityrecognition AT cornacchialoizzofedericagabriella vistadatasetsacombinationofinertialsensorsanddepthcamerasdataforactivityrecognition AT sorrentinoalessandra vistadatasetsacombinationofinertialsensorsanddepthcamerasdataforactivityrecognition AT rovinierika vistadatasetsacombinationofinertialsensorsanddepthcamerasdataforactivityrecognition AT dinuovoalessandro vistadatasetsacombinationofinertialsensorsanddepthcamerasdataforactivityrecognition AT cavallofilippo vistadatasetsacombinationofinertialsensorsanddepthcamerasdataforactivityrecognition |
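The description notes that VISTA supports inertial-only, visual-only, or sensor-fusion approaches to activity recognition. As a minimal illustration of feature-level fusion, the sketch below concatenates per-instance inertial and visual feature vectors before classification. Everything in it is an assumption for illustration: the feature dimensions, the random stand-in data, and the nearest-centroid classifier are not part of the VISTA release, whose actual file formats are documented in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pre-extracted features; shapes are assumed, not VISTA's.
n_train, n_classes = 7682, 10      # instance count from the abstract
inertial_dim, visual_dim = 24, 64  # assumed feature sizes

X_inertial = rng.normal(size=(n_train, inertial_dim))
X_visual = rng.normal(size=(n_train, visual_dim))
y_train = rng.integers(0, n_classes, size=n_train)

# Feature-level fusion: concatenate the two modalities per instance.
X_fused = np.concatenate([X_inertial, X_visual], axis=1)

# Nearest-centroid classifier as a stand-in for any downstream model.
centroids = np.stack(
    [X_fused[y_train == c].mean(axis=0) for c in range(n_classes)]
)

def predict(x_inertial, x_visual):
    """Classify one instance from its fused feature vector."""
    x = np.concatenate([x_inertial, x_visual])
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

print(X_fused.shape)  # fused matrix: one row per training instance
```

The same skeleton covers the single-modality baselines mentioned in the abstract: drop one of the two concatenation inputs and the rest of the pipeline is unchanged.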