UMONS-TAICHI: A multimodal motion capture dataset of expertise in Taijiquan gestures

Bibliographic Details
Main Authors: Tits, Mickaël, Laraba, Sohaïb, Caulier, Eric, Tilmanne, Joëlle, Dutoit, Thierry
Format: Online Article Text
Language: English
Published: Elsevier 2018
Subjects: Engineering
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6139536/
https://www.ncbi.nlm.nih.gov/pubmed/30225286
http://dx.doi.org/10.1016/j.dib.2018.05.088
author Tits, Mickaël
Laraba, Sohaïb
Caulier, Eric
Tilmanne, Joëlle
Dutoit, Thierry
author_sort Tits, Mickaël
collection PubMed
description In this article, we present a large 3D motion capture dataset of Taijiquan martial art gestures (n = 2200 samples) that includes 13 classes (relative to Taijiquan techniques) executed by 12 participants of various skill levels. Participants' levels were ranked by three experts on a scale of 0–10. The dataset was captured using two motion capture systems simultaneously: 1) Qualisys, a sophisticated optical motion capture system with 11 cameras that tracks 68 retroreflective markers at 179 Hz, and 2) Microsoft Kinect V2, a low-cost markerless time-of-flight depth sensor that tracks 25 locations of a person's skeleton at 30 Hz. Data from both systems were synchronized manually. Qualisys data were manually corrected and then processed to fill in any missing data. Data were also manually annotated for segmentation. Both segmented and unsegmented data are provided in this dataset. This article details the recording protocol as well as the processing and annotation procedures. The data were initially recorded for gesture recognition and skill evaluation, but they are also suited to research on synthesis, segmentation, multi-sensor data comparison and fusion, sports science, or more general research on human science or motion capture. A preliminary analysis was conducted by Tits et al. (2017) [1] on part of the dataset to extract morphology-independent motion features for skill evaluation. Results of this analysis are presented in their communication "Morphology Independent Feature Engineering in Motion Capture Database for Gesture Evaluation" (10.1145/3077981.3078037) [1]. Data are available for research purposes (license CC BY-NC-SA 4.0) at https://github.com/numediart/UMONS-TAICHI.
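The two capture streams run at very different frame rates (179 Hz for Qualisys versus 30 Hz for the Kinect V2), so any multi-sensor comparison or fusion first requires bringing them onto a common timebase. The Python sketch below shows one plausible way to do this with linear interpolation; the array shapes, variable names, and random placeholder data are illustrative assumptions, not part of the dataset documentation.

import numpy as np
from scipy.interpolate import interp1d

QUALISYS_RATE = 179.0  # 68 retroreflective markers per frame (per the article)
KINECT_RATE = 30.0     # 25 skeleton joints per frame (per the article)

def resample_to_rate(data, src_rate, dst_rate):
    # Linearly interpolate a (frames, points, 3) trajectory array to a new frame rate.
    n_frames = data.shape[0]
    duration = (n_frames - 1) / src_rate
    t_src = np.linspace(0.0, duration, n_frames)
    t_dst = np.arange(0.0, duration, 1.0 / dst_rate)
    return interp1d(t_src, data, axis=0, kind="linear")(t_dst)

# Hypothetical arrays standing in for one synchronized take (not actual dataset files).
qualisys = np.random.rand(1790, 68, 3)  # ~10 s of marker trajectories at 179 Hz
kinect = np.random.rand(300, 25, 3)     # ~10 s of skeleton joints at 30 Hz

# Downsample the Qualisys stream to the Kinect rate so frames can be compared pairwise.
qualisys_30hz = resample_to_rate(qualisys, QUALISYS_RATE, KINECT_RATE)
print(qualisys_30hz.shape, kinect.shape)  # (300, 68, 3) (300, 25, 3)

In practice, the random arrays would be replaced by the marker and joint trajectories loaded from a matching pair of Qualisys and Kinect recordings in the repository.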
format Online
Article
Text
id pubmed-6139536
institution National Center for Biotechnology Information
language English
publishDate 2018
publisher Elsevier
record_format MEDLINE/PubMed
spelling pubmed-6139536 2018-09-17 UMONS-TAICHI: A multimodal motion capture dataset of expertise in Taijiquan gestures Tits, Mickaël Laraba, Sohaïb Caulier, Eric Tilmanne, Joëlle Dutoit, Thierry Data Brief Engineering Elsevier 2018-05-23 /pmc/articles/PMC6139536/ /pubmed/30225286 http://dx.doi.org/10.1016/j.dib.2018.05.088 Text en © 2018 The Authors http://creativecommons.org/licenses/by/4.0/ This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
title UMONS-TAICHI: A multimodal motion capture dataset of expertise in Taijiquan gestures
topic Engineering
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6139536/
https://www.ncbi.nlm.nih.gov/pubmed/30225286
http://dx.doi.org/10.1016/j.dib.2018.05.088