Toward the markerless and automatic analysis of kinematic features: A toolkit for gesture and movement research


Bibliographic Details
Main Authors: Trujillo, James P., Vaitonyte, Julija, Simanova, Irina, Özyürek, Asli
Format: Online Article Text
Language: English
Published: Springer US 2018
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6478643/
https://www.ncbi.nlm.nih.gov/pubmed/30143970
http://dx.doi.org/10.3758/s13428-018-1086-8
Description
Summary: Action, gesture, and sign represent unique aspects of human communication that use form and movement to convey meaning. Researchers typically use manual coding of video data to characterize naturalistic, meaningful movements at various levels of description, but the availability of markerless motion-tracking technology allows for quantification of the kinematic features of gestures or any meaningful human movement. We present a novel protocol for extracting a set of kinematic features from movements recorded with Microsoft Kinect. Our protocol captures spatial and temporal features, such as height, velocity, submovements/strokes, and holds. This approach is based on studies of communicative actions and gestures and attempts to capture features that are consistently implicated as important kinematic aspects of communication. We provide open-source code for the protocol, a description of how the features are calculated, a validation of these features as quantified by our protocol versus manual coders, and a discussion of how the protocol can be applied. The protocol effectively quantifies kinematic features that are important in the production (e.g., characterizing different contexts) as well as the comprehension (e.g., used by addressees to understand intent and semantics) of manual acts. The protocol can also be integrated with qualitative analysis, allowing fast and objective demarcation of movement units, providing accurate coding even of complex movements. This can be useful to clinicians, as well as to researchers studying multimodal communication or human–robot interactions. By making this protocol available, we hope to provide a tool that can be applied to understanding meaningful movement characteristics in human communication. ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (10.3758/s13428-018-1086-8) contains supplementary material, which is available to authorized users.
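To illustrate how kinematic features of the kind named in the abstract (velocity, submovements/strokes, holds, height) can be derived from markerless tracking output, the sketch below computes them from a time series of Kinect joint positions. This is a minimal Python illustration under stated assumptions, not the authors' released code: the sampling rate, the choice of wrist and spine as reference joints, the vertical-axis convention, and all thresholds are placeholders for demonstration only; the published protocol and its supplementary material define the actual feature computations.

```python
import numpy as np

FPS = 30  # assumed Kinect sampling rate (frames per second)

def velocity_profile(wrist_xyz, fps=FPS):
    """Frame-to-frame speed (m/s) of a tracked joint, e.g., the wrist.

    wrist_xyz: array of shape (n_frames, 3) with joint positions in meters.
    """
    disp = np.linalg.norm(np.diff(wrist_xyz, axis=0), axis=1)  # per-frame displacement
    return disp * fps

def count_submovements(speed, min_peak=0.2):
    """Count local speed maxima above a threshold as submovement/stroke candidates.

    The 0.2 m/s threshold is illustrative, not taken from the article.
    """
    interior = speed[1:-1]
    peaks = (interior > speed[:-2]) & (interior > speed[2:]) & (interior > min_peak)
    return int(peaks.sum())

def detect_holds(speed, still_thresh=0.05, min_frames=9, fps=FPS):
    """Return (start_s, end_s) spans where the joint stays near-still for >= min_frames."""
    holds, start = [], None
    for i, v in enumerate(speed):
        if v < still_thresh:
            if start is None:
                start = i
        else:
            if start is not None and i - start >= min_frames:
                holds.append((start / fps, i / fps))
            start = None
    if start is not None and len(speed) - start >= min_frames:
        holds.append((start / fps, len(speed) / fps))
    return holds

def max_height(wrist_xyz, spine_xyz):
    """Peak vertical position of the wrist relative to a torso reference joint,
    assuming the second column is the vertical (Y) axis."""
    return float(np.max(wrist_xyz[:, 1] - spine_xyz[:, 1]))
```

A typical use would pass the wrist trajectory of one movement unit through velocity_profile and then summarize it with peak speed, count_submovements, detect_holds, and max_height; the specific thresholds and joint choices above only indicate the general shape of such a pipeline.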