
Multi-view emotional expressions dataset using 2D pose estimation


Bibliographic Details
Main Authors: Zhang, Mingming, Zhou, Yanan, Xu, Xinye, Ren, Ziwei, Zhang, Yihan, Liu, Shenglan, Luo, Wenbo
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK, 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10516935/
https://www.ncbi.nlm.nih.gov/pubmed/37739952
http://dx.doi.org/10.1038/s41597-023-02551-y
Description
Summary: Human body expressions convey emotional shifts and action intentions and, in some cases, are even more effective than other emotion models. Although many body-expression datasets incorporating motion capture are available, widely distributed datasets of naturalized body expressions based on 2D video are still lacking. In this paper, therefore, we report the multi-view emotional expressions dataset (MEED) using 2D pose estimation. Twenty-two actors presented six emotional (anger, disgust, fear, happiness, sadness, surprise) and neutral body movements from three viewpoints (left, front, right), and a total of 4102 videos were captured. MEED consists of the corresponding pose estimation results (i.e., 397,809 PNG files and 397,809 JSON files) and exceeds 150 GB in size. We believe this dataset will benefit research in various fields, including affective computing, human-computer interaction, social neuroscience, and psychiatry.
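
As an illustration only, the sketch below shows how one of the per-frame pose-estimation JSON files described in the summary might be read in Python, assuming an OpenPose-style schema ("people" / "pose_keypoints_2d" with (x, y, confidence) triples). The directory layout, file names, and JSON field names here are assumptions for demonstration, not documented parts of MEED.

```python
import json
from pathlib import Path

import numpy as np

# Hypothetical path to one per-frame pose-estimation JSON file from MEED.
# The directory layout and file naming are assumptions, not taken from the
# dataset documentation.
frame_json = Path("MEED/actor01/front/anger/clip001/frame_000001.json")


def load_keypoints(json_path: Path) -> np.ndarray:
    """Parse an OpenPose-style keypoint file into an (N, K, 3) array.

    Assumes each JSON file follows the common OpenPose output schema:
    {"people": [{"pose_keypoints_2d": [x1, y1, c1, x2, y2, c2, ...]}, ...]}
    where each detected person has K body keypoints stored as flat
    (x, y, confidence) triples.
    """
    with open(json_path) as f:
        data = json.load(f)
    people = data.get("people", [])
    # One (K, 3) block of keypoints per detected person.
    return np.array(
        [np.reshape(p["pose_keypoints_2d"], (-1, 3)) for p in people]
    )


if __name__ == "__main__":
    keypoints = load_keypoints(frame_json)
    print(f"Detected {keypoints.shape[0]} person(s) in {frame_json.name}")
```

If the JSON files follow a different schema, only the key names in load_keypoints would need to change; the reshape into (x, y, confidence) triples is the part specific to OpenPose-style output.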