Pose ResNet: 3D Human Pose Estimation Based on Self-Supervision
The accurate estimation of a 3D human pose is of great importance in many fields, such as human–computer interaction, motion recognition and automatic driving. In view of the difficulty of obtaining 3D ground truth labels for a dataset of 3D pose estimation techniques, we take 2D images as the research object in this paper and propose a self-supervised 3D pose estimation model called Pose ResNet.
Main Authors: Bao, Wenxia; Ma, Zhongyu; Liang, Dong; Yang, Xianjun; Niu, Tao
Format: Online Article Text
Language: English
Published: MDPI, 2023
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10054156/ https://www.ncbi.nlm.nih.gov/pubmed/36991768 http://dx.doi.org/10.3390/s23063057
_version_ | 1785015600870850560 |
---|---|
author | Bao, Wenxia Ma, Zhongyu Liang, Dong Yang, Xianjun Niu, Tao |
author_facet | Bao, Wenxia Ma, Zhongyu Liang, Dong Yang, Xianjun Niu, Tao |
author_sort | Bao, Wenxia |
collection | PubMed |
description | The accurate estimation of a 3D human pose is of great importance in many fields, such as human–computer interaction, motion recognition and automatic driving. In view of the difficulty of obtaining 3D ground truth labels for a dataset of 3D pose estimation techniques, we take 2D images as the research object in this paper, and propose a self-supervised 3D pose estimation model called Pose ResNet. ResNet50 is used as the basic network for feature extraction. First, a convolutional block attention module (CBAM) is introduced to refine the selection of significant pixels. Then, a waterfall atrous spatial pooling (WASP) module is used to capture multi-scale contextual information from the extracted features to increase the receptive field. Finally, the features are input into a deconvolution network to acquire the volume heat map, which is later processed by a soft argmax function to obtain the coordinates of the joints. In addition to the two learning strategies of transfer learning and synthetic occlusion, a self-supervised training method is also used in this model, in which the 3D labels are constructed by the epipolar geometry transformation to supervise the training of the network. Without the need for 3D ground truths for the dataset, accurate estimation of the 3D human pose can be realized from a single 2D image. The results show that the mean per joint position error (MPJPE) is 74.6 mm without the need for 3D ground truth labels. Compared with other approaches, the proposed method achieves better results. |
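The description states that the volume heat map is processed by a soft argmax function to obtain joint coordinates. As background, here is a minimal NumPy sketch of soft argmax over a volumetric heat map (this is an illustrative reconstruction, not the authors' code): the heat map is converted to a probability distribution with a softmax, and the joint coordinate is the probability-weighted expectation of the voxel indices, which keeps the coordinate extraction differentiable.

```python
import numpy as np

def soft_argmax_3d(heatmap):
    """Differentiable argmax over a volumetric (D, H, W) heat map.

    Returns the expected (x, y, z) coordinate under the softmax
    distribution of the heat-map values.
    """
    d, h, w = heatmap.shape
    # Softmax over all voxels (subtract the max for numerical stability).
    flat = heatmap.reshape(-1)
    probs = np.exp(flat - flat.max())
    probs /= probs.sum()
    probs = probs.reshape(d, h, w)
    # Expected coordinate along each axis: marginalize, then take
    # the probability-weighted sum of the index values.
    z = (probs.sum(axis=(1, 2)) * np.arange(d)).sum()
    y = (probs.sum(axis=(0, 2)) * np.arange(h)).sum()
    x = (probs.sum(axis=(0, 1)) * np.arange(w)).sum()
    return np.array([x, y, z])
```

With a sharply peaked heat map the result converges to the hard argmax voxel; with a diffuse one it interpolates between voxels, which is the sub-voxel precision benefit over a plain argmax.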
format | Online Article Text |
id | pubmed-10054156 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-10054156 2023-03-30 Pose ResNet: 3D Human Pose Estimation Based on Self-Supervision Bao, Wenxia Ma, Zhongyu Liang, Dong Yang, Xianjun Niu, Tao Sensors (Basel) Article The accurate estimation of a 3D human pose is of great importance in many fields, such as human–computer interaction, motion recognition and automatic driving. In view of the difficulty of obtaining 3D ground truth labels for a dataset of 3D pose estimation techniques, we take 2D images as the research object in this paper, and propose a self-supervised 3D pose estimation model called Pose ResNet. ResNet50 is used as the basic network for feature extraction. First, a convolutional block attention module (CBAM) is introduced to refine the selection of significant pixels. Then, a waterfall atrous spatial pooling (WASP) module is used to capture multi-scale contextual information from the extracted features to increase the receptive field. Finally, the features are input into a deconvolution network to acquire the volume heat map, which is later processed by a soft argmax function to obtain the coordinates of the joints. In addition to the two learning strategies of transfer learning and synthetic occlusion, a self-supervised training method is also used in this model, in which the 3D labels are constructed by the epipolar geometry transformation to supervise the training of the network. Without the need for 3D ground truths for the dataset, accurate estimation of the 3D human pose can be realized from a single 2D image. The results show that the mean per joint position error (MPJPE) is 74.6 mm without the need for 3D ground truth labels. Compared with other approaches, the proposed method achieves better results. MDPI 2023-03-12 /pmc/articles/PMC10054156/ /pubmed/36991768 http://dx.doi.org/10.3390/s23063057 Text en © 2023 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland.
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
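The abstract above reports a mean per joint position error (MPJPE) of 74.6 mm. For readers unfamiliar with the metric, MPJPE is the average Euclidean distance between predicted and ground-truth 3D joint positions, reported in millimeters. A minimal NumPy sketch of the metric (illustrative only, not taken from the paper's evaluation code):

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per joint position error.

    pred, gt: arrays of shape (num_joints, 3) holding 3D joint
    positions in the same units (e.g. millimeters). Returns the
    mean Euclidean distance across joints.
    """
    return np.linalg.norm(pred - gt, axis=-1).mean()
```

In practice the metric is computed per frame and averaged over the test set, often after aligning the root joints of the prediction and the ground truth.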
spellingShingle | Article Bao, Wenxia Ma, Zhongyu Liang, Dong Yang, Xianjun Niu, Tao Pose ResNet: 3D Human Pose Estimation Based on Self-Supervision |
title | Pose ResNet: 3D Human Pose Estimation Based on Self-Supervision |
title_full | Pose ResNet: 3D Human Pose Estimation Based on Self-Supervision |
title_fullStr | Pose ResNet: 3D Human Pose Estimation Based on Self-Supervision |
title_full_unstemmed | Pose ResNet: 3D Human Pose Estimation Based on Self-Supervision |
title_short | Pose ResNet: 3D Human Pose Estimation Based on Self-Supervision |
title_sort | pose resnet: 3d human pose estimation based on self-supervision |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10054156/ https://www.ncbi.nlm.nih.gov/pubmed/36991768 http://dx.doi.org/10.3390/s23063057 |
work_keys_str_mv | AT baowenxia poseresnet3dhumanposeestimationbasedonselfsupervision AT mazhongyu poseresnet3dhumanposeestimationbasedonselfsupervision AT liangdong poseresnet3dhumanposeestimationbasedonselfsupervision AT yangxianjun poseresnet3dhumanposeestimationbasedonselfsupervision AT niutao poseresnet3dhumanposeestimationbasedonselfsupervision |