
Multi-Person Pose Estimation using an Orientation and Occlusion Aware Deep Learning Network

Image-based human behavior and activity understanding has been a hot topic in computer vision and multimedia. As an important part of this topic, skeleton estimation, also called pose estimation, has attracted considerable interest. Most deep learning approaches to pose estimation focus mainly on joint features. However, joint features alone are not sufficient, especially when an image contains multiple people and poses are occluded or not fully visible. This paper proposes a novel multi-task framework for multi-person pose estimation. The framework is built on Mask Region-based Convolutional Neural Networks (Mask R-CNN) and extended to integrate joint features, body boundary, body orientation and occlusion condition. To further improve multi-person pose estimation, this paper proposes organizing the different types of information in serial multi-task models instead of the widely used parallel multi-task network. The proposed models are trained on the public Common Objects in Context (COCO) dataset, which is further augmented with ground truth for body orientation and mutual-occlusion masks. Experiments demonstrate the performance of the proposed method for multi-person pose estimation and body orientation estimation: it achieves a Percentage of Correct Keypoints (PCK) of 84.6% and a Correct Detection Rate (CDR) of 83.7%. Comparisons further show that the proposed model reduces over-detection compared with other methods.


Bibliographic Details
Main Authors: Gu, Yanlei, Zhang, Huiyang, Kamijo, Shunsuke
Format: Online Article Text
Language: English
Published: MDPI 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7146407/
https://www.ncbi.nlm.nih.gov/pubmed/32178461
http://dx.doi.org/10.3390/s20061593
_version_ 1783520194314895360
author Gu, Yanlei
Zhang, Huiyang
Kamijo, Shunsuke
author_facet Gu, Yanlei
Zhang, Huiyang
Kamijo, Shunsuke
author_sort Gu, Yanlei
collection PubMed
description Image-based human behavior and activity understanding has been a hot topic in computer vision and multimedia. As an important part of this topic, skeleton estimation, also called pose estimation, has attracted considerable interest. Most deep learning approaches to pose estimation focus mainly on joint features. However, joint features alone are not sufficient, especially when an image contains multiple people and poses are occluded or not fully visible. This paper proposes a novel multi-task framework for multi-person pose estimation. The framework is built on Mask Region-based Convolutional Neural Networks (Mask R-CNN) and extended to integrate joint features, body boundary, body orientation and occlusion condition. To further improve multi-person pose estimation, this paper proposes organizing the different types of information in serial multi-task models instead of the widely used parallel multi-task network. The proposed models are trained on the public Common Objects in Context (COCO) dataset, which is further augmented with ground truth for body orientation and mutual-occlusion masks. Experiments demonstrate the performance of the proposed method for multi-person pose estimation and body orientation estimation: it achieves a Percentage of Correct Keypoints (PCK) of 84.6% and a Correct Detection Rate (CDR) of 83.7%. Comparisons further show that the proposed model reduces over-detection compared with other methods.
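
The abstract contrasts a serial multi-task organization with the usual parallel heads. The following is a minimal, illustrative sketch only, assuming a PyTorch-style implementation: the head names, channel sizes, and the ordering (body boundary first, then orientation/occlusion and keypoints conditioned on it) are assumptions made for clarity, not the authors' published architecture.

# Illustrative sketch of serial multi-task heads over RoI features (assumed
# PyTorch-style design, not the paper's released code).
import torch
import torch.nn as nn


class SerialMultiTaskHeads(nn.Module):
    """Later task heads consume the RoI features concatenated with the
    previous head's output, so keypoint and orientation/occlusion heads can
    exploit the body-boundary cue (the "serial" idea from the abstract)."""

    def __init__(self, in_channels=256, num_keypoints=17, num_orientations=8):
        super().__init__()
        # 1) body-boundary mask head (binary mask over the RoI)
        self.boundary_head = nn.Sequential(
            nn.Conv2d(in_channels, 128, 3, padding=1), nn.ReLU(),
            nn.Conv2d(128, 1, 1),
        )
        # 2) orientation / occlusion head, conditioned on the boundary map
        self.orientation_head = nn.Sequential(
            nn.Conv2d(in_channels + 1, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, num_orientations + 1),  # orientation bins + occlusion flag
        )
        # 3) keypoint head, also conditioned on the boundary map
        self.keypoint_head = nn.Sequential(
            nn.Conv2d(in_channels + 1, 128, 3, padding=1), nn.ReLU(),
            nn.Conv2d(128, num_keypoints, 1),  # one heatmap per joint
        )

    def forward(self, roi_features):
        boundary = self.boundary_head(roi_features)           # (N, 1, H, W)
        fused = torch.cat([roi_features, boundary], dim=1)    # serial fusion
        orientation_occlusion = self.orientation_head(fused)  # (N, bins + 1)
        keypoint_heatmaps = self.keypoint_head(fused)         # (N, K, H, W)
        return boundary, orientation_occlusion, keypoint_heatmaps


if __name__ == "__main__":
    heads = SerialMultiTaskHeads()
    rois = torch.randn(2, 256, 14, 14)  # dummy RoI-aligned features
    b, o, k = heads(rois)
    print(b.shape, o.shape, k.shape)

In a parallel multi-task network all three heads would read the RoI features independently; the serial arrangement sketched here is what lets the occlusion and keypoint branches benefit from the earlier boundary prediction, which is the motivation the abstract gives for preferring it.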
format Online
Article
Text
id pubmed-7146407
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-7146407 2020-04-15 Multi-Person Pose Estimation using an Orientation and Occlusion Aware Deep Learning Network Gu, Yanlei Zhang, Huiyang Kamijo, Shunsuke Sensors (Basel) Article Image-based human behavior and activity understanding has been a hot topic in computer vision and multimedia. As an important part of this topic, skeleton estimation, also called pose estimation, has attracted considerable interest. Most deep learning approaches to pose estimation focus mainly on joint features. However, joint features alone are not sufficient, especially when an image contains multiple people and poses are occluded or not fully visible. This paper proposes a novel multi-task framework for multi-person pose estimation. The framework is built on Mask Region-based Convolutional Neural Networks (Mask R-CNN) and extended to integrate joint features, body boundary, body orientation and occlusion condition. To further improve multi-person pose estimation, this paper proposes organizing the different types of information in serial multi-task models instead of the widely used parallel multi-task network. The proposed models are trained on the public Common Objects in Context (COCO) dataset, which is further augmented with ground truth for body orientation and mutual-occlusion masks. Experiments demonstrate the performance of the proposed method for multi-person pose estimation and body orientation estimation: it achieves a Percentage of Correct Keypoints (PCK) of 84.6% and a Correct Detection Rate (CDR) of 83.7%. Comparisons further show that the proposed model reduces over-detection compared with other methods. MDPI 2020-03-12 /pmc/articles/PMC7146407/ /pubmed/32178461 http://dx.doi.org/10.3390/s20061593 Text en © 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Gu, Yanlei
Zhang, Huiyang
Kamijo, Shunsuke
Multi-Person Pose Estimation using an Orientation and Occlusion Aware Deep Learning Network
title Multi-Person Pose Estimation using an Orientation and Occlusion Aware Deep Learning Network
title_full Multi-Person Pose Estimation using an Orientation and Occlusion Aware Deep Learning Network
title_fullStr Multi-Person Pose Estimation using an Orientation and Occlusion Aware Deep Learning Network
title_full_unstemmed Multi-Person Pose Estimation using an Orientation and Occlusion Aware Deep Learning Network
title_short Multi-Person Pose Estimation using an Orientation and Occlusion Aware Deep Learning Network
title_sort multi-person pose estimation using an orientation and occlusion aware deep learning network
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7146407/
https://www.ncbi.nlm.nih.gov/pubmed/32178461
http://dx.doi.org/10.3390/s20061593
work_keys_str_mv AT guyanlei multipersonposeestimationusinganorientationandocclusionawaredeeplearningnetwork
AT zhanghuiyang multipersonposeestimationusinganorientationandocclusionawaredeeplearningnetwork
AT kamijoshunsuke multipersonposeestimationusinganorientationandocclusionawaredeeplearningnetwork