
CrossFuNet: RGB and Depth Cross-Fusion Network for Hand Pose Estimation

Despite recent successes in hand pose estimation from RGB images or depth maps, inherent challenges remain. RGB-based methods suffer from heavy self-occlusions and depth ambiguity. Depth sensors rely heavily on distance and can only be used indoors, thus there are many limitations to the practical application of depth-based methods. The aforementioned challenges have inspired us to combine the two modalities to offset the shortcomings of the other. In this paper, we propose a novel RGB and depth information fusion network to improve the accuracy of 3D hand pose estimation, which is called CrossFuNet. Specifically, the RGB image and the paired depth map are input into two different subnetworks, respectively. The feature maps are fused in the fusion module in which we propose a completely new approach to combine the information from the two modalities. Then, the common method is used to regress the 3D key-points by heatmaps. We validate our model on two public datasets and the results reveal that our model outperforms the state-of-the-art methods.
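
The description outlines the CrossFuNet pipeline only at a high level: an RGB image and its paired depth map pass through two separate subnetworks, the resulting feature maps are combined in a fusion module, and 3D key-points are then regressed via heatmaps. As a rough, hypothetical sketch of that kind of two-stream RGB-D pipeline (not the authors' published code; the module layout, channel sizes, joint count, and the simple concatenation-based fusion used here are all assumptions), the following PyTorch snippet illustrates the data flow:

```python
# Hypothetical sketch of a two-stream RGB-D fusion pipeline of the kind the
# abstract describes (two subnetworks, a fusion module, heatmap-based keypoint
# regression). Module names and sizes are illustrative assumptions, not the
# authors' CrossFuNet implementation.
import torch
import torch.nn as nn


class TwoStreamFusionNet(nn.Module):
    def __init__(self, num_joints: int = 21):  # 21 joints is a common hand model; an assumption here
        super().__init__()
        # Separate encoders for the RGB image (3 channels) and the depth map (1 channel).
        self.rgb_encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.depth_encoder = nn.Sequential(
            nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Placeholder for the fusion module: concatenate the two feature maps
        # and mix them with a 1x1 convolution.
        self.fuse = nn.Conv2d(256, 128, kernel_size=1)
        # Heatmap head: one heatmap per joint; 3D coordinates are typically
        # recovered from the heatmaps in a later step.
        self.heatmap_head = nn.Conv2d(128, num_joints, kernel_size=1)

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        f_rgb = self.rgb_encoder(rgb)
        f_depth = self.depth_encoder(depth)
        fused = self.fuse(torch.cat([f_rgb, f_depth], dim=1))
        return self.heatmap_head(fused)


# Example: a batch of one 256x256 RGB image and its paired depth map.
model = TwoStreamFusionNet()
heatmaps = model(torch.randn(1, 3, 256, 256), torch.randn(1, 1, 256, 256))
print(heatmaps.shape)  # torch.Size([1, 21, 64, 64])
```

In the actual paper, the fusion module is the stated novel contribution, so the plain concatenation above should be read only as a stand-in that shows where fusion happens in the pipeline.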


Bibliographic Details
Main Authors: Sun, Xiaojing, Wang, Bin, Huang, Longxiang, Zhang, Qian, Zhu, Sulei, Ma, Yan
Format: Online Article Text
Language: English
Published: MDPI 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8473363/
https://www.ncbi.nlm.nih.gov/pubmed/34577302
http://dx.doi.org/10.3390/s21186095
_version_ 1784574973095968768
author Sun, Xiaojing
Wang, Bin
Huang, Longxiang
Zhang, Qian
Zhu, Sulei
Ma, Yan
author_facet Sun, Xiaojing
Wang, Bin
Huang, Longxiang
Zhang, Qian
Zhu, Sulei
Ma, Yan
author_sort Sun, Xiaojing
collection PubMed
description Despite recent successes in hand pose estimation from RGB images or depth maps, inherent challenges remain. RGB-based methods suffer from heavy self-occlusions and depth ambiguity. Depth sensors rely heavily on distance and can only be used indoors, thus there are many limitations to the practical application of depth-based methods. The aforementioned challenges have inspired us to combine the two modalities to offset the shortcomings of the other. In this paper, we propose a novel RGB and depth information fusion network to improve the accuracy of 3D hand pose estimation, which is called CrossFuNet. Specifically, the RGB image and the paired depth map are input into two different subnetworks, respectively. The feature maps are fused in the fusion module in which we propose a completely new approach to combine the information from the two modalities. Then, the common method is used to regress the 3D key-points by heatmaps. We validate our model on two public datasets and the results reveal that our model outperforms the state-of-the-art methods.
format Online
Article
Text
id pubmed-8473363
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-84733632021-09-28 CrossFuNet: RGB and Depth Cross-Fusion Network for Hand Pose Estimation Sun, Xiaojing Wang, Bin Huang, Longxiang Zhang, Qian Zhu, Sulei Ma, Yan Sensors (Basel) Article Despite recent successes in hand pose estimation from RGB images or depth maps, inherent challenges remain. RGB-based methods suffer from heavy self-occlusions and depth ambiguity. Depth sensors rely heavily on distance and can only be used indoors, thus there are many limitations to the practical application of depth-based methods. The aforementioned challenges have inspired us to combine the two modalities to offset the shortcomings of the other. In this paper, we propose a novel RGB and depth information fusion network to improve the accuracy of 3D hand pose estimation, which is called CrossFuNet. Specifically, the RGB image and the paired depth map are input into two different subnetworks, respectively. The feature maps are fused in the fusion module in which we propose a completely new approach to combine the information from the two modalities. Then, the common method is used to regress the 3D key-points by heatmaps. We validate our model on two public datasets and the results reveal that our model outperforms the state-of-the-art methods. MDPI 2021-09-11 /pmc/articles/PMC8473363/ /pubmed/34577302 http://dx.doi.org/10.3390/s21186095 Text en © 2021 by the authors. https://creativecommons.org/licenses/by/4.0/Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Sun, Xiaojing
Wang, Bin
Huang, Longxiang
Zhang, Qian
Zhu, Sulei
Ma, Yan
CrossFuNet: RGB and Depth Cross-Fusion Network for Hand Pose Estimation
title CrossFuNet: RGB and Depth Cross-Fusion Network for Hand Pose Estimation
title_full CrossFuNet: RGB and Depth Cross-Fusion Network for Hand Pose Estimation
title_fullStr CrossFuNet: RGB and Depth Cross-Fusion Network for Hand Pose Estimation
title_full_unstemmed CrossFuNet: RGB and Depth Cross-Fusion Network for Hand Pose Estimation
title_short CrossFuNet: RGB and Depth Cross-Fusion Network for Hand Pose Estimation
title_sort crossfunet: rgb and depth cross-fusion network for hand pose estimation
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8473363/
https://www.ncbi.nlm.nih.gov/pubmed/34577302
http://dx.doi.org/10.3390/s21186095
work_keys_str_mv AT sunxiaojing crossfunetrgbanddepthcrossfusionnetworkforhandposeestimation
AT wangbin crossfunetrgbanddepthcrossfusionnetworkforhandposeestimation
AT huanglongxiang crossfunetrgbanddepthcrossfusionnetworkforhandposeestimation
AT zhangqian crossfunetrgbanddepthcrossfusionnetworkforhandposeestimation
AT zhusulei crossfunetrgbanddepthcrossfusionnetworkforhandposeestimation
AT mayan crossfunetrgbanddepthcrossfusionnetworkforhandposeestimation