
6D Object Pose Estimation Based on Cross-Modality Feature Fusion

The 6D pose estimation using RGBD images plays a pivotal role in robotics applications. At present, after obtaining the RGB and depth modality information, most methods directly concatenate them without considering information interactions. This leads to the low accuracy of 6D pose estimation in occlusion and illumination changes. To solve this problem, we propose a new method to fuse RGB and depth modality features. Our method effectively uses individual information contained within each RGBD image modality and fully integrates cross-modality interactive information. Specifically, we transform depth images into point clouds, applying the PointNet++ network to extract point cloud features; RGB image features are extracted by CNNs and attention mechanisms are added to obtain context information within the single modality; then, we propose a cross-modality feature fusion module (CFFM) to obtain the cross-modality information, and introduce a feature contribution weight training module (CWTM) to allocate the different contributions of the two modalities to the target task. Finally, the result of 6D object pose estimation is obtained by the final cross-modality fusion feature. By enabling information interactions within and between modalities, the integration of the two modalities is maximized. Furthermore, considering the contribution of each modality enhances the overall robustness of the model. Our experiments indicate that the accuracy rate of our method on the LineMOD dataset can reach 96.9%, on average, using the ADD (-S) metric, while on the YCB-Video dataset, it can reach 94.7% using the ADD-S AUC metric and 96.5% using the ADD-S score (<2 cm) metric.
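The abstract only names the fusion stages (CNN + attention for RGB, PointNet++ for the point cloud, CFFM for cross-modality interaction, CWTM for modality weighting) without implementation details. The PyTorch sketch below is a speculative illustration of how such a fusion step could be wired together; the class names, tensor shapes, multi-head cross-attention, and softmax-normalized modality weights are assumptions made for illustration, not the authors' published code.

```python
# Speculative sketch of an RGB-D cross-modality fusion step (not the paper's code).
# Assumes per-point RGB features (e.g., CNN features sampled at projected point
# locations) and per-point geometry features (e.g., PointNet++), both (B, N, C).
import torch
import torch.nn as nn


class CrossModalityFusion(nn.Module):
    """Hypothetical CFFM-style block: each modality attends to the other."""

    def __init__(self, dim: int):
        super().__init__()
        self.rgb_to_pts = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.pts_to_rgb = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, rgb_feat: torch.Tensor, pts_feat: torch.Tensor):
        # Enrich RGB features with geometric context, and vice versa.
        rgb_cross, _ = self.pts_to_rgb(rgb_feat, pts_feat, pts_feat)
        pts_cross, _ = self.rgb_to_pts(pts_feat, rgb_feat, rgb_feat)
        return rgb_feat + rgb_cross, pts_feat + pts_cross


class ContributionWeights(nn.Module):
    """Hypothetical CWTM-style block: learned softmax weight per modality."""

    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, rgb_feat: torch.Tensor, pts_feat: torch.Tensor):
        # One scalar score per modality (pooled over points), softmax-normalized.
        scores = torch.stack(
            [self.score(rgb_feat.mean(dim=1)), self.score(pts_feat.mean(dim=1))],
            dim=1,
        )  # (B, 2, 1)
        w = torch.softmax(scores, dim=1)
        # Weighted sum becomes the fused feature fed to a pose regression head.
        return w[:, 0:1] * rgb_feat + w[:, 1:2] * pts_feat  # (B, N, C)


if __name__ == "__main__":
    B, N, C = 2, 1024, 128
    rgb, pts = torch.randn(B, N, C), torch.randn(B, N, C)
    rgb, pts = CrossModalityFusion(C)(rgb, pts)
    fused = ContributionWeights(C)(rgb, pts)
    print(fused.shape)  # torch.Size([2, 1024, 128])
```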

Bibliographic Details
Main Authors: Jiang, Meng, Zhang, Liming, Wang, Xiaohua, Li, Shuang, Jiao, Yijie
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10575350/
https://www.ncbi.nlm.nih.gov/pubmed/37836919
http://dx.doi.org/10.3390/s23198088
_version_ 1785120901699731456
author Jiang, Meng
Zhang, Liming
Wang, Xiaohua
Li, Shuang
Jiao, Yijie
author_facet Jiang, Meng
Zhang, Liming
Wang, Xiaohua
Li, Shuang
Jiao, Yijie
author_sort Jiang, Meng
collection PubMed
description The 6D pose estimation using RGBD images plays a pivotal role in robotics applications. At present, after obtaining the RGB and depth modality information, most methods directly concatenate them without considering information interactions. This leads to the low accuracy of 6D pose estimation in occlusion and illumination changes. To solve this problem, we propose a new method to fuse RGB and depth modality features. Our method effectively uses individual information contained within each RGBD image modality and fully integrates cross-modality interactive information. Specifically, we transform depth images into point clouds, applying the PointNet++ network to extract point cloud features; RGB image features are extracted by CNNs and attention mechanisms are added to obtain context information within the single modality; then, we propose a cross-modality feature fusion module (CFFM) to obtain the cross-modality information, and introduce a feature contribution weight training module (CWTM) to allocate the different contributions of the two modalities to the target task. Finally, the result of 6D object pose estimation is obtained by the final cross-modality fusion feature. By enabling information interactions within and between modalities, the integration of the two modalities is maximized. Furthermore, considering the contribution of each modality enhances the overall robustness of the model. Our experiments indicate that the accuracy rate of our method on the LineMOD dataset can reach 96.9%, on average, using the ADD (-S) metric, while on the YCB-Video dataset, it can reach 94.7% using the ADD-S AUC metric and 96.5% using the ADD-S score (<2 cm) metric.
format Online
Article
Text
id pubmed-10575350
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-10575350 2023-10-14 6D Object Pose Estimation Based on Cross-Modality Feature Fusion Jiang, Meng Zhang, Liming Wang, Xiaohua Li, Shuang Jiao, Yijie Sensors (Basel) Article The 6D pose estimation using RGBD images plays a pivotal role in robotics applications. At present, after obtaining the RGB and depth modality information, most methods directly concatenate them without considering information interactions. This leads to the low accuracy of 6D pose estimation in occlusion and illumination changes. To solve this problem, we propose a new method to fuse RGB and depth modality features. Our method effectively uses individual information contained within each RGBD image modality and fully integrates cross-modality interactive information. Specifically, we transform depth images into point clouds, applying the PointNet++ network to extract point cloud features; RGB image features are extracted by CNNs and attention mechanisms are added to obtain context information within the single modality; then, we propose a cross-modality feature fusion module (CFFM) to obtain the cross-modality information, and introduce a feature contribution weight training module (CWTM) to allocate the different contributions of the two modalities to the target task. Finally, the result of 6D object pose estimation is obtained by the final cross-modality fusion feature. By enabling information interactions within and between modalities, the integration of the two modalities is maximized. Furthermore, considering the contribution of each modality enhances the overall robustness of the model. Our experiments indicate that the accuracy rate of our method on the LineMOD dataset can reach 96.9%, on average, using the ADD (-S) metric, while on the YCB-Video dataset, it can reach 94.7% using the ADD-S AUC metric and 96.5% using the ADD-S score (<2 cm) metric. MDPI 2023-09-26 /pmc/articles/PMC10575350/ /pubmed/37836919 http://dx.doi.org/10.3390/s23198088 Text en © 2023 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Jiang, Meng
Zhang, Liming
Wang, Xiaohua
Li, Shuang
Jiao, Yijie
6D Object Pose Estimation Based on Cross-Modality Feature Fusion
title 6D Object Pose Estimation Based on Cross-Modality Feature Fusion
title_full 6D Object Pose Estimation Based on Cross-Modality Feature Fusion
title_fullStr 6D Object Pose Estimation Based on Cross-Modality Feature Fusion
title_full_unstemmed 6D Object Pose Estimation Based on Cross-Modality Feature Fusion
title_short 6D Object Pose Estimation Based on Cross-Modality Feature Fusion
title_sort 6d object pose estimation based on cross-modality feature fusion
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10575350/
https://www.ncbi.nlm.nih.gov/pubmed/37836919
http://dx.doi.org/10.3390/s23198088
work_keys_str_mv AT jiangmeng 6dobjectposeestimationbasedoncrossmodalityfeaturefusion
AT zhangliming 6dobjectposeestimationbasedoncrossmodalityfeaturefusion
AT wangxiaohua 6dobjectposeestimationbasedoncrossmodalityfeaturefusion
AT lishuang 6dobjectposeestimationbasedoncrossmodalityfeaturefusion
AT jiaoyijie 6dobjectposeestimationbasedoncrossmodalityfeaturefusion