Partition-Based Point Cloud Completion Network with Density Refinement
Main Authors:
Format: Online Article Text
Language: English
Published: MDPI, 2023
Subjects:
Online Access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10377928/
https://www.ncbi.nlm.nih.gov/pubmed/37509965
http://dx.doi.org/10.3390/e25071018
Summary: In this paper, we propose a novel method for point cloud completion called PADPNet. Our approach uses a combination of global and local information to infer missing elements in the point cloud. We achieve this by dividing the input point cloud into uniform local regions, called perceptual fields, which are abstractly understood as special convolution kernels. The set of points in each local region is represented as a feature vector and transformed into N uniform perceptual fields that serve as the input to our transformer model. We also designed a geometric density-aware block to better exploit the inductive bias of the point cloud's 3D geometric structure. Our method preserves sharp edges and detailed structures that are often lost in voxel-based or point-based approaches. Experimental results demonstrate that our approach outperforms other methods in reducing the ambiguity of output results. Our proposed method has important applications in 3D computer vision and can efficiently recover complete 3D object shapes from incomplete point clouds.
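The abstract does not specify how PADPNet builds its perceptual fields or encodes them; the sketch below only illustrates the general idea of partitioning a point cloud into N uniform local regions and summarizing each region as a fixed-length feature vector (token) for a transformer. Farthest-point sampling, kNN grouping, the function names, and the 7-dimensional region descriptor (centroid, extent, density proxy) are assumptions chosen for illustration, not the paper's actual pipeline.

```python
import numpy as np

def farthest_point_sample(points, n_centers):
    """Pick n_centers well-spread seed points via iterative farthest-point sampling."""
    n = points.shape[0]
    centers = np.zeros(n_centers, dtype=np.int64)
    dist = np.full(n, np.inf)
    centers[0] = np.random.randint(n)
    for i in range(1, n_centers):
        dist = np.minimum(dist, np.linalg.norm(points - points[centers[i - 1]], axis=1))
        centers[i] = int(np.argmax(dist))
    return points[centers]

def partition_into_perceptual_fields(points, n_fields=64, k=32):
    """Group the cloud into n_fields local regions of k nearest neighbours each and
    encode every region as one feature vector (centroid + per-axis extent + density proxy)."""
    seeds = farthest_point_sample(points, n_fields)
    features = []
    for c in seeds:
        d = np.linalg.norm(points - c, axis=1)
        neighbours = points[np.argsort(d)[:k]]        # local region around the seed
        centroid = neighbours.mean(axis=0)
        extent = neighbours.max(axis=0) - neighbours.min(axis=0)
        density = k / (np.sort(d)[k - 1] + 1e-8)      # crude local density estimate
        features.append(np.concatenate([centroid, extent, [density]]))
    return np.stack(features)                         # shape (n_fields, 7)

if __name__ == "__main__":
    cloud = np.random.rand(2048, 3)                   # stand-in for a partial scan
    tokens = partition_into_perceptual_fields(cloud)
    print(tokens.shape)                               # (64, 7): one token per perceptual field
```

Each region descriptor here plays the role of one transformer input token; in practice the per-region encoder would be learned, and the density term hints at how a geometric density-aware component could be folded into the representation.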